id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
54,161,313 | https://en.wikipedia.org/wiki/LHS%206343 | LHS 6343 is a star system in the northern constellation of Lyra. It appears exceedingly faint with a combined apparent magnitude of 13.435. Based on its stellar properties, the system is thought to be about 119.4 light-years (36.6 parsecs) away.
LHS 6343 is a binary star with two red dwarfs, designated LHS 6343 A and B, respectively. A brown dwarf orbits LHS 6343 A at a close distance, and periodically transits it. The brown dwarf, designated LHS 6343 C, is located within the brown dwarf desert, a zone around stars where very few brown dwarfs have been discovered.
The system was in the field of view of the Kepler spacecraft, and was monitored continuously for possible planets transiting the star, although the transits were found to be caused by LHS 6343 C.
Properties
LHS 6343 is a visual binary. Both stars are red dwarfs that are much less massive than the Sun—the primary is 36% the mass of the Sun and the secondary 29.2%. The two stars have been individually resolved using adaptive optics, showing an angular separation of 0.55 arcseconds, corresponding to a projected separation of about 20 astronomical units (AU).
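As a rough consistency check of the figures above, the projected separation follows from the small-angle relation between angular separation and distance. The sketch below uses only values quoted in this article and is illustrative rather than a derivation of the published number.

```python
# Small-angle relation: projected separation [AU] ≈ angular separation [arcsec] × distance [pc].
# Values are taken from the article; the result is an illustrative estimate only.

angular_separation_arcsec = 0.55   # measured with adaptive optics
distance_pc = 36.6                 # distance to the system in parsecs

projected_separation_au = angular_separation_arcsec * distance_pc
print(f"Projected separation ≈ {projected_separation_au:.1f} AU")  # ≈ 20 AU
```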
The brown dwarf LHS 6343 C orbits the primary star LHS 6343 A at a distance of only 0.0797 AU, completing one orbit every 12.7 days. It is about 5 billion years old, and models suggest the brown dwarf has a surface temperature of 1130 K. The system hierarchy is similar to NLTT 41135, another red dwarf binary with a brown dwarf orbiting one of the stars.
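The quoted period and orbital distance can likewise be checked against Kepler's third law. The minimal sketch below uses the article's values and neglects the brown dwarf's own mass, so the implied total mass is an estimate rather than a figure from the source.

```python
# Kepler's third law in solar units: a^3 / P^2 = M_total
# (a in AU, P in years, M in solar masses). Orbital values are from the article;
# the implied mass is a consistency check, not a published figure.

a_au = 0.0797            # semi-major axis of LHS 6343 C's orbit
P_yr = 12.7 / 365.25     # orbital period converted from days to years

M_total = a_au**3 / P_yr**2
print(f"Implied total mass ≈ {M_total:.2f} solar masses")
# ≈ 0.42, consistent with the 0.36 solar-mass primary plus a brown-dwarf companion
```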
Possible planet
LHS 6343 may have a massive planet within the system. In 2012, transit-timing variation was analyzed for any possible substellar companions that may be perturbing the brown dwarf from its normal orbit. Such an object would be less massive than Jupiter and its orbital period would be 3.5 to 8 times larger than that of the brown dwarf. However, the hypothetical perturber's existence has not been confirmed and warrants more observations of the system.
References
Further reading
Lyra
M-type main-sequence stars
Brown dwarfs
Binary stars
T-type brown dwarfs
959
J19101435+4657261 | LHS 6343 | Astronomy | 473 |
505,455 | https://en.wikipedia.org/wiki/Firedamp | Firedamp is any flammable gas found in coal mines, typically coalbed methane. It is particularly found in areas where the coal is bituminous. The gas accumulates in pockets in the coal and adjacent strata and when they are penetrated the release can trigger explosions. Historically, if such a pocket was highly pressurized, it was termed a "bag of foulness".
Name
Damp is the collective name given to all gases (other than air) found in coal mines in Great Britain and North America.
As well as firedamp, other damps include blackdamp (nonbreathable mixture of carbon dioxide, water vapour and other gases); whitedamp (carbon monoxide and other gases produced by combustion); poisonous, explosive stinkdamp (hydrogen sulfide), with its characteristic rotten-egg odour; and the insidiously lethal afterdamp (carbon monoxide and other gases) which are produced following explosions of firedamp or coal dust.
Etymology
Often hyphenated as fire-damp, this term for a flammable type of underground mine gas derives in its first part from the Old English fyr and the Proto-Germanic fūr for "fire" (the origin of the same word in Dutch and German, with similar original spellings in Old Saxon, Frisian, and Norse, as well as Middle Dutch and Old High German). In the second part, the mining sense of "damp" had already separated from the newer, irrelevant meaning of humidity (the one most commonly understood today) by at least the first decade of the 18th century; the relevant original meaning of "vapor" likewise derives from a Proto-Germanic origin, dampaz, which gave rise to its immediate English predecessor, the Middle Low German damp (with no record of an Old English intermediary). As with the first part, the Proto-Germanic dampaz gave rise to many other cognates, including the Old High German damph, the Old Norse dampi, and the modern German Dampf, the last of which still translates as "vapor".
Contribution to mine deaths
Firedamp is explosive at concentrations between 4% and 16%, with most explosions occurring at around 10%. It caused many deaths in coal mines before the invention of the Geordie lamp and Davy lamp. Even after the safety lamps were brought into common use, firedamp explosions could still be caused by sparks produced when coal contaminated with pyrites was struck with metal tools. The presence of coal dust in the air increased the risk of explosion with firedamp and could cause explosions even in the absence of firedamp. The Tyneside coal mines in England had the deadly combination of bituminous coal contaminated with pyrites and there was a great number of deaths in accidents caused by firedamp explosions, including 102 dead at Wallsend in 1835.
The problem of firedamp in mines had been brought to the attention of the Royal Society by 1677, and in 1733 James Lowther reported that as a shaft was being sunk for a new pit at Saltom near Whitehaven there had been a major release when a layer of black stone had been broken through into a coal seam. Ignited with a candle, it had given a steady flame "about half a Yard in Diameter, and near two Yards high". After the flame was extinguished and a wider penetration made through the black stone, reignition of the gas gave a bigger flame, a yard in diameter and about three yards high, which was extinguished only with difficulty. The blower was panelled off from the shaft and piped to the surface, where more than two and a half years later it continued as fast as ever, filling a large bladder in a few seconds. The society elected Sir James a Fellow but was unable to come up with any solution, nor improve on the assertion (eventually found to be incorrect) of Carlisle Spedding, the author of the paper, that "this sort of Vapour, or damp Air, will not take Fire except by Flame; Sparks do not affect it, and for that Reason it is frequent to use Flint and Steel in Places affected with this sort of Damp, which will give a glimmering Light, that is a great Help to the Workmen in difficult Cases."
A great step forward in countering the problem of firedamp came when safety lamps, intended to provide illumination whilst being incapable of igniting firedamp, were proposed by both George Stephenson and Humphry Davy in response to accidents such as the Felling mine disaster near Newcastle upon Tyne, which killed 92 people on 25 May 1812. Davy experimented with brass gauze, determining the maximum size of the gaps and the optimum wire thickness to prevent a flame passing through the gauze. If a naked flame was thus enclosed totally by such a gauze, then methane could pass into the lamp and burn safely above the flame. Stephenson's lamp (the "Geordie lamp") worked on a different principle: the flame was enclosed by glass; air access to the flame was through tubes sufficiently narrow that the flame could not burn-back in incoming firedamp and the exiting gases were too low in oxygen to allow the enclosed flame to reach the surrounding atmosphere. Both principles were combined in later versions of safety lamps.
Even after the widespread introduction of the safety lamp, explosions continued because the early lamps were fragile and easily damaged. For example the iron gauze on a Davy lamp needed to lose only one wire to become unsafe. The light was also very poor (compared with a naked flame) and there were continuous attempts to improve the basic design. The height of the cone of burning methane in a flame safety lamp can be used to estimate the concentration of the gas in the local atmosphere. It was not until the 1890s that safe and reliable electric lamps became available in collieries.
The Firedamp whistle was developed by Fritz Haber in 1913, as a prophylactic indicator of firedamp, but calibration in a working colliery ultimately proved impractical.
See also
Firedamp whistle
Whitedamp
Blackdamp
Stinkdamp
Afterdamp
Glossary of coal mining terminology
Abercarn colliery disaster
Coalbed methane
Darr Mine Disaster
Gresford Disaster
Maypole Colliery disaster
Mining accident
Udston mining disaster
References
"Experiments Show How Gas Explodes in a Mine", Popular Science monthly, February 1919, Unnumbered page, Scanned by Google Books: https://books.google.com/books?id=7igDAAAAMBAJ&pg=PT21
Coal mining
Fuel gas
Mine safety
Natural gas safety | Firedamp | Chemistry | 1,333 |
14,913,890 | https://en.wikipedia.org/wiki/Marine%20automobile%20engine | Marine automobile engines are types of automobile petrol- or diesel engines that have been specifically modified for use in the marine environment. The differences include changes made for the operating in a marine environment, safety, performance, and for regulatory requirements. The act of modifying is called 'marinisation'.
Background
All of the "Big 3" American auto companies have had engines marinised at some point. Chrysler is notable, because the company marinised engines in-house through Chrysler Marine, as well as selling engines to third parties such as Indmar or Pleasurecraft Marine.
General Motors marine automobile engines are based on gasoline truck engines. That means four-bolt main bearing caps instead of just two; sometimes the crankshaft is forged steel and the pistons are an upgraded aluminum alloy. Most importantly, the camshaft profile is different, with the overlap ground to 112 degrees instead of 110. Expansion plugs are bronze to better resist corrosion. The head gasket's metal O-ring is also more corrosion resistant.
Examples of the opposite of a marinised car engine also exist, e.g. the 6.2- or 6.5-liter Detroit Diesel V8 engine found in Chevrolet and GMC utility vehicles was originally a marine engine adapted for automotive use.
Safety modifications
Electrical systems
Starter motors and alternators have internal screens to minimize spark egress.
Fuel systems (petrol/gasoline engines)
Fuel pumps are constructed such that if their diaphragm ruptures, the excess fuel will be directed into the carburetor.
Carburetors do not allow overflow into the boat engine compartment.
Spark arrestors are installed on the engine's air intake (carburetor or electronic fuel injector). The arrestor is a wire mesh screen that cools any internal flame or spark created by backfire, preventing it from igniting fuel vapours inside the engine compartment.
Fuel systems (diesel engines)
Cooling systems
Engines are water-cooled, drawing raw water through a pickup at the bottom of the boat. In an open cooling configuration, the raw water is circulated directly through the engine and exits after passing through jackets around the exhaust manifolds, while in a closed cooling configuration anti-freeze circulates through the engine and raw water is pumped into a heat exchanger. In both cases hot water is released into the exhaust system and blown out with the engine exhaust gasses.
The transmission oil cooler is cooled by raw water.
Performance modifications
Distribution
The distributor does not have a vacuum advance. Vacuum advance is normally actuated in high-rpm/low-load situations, which rarely occur in the marine environment: under normal operation, a high rpm generally means a high engine load.
Lubrication
Lubricating oil is cooled in a shell-and-tube type heat exchanger by raw water.
The oil sump is bigger and often has a different shape, so as not to affect the boat's stability.
References
External links
[https://trincamarine.com marine transmission] from the U.S. Coast Guard
Marine Diesel Engines from Volkswagen Marine
Internal combustion piston engines
Marine engines | Marine automobile engine | Technology | 624 |
102,182 | https://en.wikipedia.org/wiki/Celestial%20mechanics | Celestial mechanics is the branch of astronomy that deals with the motions of objects in outer space. Historically, celestial mechanics applies principles of physics (classical mechanics) to astronomical objects, such as stars and planets, to produce ephemeris data.
History
Modern analytic celestial mechanics started with Isaac Newton's Principia (1687). The name celestial mechanics is more recent than that. Newton wrote that the field should be called "rational mechanics". The term "dynamics" came in a little later with Gottfried Leibniz, and over a century after Newton, Pierre-Simon Laplace introduced the term celestial mechanics. Prior to Kepler, there was little connection between exact, quantitative prediction of planetary positions, using geometrical or numerical techniques, and contemporary discussions of the physical causes of the planets' motion.
Laws of planetary motion
Johannes Kepler was the first to closely integrate the predictive geometrical astronomy, which had been dominant from Ptolemy in the 2nd century to Copernicus, with physical concepts to produce a New Astronomy, Based upon Causes, or Celestial Physics in 1609. His work led to the laws of planetary orbits, which he developed using his physical principles and the planetary observations made by Tycho Brahe. Kepler's elliptical model greatly improved the accuracy of predictions of planetary motion, years before Newton developed his law of gravitation in 1686.
Newtonian mechanics and universal gravitation
Isaac Newton is credited with introducing the idea that the motion of objects in the heavens, such as planets, the Sun, and the Moon, and the motion of objects on the ground, like cannon balls and falling apples, could be described by the same set of physical laws. In this sense he unified celestial and terrestrial dynamics. Using his law of gravity, Newton confirmed Kepler's laws for elliptical orbits by deriving them from the gravitational two-body problem, which Newton included in his epochal Philosophiæ Naturalis Principia Mathematica in 1687.
Three-body problem
After Newton, Joseph-Louis Lagrange attempted to solve the three-body problem in 1772, analyzed the stability of planetary orbits, and discovered the existence of the Lagrange points. Lagrange also reformulated the principles of classical mechanics, emphasizing energy more than force, and developing a method to use a single polar coordinate equation to describe any orbit, even those that are parabolic and hyperbolic. This is useful for calculating the behaviour of planets and comets and such (parabolic and hyperbolic orbits are conic section extensions of Kepler's elliptical orbits). More recently, it has also become useful to calculate spacecraft trajectories.
Henri Poincaré published two now-classical monographs, "New Methods of Celestial Mechanics" (1892–1899) and "Lectures on Celestial Mechanics" (1905–1910). In them, he successfully applied the results of his research to the problem of the motion of three bodies and studied in detail the behavior of solutions (frequency, stability, asymptotic behavior, and so on). Poincaré showed that the three-body problem is not integrable. In other words, the general solution of the three-body problem cannot be expressed in terms of algebraic and transcendental functions through unambiguous coordinates and velocities of the bodies. His work in this area was the first major achievement in celestial mechanics since Isaac Newton.
These monographs include an idea of Poincaré, which later became the basis for the mathematical "chaos theory" (see, in particular, the Poincaré recurrence theorem) and the general theory of dynamical systems. He introduced the important concept of bifurcation points and proved the existence of equilibrium figures other than ellipsoids, including ring-shaped and pear-shaped figures, and established their stability. For this discovery, Poincaré received the Gold Medal of the Royal Astronomical Society (1900).
Standardisation of astronomical tables
Simon Newcomb was a Canadian-American astronomer who revised Peter Andreas Hansen's table of lunar positions. In 1877, assisted by George William Hill, he recalculated all the major astronomical constants. After 1884 he conceived, with A.M.W. Downing, a plan to resolve much international confusion on the subject. By the time he attended a standardisation conference in Paris, France, in May 1886, the international consensus was that all ephemerides should be based on Newcomb's calculations. A further conference as late as 1950 confirmed Newcomb's constants as the international standard.
Anomalous precession of Mercury
Albert Einstein explained the anomalous precession of Mercury's perihelion in his 1916 paper The Foundation of the General Theory of Relativity. General relativity led astronomers to recognize that Newtonian mechanics did not provide the highest accuracy.
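The general-relativistic contribution to Mercury's perihelion advance can be estimated from the standard formula Δφ = 6πGM/(a(1 − e²)c²) per orbit. The sketch below uses approximate textbook orbital values (not figures from this article) and recovers the familiar result of roughly 43 arcseconds per century.

```python
import math

# GR perihelion advance per orbit: dphi = 6*pi*G*M / (a*(1 - e^2)*c^2).
# Constants below are approximate textbook values, not taken from the article.
GM_sun = 1.327e20        # gravitational parameter of the Sun, m^3/s^2
c = 2.998e8              # speed of light, m/s
a = 5.79e10              # Mercury's semi-major axis, m
e = 0.2056               # Mercury's orbital eccentricity
period_days = 88.0       # Mercury's orbital period

dphi = 6 * math.pi * GM_sun / (a * (1 - e**2) * c**2)   # radians per orbit
orbits_per_century = 100 * 365.25 / period_days
arcsec = math.degrees(dphi * orbits_per_century) * 3600
print(f"GR precession ≈ {arcsec:.0f} arcsec per century")  # ≈ 43
```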
Examples of problems
Celestial motion, without additional forces such as drag forces or the thrust of a rocket, is governed by the reciprocal gravitational acceleration between masses. A generalization is the n-body problem, where a number n of masses are mutually interacting via the gravitational force. Although analytically not integrable in the general case, the integration can be well approximated numerically.
Examples:
4-body problem: spaceflight to Mars (for parts of the flight the influence of one or two bodies is very small, so that there we have a 2- or 3-body problem; see also the patched conic approximation)
3-body problem:
Quasi-satellite
Spaceflight to, and stay at a Lagrangian point
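As a minimal illustration of the numerical approximation mentioned above, the sketch below advances a set of mutually gravitating point masses with a simple kick-drift (symplectic Euler) step; the masses, positions and units are arbitrary toy values, not data from any of these examples.

```python
import numpy as np

def nbody_step(pos, vel, mass, dt, G=1.0):
    """Advance an n-body system one step with a kick-drift symplectic Euler scheme."""
    n = len(mass)
    acc = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * mass[j] * r / np.linalg.norm(r) ** 3
    vel = vel + acc * dt          # kick: update velocities from gravitational accelerations
    pos = pos + vel * dt          # drift: update positions with the new velocities
    return pos, vel

# Toy two-body setup in units where G = 1 (illustrative values, not physical data).
mass = np.array([1.0, 1e-3])
pos = np.array([[0.0, 0.0], [1.0, 0.0]])
vel = np.array([[0.0, 0.0], [0.0, 1.0]])
for _ in range(1000):
    pos, vel = nbody_step(pos, vel, mass, dt=0.01)
print(pos)
```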
In the case n = 2 (the two-body problem), the configuration is much simpler than for n ≥ 3. In this case, the system is fully integrable and exact solutions can be found.
Examples:
A binary star, e.g., Alpha Centauri (approx. the same mass)
A binary asteroid, e.g., 90 Antiope (approx. the same mass)
A further simplification is based on the "standard assumptions in astrodynamics", which include that one body, the orbiting body, is much smaller than the other, the central body. This is also often approximately valid.
Examples:
The Solar System orbiting the center of the Milky Way
A planet orbiting the Sun
A moon orbiting a planet
A spacecraft orbiting Earth, a moon, or a planet (in the latter cases the approximation only applies after arrival at that orbit)
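Under these standard assumptions the orbital period depends only on the central body's gravitational parameter μ and the orbit's semi-major axis, T = 2π√(a³/μ). The sketch below applies this to a low Earth orbit with approximate illustrative values that are not taken from this article.

```python
import math

# Period of a small body orbiting a much more massive central body: T = 2*pi*sqrt(a^3 / mu).
# mu and a below are approximate illustrative values for a ~400 km circular Earth orbit.
mu_earth = 3.986e14          # Earth's gravitational parameter, m^3/s^2
a = 6_778_000.0              # semi-major axis, m

T = 2 * math.pi * math.sqrt(a**3 / mu_earth)
print(f"Orbital period ≈ {T / 60:.1f} minutes")   # ≈ 92-93 minutes
```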
Perturbation theory
Perturbation theory comprises mathematical methods that are used to find an approximate solution to a problem which cannot be solved exactly. (It is closely related to methods used in numerical analysis, which are ancient.) The earliest use of modern perturbation theory was to deal with the otherwise unsolvable mathematical problems of celestial mechanics: Newton's solution for the orbit of the Moon, which moves noticeably differently from a simple Keplerian ellipse because of the competing gravitation of the Earth and the Sun.
Perturbation methods start with a simplified form of the original problem, which is carefully chosen to be exactly solvable. In celestial mechanics, this is usually a Keplerian ellipse, which is correct when there are only two gravitating bodies (say, the Earth and the Moon), or a circular orbit, which is only correct in special cases of two-body motion, but is often close enough for practical use.
The solved, but simplified problem is then "perturbed" to make its time-rate-of-change equations for the object's position closer to the values from the real problem, such as including the gravitational attraction of a third, more distant body (the Sun). The slight changes that result from the terms in the equations – which themselves may have been simplified yet again – are used as corrections to the original solution. Because simplifications are made at every step, the corrections are never perfect, but even one cycle of corrections often provides a remarkably better approximate solution to the real problem.
There is no requirement to stop at only one cycle of corrections. A partially corrected solution can be re-used as the new starting point for yet another cycle of perturbations and corrections. In principle, for most problems the recycling and refining of prior solutions to obtain a new generation of better solutions could continue indefinitely, to any desired finite degree of accuracy.
The common difficulty with the method is that the corrections usually progressively make the new solutions very much more complicated, so each cycle is much more difficult to manage than the previous cycle of corrections. Newton is reported to have said, regarding the problem of the Moon's orbit "It causeth my head to ache."
This general procedure – starting with a simplified problem and gradually adding corrections that make the starting point of the corrected problem closer to the real situation – is a widely used mathematical tool in advanced sciences and engineering. It is the natural extension of the "guess, check, and fix" method used anciently with numbers.
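A small worked example of this successive-correction style is solving Kepler's equation M = E − e·sin E for the eccentric anomaly E: the unperturbed guess E ≈ M is repeatedly corrected, and each cycle gives a noticeably better approximation. The values below are illustrative only.

```python
import math

# Solve Kepler's equation M = E - e*sin(E) by successive approximation:
# start from the "unperturbed" guess E = M, then repeatedly correct it.
e = 0.2      # orbital eccentricity (the small parameter of the perturbation)
M = 1.0      # mean anomaly, radians

E = M                                 # zeroth-order guess (circular orbit)
for cycle in range(6):
    E = M + e * math.sin(E)           # one cycle of correction
    residual = M - (E - e * math.sin(E))
    print(f"cycle {cycle + 1}: E = {E:.8f}, residual = {residual:.2e}")
```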
Reference frame
Problems in celestial mechanics are often posed in simplifying reference frames, such as the synodic reference frame applied to the three-body problem, where the origin coincides with the barycenter of the two larger celestial bodies. Other reference frames for n-body simulations include those that place the origin to follow the center of mass of a body, such as the heliocentric and the geocentric reference frames. The choice of reference frame gives rise to many phenomena, including the retrograde motion of superior planets while on a geocentric reference frame.
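As a brief illustration of how the choice of reference frame produces apparent retrograde motion, the sketch below models the Earth and a superior planet as coplanar circular heliocentric orbits (rough illustrative radii and periods, not precise data) and examines the planet's longitude as seen from the Earth.

```python
import numpy as np

# Apparent (geocentric) longitude of a superior planet, modelled with two coplanar
# circular heliocentric orbits. Radii and periods are rough illustrative values.
a_earth, T_earth = 1.0, 1.0        # AU, years
a_mars, T_mars = 1.52, 1.88

t = np.linspace(0, 2, 400)         # two years, sampled finely
earth = a_earth * np.exp(2j * np.pi * t / T_earth)   # heliocentric positions as complex numbers
mars = a_mars * np.exp(2j * np.pi * t / T_mars)

geocentric = mars - earth          # switch origin from the Sun to the Earth
longitude = np.unwrap(np.angle(geocentric))
retrograde = np.diff(longitude) < 0   # intervals where the apparent longitude decreases
print(f"Apparent retrograde motion during {retrograde.mean():.0%} of the sampled interval")
```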
Orbital mechanics
See also
Astrometry is a part of astronomy that deals with measuring the positions of stars and other celestial bodies, their distances and movements.
Astrophysics
Celestial navigation is a position fixing technique that was the first system devised to help sailors locate themselves on a featureless ocean.
Developmental Ephemeris or the Jet Propulsion Laboratory Developmental Ephemeris (JPL DE) is a widely used model of the solar system, which combines celestial mechanics with numerical analysis and astronomical and spacecraft data.
Dynamics of the celestial spheres concerns pre-Newtonian explanations of the causes of the motions of the stars and planets.
Dynamical time scale
Ephemeris is a compilation of positions of naturally occurring astronomical objects as well as artificial satellites in the sky at a given time or times.
Gravitation
Lunar theory attempts to account for the motions of the Moon.
Numerical analysis is a branch of mathematics, pioneered by celestial mechanicians, for calculating approximate numerical answers (such as the position of a planet in the sky) which are too difficult to solve down to a general, exact formula.
Creating a numerical model of the solar system was the original goal of celestial mechanics, and has only been imperfectly achieved. It continues to motivate research.
An orbit is the path that an object makes, around another object, whilst under the influence of a source of centripetal force, such as gravity.
Orbital elements are the parameters needed to specify a Newtonian two-body orbit uniquely.
Osculating orbit is the temporary Keplerian orbit about a central body that an object would continue on, if other perturbations were not present.
Retrograde motion is orbital motion in a system, such as a planet and its satellites, that is contrary to the direction of rotation of the central body, or more generally contrary in direction to the net angular momentum of the entire system.
Apparent retrograde motion is the periodic, apparently backwards motion of planetary bodies when viewed from the Earth (an accelerated reference frame).
Satellite is an object that orbits another object (known as its primary). The term is often used to describe an artificial satellite (as opposed to natural satellites, or moons). The common noun ‘moon’ (not capitalized) is used to mean any natural satellite of the other planets.
Tidal force is the combination of out-of-balance forces and accelerations of (mostly) solid bodies that raises tides in bodies of liquid (oceans), atmospheres, and strains planets' and satellites' crusts.
Two solutions, called VSOP82 and VSOP87, are versions of one mathematical theory for the orbits and positions of the major planets, which seeks to provide accurate positions over an extended period of time.
Notes
References
Forest R. Moulton, Introduction to Celestial Mechanics, 1984, Dover.
John E. Prussing, Bruce A. Conway, Orbital Mechanics, 1993, Oxford Univ. Press
William M. Smart, Celestial Mechanics, 1961, John Wiley.
J.M.A. Danby, Fundamentals of Celestial Mechanics, 1992, Willmann-Bell
Alessandra Celletti, Ettore Perozzi, Celestial Mechanics: The Waltz of the Planets, 2007, Springer-Praxis.
Michael Efroimsky. 2005. Gauge Freedom in Orbital Mechanics. Annals of the New York Academy of Sciences, Vol. 1065, pp. 346-374
Alessandra Celletti, Stability and Chaos in Celestial Mechanics. Springer-Praxis 2010, XVI, 264 p., Hardcover
Further reading
Celestial mechanics, Scholarpedia expert articles
External links
Astronomy of the Earth's Motion in Space, high-school level educational web site by David P. Stern
Newtonian Dynamics Undergraduate level course by Richard Fitzpatrick. This includes Lagrangian and Hamiltonian Dynamics and applications to celestial mechanics, gravitational potential theory, the 3-body problem and Lunar motion (an example of the 3-body problem with the Sun, Moon, and the Earth).
Research
Marshall Hampton's research page: Central configurations in the n-body problem
Artwork
Celestial Mechanics is a Planetarium Artwork created by D. S. Hessels and G. Dunne
Course notes
Professor Tatum's course notes at the University of Victoria
Associations
Italian Celestial Mechanics and Astrodynamics Association
Simulations
Classical mechanics
Astronomical sub-disciplines
Astrometry | Celestial mechanics | Physics,Astronomy | 2,794 |
25,824,263 | https://en.wikipedia.org/wiki/Side-approximation%20theorem | In geometric topology, the side-approximation theorem was proved by . It implies that a 2-sphere in R3 can be approximated by polyhedral 2-spheres.
References
Geometric topology
Theorems in topology | Side-approximation theorem | Mathematics | 43 |
41,183,836 | https://en.wikipedia.org/wiki/Bottromycin | Bottromycin is a macrocyclic peptide with antibiotic activity. It was first discovered in 1957 as a natural product isolated from Streptomyces bottropensis. It has been shown to inhibit methicillin-resistant Staphylococcus aureus (MRSA) and vancomycin-resistant Enterococci (VRE) among other Gram-positive bacteria and mycoplasma. Bottromycin is structurally distinct from both vancomycin, a glycopeptide antibiotic, and methicillin, a beta-lactam antibiotic.
Bottromycin binds to the A site of the ribosome and blocks the binding of aminoacyl-tRNA, therefore inhibiting bacterial protein synthesis. Although bottromycin exhibits antibacterial activity in vitro, it has not yet been developed as a clinical antibiotic, potentially due to its poor stability in blood plasma. To increase its stability in vivo, some bottromycin derivatives have been explored.
The structure of bottromycin contains a macrocyclic amidine as well as a thiazole ring. The absolute stereochemistry at several chiral centers has been determined as of 2009. In 2012, a three-dimensional solution structure of bottromycin was published. The solution structure revealed that several methyl groups are on the same face of the structure.
Bottromycin falls within the ribosomally synthesized and post-translationally modified peptide class of natural products.
History
Bottromycin was first isolated from Streptomyces bottropensis in 1957. It has since been identified in at least two other members of the genus Streptomyces; members of Streptomyces are known to be prolific producers of secondary metabolites. Bottromycin has a unique structure, consisting of the macrocyclic amidine linkage and four β-methylated amino acids. Bottromycin blocks aminoacyl tRNA binding to the ribosome by binding to the A site of the 50S subunit. Although bottromycin was discovered over 50 years ago, there was a lack of research following the initial studies on bottromycin until recent years. The lack of research is potentially a result of bottromycin's low stability in blood plasma. However, the unique structure and mode of action have recently made bottromycin a more attractive target for drug development, especially given the rise of antibiotic resistance.
Mechanism of action
The mechanism of action of bottromycin was confirmed nearly 20 years following the discovery of bottromycin. Bottromycin functions as an antibiotic through inhibition of protein synthesis. It blocks aminoacyl tRNA binding to the ribosome by binding to the A site of the 50S subunit. This results in release of aminoacyl tRNA from the ribosome and premature termination of protein synthesis. A comparison of other antibiotics known to bind to the A site of the ribosome, including micrococcin, tetracycline, streptomycin, and chloramphenicol, suggested that only bottromycin and chloramphenicol caused release of aminoacyl tRNA from the ribosome. Of those antibiotics, only micrococcin is also a macrocyclic peptide.
Structure determination
Bottromycin is produced naturally as a series of products differing in methylation patterns. All products contain valine and phenylalanine methylation. Bottromycin A2 is singly methylated on proline, bottromycin B lacks methylation on proline, and bottromycin C contains a doubly methylated proline.
A partial structure of bottromycin was reported shortly after the initial discovery of bottromycin. The first structural studies relied on traditional methods of analysis. Its peptide-like structure, including the presence of glycine and valine, was first suggested by a combination of acidic hydrolysis, acetylation, ninhydrin staining, and paper chromatography, among other experiments. The presence of a thiazole ring, along with an adjacent β-methylated phenylalanine, was established by ninhydrin staining, potassium permanganate oxidation, and comparison to synthetic standards. A methyl ester substituent was reported in 1958. The same study also reported that the Kunz hydrolysis product lacking a methyl ester was biologically inactive. Nakamura and colleagues later reported that bottromycin contained tert-leucine and cis-3-methylproline. They also proposed a linear iminohexapeptide structure.
These early structural studies were not followed up until recent years with the renewed interest in bottromycin. The structure was confirmed in the 1980s and 1990s to be a cyclic iminopeptide based on NMR studies, with a linear side chain connected to the macrocycle via an amidine linkage.
Its absolute stereochemistry, however, was not characterized until 2009. Stereochemistry at carbon 18 and 25 was proposed by comparing predicted conformers obtained using molecular dynamics to experimental constraints obtained through NMR experiments. Stereochemistry at carbon 43 was confirmed by comparing 1H NMR of authentic hydrolysis product to a chemically synthesized sample of the same fragment. Finally, optical rotation, 1H NMR, and HRMS experiments of chemically synthesized bottromycin matched that of biologically produced bottromycin.
The three-dimensional solution structure of bottromycin A2 was solved by NMR in 2012. The overall structure was obtained with good resolution (RMSD 0.74±0.59 Å), with an RMSD of 0.09±0.06 Å for the macrocycle. In this study, it was proposed that the methylated proline residue contributed to the restricted conformation of the macrocycle. The methylated proline and β-OMe alanine residues were found to be on the same face of bottromycin A2, and it was suggested that this characteristic contributed to binding of bottromycin to the ribosomal A site.
Biosynthesis
The production of bottromycin by S. bottropensis and S. scabies, as well as the production of a bottromycin analog termed bottromycin D, has been studied. It was independently confirmed in 2012 by multiple groups that bottromycin is produced as a ribosomal peptide natural product that is subsequently post-translationally modified. Before this, it was unclear whether bottromycin was produced by nonribosomal peptide synthetase (NRPS) machinery. The presence of amino acids other than the 20 proteinogenic amino acids is often a feature of NRPS products because NRPS machinery can directly incorporate other amino acids, among other chemical building blocks. Ribosomal peptide synthesis, which is the same machinery that produces all proteins found in the cell, is limited to the 20 proteinogenic amino acids. However, bottromycin was found to be a highly modified ribosomal peptide by a combination of genome mining and gene deletion studies.
In ribosomal peptide synthesis, the final product results from modifications to a linear peptide starting material translated by the ribosome from an mRNA transcript. In S. scabies the precursor peptide, termed BtmD, is a 44-amino acid peptide. The precursor peptide is termed BmbC in S. bottropensis. The amino acids forming the bottromycin core are residues 2-9 in BtmD: Gly-Pro-Val-Val-Val-Phe-Asp-Cys. In bottromycin D, the sequence is Gly-Pro-Ala-Val-Val-Phe-Asp-Cys, and the precursor peptide is termed BstA. BstA shares high sequence homology with BtmD in the follower peptide region. Unlike other ribosomal peptide natural products, which are normally synthesized with a leader peptide that is cleaved, bottromycin is synthesized with a follower peptide. The presence of a follower peptide was identified by bioinformatic analysis of the bottromycin biosynthetic cluster.
The complete biosynthetic gene cluster for bottromycin has been identified. It is predicted to contain 13 genes, including the precursor peptide (notation will follow Crone and colleagues; other studies had similar results). One of the genes in the cluster, btmL, is proposed to be a transcriptional regulator. Another gene, btmA, is proposed to export bottromycin. The remaining ten genes are expected to modify the precursor peptide btmD from a linear peptide to the final macrocyclic product.
A biosynthetic pathway has been hypothesized based on proposed gene functions. btmM, with homology to Zn+2 aminopeptidases, is predicted to cleave the N-terminal methionine residue, which is not present in the bottromycin final product. btmE and btmF both contain YcaO-like domains. Although it is unclear which enzyme is responsible for which step, it is hypothesized that one catalyzes macrocyclic amidine formation while the other catalyzes thiazoline formation. btmJ, encoding an enzyme with cytochrome P450 homology, may oxidize the thiazoline to the thiazole. btmH and btmI both have homology to hydrolytic enzymes (α/β hydrolase and metallo-dependent hydrolase, respectively), and either may catalyze follower peptide hydrolysis. An alternative proposed role for btmH or btmI is to function as a cyclodehydratase in macrocyclization. Gene deletion studies failed to elucidate the function of other proteins within the cluster.
Methyltransferases in the biosynthetic cluster
Bioinformatic analysis identified four methyltransferases within the cluster. Bioinformatics suggests that btmB is an O-methyltransferase, while the other three, btmC, btmG, and btmK, are radical S-adenosyl methionine (SAM) methyltransferases. The radical SAM methyltransferases are believed to β-methylate amino acid residues within the precursor peptide. btmC is believed to methylate phenylalanine, btmG is believed to methylate both valines, and btmK is believed to methylate proline, based on gene deletion studies.
The three putative radical SAM methyltransferases encoded within the pathway are interesting for both mechanistic and biosynthetic reasons. Radical SAM methyltransferases are likely to methylate substrates by an unusual mechanism. Biosynthetically, β-methylations of amino acids are highly unusual in natural products. Polytheonamide B, a peptide natural product produced by a marine symbiont, is the only other structurally characterized example of direct β-methylation of a peptide natural product. The proposed methyl transfer from a SAM-utilizing enzyme was supported by earlier feeding studies with labeled methionine; labeled methionine is used because methionine is converted into SAM within cells. Even further, this study used stereospecifically labeled methionine ([methyl-(2H-3H)]-(2S, methyl-R)-methionine) to show that methylation occurred with a net retention of stereochemistry at the methyl group. The author speculated that net retention indicated a radical mechanism with a B12 intermediate. Radical transfer with a Cobalamin B12 cofactor and SAM has been shown with the few characterized radical SAM methyltransferases. Although the evidence points to radical β-methylation during bottromycin biosynthesis, it remains to be seen whether bioinformatic hypothesis and feeding studies will be supported by in vitro activity assays.
The Val3Ala substitution in bottromycin D does not change the β-methylation pattern between bottromycin A2 and D because Val3 is the only valine not methylated in bottromycin A2. As such, there are still three predicted radical SAM dependent enzymes in the bottromycin D biosynthetic cluster: bstC, bstF, and bstJ.
As of 2013, all published biosynthetic studies have been bioinformatic or cell-based. No biochemical assays directly demonstrating protein function have yet been published. It is likely that in vitro mechanistic studies to better elucidate the biosynthetic pathway will be forthcoming.
Total synthesis
The total synthesis of bottromycin was accomplished in 2009. The synthesis was achieved in 17 steps. Although bottromycin is a peptide-based natural product, it contains an unusual macrocycle and thiazole heterocycle, so the total synthesis could not be accomplished using traditional solid-phase peptide synthesis alone. The synthesis was accomplished using a combination of peptide coupling and other methods. To obtain the primary thia-β-Ala-OMe intermediate, a sequence of condensation, Mannich reaction, and palladium-catalyzed decarboxylation steps was performed. This intermediate was prepared stereoselectively. To obtain the amidine linkage, a tripeptide intermediate was coupled to a phthaloyl-protected thioamide via mercury-mediated condensation using mercury(II) trifluoromethanesulfonate to yield a branched amidine intermediate. To obtain the final product macrocycle, macrolactamization of the amidine-containing intermediate was required. Macrolactamization was performed with 1-ethyl-3-(3-dimethylaminopropyl)carbodiimide (EDCI) and yielded the final product, bottromycin A2. To confirm that the synthesized bottromycin A2 had the same stereochemistry as natural bottromycin A2, the product was studied by optical rotation, 1H and 13C NMR, IR, and HRMS. The data were found to match those of isolated bottromycin A2. Further, the synthetic sample of bottromycin was also found to have antibacterial activity against both MRSA and VRE, although quantitative data were not reported.
In 2012, an alternative synthesis of the bottromycin macrocyclic ring system and amidine linkage was reported. The synthesis was achieved in 10 steps. Unlike the previous synthesis, Ackerman and colleagues synthesized a linear peptide and achieved intramolecular amidine formation using an S-methylated endothiopeptide. The endothiopeptide was obtained by a thio-Ugi reaction. The resulting macrocycle was obtained as a racemic mixture at the amidine linkage.
Derivatives
Following the total synthesis of bottromycin, Kobayashi and colleagues synthesized a series of bottromycin derivatives and evaluated their anti-MRSA and anti-VRE activity. Only derivatives of the methyl ester moiety were explored, as they found that the methyl ester was both important for antibacterial activity and unstable in blood plasma. A series of seventeen derivatives were synthesized, with derivatives falling into three general categories: amide derivatives, urea derivatives, and ketone derivatives. All analogs except the carboxylic acid and hydrazide analogs were derivatized from isolated bottromycin A2 using an activated azide ester. The derivatives were tested against six Gram-positive bacterial strains: Staphylococcus aureus FDA209P, S. aureus Smith, MRSA HH-1, MRSA 92-1191, Enterococcus faecalis NCTC12201, and E. faecalis NCTC12203 (both VRE).
Bottromycin A2 had low micromolar activity against all the strains tested, ranging from an MIC of 0.5 μg/mL in E. faecalis NCTC12203 to 2 μg/mL in MRSA HH-1. The amide and urea derivative families were found to have weaker antibacterial activity than bottromycin A2 against S. aureus, MRSA, and VRE. The MIC values for the amide and urea derivatives were generally four times greater than those for bottromycin A2. They were, however, significantly more stable in mouse plasma than bottromycin A2. Bottromycin A2 completely degraded in mouse plasma after 10 minutes and exhibited 0% residual activity after exposure to rat serum. Only one derivative had lower than 50% residual activity. In contrast, many derivatives retained a significant percentage of residual anti-MRSA activity following exposure to serum. Thioester intermediates to the ketone derivatives were found to be unstable, exhibiting 0% residual activity, although they had improved antibacterial activity, exhibiting sub-micromolar MIC values. The propyl ketone was found to be the most promising derivative of all the analogs obtained, both exhibiting antibacterial activity against the bacterial strains tested and stability in plasma, retaining 100% residual activity. The MIC values obtained for the propyl derivative were the same as those found for bottromycin A2 except in the case of NCTC12201, which had an MIC of 2 μg/mL for the derivative and an MIC of 1 μg/mL for bottromycin A2. A summary of MIC values for tested bacterial strains is shown below.
Even the least active bottromycin derivatives exhibited greater anti-VRE activity than vancomycin, which was used as a control antibiotic in this study. The propyl derivative and bottromycin A2 had similar antimicrobial activity to linezolid, a synthetic antibiotic active against Gram-positive bacteria including MRSA and VRE, across all the bacterial strains studied. Overall, the results of this study suggested that further modifications of bottromycin may lead to a more stable, effective antibiotic.
A natural derivative of bottromycin, bottromycin D, has also been identified. It is produced in a marine Streptomyces species, strain WMMB272. Although the methyl ester is still present in bottromycin D, one of the macrocyclic valines is mutated to an alanine. The minimum inhibitory concentration (MIC) of bottromycin D was determined, and bottromycin D was found to be only slightly less active than bottromycin A2 (2 μg/mL for bottromycin D vs. 1 μg/mL for bottromycin A2). The authors postulated that the greater conformational flexibility of bottromycin D may be responsible for its lower activity.
No further antibacterial studies of synthetic or biosynthetic bottromycin derivatives have been reported in the literature as of 2013. The search for efficacious analogs will be enabled by bottromycin’s status as a ribosomal peptide. Analogs may be explored biosynthetically by changing the sequence of the precursor peptide; a change in amino acid sequence will lead directly to a modified bottromycin structure.
Clinical potential
As of 2013, bottromycin has not been approved for any clinical applications, nor has it been tested in humans. The in vivo stability of bottromycin must be improved before it can be considered as a drug candidate. Work by Kobayashi and colleagues has already begun to address this issue, but more work may be in progress. The need to find new antibiotics to combat antibiotic resistance means that biologic and synthetic interest in bottromycin will likely continue. A combination of biologic and synthetic techniques may yield both an efficacious and stable bottromycin analog for development as a potential drug candidate.
See also
Antibiotic
Streptomyces
Secondary metabolite
Peptide
MRSA
Vancomycin-resistant Enterococcus
References
Peptides
Antibiotics
Streptomyces
2-Thiazolyl compounds
Total synthesis | Bottromycin | Chemistry,Biology | 4,218 |
13,793,747 | https://en.wikipedia.org/wiki/Group%20method%20of%20data%20handling | Group method of data handling (GMDH) is a family of inductive algorithms for computer-based mathematical modeling of multi-parametric datasets that features fully automatic structural and parametric optimization of models.
GMDH is used in such fields as data mining, knowledge discovery, prediction, complex systems modeling, optimization and pattern recognition. GMDH algorithms are characterized by inductive procedure that performs sorting-out of gradually complicated polynomial models and selecting the best solution by means of the external criterion. The last section of contains a summary of the applications of GMDH in the 1970s.
Other names include "polynomial feedforward neural network", or "self-organization of models". It was one of the first deep learning methods, used to train an eight-layer neural net in 1971.
Mathematical content
Polynomial regression
This section is based on.
This is the general problem of statistical modelling of data: Consider a dataset $\{(x_1^{(k)}, \dots, x_p^{(k)};\ y^{(k)})\}_{k=1}^{N}$ with $N$ points. Each point contains $p$ observations and one target $y$ to predict. How to best predict the target based on the observations?
First, we split the full dataset into two parts: a training set and a validation set. The training set would be used to fit more and more model parameters, and the validation set would be used to decide which parameters to include, and when to stop fitting completely.
The GMDH starts by considering degree-2 polynomials in 2 variables. Suppose we want to predict the target using just the $x_i, x_j$ parts of an observation, and using only degree-2 polynomials; then the most we can do is
$$y \approx a_0 + a_1 x_i + a_2 x_j + a_3 x_i x_j + a_4 x_i^2 + a_5 x_j^2,$$
where the parameters $a_0, \dots, a_5$ are computed by linear regression on the training set. Now, the parameters depend on which pair $(x_i, x_j)$ we have chosen, and we do not know which pair we should choose, so we choose all of them. That is, we perform all $\binom{p}{2}$ such polynomial regressions, obtaining $\binom{p}{2}$ polynomial models of the dataset.
We do not want to accept all of these polynomial models, since that would be far too many. To select only the best subset of the models, we run each model on the validation dataset and keep the models whose mean-squared error is below a threshold. We also write down the smallest mean-squared error achieved, $\mathrm{minMSE}_1$.
Suppose that after this process we have obtained a set of $n_1$ models. We now run the models on the training dataset to obtain a sequence of transformed observations $z_1, \dots, z_{n_1}$ for each data point. The same algorithm can now be run again on these transformed observations.
The algorithm continues, giving us $\mathrm{minMSE}_2, \mathrm{minMSE}_3, \dots$. As long as each $\mathrm{minMSE}_L$ is smaller than the previous one, the process continues, giving us increasingly deep models. As soon as some $\mathrm{minMSE}_{L+1} \geq \mathrm{minMSE}_L$, the algorithm terminates. The last layer fitted (layer $L+1$) is discarded, as it has overfit the training set. The previous layers are output.
More sophisticated methods for deciding when to terminate are possible. For example, one might keep running the algorithm for several more steps, in the hope of passing a temporary rise in $\mathrm{minMSE}$.
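A minimal sketch of one such layer, following the scheme described above (degree-2 models on every pair of inputs, selection by validation mean-squared error), might look like the following. It is an illustrative reading of the procedure, not a reference implementation of any particular GMDH package, and the helper names are invented for this example.

```python
import itertools
import numpy as np

def quadratic_features(xi, xj):
    """Design matrix for a degree-2 polynomial in two variables."""
    return np.column_stack([np.ones_like(xi), xi, xj, xi * xj, xi**2, xj**2])

def gmdh_layer(X_train, y_train, X_val, y_val, keep=8):
    """Fit degree-2 models on every pair of columns; keep the best by validation MSE."""
    models = []
    for i, j in itertools.combinations(range(X_train.shape[1]), 2):
        A = quadratic_features(X_train[:, i], X_train[:, j])
        coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)      # fit on the training set
        pred_val = quadratic_features(X_val[:, i], X_val[:, j]) @ coef
        mse = np.mean((pred_val - y_val) ** 2)                   # score on the validation set
        models.append((mse, i, j, coef))
    models.sort(key=lambda m: m[0])
    best = models[:keep]                       # surviving units become the next layer's inputs
    min_mse = best[0][0]                       # external criterion value for this layer
    Z_train = np.column_stack([quadratic_features(X_train[:, i], X_train[:, j]) @ c
                               for _, i, j, c in best])
    Z_val = np.column_stack([quadratic_features(X_val[:, i], X_val[:, j]) @ c
                             for _, i, j, c in best])
    return Z_train, Z_val, min_mse
```

Stacking layers would feed the returned transformed observations back in as the new inputs, stopping once a layer's minimum validation error stops decreasing, as described above.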
In general
Instead of a degree-2 polynomial in 2 variables, each unit may use higher-degree polynomials in more variables, up to the full Kolmogorov–Gabor polynomial:
$$Y(x_1,\dots,x_n) = a_0 + \sum_{i=1}^{n} a_i x_i + \sum_{i=1}^{n}\sum_{j=i}^{n} a_{ij} x_i x_j + \sum_{i=1}^{n}\sum_{j=i}^{n}\sum_{k=j}^{n} a_{ijk} x_i x_j x_k + \cdots \qquad (1)$$
And more generally, a GMDH model with multiple inputs and one output is a subset of components of the base function (1):
$$Y(x_1,\dots,x_n) = a_0 + \sum_{i=1}^{m} a_i f_i,$$
where the $f_i$ are elementary functions dependent on different sets of inputs, the $a_i$ are coefficients, and $m$ is the number of components of the base function.
External criteria
External criteria are optimization objectives for the model, such as minimizing mean-squared error on the validation set, as given above. The most common criteria are:
Criterion of Regularity (CR) – least mean squares on a validation set.
Least squares on a cross-validation set.
Criterion of Minimum bias or Consistency – squared difference between the estimated outputs (or coefficients vectors) of two models fit on the A and B set, divided by squared predictions on the B set.
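As a rough sketch under the definitions just given (variable names are hypothetical), the two most common criteria might be computed as follows.

```python
import numpy as np

def criterion_regularity(y_val, y_pred_val):
    """Criterion of Regularity: mean squared error on the validation (B) set."""
    return np.mean((y_val - y_pred_val) ** 2)

def criterion_consistency(pred_B_from_model_A, pred_B_from_model_B):
    """Minimum-bias / consistency criterion: squared difference between the outputs of
    two models fit on subsets A and B, divided by the squared predictions on B."""
    num = np.sum((pred_B_from_model_A - pred_B_from_model_B) ** 2)
    den = np.sum(pred_B_from_model_B ** 2)
    return num / den
```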
Idea
Like linear regression, which fits a linear equation over data, GMDH fits arbitrarily high orders of polynomial equations over data.
To choose between models, two or more subsets of a data sample are used, similar to the train-validation-test split.
GMDH combined ideas from black box modeling, successive genetic selection of pairwise features, Gabor's principle of "freedom of decisions choice", and Beer's principle of external additions.
Inspired by an analogy between constructing a model out of noisy data and sending messages through a noisy channel, they proposed "noise-immune modelling": the higher the noise, the fewer parameters the optimal model must have, since the noisy channel does not allow more bits to be sent through.
The model is structured as a feedforward neural network, but without restrictions on the depth; the authors had a procedure for automatic generation of model structures, which imitates the process of biological selection with pairwise genetic features.
History
The method was originated in 1968 by Prof. Alexey G. Ivakhnenko in the Institute of Cybernetics in Kyiv.
The period 1968–1971 is characterized by the application of only the regularity criterion for solving the problems of identification, pattern recognition and short-term forecasting. As reference functions, polynomials, logical nets, fuzzy Zadeh sets and Bayes probability formulas were used. The authors were stimulated by the very high accuracy of forecasting with the new approach. Noise immunity was not investigated.
Period 1972–1975. The problem of modeling noised data and incomplete information bases was solved. Multicriteria selection and utilization of additional a priori information for increasing noise immunity were proposed. The best experiments showed that, with an extended definition of the optimal model by an additional criterion, the noise level can be ten times greater than the signal. The approach was then improved using Shannon's theorem from general communication theory.
Period 1976–1979. The convergence of multilayered GMDH algorithms was investigated. It was shown that some multilayered algorithms have a "multilayerness error" – analogous to the static error of control systems. In 1977 a solution of objective systems analysis problems by multilayered GMDH algorithms was proposed. It turned out that sorting-out by a criteria ensemble finds the only optimal system of equations and can therefore identify the elements of a complex object and their main input and output variables.
Period 1980–1988. Many important theoretical results were obtained. It became clear that full physical models cannot be used for long-term forecasting. It was proved that non-physical models of GMDH are more accurate for approximation and forecasting than physical models of regression analysis. Two-level algorithms which use two different time scales for modeling were developed.
Since 1989, new algorithms (AC, OCC, PF) for non-parametric modeling of fuzzy objects and SLP for expert systems have been developed and investigated. The present stage of GMDH development can be described as a blossoming of deep learning neural nets and parallel inductive algorithms for multiprocessor computers. Such procedures are currently used in deep learning networks.
GMDH-type neural networks
There are many different ways to choose an order for partial models consideration. The very first consideration order used in GMDH, and originally called the multilayered inductive procedure, is the most popular one. It is a sorting-out of gradually complicated models generated from a base function. The best model is indicated by the minimum of the external criterion characteristic. The multilayered procedure is equivalent to an artificial neural network with a polynomial activation function of neurons. Therefore, an algorithm with such an approach is usually referred to as a GMDH-type neural network or polynomial neural network. Li showed that a GMDH-type neural network performed better than classical forecasting algorithms such as single exponential smoothing, double exponential smoothing, ARIMA and a back-propagation neural network.
Combinatorial GMDH
Another important approach to partial models consideration that is becoming more and more popular is a combinatorial search, either limited or full. This approach has some advantages over polynomial neural networks, but requires considerable computational power and thus is not effective for objects with a large number of inputs. An important achievement of combinatorial GMDH is that it fully outperforms the linear regression approach if the noise level in the input data is greater than zero. It guarantees that the most optimal model will be found during exhaustive sorting.
Basic Combinatorial algorithm makes the following steps:
Divides data sample at least into two samples A and B.
Generates subsamples from A according to partial models with steadily increasing complexity.
Estimates coefficients of partial models at each layer of models complexity.
Calculates value of external criterion for models on sample B.
Chooses the best model (set of models) indicated by minimal value of the criterion.
For the selected model of optimal complexity recalculate coefficients on a whole data sample.
In contrast to GMDH-type neural networks, the combinatorial algorithm usually does not stop at a certain level of complexity, because a point of increase of the criterion value can be simply a local minimum.
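A minimal sketch of the combinatorial search described by steps 1–5 above, using a simple linear base function and invented helper names, might look like the following; step 6 (refitting the selected model on the whole sample) and the many refinements of real implementations are omitted.

```python
import itertools
import numpy as np

def combi_gmdh(X_A, y_A, X_B, y_B, max_terms=None):
    """Exhaustively evaluate all subsets of input columns as linear partial models,
    fit coefficients on sample A, and score each model by its MSE on sample B."""
    n_inputs = X_A.shape[1]
    max_terms = max_terms or n_inputs
    best = (np.inf, None, None)
    for k in range(1, max_terms + 1):                        # steadily increasing complexity
        for cols in itertools.combinations(range(n_inputs), k):
            A = np.column_stack([np.ones(len(y_A))] + [X_A[:, c] for c in cols])
            coef, *_ = np.linalg.lstsq(A, y_A, rcond=None)   # coefficients from sample A
            B = np.column_stack([np.ones(len(y_B))] + [X_B[:, c] for c in cols])
            crit = np.mean((B @ coef - y_B) ** 2)            # external criterion on sample B
            if crit < best[0]:
                best = (crit, cols, coef)
    return best   # (criterion value, selected inputs, coefficients fit on A)
```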
Algorithms
Combinatorial (COMBI)
Multilayered Iterative (MIA)
GN
Objective System Analysis (OSA)
Harmonical
Two-level (ARIMAD)
Multiplicative–Additive (MAA)
Objective Computer Clusterization (OCC);
Pointing Finger (PF) clusterization algorithm;
Analogues Complexing (AC)
Harmonical Rediscretization
Algorithm on the base of Multilayered Theory of Statistical Decisions (MTSD)
Group of Adaptive Models Evolution (GAME)
Software implementations
FAKE GAME Project — Open source. Cross-platform.
GEvom — Free upon request for academic use. Windows-only.
GMDH Shell — GMDH-based, predictive analytics and time series forecasting software. Free Academic Licensing and Free Trial version available. Windows-only.
KnowledgeMiner — Commercial product. Mac OS X-only. Free Demo version available.
PNN Discovery client — Commercial product.
Sciengy RPF! — Freeware, Open source.
wGMDH — Weka plugin, Open source.
R Package – Open source.
R Package for regression tasks – Open source.
Python library of MIA algorithm - Open source.
Python library of basic GMDH algorithms (COMBI, MULTI, MIA, RIA) - Open source.
References
Further reading
A.G. Ivakhnenko. Heuristic Self-Organization in Problems of Engineering Cybernetics, Automatica, vol.6, 1970 — p. 207-219.
S.J. Farlow. Self-Organizing Methods in Modelling: GMDH Type Algorithms. New York, Basel: Marcel Dekker Inc., 1984, 350 p.
H.R. Madala, A.G. Ivakhnenko. Inductive Learning Algorithms for Complex Systems Modeling. CRC Press, Boca Raton, 1994.
External links
Library of GMDH books and articles
Group Method of Data Handling
Computational statistics
Artificial neural networks
Classification algorithms
Regression variable selection
Soviet inventions | Group method of data handling | Mathematics | 2,161 |
35,046,764 | https://en.wikipedia.org/wiki/Rottlerin | Rottlerin (mallotoxin) is a polyphenol natural product isolated from the Asian tree Mallotus philippensis. Rottlerin displays a complex spectrum of pharmacology.
Effects
Uncoupler of oxidative phosphorylation
Rottlerin has been shown to be an uncoupler of mitochondrial oxidative phosphorylation.
Potassium channel opener
Rottlerin is a potent large conductance potassium channel (BKCa++) opener. BKCa++ is found in the inner mitochondrial membrane of cardiomyocytes. Opening these channels is beneficial for post-ischemic changes in vasodilation. Other BKCa++ channel openers are reported to limit the mitochondrial calcium overload due to ischemia. Rottlerin is also capable of reducing oxygen radical formation.
Other BKCa++ channel openers (NS1619, NS11021 and DiCl-DHAA) have been reported to have cardio-protective effects after ischemic-reperfusion injury. There were reductions in mitochondrial Ca++ overload, mitochondrial depolarization, increased cell viability and improved function in the whole heart.
Mallotoxin is also a hERG potassium channel activator.
Role in cardioplegia reperfusion
Clements et al. reported that rottlerin improves the recovery of isolated rat hearts perfused with buffer after cold cardioplegic arrest. A majority of patients recover after cardioplegic arrest, but some develop a cardiac low-output syndrome attributable in part to depressed left ventricular or atrial contractility, which increases the chance of death.
Contractility and vascular effects
Rottlerin increases contractility in the isolated heart independently of its vascular effects, and also enhances perfusion through vasomotor activity. The activation of BKCa++ channels by rottlerin relaxes coronary smooth muscle and improves myocardial perfusion after cardioplegia.
Myocardial stunning is associated with oxidant radical damage and calcium overload. Contractile abnormalities can occur through oxidant-dependent damage and also through calcium overload in the mitochondria resulting in mitochondrial damage. BKCa++ channels reside in the inner mitochondrial membrane and their activation is proposed to increase K+ accumulation in mitochondria. This limits Ca++ influx into the mitochondria, reducing mitochondrial depolarization and permeability transition pore opening. This may result in less mitochondrial damage and therefore greater contractility, since there is a decrease in apoptosis compared to no stimulation of BKCa++ channels.
Akt activation
Rottlerin also enhances the cardioplegia-induced phosphorylation of Akt on the activation residue Thr308. Akt activation modulates mitochondrial depolarization and the permeability transition pore. Clements et al. found that Akt functions downstream of the BKCa++ channels and its activation is considered beneficial after ischemic-reperfusion injury. It is unclear what the specific role of Akt may play in modulating of myocardial function after rottlerin treatment of cardioplegia. More research needs to be done to examine if Akt is necessary to improve cardiac function when rottlerin is administered.
Antioxidant properties
The antioxidant properties of rottlerin have been demonstrated but it is unclear whether the effects are because of BKCa++ channel opening or an additional mechanism of rottlerin. There was no oxygen dependent damage found by rottlerin in the study conducted by Clements et al.
Ineffective PKCδ selective inhibitor
Rottlerin has been reported to be a PKCδ inhibitor. PKCδ has been implicated in depressing cardiac function and cell death after ischemia-reperfusion injury as well as promoting vascular smooth muscle contraction and decreasing perfusion. However, the role of rottlerin as a specific PKCδ inhibitor has been questioned. There have been several studies using rottlerin as a PKCδ selective inhibitor based on in vitro studies, but some studies showed it did not block PKCδ activity and did block other kinase and non-kinase proteins in vitro. Rottlerin also uncouples mitochondria at high doses and results in depolarization of the mitochondrial membrane potential. It was found to reduce ATP levels, activate 5'-AMP-activated protein kinase and affect mitochondrial production of reactive oxygen species (ROS). It is difficult to say that rottlerin is a selective inhibitor of PKCδ since there are biological and biochemical processes that are PKCδ –independent that may affect outcomes. A proposed mechanism of why rottlerin was found to inhibit PKCδ is that it decreased ATP levels and can block PKCδ tyrosine phosphorylation and activation.
Sources
The Kamala tree, also known as Mallotus philippensis, grows in Southeast Asia. The fruit of this tree is covered with a red powder called kamala, which is used locally to make dye for textiles and syrup, and as a traditional remedy for tapeworm because of its laxative effect. Other traditional uses include skin afflictions, eye diseases, bronchitis, abdominal disease, and spleen enlargement, but scientific evidence for these uses is lacking.
References
Phloroglucinols
HERG activator
Potassium channel openers
Plant toxins | Rottlerin | Chemistry | 1,113 |
30,148,162 | https://en.wikipedia.org/wiki/Self-drying%20concrete%20technology | Self-drying concrete technology is found in certain cementitious patching and leveling materials and tile-setting mortars used in the flooring industry. Self-drying technology allows the cement mix to consume all of its mix water while curing, eliminating the need for excess water to evaporate prior to installing flooring. Traditional floor coverings, such as VCT, sheet vinyl, carpet and ceramic tile, can be installed before the material is completely dry and as soon as it hardens, which typically happens in the first two hours after placement.
Traditional concrete has a water:cement ratio of about 0.5, which refers to the weight of the water divided by the weight of the cement. A water:cement ratio of 0.5 provides good workability while keeping the amount of excess water in the mix fairly low. Without at least this much extra water, the concrete would be too dry to place.
The chemical reaction of Portland cement and water that is known as hydration, which is necessary for the strengthening of the concrete, requires a water:cement ratio of only about 0.25. With a water:cement ratio of 0.5, the mix therefore contains twice as much water as hydration needs, and this excess water must evaporate before flooring can be installed. Note that the commonly cited figure of 28 days defines only the design strength of the concrete and says nothing about its dryness; for example, a 10-year-old concrete slab can contain more moisture than a 28-day-old slab. Conversely, a self-drying concrete blend consumes all of its mix water with a water:cement ratio of up to 0.6, maintaining good workability while allowing flooring to be installed before the material is completely dry.
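A quick calculation makes the excess-water argument explicit; the 100 kg cement mass below is an arbitrary example value, and the ratios are simply those quoted above.

cement = 100.0                        # kg of cement (example value)
mix_water = 0.5 * cement              # 50 kg of water at a 0.5 water:cement ratio
hydration_water = 0.25 * cement       # 25 kg actually bound by hydration
excess = mix_water - hydration_water  # 25 kg that must evaporate before flooring

print(f"Excess water: {excess} kg ({excess / mix_water:.0%} of the mix water)")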
There are also cement products that are partially self-drying, meaning that they use a high percentage of their mix water for hydration as opposed to using 100% of it. This type of product might be used when the flooring does not need to be installed the same day but must still be installed more quickly than traditional concrete would allow. For instance, products that are 80% self-drying allow flooring to be installed the next day, typically after a 16-hour cure.
Self-drying technology was developed by Ardex in Germany and was introduced in the United States in 1978.
Concrete
Composite materials | Self-drying concrete technology | Physics,Engineering | 490 |
46,625,050 | https://en.wikipedia.org/wiki/Digital5 | Digital5 is the online programming division of TV5 Network Inc. and currently headed by Chot Reyes, who also served as the chief of Sports5.
It produces content (also partnering with some productions) that can be viewed on the network's online portals. Digital5's content includes lifestyle, travel, news, business, sports, comedy, etc. It also managed the news portal, InterAksyon.com, together with News5. Digital5 also produced shows for GG Network, the first online network catering to electronic gamers.
Digital5 Programs
These are the programs produced by Digital5 (past, ongoing and upcoming) with its description
D5.studio
Aside from original programs, online re-runs of past and present TV5 shows are also uploaded in the website. Whenever possible, select Digital5 Programs are brought to TV5 as catch-up episodes (with short length online videos of the shows being combined for television and rearranged with playout to/from commercial breaks).
Baon Fix (Host: Patti Grandidge / Description: Quick Tips on Making a "Baon")
Bloom (Hosts: Mika Martinez, Maggie Wilson / Genre: Women Magazine)
Clash of Class (Description: Battle and Comparison)
Good Times with Mo: The Podcast (Hosts: Mo Twister and various co-hosts / Genre: Talk show on love & sex)
Jinrilationships: A Survival Guide to the Dating Life (Host: Jinri Park / Genre: Romantic-comedy)
Like A Bossing (Host: Anthony Pangilinan / Description: Magazine show about Entrepreneurs)
Kwentong Barbero (Mang Ponso / Genre: Typical comedy)
Phenoms (Starring: Kiefer Ravena and Alyssa Valdez / Genre: Reality)
Spinnr Sessions (Genre: Live Music Sessions from various music artists)
Tanods (Starring: Martin Escudero, Jun Sabayton, Bea Benedicto, Jinri Park / Genre: Sitcom)
Forever Sucks (Starring: Jasmine Curtis-Smith, JC Santos, Ian Batherson / Genre: Drama)
Rock U (Genre: Animated series)
Bolero Rap Battles (Genre: Rap Battle League)
News5.com.ph
Kontrabando/Duty, Devotion and Service (Hosts: Ramon Bautista, Lourd de Veyra, Jun Sabayton and RA Rivera with Generoso Cupal, Bea Benedicto, Bart Bartolome, Epe Salas and Mackhie Suela (occasionally with Nikki Veron Cruz and Angel Francisco) / Genre: News Satire)
NewsRoom 5 (Hosts: Branden Milla and Bea Benedicto / Description: Human Interest Stories)
Sports5.ph
Philippine Basketball Association and Philippine Superliga games live streaming are also available in this website, with no commercial breaks.
The Bro Show (Hosts: Jason Webb, Richard del Rosario and Mico Halili / Description: Sports talk show)
Kicksplorer (Host: James Velasquez / Description: Kicks & Shoes Review)
Nth Degree (Hosts: Dominic Uy and Kevin Limjoco / Description: Consumer Reviews)
No Holds Barred: We Ask The Questions, You Get The Answers (Host: Quinito Henson / Description: In-depth interviews with sports personalities)
On Cam (Hosts: Apple David, Mara Aquino and Carla Lizardo / Description: Inside Look on the Player's Inner Sides)
Pinoy Wrestling Revolution
SELfie! (Host: Sel Guevara / Description: Sports on Social Media)
Sports5 Pre Game Show (aired before the PBA games)
The Perfect Round (Hosts: Dominic Uy and Cookie La'O / Description: Golf News)
They Call Me Coach (Host: Chot Reyes / Description: Basic Techniques on Coaching and interviews with renowned coaches from different sports)
GG Network
GG Stream Team
References
External links
TV5
Sports5
News5 Everywhere
TV5 Network
Streaming television
Internet properties established in 2015
2015 establishments in the Philippines | Digital5 | Technology | 806 |
42,352,871 | https://en.wikipedia.org/wiki/Hidden%20states%20of%20matter | A hidden state of matter is a state of matter which cannot be reached under ergodic conditions, and is therefore distinct from known thermodynamic phases of the material. Examples exist in condensed matter systems, and are typically reached by the non-ergodic conditions created through laser photo excitation.
Short-lived hidden states of matter have also been reported in laser-excited crystals. More recently, a persistent hidden state was discovered in a crystal of tantalum(IV) sulfide (TaS2); this state is stable at low temperatures.
A hidden state of matter is not to be confused with hidden order, which exists in equilibrium, but is not immediately apparent or easily observed.
Using ultrashort laser pulses impinging on solid state matter, the system may be knocked out of equilibrium so that not only are the individual subsystems out of equilibrium with each other but also internally. Under such conditions, new states of matter may be created which are not otherwise reachable under equilibrium, ergodic system evolution.
Such states are usually unstable and decay very rapidly, typically in nanoseconds or less. The difficulty is in distinguishing a genuine hidden state from one which is simply out of thermal equilibrium.
Probably the first reported instance of a photoinduced state is in the organic molecular compound TTF-CA, which turns from a neutral to an ionic species as a result of excitation by laser pulses. However, a similar transformation is also possible by the application of pressure, so strictly speaking the photoinduced transition is not to a hidden state under the definition given in the introductory paragraph. A few further examples are given in the references.
Photoexcitation has been shown to produce persistent states in vanadates and manganite materials,
leading to filamentary paths of a modified charge ordered phase which is sustained by a passing current. Transient superconductivity was also reported in cuprates.
A photoexcited transition to an H state
A hypothetical schematic diagram for the transition to an H state by photoexcitation is shown in the figure. An absorbed photon promotes an electron from the ground state G to an excited state E (red arrow). State E rapidly relaxes via Franck-Condon relaxation to an intermediate, locally reordered state I. Through interactions with others of its kind, this state collectively orders to form a macroscopically ordered metastable state H, further lowering its energy as a result. The new state has a broken symmetry with respect to the G or E state, and may also involve further relaxation compared to the I state. The barrier EB prevents state H from reverting to the ground state G. If the barrier is sufficiently large compared to the thermal energy kBT, where kB is the Boltzmann constant, the H state can be stable indefinitely.
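The condition that the barrier EB be large compared with kBT can be made quantitative with a simple Arrhenius-type escape estimate; the attempt time and the example barrier in the sketch below are illustrative assumptions, not values taken from the experiments discussed in this article.

import numpy as np

k_B = 8.617e-5     # Boltzmann constant in eV/K
tau_0 = 1e-12      # assumed microscopic attempt time in seconds (illustrative)

def h_state_lifetime(E_B_eV, T_K):
    # Arrhenius estimate of the metastable H-state lifetime.
    return tau_0 * np.exp(E_B_eV / (k_B * T_K))

# An assumed 0.5 eV barrier is effectively stable at 77 K (~1e21 s)
# but decays in well under a second at 300 K (~2e-4 s).
for T in (77, 300):
    print(T, "K:", h_state_lifetime(0.5, T), "s")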
References
Condensed matter physics
Engineering thermodynamics
Phases of matter | Hidden states of matter | Physics,Chemistry,Materials_science,Engineering | 591 |
26,538,160 | https://en.wikipedia.org/wiki/The%20Wild%20Animal%20Sanctuary | The Wild Animal Sanctuary is an animal sanctuary in Keenesburg, Colorado, United States. The sanctuary specializes in rescuing and caring for large predators which are being ill-treated, for which their owners can no longer care, or which might otherwise be euthanized. It is a 501(c)(3) nonprofit organization and a state and federally licensed zoological facility.
Created in 1980, The Wild Animal Sanctuary is situated on grassland northeast of Denver, and has helped over 1,000 animals since it first opened. By early 2022 it was home to more than 550 animals, cared for by 192 staff and volunteers, and the group announced the purchase of a major addition, a 9,004-acre ranch near Springfield, Colorado.
Mission
The stated mission of the sanctuary is "to rescue captive large Carnivores who have been abused, abandoned, illegally kept or exploited; to create for them a wonderful life for as long as they live; and to educate about the causes and solutions to the Captive Wildlife Crisis." The sanctuary states that there are many large carnivores living outside the zoo system in the United States, including 4000 or so tigers living as pets in Texas alone, and many of these come from the black market trade in exotic animals.
History
Pat Craig started The Wild Animal Sanctuary when he took in a jaguar cub that he kept on a licensed facility on his family's farm outside Boulder, Colorado. The animals were soon moved to Lyons, Colorado to provide additional space. After 8 years in Lyons, a limestone quarry was opened nearby, forcing the sanctuary to move.
In 2005, largely because of relief efforts for Hurricane Katrina and the Indonesia tsunami, donations to the sanctuary decreased significantly, and by mid-2006 the staff thought the sanctuary would need to close. In order to help raise money, the sanctuary was opened to the public, started selling donated merchandise, brought in more volunteers, and started a program of sponsoring individual animals.
By 2007 the sanctuary was using of its site for rescued animal habitats. By 2010 it had of habitats.
In February 2011, the sanctuary, in cooperation with Animal Defenders International, Bob Barker, and the Bolivian government (which had recently enacted legislation outlawing performing wild animals), received 25 lions from circuses in Bolivia that had typically housed the lions in crates for transport. The sanctuary built a fabric covered structure isolated from the main facility to house the lions while they got acclimated to the Colorado climate, and while four outdoor enclosures were being prepared. The first of the prides, picked because they "appear to be close-knit and familiar with each other", was first allowed access to their large outdoor habitat on April 14, 2011.
The additional of enclosures created for the Bolivian lions took up the remainder of the existing sanctuary property, but later in 2011, a donation allowed the sanctuary to purchase another , expanding the site to and providing space for future expansion of the habitats.
In May 2012 the sanctuary completed the "Mile Into the Wild" walkway. This walkway has let visitors view and photograph the animals in their large habitat areas from above, and at the time connected the original holding area and education building with the Bolivian Lion House and a new parking lot that was built for the increasing number of visitors. In addition, the county paved the road to the sanctuary, making access easier.
Facilities
The sanctuary aims to eventually get all of its animals into large acreage habitats. It is designed with a central compound for receiving new animals and starting their recuperation and acclimation into these larger habitats. The lower floor of the main compound houses tigers, and the upper floor is an education center. The animals in this area have indoor/outdoor enclosures with play structures, and heated areas for the winter. The common pool area where tigers can take turns playing includes a waterfall.
As of 2013, the sanctuary had 26 species-specific habitats of each that are home to over 330 African lions, tigers, bears, leopards, cougars, timberwolves and other large carnivores. Habitats include pools for swimming and underground dens that stay at a constant temperature year round. Visitors can view these animals in their natural habitats from elevated walkways accessed from the education center.
In the summer of 2016 a new 48,000 square foot welcome center was opened and an additional 1/2-mile of elevated walkway was added. On October 31, 2016, Guinness World Records certified that the Wild Animal Sanctuary's elevated walkway was the world's longest footbridge at , supplanting the Poughkeepsie Railroad Bridge.
In spring 2018, with the Keenesburg site deemed at capacity, and no local expansion options, partly due to expanding oil and gas operations, the sanctuary announced the purchase of an additional property of 9,004 acres in the southeastern part of the state. The new $7M property, dubbed "The Wild Animal Refuge", is between the towns of La Junta and Springfield, mainly in Baca County, the rest in Las Animas County. In contrast to the plains of Keenesburg, the new location has a variety of wild terrain, including pine forest, canyons, caves and rocky areas. As of January 2022, the sanctuary has constructed more than 30 large acreage habitats at the Refuge and currently cares for more than 150 rescued Lions, Tigers, Bears, Wolves and other animals there. The Refuge is not open to the public due to its remote location; the Keenesburg location will remain the public education and outreach center. To help pay for the new property, the sanctuary launched the Founder Program to offer various incentives for new donations.
In 2020, the AZA-accredited International Exotic Animal Sanctuary in Boyd, Texas joined the Wild Animal Sanctuary network when their owner retired. The International Exotic Animal Sanctuary was subsequently renamed "The Wild Animal Sanctuary-Texas".
In 2022 the sanctuary was able to pay off the $7M Refuge property through its Founder Program and funds derived through a conservation easement. The sanctuary also became involved with the Bureau of Land Management's (BLM) efforts to round up wild horses in northwest Colorado by responding to supporters' requests for help. The sanctuary began to adopt many of the captured horses in order to protect their freedom and provide them with vast spaces to roam at its Springfield, CO facility. At the same time, the sanctuary began a state-wide search for property large enough to provide refuge to a much larger number of wild horses, which are also known as mustangs. By the end of 2022, the sanctuary had more than 30 rescued wild horses roaming freely at the Wild Animal Refuge, and was under contract to purchase a former 22,500-acre cattle ranch located near Craig, Colorado.
In January 2023, the sanctuary completed the 22,500-acre purchase and began preparing the property to house upwards of 500 rescued horses. Plans include allowing rescued mustangs to roam freely with little human interaction and to implement a similar Founder Program to help pay for the new purchase. Horses are expected to begin populating the property as early as May 2023.
Estimates in 2007 were that 25,000 or more wild animals were living in captivity outside the zoo system in the United States. Many of these are mistreated or abused, and many are being kept illegally. The Wild Animal Sanctuary believes that education about these animals is critical to informing the public and helping to provide better conditions for the animals. The visitor center at the sanctuary provides information about these animals, and the sanctuary has speakers who do presentations for a variety of organizations.
Notes
External links
Conservation projects
Endangered species
Cat conservation organizations
Wildlife sanctuaries of the United States
Tourist attractions in Weld County, Colorado
Zoos in Colorado
Wildlife rehabilitation and conservation centers
Protected areas of Weld County, Colorado
Animal sanctuaries
1980 establishments in Colorado | The Wild Animal Sanctuary | Biology | 1,571 |
61,297,896 | https://en.wikipedia.org/wiki/K2-58 | K2-58 (also designated as EPIC 206026904) is a G-type main-sequence star in the constellation of Aquarius, approximately 596 light-years from the Solar System. The star is metal-rich, having 155% of the Solar abundance of elements heavier than helium. The star is located in a region where a hypothetical observer in the K2-58 system can see Venus transiting the sun.
Planetary system
The planetary system has three confirmed exoplanets, named K2-58 b, K2-58 c, and K2-58 d, discovered in 2016.
References
Aquarius (constellation)
Planetary systems with three confirmed planets
K-type main-sequence stars
J22151722-1402593
Planetary transit variables | K2-58 | Astronomy | 159 |
14,312,774 | https://en.wikipedia.org/wiki/Stretching%20field | In applied mathematics, stretching fields provide the local deformation of an infinitesimal circular fluid element over a finite time interval ∆t. The logarithm of the stretching, divided by ∆t, gives the finite-time Lyapunov exponent λ for the separation of nearby fluid elements at each point in a flow. For periodic two-dimensional flows, stretching fields have been shown to be closely related to the mixing of a passive scalar concentration field. Until recently, however, the extension of these ideas to systems that are non-periodic or weakly turbulent has been possible only in numerical simulations.
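A minimal numerical sketch of this definition: the finite-time Lyapunov exponent is obtained from the largest singular value of the flow-map deformation gradient as λ = ln(σmax)/∆t. The simple shear deformation used as input below is an assumed example, not data from any particular flow.

import numpy as np

def ftle(deformation_gradient, dt):
    # Largest singular value of F = dΦ/dx gives the maximal stretching.
    sigma_max = np.linalg.svd(deformation_gradient, compute_uv=False)[0]
    return np.log(sigma_max) / dt

F = np.array([[1.0, 2.0],
              [0.0, 1.0]])      # deformation gradient of a simple shear
print(ftle(F, dt=1.0))          # ≈ 0.88, the local stretching exponent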
Dynamical systems | Stretching field | Physics,Mathematics | 127 |
60,164,751 | https://en.wikipedia.org/wiki/Arp%20166 | Arp 166 is a pair of interacting elliptical galaxies approximately 225 million light-years away from Earth in the constellation of Triangulum. The two galaxies, NGC 750 and NGC 751, are listed together as Arp 166 in the Atlas of Peculiar Galaxies (in the category Galaxies with diffuse filaments).
Observational history
Arp 166 was discovered by the German-born British astronomer William Herschel on September 12, 1784, but he did not resolve this close pair of galaxies and therefore described it as a single object, NGC 750.
Arp 166 was first seen as a double by Irish engineer and astronomer Bindon Stoney on October 11, 1850, who used Lord Rosse's 72" telescope. The second galaxy from this pair, which is smaller and fainter than NGC 750, was catalogued as NGC 751.
Interacting galaxies
At least 100,000,000 years have passed since the moment of the first strong tidal perturbation between these two galaxies. Both galaxies are characterized by strong tidal interactions and distortions, and they are still in the process of efficient tidal interaction.
The distance between the centers of this pair is 21", or 10 kpc in projection. Both galaxies have almost identical central radial velocities. While NGC 750 exhibits nearly flat radial velocity curves, the radial velocity curves of NGC 751 are characterized by large variations of more than 100 km s−1 along the slit.
A large, diffuse tidal tail extends 20 arcsec (10 kpc) to the north-east of the pair.
References
External links
Arp 166
Triangulum
Astronomical objects discovered in 1784
Arp 166
166 | Arp 166 | Astronomy | 329 |
44,060,607 | https://en.wikipedia.org/wiki/Elgar%20Fleisch | Elgar Fleisch (born January 22, 1968, in Bregenz, Austria) is an Austrian/Swiss Professor of Information and Technology Management at ETH Zurich and the University of St. Gallen. Besides his academic career, Elgar Fleisch is also locally known as a singer, songwriter and musician. He is part of the duo Fleisch & Fleisch and has recorded nine albums together with his brother Gerald.
Biography
Elgar Fleisch graduated in 1987 in mechanical engineering from the HTL Bregenz, studied information systems at the University of Vienna, and received his PhD in machine learning in 1993. In 1994 he completed his postdoctoral studies on enterprise networks at the Institute of Information Management at the University of St. Gallen (HSG).
In 1996 Elgar Fleisch interrupted his postdoctoral research for one year and founded IMG Americas. In 2000, he became an assistant professor at the University of St. Gallen. Since 2002 he has been a full professor at the Institute of Technology Management at the University of St. Gallen (ITEM-HSG). In 2004 he was also appointed to ETH Zurich, where he holds the Chair of Information Management at the Department of Management, Technology and Economics.
Elgar Fleisch spent his sabbaticals at the Massachusetts Institute of Technology and at Dartmouth College. He is a co-founder of several spin-off and start-up companies as well as a member of the supervisory boards of Robert Bosch GmbH, Stuttgart, Germany, Mobiliar Versicherungsgesellschaft AG, Bern, Switzerland and UNIQA Insurance Group AG, Vienna, Austria. Prof. Fleisch is also a member of the Board of Trustees of the Gebert Rüf Foundation, Basel.
Research
Since 1999, Fleisch's research interest has focused on the merging of the physical and digital worlds into an Internet of Things. Together with his team, he pursues the goal of understanding this fusion with a specific focus on technology, applications and implications. In addition, he aims to develop new Internet of Things technologies and applications for the benefit of the economy and society.
Prof. Fleisch has organized his research into several research laboratories, each spanning both universities and combining technology and social sciences. Most projects take place in close cooperation with industry. Elgar Fleisch and his team have published their results in over 500 scientific papers.
Publications
ResearchGate: https://www.researchgate.net/profile/Elgar_Fleisch
Google Scholar: https://scholar.google.com/citations?hl=de&user=9CEJKM4AAAAJ
References
External links
Elgar Fleisch at ETH Zürich
Elgar Fleisch at University of St. Gallen
1968 births
Academic staff of ETH Zurich
Living people
Austrian business theorists
Swiss business theorists
Information systems researchers | Elgar Fleisch | Technology | 586 |
70,129 | https://en.wikipedia.org/wiki/Mistletoe | Mistletoe is the common name for obligate hemiparasitic plants in the order Santalales. They are attached to their host tree or shrub by a structure called the haustorium, through which they extract water and nutrients from the host plant. There are hundreds of species which mostly live in tropical regions.
The name mistletoe originally referred to the species Viscum album (European mistletoe, of the family Santalaceae in the order Santalales); it is the only species native to the British Isles and much of Europe. A related species with red fruits, rather than white, Viscum cruciatum, occurs in Southwest Spain and Southern Portugal, as well as in Morocco in North Africa and in southern Africa. There is also a wide variety of species in Australia. The genus Viscum is not native to North America, but Viscum album was introduced to Northern California in 1900.
The eastern mistletoe native to North America, Phoradendron leucarpum, belongs to a distinct genus of the family Santalaceae.
European mistletoe has smooth-edged, oval, evergreen leaves borne in pairs along the woody stem, and waxy, white berries that it bears in clusters of two to six. The eastern mistletoe of North America is similar, but has shorter, broader leaves and longer clusters of ten or more berries.
Over the centuries, the term mistletoe has been broadened to include many other species of parasitic plants with similar habits, found in other parts of the world, that are classified in different genera and families such as the Misodendraceae of South America and the mainly southern hemisphere tropical Loranthaceae.
Etymology
The word 'mistletoe' derives from the older form 'mistle' with the addition of the Old English word tān (twig). 'Mistle' is from Common Germanic (cf. the Old High German, Middle High German, Old English and Old Norse cognates). Further etymology is uncertain, but may be related to the Germanic base for 'mash'.
Online Etymology Dictionary claims a similar theory, noting: "The alteration of the ending... is perhaps from a mistaking of the final -n for a plural suffix after tan fell from use as a separate word, but Oxford finds it a natural evolution in West Saxon based on stress."
Groups
Parasitism has evolved at least twelve times among the vascular plants. Molecular data show the mistletoe habit has evolved independently five times within the Santalales—first in the Misodendraceae, but also in the Loranthaceae and three times in the Santalaceae (in the former Santalalean families Eremolepidaceae and Viscaceae, and the tribe Amphorogyneae).
The largest family of mistletoes, the Loranthaceae, has 73 genera and more than 900 species. Subtropical and tropical climates have markedly more mistletoe species; Australia has 85, of which 71 are in Loranthaceae, and 14 in Santalaceae.
Life cycle
Mistletoe species grow on a wide range of host trees, some of which experience side effects including reduced growth, stunting, and loss of infested outer branches. A heavy infestation may also kill the host plant. Viscum album successfully parasitizes more than 200 tree and shrub species.
All mistletoe species are hemiparasites because they do perform some photosynthesis for some period of their life cycle. However, in some species its contribution is very nearly zero. For example, some species, such as Viscum minimum, that parasitize succulents, commonly species of Cactaceae or Euphorbiaceae, grow largely within the host plant, with hardly more than the flower and fruit emerging. Once they have germinated and attached to the circulatory system of the host, their photosynthesis reduces so much that it becomes insignificant.
Most of the Viscaceae bear evergreen leaves that photosynthesise effectively, and photosynthesis proceeds within their green, fleshy stems as well. Some species, such as Viscum capense, are adapted to semi-arid conditions and their leaves are vestigial scales, hardly visible without detailed morphological investigation. Therefore, their photosynthesis and transpiration only take place in their stems, limiting their demands on the water supply of their host, but also limiting their intake of carbon dioxide for photosynthesis. Accordingly, their contribution to the metabolic balance of their host becomes trivial, and the idle parasite may become quite yellow or golden as it grows, having practically given up photosynthesis.
At another extreme, other species have vigorous green leaves. Not only do they photosynthesize actively, but a heavy infestation of mistletoe plants may take over whole host tree branches, sometimes killing practically the entire crown and replacing it with their own growth. In such a tree the host is relegated purely to the supply of water and mineral nutrients and the physical support of the trunk. Such a tree may survive as a Viscum community for years; it resembles a totally unknown species unless one examines it closely, because its foliage does not look like that of any tree. An example of a species that behaves in this manner is Viscum continuum.
A mistletoe seed germinates on the branch of a host tree or shrub, and in its early stages of development it is independent of its host. It commonly has two or even four embryos, each producing its hypocotyl, that grows toward the bark of the host under the influence of light and gravity, and potentially each forming a mistletoe plant in a clump. Possibly as an adaptation to assist in guiding the process of growing away from the light, the adhesive on the seed tends to darken the bark. On having made contact with the bark, the hypocotyl, with only a rudimentary scrap of root tissue at its tip, penetrates it, a process that may take a year or more. In the meantime the plant is dependent on its own photosynthesis. Only after it reaches the host's conductive tissue may it begin to rely on the host for its needs. Later, it forms a haustorium that penetrates the host tissue and takes water and nutrients from the host plant.
Species more or less obligate include the leafless quintral, Tristerix aphyllus, which lives deep inside the sugar-transporting tissue of a spiny cactus, appearing only to show its tubular red flowers, and the genus Arceuthobium (dwarf mistletoe; Santalaceae) that has reduced photosynthesis; as an adult, it manufactures only a small proportion of the sugars it needs from its own photosynthesis, but as a seedling actively photosynthesizes until a connection to the host is established.
Some species of the largest family, Loranthaceae, have small, insect-pollinated flowers (as with Santalaceae), but others have spectacularly showy, large, bird-pollinated flowers.
Most mistletoe seeds are spread by birds who eat the 'seeds' (in actuality drupes). Of the many bird species that feed on them, the mistle thrush is the best-known in Europe, the phainopepla in southwestern North America, and Dicaeum flowerpeckers in Asia and Australia. Depending on the species of mistletoe and the species of bird, the seeds are regurgitated from the crop, excreted in their droppings, or stuck to the bill and causing the bird to have to wipe it off onto a branch. The seeds are coated with a sticky material called viscin. Some viscin remains on the seed and when it touches a stem, it sticks tenaciously. The viscin soon hardens and attaches the seed firmly to its future host, where it germinates and its haustorium penetrates the sound bark.
Specialist mistletoe eaters have adaptations that expedite the process; some pass the seeds through their unusually shaped digestive tracts so fast that a pause for defecation of the seeds is part of the feeding routine. Others have adapted patterns of feeding behavior; the bird grips the fruit in its bill and squeezes the sticky-coated seed out to the side. The seed sticks to the beak and the bird wipes it off onto the branch and consumes the remainder of the fruit. An example of a bird with this adapted method is the blackcap (Sylvia atricapilla).
Biochemically, viscin is a complex adhesive mix containing cellulosic strands and mucopolysaccharides.
Once a mistletoe plant is established on its host, it usually is possible to save a valuable branch by pruning and judicious removal of the wood invaded by the haustorium, if the infection is caught early enough. Some species of mistletoe can regenerate if the pruning leaves any of the haustorium alive in the wood.
Toxicity
There are 1500 species of mistletoe, varying widely in toxicity to humans; the European mistletoe (Viscum album) is more toxic than the American mistletoe (Phoradendron serotinum).
The primary active toxic compounds in American mistletoe are phoratoxins (in Phoradendron) and their effects can include blurred vision, diarrhea, nausea, and vomiting, although these rarely occur. Their primary mechanism of action is through disruption of cell membranes which causes lysis and cell death at high concentrations.
In European mistletoe (Viscum), viscumin is the more dangerous active toxin. It acts by irreversibly inhibiting ribosomal protein synthesis in cells, which leads to the death of the affected cells; in the very short term this causes tissue damage in the area of exposure from mass cell death, with the potential for organ failure and death depending on the level of exposure. Early symptoms depend mostly on the route of exposure, as the first cells it contacts (thus the first to have their protein synthesis deactivated by it) will be the first to die. Its toxic effects take place through the same mechanism as ricin and other ribosome-inactivating proteins, but it enters the cells by a different mechanism than ricin and is toxic even to cultured ricin-resistant cells.
Mistletoe has been used historically in medicine for its supposed value in treating arthritis, high blood pressure, epilepsy, and infertility.
Ecological importance
Mistletoes are often considered pests that kill trees and devalue natural habitats, but some species have recently been recognized as ecological keystone species, organisms that have a disproportionately pervasive influence over their community. A broad array of animals depend on mistletoe for food, consuming the leaves and young shoots, transferring pollen between plants and dispersing the sticky seeds. In western North America their juicy berries are eaten and spread by birds (notably the phainopepla) while in Australia the mistletoebird behaves similarly. When eaten with the fruit, some seeds pass unharmed through their digestive systems, emerging in extremely sticky droppings which the bird deposits on tree branches, where some may stick long enough to germinate. As the plants mature, they grow into masses of branching stems that suggest the popular name "witches' brooms".
The dense evergreen witches' brooms formed by the dwarf mistletoes (Arceuthobium species) of western North America also make excellent locations for roosting and nesting of the northern spotted owl and the marbled murrelet. In Australia the diamond firetail and painted honeyeater are recorded as nesting in different mistletoes.
A study of mistletoe in junipers concluded that more juniper berries sprout in stands where mistletoe is present, as the mistletoe attracts berry-eating birds who also eat juniper berries.
Cultural importance
Mistletoe is relevant to several cultures. Pagan cultures regarded the white berries as symbols of male fertility, with the seeds resembling semen. The Celts, particularly, saw mistletoe as the semen of Taranis, while the Ancient Greeks referred to mistletoe as "oak sperm". Also in Roman mythology, mistletoe was used by the hero Aeneas to reach the underworld.
Mistletoe may have played an important role in Druidic mythology in the Ritual of Oak and Mistletoe, although the only ancient writer to mention the use of mistletoe in this ceremony was Pliny. Evidence taken from bog bodies makes the Celtic use of mistletoe seem medicinal rather than ritual. It is possible that mistletoe was originally associated with human sacrifice and only became associated with the white bull after the Romans banned human sacrifices.
The Romans associated mistletoe with peace, love, and understanding and hung it over doorways to protect the household.
With the advent of the Christian era, mistletoe in the Western world became associated with Christmas as a decoration under which lovers are expected to kiss, as well as with protection from witches and demons. Mistletoe continued to be associated with fertility and vitality through the Middle Ages, and by the eighteenth century it had also become incorporated into Christmas celebrations around the world. The custom of kissing under the mistletoe is recorded as being popular among servants in late eighteenth-century England.
The serving class of Victorian England is credited with perpetuating the tradition. The tradition dictated that a man was allowed to kiss any woman standing underneath mistletoe, and that bad luck would befall any woman who refused the kiss. One variation on the tradition stated that with each kiss a berry was to be plucked from the mistletoe, and the kissing must stop after all the berries had been removed.
From at least the mid-nineteenth century, Caribbean herbalists of African descent have referred to mistletoe as "god-bush". In Nepal, diverse mistletoes are used for a variety of medical purposes, particularly for treating broken bones.
Mistletoe is the floral emblem of the U.S. state of Oklahoma and the flower of the UK county of Herefordshire. Every year, the UK town of Tenbury Wells holds a mistletoe festival and crowns a 'Mistletoe Queen'.
See also
Witch's broom, a growth of the host plant's own tissue, rather than a parasite in itself
Festive ecology
Kissing bough
Viscum album
References
External links
Parasitic Plant Connection. See families Misodendraceae, Loranthaceae, Santalaceae, and Viscaceae
Introduction to Parasitic Flowering Plants by Nickrent & Musselman
Phoradendron serotinum images at bioimages.vanderbilt.edu
Scientific Studies, Research and Clinical Trials on Mistletoe Treatment in Cancer
Deck the halls with wild, wonderful mistletoe, West Virginia Department of Agriculture
ANBG: Mistletoe Accessed 22 January 2018.
Christmas plants
Medicinal plants
Parasitic plants
Santalales
Symbols of Oklahoma
Winter traditions
Plant common names | Mistletoe | Biology | 3,089 |
1,963,552 | https://en.wikipedia.org/wiki/Neosalvarsan | Neosalvarsan is a synthetic chemotherapeutic that is an organoarsenic compound. It became available in 1912 and superseded the more toxic and less water-soluble Salvarsan as an effective treatment for syphilis. Because both of these arsenicals carried considerable risk of side effects, they were replaced for this indication by penicillin in the 1940s.
Both Salvarsan and Neosalvarsan were developed in the laboratory of Paul Ehrlich in Frankfurt, Germany. Their discoveries were the result of the first organized team effort to optimize the biological activity of a lead compound through systematic chemical modifications. This scheme is the basis for most modern pharmaceutical research. Both Salvarsan and Neosalvarsan are prodrugs; that is, they are metabolised into the active drug in the body.
Although, like Salvarsan, it was originally believed to contain an arsenic-arsenic double bond, this is now known to be incorrect for Salvarsan. Presumably, Neosalvarsan also exists as a mixture of differently sized rings with arsenic-arsenic single bonds.
References
Sulfinates
Antibiotics
Organoarsenic compounds
Phenols
Paul Ehrlich
German inventions | Neosalvarsan | Biology | 243 |
39,611,379 | https://en.wikipedia.org/wiki/Fragile%20X-associated%20primary%20ovarian%20insufficiency | Fragile X-associated primary ovarian insufficiency (FXPOI) is the most common genetic cause of premature ovarian failure in women with a normal karyotype 46,XX. The expansion of a CGG repeat in the 5' untranslated region of the FMR1 gene from the normal range of 5-45 repeats to the premutation range of 55-199 CGGs leads to risk of FXPOI for ovary-bearing individuals. About 1:150-1:200 women in the US population carry a premutation. Women who carry an FMR1 premutation have a roughly 20% risk of being diagnosed with FXPOI, compared to 1% for the general population, and an 8-15% risk of developing the neurogenerative tremor/ataxia disorder (FXTAS). FMR1 premutation women are also at increased risk of having a child with a CGG repeat that is expanded to >200 repeats (a full mutation). Individuals with a full mutation, unlike the premutation, produce little to no mRNA or protein from the FMR1 gene and have fragile X syndrome as a result.
Clinical diagnosis
Primary ovarian insufficiency requires that a diagnosis be made prior to the age of 40, since it is considered premature relative to the average age of menopause of 51 in the US. The two criteria are the repeated elevation of the follicle stimulating hormone (FSH), which increases dramatically when a woman enters menopause, and the loss of menstruation for at least 4–6 months. In FMR1 premutation carriers, the likelihood of receiving a clinical diagnosis of FXPOI is about 20%, and increased FSH levels and altered menstrual cycles become particularly evident between 30 and 40 years of age. Even if menses are lost, women diagnosed with FXPOI may experience a spontaneous "escape" ovulation. This means that there is some chance for conception, around 10%, even if menstruation has been absent for extended periods in women with FXPOI. Women planning to conceive before the cessation of periods are often encouraged to consult a genetic counselor or medical geneticist to understand their individual risk for having a child with fragile X syndrome.
Genetics
The FMR1 premutation is commonly identified using reflexive genetic testing after identification of a child with fragile X syndrome found in a family. This genetic diagnosis accounts for 10-15% of women who will receive a FXPOI diagnosis. Women may also experience infertility and receive genetic testing in the course of reproductive care. Roughly 1-3% of FXPOI cases are identified through this process.
FXPOI is the most common known genetic cause of ovarian insufficiency for women with a normal chromosome number (46,XX) and accounts for 5-10% of these cases of premature ovarian failure. Not all women who are carriers for an FMR1 premutation allele, an expansion of the CGG repeat in the FMR1 gene to 55-199 repeats, will be diagnosed with FXPOI. About 20% of premutation carriers will be diagnosed, but this risk represents a significant increase over the general population who have a roughly 1% risk of POI. Women with highest risk of POI have 70-100 CGG repeats, meaning there is a non-linear association between CGG-repeat size and FXPOI risk. This relationship is different than the linear association seen between CGG repeat size and age of onset of FXTAS. Other variations in premutation alleles, like AGG interruptions within the CGG repeats, are not correlated with risk of a FXPOI diagnosis. The AGG interruptions are correlated with the risk that the premutation-length allele could expand in the oocyte, or egg cell, and lead to a child with fragile X syndrome.
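The repeat-size categories quoted above can be summarized as a simple lookup. The sketch below only restates the thresholds given in this article; alleles that fall between the stated ranges are deliberately left unclassified here.

def classify_fmr1_allele(cgg_repeats):
    # CGG repeat ranges as quoted in this article.
    if 5 <= cgg_repeats <= 45:
        return "normal"
    if 55 <= cgg_repeats <= 199:
        return "premutation (elevated FXPOI and FXTAS risk)"
    if cgg_repeats >= 200:
        return "full mutation (fragile X syndrome)"
    return "between the ranges quoted in this article"

for n in (30, 58, 95, 230):
    print(n, "CGG repeats ->", classify_fmr1_allele(n))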
Risk of developing other premutation-related diagnoses
Though the greatest risks for female carriers of an FMR1 premutation are developing POI and having a child with fragile X syndrome, there are other possible neurological and neuropsychiatric conditions that may occur. Roughly 8-15% of female premutation carriers will develop the late-onset neurodegenerative tremor/ataxia disorder FXTAS. More recently, increased interest in the neurological features and cognition of female premutation carriers has suggested a broader range of neuropsychiatric conditions associated with premutation-sized CGG repeats. Women with an FMR1 premutation exhibit higher incidences of depression, anxiety, autoimmune dysfunction, and neuromuscular pain. The prevalence of depression and anxiety in premutation females is, notably, higher than that observed in premutation males. Interpretation of studies that examine women with a premutation, who may have complex and challenging needs, is difficult since it is unclear whether the premutation leads to increased anxiety and depression or whether it is increased environmental stressors within the home. Research to understand these differences between males and females with the same genetic change is developing, but there are no studies that point to a definitive driver of these differences.
Expansion of a premutation to full mutation
An additional challenge for women with an FMR1 premutation is determining the risk for having a child with fragile X syndrome. Epidemiological data show that the risk of a premutation allele (55-199 CGGs) expanding to a full mutation (>200 CGGs) increases as the length of the CGG tract grows. This risk assessment for expansion to a full mutation is critical for women, but not men, with a premutation, since the premutation allele in males does not show large expansion and transmission through generations. Expansion from a premutation to full mutation only occurs within the egg cells of a female premutation carrier. An additional factor influencing the stability of FMR1 premutation alleles is the presence of AGG interruptions. The loss of 1 or 2 AGG interruptions in the 5' region of the CGG repeat allele leads to an increased likelihood that a premutation will expand to a full mutation from one generation to the next. Studies have also indicated that increasing maternal age may be a contributor to increased risk of expansion of a premutation to a full mutation, but the molecular mechanism through which this expansion process occurs is not yet understood.
Resources
Individuals with FMR1-related disorders and families have access to several communities to find support groups and information about ongoing research and new therapies. Concise information designed for patients, families, or people outside of medical fields are available. The National Fragile X Foundation is a private foundation that focuses on raising awareness about fragile X-associated disorders, research, and treatments. The FRAXA Research Foundation is another resource for families and is primarily focused on funding and helping amplify research and treatment options for fragile X to the community.
References
External links
Genetic diseases and disorders
Reproduction
Women's health
Genetic anomalies
X-linked dominant disorders
Endocrine diseases | Fragile X-associated primary ovarian insufficiency | Biology | 1,431 |
7,619,293 | https://en.wikipedia.org/wiki/Acting%20out | In the psychology of defense mechanisms and self-control,
acting out is the performance of an action considered bad or anti-social. In general usage, the action performed is destructive to self or to others. The term is used in this way in sexual addiction treatment, psychotherapy, criminology and parenting. In contrast, bearing and managing an impulse rather than acting on it is called acting in.
The performed action may follow impulses of an addiction (e.g. drinking, drug taking or shoplifting). It may also be a means designed (often unconsciously or semi-consciously) to garner attention (e.g. throwing a tantrum (ataque) or behaving promiscuously). Acting out may inhibit the development of more constructive responses to the feelings in question.
In analysis
Freud considered that patients in analysis tended to act out their conflicts in preference to remembering them – repetition compulsion. The analytic task was then to help "the patient who does not remember anything of what he has forgotten and repressed, but acts it out" to replace present activity by past memory.
Otto Fenichel added that acting out in an analytic setting potentially offered valuable insights to the therapist; but was nonetheless a psychological resistance in as much as it deals only with the present at the expense of concealing the underlying influence of the past. Lacan also spoke of "the corrective value of acting out", though others qualified this with the proviso that such acting out must be limited in the extent of its destructive/self-destructiveness.
Annie Reich pointed out that the analyst may use the patient by acting out in an indirect countertransference, for example to win the approval of a supervisor.
Interpretations
The interpretation of a person's acting out and an observer's response varies considerably, with context and subject usually setting audience expectations.
In parenting
In the early years, temper tantrums can be understood as episodes of acting out. As young children will not have developed the means to communicate their feelings of distress, tantrums prove an effective and achievable method of alerting parents to their needs and requesting attention.
As children develop they often learn to replace these attention-gathering strategies with more socially acceptable and constructive communications. In adolescent years, acting out in the form of rebellious behaviors such as smoking, shoplifting and drug use can be understood as "a cry for help." Such pre-delinquent behavior may be a search for containment from parents or other parental figures. The young person may seem to be disruptive – and may well be disruptive – but this behaviour is often underpinned by an inability to regulate emotions in some other way.
In addiction
In behavioral or substance addiction, acting out can give the addict the illusion of being in control. Many people with an addiction either refuse to admit they struggle with it or do not realize they have one. When the addiction is addressed, they often become defensive and act out, which can be a result of several emotions, including shame, fear of judgement, or anger. Patience and understanding are important, since most people want to break free from the symptoms and consequences that come with addiction but do not know how or where to start. There are many preventative measures and programs that can help those who personally struggle with addiction, or those who have a friend or family member with an addiction.
In criminology
Criminologists debate whether juvenile delinquency is a form of acting out, or rather reflects wider conflicts involved in the process of socialization.
Alternatives
Acting out painful feelings may be contrasted with expressing them in ways more helpful to the patient, e.g. by talking out, expressive therapy, psychodrama or mindful awareness of the feelings. Developing the ability to express one's conflicts safely and constructively is an important part of impulse control, personal development and self-care.
See also
References
Further reading
Franz Alexander, 'The Neurotic Character'. International Journal of Psychoanalysis XI, 1930.
External links
Schellekes, S. About acting out at https://www.hebpsy.net, 2007.
Acting out Psychological Term From http://www.betipulnet.co.il
Psychology
Acting Up is Not "Acting-Out" Dr George Simon at CounsellingResource.com
"Projective Identification, Countertransference, and the Struggle for Understanding Over Acting Out" Robert T. Waska, M.S., MFCC, Journal of Psychotherapy Practice and Research 8:155-161, April 1999
Sophie de Mijolla-Mellor, 'Acting out/Acting-in'
Self-help
Acting out More complete explanation from a psychological perspective.
Acting out Understanding acting out from outsiders and insider's perspectives, suggestions for developing positive potential from acting out traits.
Parenting
Acting out
Barriers to critical thinking
Criminology
Defence mechanisms
Forensic psychology
Problem behavior
Youth
Youth rights | Acting out | Biology | 1,025 |
10,242,665 | https://en.wikipedia.org/wiki/Chia-Chiao%20Lin | Chia-Chiao Lin (; 7 July 1916 – 13 January 2013) was a Chinese-born American applied mathematician and Institute Professor at the Massachusetts Institute of Technology.
Lin made major contributions to the theory of hydrodynamic stability, turbulent flow, mathematics, and astrophysics.
Biography
Lin was born in Beijing with ancestral roots in Fuzhou. In 1937 Lin graduated from the department of physics, National Tsinghua University in Beijing.
After graduation he was a teaching assistant in the Tsinghua University physics department. In 1939 Lin won a Boxer Indemnity Scholarship and was initially supported to study in the United Kingdom. However, due to World War II, Lin and several others were sent to North America by ship. Unluckily, Lin's ship was stopped in Kobe, Japan, and all students had to return to China.
In 1940, Lin finally reached Canada and studied at the University of Toronto, from which he earned his M.Sc. in 1941. Lin continued his studies in the United States and received his PhD from the California Institute of Technology in 1944 under Theodore von Kármán. His PhD thesis provided an analytic method to solve a problem in the stability of parallel shearing flows, which had been the subject of Werner Heisenberg's PhD thesis.
Lin also taught at Caltech between 1943 and 1945. He taught at Brown University between 1945 and 1947. Lin joined the faculty of the Massachusetts Institute of Technology in 1947. Lin was promoted to professor at MIT in 1953 and became an Institute Professor of MIT in 1963. He was President of the Society for Industrial and Applied Mathematics from 1972 to 1974. Lin retired from MIT in 1987.
In 2002, he moved back to China and helped found the Zhou Pei-Yuan Center for Applied Mathematics (ZCAM) at Tsinghua University. He died in Beijing in 2013, aged 96.
Honors and awards
During his career Lin has received many prizes and awards, including:
The first Fluid Dynamics Prize (from the American Physical Society, in 1979)
The 1976 NAS Award in Applied Mathematics and Numerical Analysis
The 1975 Timoshenko Medal
The 1973 Otto Laporte Award
Caltech's Distinguished Alumni Award
Lin was a member of the National Academy of Sciences, the American Academy of Arts and Sciences, and the American Philosophical Society, was cited in the American Men and Women of Science, and was a Fellow of the American Association for the Advancement of Science. Lin was elected Academician of Academia Sinica in 1958, and became a Foreign Member of the Chinese Academy of Sciences in 1994.
References
External links
Lin's profile
1916 births
2013 deaths
Members of the United States National Academy of Sciences
Foreign members of the Chinese Academy of Sciences
Fellows of the American Association for the Advancement of Science
Fellows of the American Physical Society
Educators from Beijing
Mathematicians from Beijing
Chinese emigrants to the United States
20th-century American mathematicians
Massachusetts Institute of Technology faculty
Academic staff of Tsinghua University
Tsinghua University alumni
Boxer Indemnity Scholarship recipients
University of Toronto alumni
California Institute of Technology alumni
California Institute of Technology faculty
Brown University faculty
Fluid dynamicists
Members of Academia Sinica
Presidents of the Society for Industrial and Applied Mathematics
Florida State University faculty
Chinese mathematicians
Members of the American Philosophical Society | Chia-Chiao Lin | Chemistry | 640 |
3,113,432 | https://en.wikipedia.org/wiki/Alpha%20Hydri | Alpha Hydri, Latinized from α Hydri, is the second brightest star in the southern circumpolar constellation of Hydrus. It is readily visible to the naked eye in locations south of 28°N with an apparent visual magnitude of +2.9. It is sometimes informally known as the Head of Hydrus. This should not be confused with Alpha Hydrae (Alphard) in the constellation Hydra. Alpha Hydri is one of only three stars in the constellation Hydrus that are above the fourth visual magnitude. This star can be readily located as it lies to the south and east of the prominent star Achernar in the constellation Eridanus.
Based upon parallax measurements from the Hipparcos mission, Alpha Hydri is located at a distance of about from Earth. This subgiant star is three times larger and twice as massive as the Sun, with a stellar classification of F0 IV. It is about 810 million years old and is radiating 21 times the Sun's luminosity from its outer atmosphere at an effective temperature of 7,087 K. Alpha Hydri emits X-rays similar to Altair. The space velocity components of this star are [U, V, W] = .
Naming
In the Chinese constellation system created by adapting the European southern-hemisphere constellations into Chinese astronomy, the asterism meaning Snake's Head consists of α Hydri and β Reticuli; α Hydri itself takes its Chinese name from this asterism.
References
F-type subgiants
Hydrus
Hydri, Alpha
Durchmusterung objects
0083
012311
009236
0591 | Alpha Hydri | Astronomy | 354 |
6,802,179 | https://en.wikipedia.org/wiki/Netrin%20receptor%20DCC | Netrin receptor DCC, also known as DCC, or colorectal cancer suppressor is a protein which in humans is encoded by the DCC gene. DCC has long been implicated in colorectal cancer and its previous name was Deleted in colorectal carcinoma. Netrin receptor DCC is a single transmembrane receptor.
Since it was first discovered in a colorectal cancer study in 1990, DCC has been the focus of a significant amount of research. DCC held a controversial place as a tumour suppressor gene for many years, and is well known as an axon guidance receptor that responds to netrin-1.
More recently DCC has been characterized as a dependence receptor, and many hypotheses have been put forward that have revived interest in DCC's candidacy as a tumour suppressor gene, as it may be a ligand-dependent suppressor that is frequently epigenetically silenced.
Background
Early studies of colorectal tumours found that allelic deletions of segments of chromosome 18q occur in a very high percentage of colorectal cancers. DCC was initially cloned out of the region and put forth as a putative tumour suppressor gene, though nothing was known about its function at the time. The DCC gene was examined for the genetic changes found with most other tumour suppressor genes, but it was found to have a comparatively low frequency of somatic mutation. Several years later DCC was shown to encode a transmembrane receptor protein that mediated the effects of netrin-1 on axon outgrowth.
Soon after the protein product was confirmed, DCC knockout mice were created. As DCC−/− mutations are rapidly fatal due to a lack of nervous system development, DCC+/− mice were assessed for increased tumour development over two years, and no increase in tumour predisposition was detected.
The discovery of a specific function for DCC that seemed to have little to do with cell cycle control, the low somatic mutation rate and the absence of cancer predisposition in DCC heterozygotes were fairly discouraging evidence for DCC's putative tumour suppressor status. This caused focus to shift to DCC's role in axon guidance for a time, until one study implicated DCC in regulation of cell death. As the 18q chromosomal deletions were never resolved to be related solely to another gene, DCC was rapidly reaccepted as a candidate. Recent research into the mechanisms of DCC signaling and in-vitro studies of DCC modifications have solidified DCC's tumour suppressor position, and have begun to integrate DCC's divergent functions as both an axon guidance molecule and a tumour suppressor into a single concept.
Structure
The DCC gene is located at 18q21.3, and has a total of 57 possible exons and 43 possible introns. This theoretically results in 13 correctly spliced, putatively good proteins. The typical DCC protein has one signal peptide motif and eleven domains, including multiple immunoglobulin-like domains, a transmembrane domain, and several fibronectin type 3 domains.
DCC has extracellular binding sites for both netrin-1 and heparin. Heparin sulphate is believed to also be present during neural growth as a type of co-factor for axon guidance. Intracellularly, DCC has been shown to have a caspase-3 proteolysis site at Asp 1290.
DCC and neogenin, two of the netrin-1 receptors, have recently been shown to have sites for tyrosine phosphorylation (at Y1420 on DCC) and are likely interacting with Src family kinases in regulating responses to netrin-1.
DCC as a dependence receptor
Historically, cellular receptors have been thought to be activated when bound to their ligand, and are relatively inactive when no ligand is present. A number of receptors have been found that do not fit into this conceptual mould, and DCC is one of them. These receptors are active both with ligand bound and unbound, but the signals transmitted are different when the receptors are ligand bound. Collectively, this type of receptor is known as a dependence receptor because the unbound pathway is usually apoptotic, meaning that cell survival depends on ligand presence. Other receptors also show this functional profile, including p75NTR, the androgen receptor, RET, several integrins and Patched.
While not the first dependence receptor pair discovered, DCC and netrin-1 are an often quoted example of a dependence receptor system. When DCC is present on the membrane and bound to netrin-1, signals are conveyed that can lead to proliferation and cell migration. In the absence of netrin-1, DCC signaling has been shown to induce apoptosis. Only in the absence of DCC is there an absence of downstream signaling. There are therefore three possible signaling states for dependence receptors: on (ligand-bound, migration and proliferation), off (ligand-unbound, apoptosis inducing) and absent (lack of signal).
Developmental and neurological roles
DCC's role in commissural axon outgrowth is perhaps its best characterized. In the developing spinal cord, commissural neurons located dorsally extend axons ventrally using a mechanism dependent on a ventral midline structure, the floor plate. A gradient of netrin-1 is produced from the floor plate, which allows orientation of the extending axons, aiding the development of the dorsal-ventral axis of the brain and spinal column. A variety of receptors are present on the axon surface which either repel or attract axons to the midline. When membrane DCC is stimulated by netrin-1, it promotes axon progression towards the midline.
There are several other molecules also involved in the guidance of axons to and across the midline. The slit proteins have repulsive functions, as opposed to netrins, and are mediated by the transmembrane protein Robo. Axonal growth cones that are attracted to the midline by netrin/DCC signaling eventually cross the floor plate. When this occurs they lose responsiveness to netrin and become repulsed by slit-Robo signaling. This is accomplished by the formation of a DCC-Robo complex, which inhibits attractive netrin/DCC signals while allowing slit-Robo signals. Netrin also has other receptors, the UNC-5 family. The UNC5 receptors have repellant migratory responses to netrin binding, and have similar effects to the slit-Robo system.
The intracellular signaling responses to netrin-1 are not yet well understood, even in neurobiology studies. Several phosphorylation events have been established, as have the involvement of several src family kinases and small GTPases, but the sequence of events has not yet been determined. DCC is also required to be recruited to lipid rafts for axon outgrowth and apoptotic signaling.
DCC is developmentally regulated, being present in most fetal tissues of the body at higher levels than what is found in adult tissues. DCC and netrin have been found to be specifically involved in the secondary migration of neural crest cells into the pancreas and developing gut structures, and may prove to be vital to other areas during fetal growth.
Role in cancer
One of the most frequent genetic abnormalities that occur in advanced colorectal cancer is loss of heterozygosity (LOH) of DCC in region 18q21. DCC is a receptor for netrin-1 and is currently believed by some to be a conditional tumour suppressor gene, meaning that it normally prevents cell growth in the absence of netrin-1. DCC elimination is not believed to be a key genetic change in tumour formation, but one of many alterations that can promote existing tumour growth. DCC's possible role in migration of cancerous cells is in the process of being characterized.
While recent results make it fairly likely that DCC is involved in the biology of several cancers, the extent of its involvement and the details of how it works are still being studied.
Normal function in tumour suppression and apoptosis
When not bound to netrin-1, an intracellular domain of DCC is cleaved by a caspase, and induces apoptosis in a caspase-9-dependent pathway. This domain does not correspond to a known caspase recruitment motif or death sequence domain, but is required to initiate apoptosis. It has been theorized that the domain acts as a scaffold to recruit and activate caspase-9 and caspase-3. This DCC apoptosis pathway is not dependent on either the mitochondrial apoptosis pathway or the death receptor/caspase-8 pathway. In the absence of ligand, DCC interacts with caspase-9 (likely via an unidentified adaptor protein) and promotes the assembly of a caspase-activating complex. This causes the activation of caspase-3 through caspase-9, and initiates apoptosis without the formation of an apoptosome or cytochrome c release. This implies that DCC regulates a novel pathway for caspase activation, and that it is one that is apoptosome-independent.
To put this into a biological systems context, some physiology is required. In the gastrointestinal tract, epithelial cells proliferate and die rapidly. The division of these cells occurs at the base of villi, and cells are pushed upwards by subsequent divisions to the tip where they enter apoptosis and shed off into the lumen. Netrin-1 is produced in the base of the villi, so a gradient of netrin is present that is weakest at the tip. In normal physiology, the presence of netrin-1 inhibits DCC-mediated cell death until the epithelial cell reaches the tip of the villus, where the now unbound DCC causes the cell to enter apoptosis. In a cancer state, the absence of DCC prevents the gradient from having an effect on the cell, making it more likely to continue to survive.
DCC's role as a tumour suppressor is tied to its dependence receptor characteristics. DCC induces cell death in epithelial cells when no netrin-1 is bound. Besides loss of heterozygosity of DCC, this mechanism of apoptosis can also be avoided in malignant processes by overexpression of netrin-1.
As an oncogene
DCC can be considered a conditional tumour suppressor gene as well as a conditional oncogene. When DCC is present and not activated by netrin it is proapoptotic, and represses tumour formation. When DCC is present and netrin-activated it promotes cell survival, acting as an oncoprotein. Netrin-activated DCC is known to activate the CDC42-RAC1 and MAPK1/3 pathways, both of which are activated in cancer and promote tumour development.
Mechanism of deletion
It was originally believed that there were two major pathways in colorectal cancer formation. The first was a chromosomal instability pathway thought to be responsible for the adenoma to carcinoma progression, which was characterized by loss of heterozygosity (LOH) on chromosome 5q, 17p and 18q. The second pathway was believed to be the microsatellite instability pathway, which is characterized by increases or decreases in the number of tandem repeats of simple DNA sequences. This type of instability is associated with some specific mutations, including genes involved with DNA mismatch repair and surprisingly, transforming growth factor-beta. More recently, those in the field of colorectal cancer have acknowledged that cancer formation is far more complex, but cancer related genes still tend to be categorized as chromosomal or microsatellite instability genes.
DCC would fall into the chromosomal instability category. The chromosomal region of 18q has shown consistent LOH for nearly twenty years. Approximately 70% of primary colorectal cancers display LOH in this region, and the percentage increases when comparing early to advanced cancers. This increase in DCC loss in advanced cancer may indicate that DCC loss is more important to tumour progression than tumour formation. However, region 18q is not the location of DCC alone, and many studies are in conflict when reporting whether 18q LOH is attributable to DCC or other tumour suppressor candidates in the neighbouring areas. Many reviews refuse to comment on DCC due to its history of conflicting information, stating that more study is required.
Chromosome 18 LOH tends to occur in clusters. One major cluster is at 18q21, which agrees with the location of DCC. This cluster includes the marker D18S51, and is flanked by the D18S1109 and D18S68 loci. This segment spans 7.64cM, which is a relatively large section of DNA that could easily encompass more than one tumour suppressor gene.
A significant difference between DCC expression and 18q21 LOH was detected in 1997. Studies found that more tumours had reduced DCC expression than could be explained by LOH or MSI, indicating that another mechanism was at work. This observation was likely explained when epigenetic analysis was performed.
Epigenetics
Loss of DCC in colorectal cancer primarily occurs via chromosomal instability, with only a small percent having epigenetic silencing involved.
Epigenetic silencing of DCC by promoter hypermethylation has shown to be a significant factor in other cancer types. In head and neck squamous cell carcinoma, 77.3% of tumour samples presented DCC promoter hypermethylation versus 0.8% in non-cancerous saliva samples. Similar results have been seen in breast cancer, acute lymphoblastic leukemia, and several others.
Use in pharmacogenetics
DCC has found to be a useful prognostic marker for late stage colorectal carcinoma in some studies, but unhelpful in others. Currently the American Society of Clinical Oncology does not recommend using DCC as a marker due to insufficient classification data.
A recent review of over two dozen 18q LOH-survival studies concluded that there was a significant amount of inconsistency between the data sets. They concluded that loss of 18q remains a marker for poor prognosis, and that DCC status has the potential to define a group of patients who may benefit from specific treatment regimes.
Metastasis
The increase in loss of heterozygosity percentages of chromosome 18q21 have long suggested that DCC may be involved in the progression of benign adenomas to malignant carcinomas. DCC has recently been found to suppress metastasis in an experimental environment, but a mechanism for this has not yet been proposed.
Pharmacology
At this juncture, DCC is not a pharmaceutical target. As DCC is not overexpressed in cancer and is present throughout the body, it is not considered a good target for most types of cancer drugs.
DCC is expressed at very low levels through most of the body but at higher levels in many areas of the brain, particularly in dopamine neurons. Recently it has been shown that a sensitizing treatment regimen of amphetamines causes markedly increased levels of DCC and UNC-5 expression on neuron cell bodies. This may indicate that netrin-1 receptors are involved in the lasting effects of exposure to stimulant drugs like amphetamine, and may have some therapeutic value in the field of drug tolerance.
Interactions
Deleted in Colorectal Cancer has been shown to interact with:
APPL1,
Androgen receptor,
Caspase 3,
MAZ,
NTN1 and
PTK2.
History
DCC's biological role in cancer has had a long, controversial history. Although DCC has been studied for many years, a significant amount of the data collected is contradictory and much of the focus has been on getting a clear picture of the basics.
When the genetic abnormalities that occur in advanced colorectal cancer were first identified, one of the most frequent events was found to be loss of heterozygosity (LOH) of region 18q21. One of the first genes sequenced in this region was DCC, and it was subsequently analyzed for tumour suppressor activity. However, the lack of somatic DCC mutations made it seem likely that the nearby SMAD2 and SMAD4 genes were the reason for 18q21 LOH. The fact that DCC heterozygotes had no increased rates of cancer, even when crossed with mice carrying Apc mutations, solidified this viewpoint. The finding that DCC was a receptor for netrin-1 involved in axon guidance initially moved research away from DCC in cancer. It was later realized that DCC may be involved in directing cell motility, which has direct implications for metastatic cancer.
The first direct evidence for DCC as a tumour suppressor gene was published in 1995. Researchers found that addition of DCC to an immortalized cell line suppressed tumorigenicity rather definitively. However no mechanism for this suppression was obvious, and it took several years to propose one.
Nearly ten years after DCC was discovered, studies were published that showed that DCC was involved in apoptosis. Instead of studying loss of DCC as was commonly done, the authors looked at human embryonic kidney cells transfected with DCC. They found an increase in apoptosis that corresponded to DCC expression, which was completely eliminated when netrin-1 was co-transfected or simply added to the media.
When it was understood that DCC apoptosis may also be overcome by netrin-1 overexpression, colorectal cancers were assessed for netrin-1 overexpression, and a small but significant percent of these cancers were found to vastly overexpress the molecule.
References
Further reading
External links
KEGG pathway for colorectal cancer
KEGG pathway for axon guidance
Brain Briefings website - article on axon guidance
BC Cancer Agency - information on colorectal cancer
Receptors
Genes on human chromosome 18 | Netrin receptor DCC | Chemistry | 3,831 |
77,678,036 | https://en.wikipedia.org/wiki/Dorothy%20Pile | Dorothy Lilian Pile (26 July 1902 – 1 February 1993) was a British metallurgist, first woman to be admitted to the Institution of Metallurgists and past president of the Women's Engineering Society.
Early life
Dorothy Lilian Pile was born in Yorkshire on 26 July 1902.
Career
In 1920 Pile's first job was at the Midland Laboratory Guild Ltd., where her father was the chief metallurgist. Her role was in the chemical laboratory as an assistant working on physical testing and metallography before she became more involved in sheet metal research.
In 1949, Pile was appointed as a metallurgist at the Design and Research Centre of the Gold, Silver and Jewellery Trade, in London and later became an industrial liaison officer.
Professional memberships
Pile was the first woman to become a member of the Institution of Metallurgists in 1946. Later in 1983 she also became the first woman to be awarded honorary fellowship. As a thank you she presented the institution with a presidential tankard which is still held by the IOM3 Historical Collection.
Pile was an active member of the Birmingham Metallurgical Association and in 1949 she was elected president. Pile was the first woman to become president of any British metallurgical societies.
Pile became the president of the Women's Engineering Society (WES) in 1954, succeeding Ella Mary Collin in the role. Pile's successor as president was Kathleen Mary Cook. Pile presented WES with a President's Medal on 29 August 1964, featuring the organisation's logo at the time in green enamel.
Pile had various other roles and memberships to industrial societies and would often be the only woman in attendance at society dinners. She is known to have been referred to as the "metallurgical aunt" at such events.
References
1902 births
1993 deaths
British women engineers
Metallurgists
Women's Engineering Society | Dorothy Pile | Chemistry,Materials_science | 381 |
26,974,141 | https://en.wikipedia.org/wiki/Auerbach%27s%20lemma | In mathematics, Auerbach's lemma, named after Herman Auerbach, is a theorem in functional analysis which asserts that a certain property of Euclidean spaces holds for general finite-dimensional normed vector spaces.
Statement
Let (V, ||·||) be an n-dimensional normed vector space. Then there exists a basis {e1, ..., en} of V such that
||ei|| = 1 and ||e∗i|| = 1 for i = 1, ..., n,
where {e∗1, ..., e∗n} is a basis of V* dual to {e1, ..., en}, i.e. e∗i(ej) = δij.
A basis with this property is called an Auerbach basis.
If V is an inner product space (or even infinite-dimensional Hilbert space) then this result is obvious as one may take for {ei} any orthonormal basis of V (the dual basis is then {(ei|·)}).
Geometric formulation
An equivalent statement is the following: any centrally symmetric convex body in Rn has a linear image which contains the unit cross-polytope (the unit ball for the ℓ1 norm) and is contained in the unit cube (the unit ball for the ℓ∞ norm).
Corollary
The lemma has a corollary with implications to approximation theory.
Let V be an n-dimensional subspace of a normed vector space (X, ||·||). Then there exists a projection P of X onto V such that ||P|| ≤ n.
Proof
Let {e1, ..., en} be an Auerbach basis of V and {e∗1, ..., e∗n} the corresponding dual basis. By the Hahn–Banach theorem each e∗i extends to fi ∈ X* such that
||fi|| = 1.
Now set
P(x) = Σ fi(x) ei.
It's easy to check that P is indeed a projection onto V and that ||P|| ≤ n (this follows from the triangle inequality).
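A minimal worked version of these checks, using only the facts stated above (each ||fi|| = 1 and ||ei|| = 1, and fi(ej) = e∗i(ej) = δij on V), is:

```latex
\|P(x)\| = \Bigl\| \sum_{i=1}^{n} f_i(x)\, e_i \Bigr\|
         \le \sum_{i=1}^{n} |f_i(x)|\,\|e_i\|
         \le \sum_{i=1}^{n} \|f_i\|\,\|x\| = n\,\|x\|,
\qquad
P(e_j) = \sum_{i=1}^{n} f_i(e_j)\, e_i = \sum_{i=1}^{n} \delta_{ij}\, e_i = e_j .
```

The first chain gives ||P|| ≤ n, and the second shows that P fixes every ej and hence all of V, so P is indeed a projection onto V.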
References
Joseph Diestel, Hans Jarchow, Andrew Tonge, Absolutely Summing Operators, p. 146.
Joram Lindenstrauss, Lior Tzafriri, Classical Banach Spaces I and II: Sequence Spaces; Function Spaces, Springer 1996, , p. 16.
Reinhold Meise, Dietmar Vogt, Einführung in die Funktionalanalysis, Vieweg, Braunschweig 1992, .
Przemysław Wojtaszczyk, Banach spaces for analysts. Cambridge Studies in Advanced Mathematics, Cambridge University Press, vol. 25, 1991, p. 75.
Banach spaces
Lemmas in analysis | Auerbach's lemma | Mathematics | 585 |
3,148,933 | https://en.wikipedia.org/wiki/IUPAC%20nomenclature%20of%20inorganic%20chemistry | In chemical nomenclature, the IUPAC nomenclature of inorganic chemistry is a systematic method of naming inorganic chemical compounds, as recommended by the International Union of Pure and Applied Chemistry (IUPAC). It is published in Nomenclature of Inorganic Chemistry (which is informally called the Red Book). Ideally, every inorganic compound should have a name from which an unambiguous formula can be determined. There is also an IUPAC nomenclature of organic chemistry.
System
The names "caffeine" and "3,7-dihydro-1,3,7-trimethyl-1H-purine-2,6-dione" both signify the same chemical compound. The systematic name encodes the structure and composition of the caffeine molecule in some detail, and provides an unambiguous reference to this compound, whereas the name "caffeine" simply names it. These advantages make the systematic name far superior to the common name when absolute clarity and precision are required. However, for the sake of brevity, even professional chemists will use the non-systematic name almost all of the time, because caffeine is a well-known common chemical with a unique structure. Similarly, H2O is most often simply called water in English, though other chemical names do exist.
Single atom anions are named with an -ide suffix: for example, H− is hydride.
Compounds with a positive ion (cation): The name of the compound is simply the cation's name (usually the same as the element's), followed by the anion. For example, NaCl is sodium chloride, and CaF2 is calcium fluoride.
Cations of transition metals able to take multiple charges are labeled with Roman numerals in parentheses to indicate their charge. For example, Cu+ is copper(I), Cu2+ is copper(II). An older, deprecated notation is to append -ous or -ic to the root of the Latin name to name ions with a lesser or greater charge. Under this naming convention, Cu+ is cuprous and Cu2+ is cupric. For naming metal complexes see the page on complex (chemistry).
Oxyanions (polyatomic anions containing oxygen) are named with -ite or -ate, for a lesser or greater quantity of oxygen, respectively. For example, NO2− is nitrite, while NO3− is nitrate. If four oxyanions are possible, the prefixes hypo- and per- are used: hypochlorite is ClO−, perchlorate is ClO4−.
The prefix bi- is a deprecated way of indicating the presence of a single hydrogen ion, as in "sodium bicarbonate" (NaHCO3). The modern method specifically names the hydrogen atom. Thus, NaHCO3 would be pronounced sodium hydrogen carbonate.
Positively charged ions are called cations and negatively charged ions are called anions. The cation is always named first. Ions can be metals, non-metals or polyatomic ions. Therefore, the name of the metal or positive polyatomic ion is followed by the name of the non-metal or negative polyatomic ion. The positive ion retains its element name whereas for a single non-metal anion the ending is changed to -ide.
Example: sodium chloride, potassium oxide, or calcium carbonate.
When the metal has more than one possible ionic charge or oxidation number the name becomes ambiguous. In these cases the oxidation number (the same as the charge) of the metal ion is represented by a Roman numeral in parentheses immediately following the metal ion name. For example, in uranium(VI) fluoride the oxidation number of uranium is 6. Another example is the iron oxides. FeO is iron(II) oxide and Fe2O3 is iron(III) oxide.
An older system used prefixes and suffixes to indicate the oxidation number, according to the following scheme: hypo- ... -ous for the lowest oxidation state, then -ous, then -ic, and per- ... -ic for the highest oxidation state.
Thus the four oxyacids of chlorine are called hypochlorous acid (HOCl),
chlorous acid (HOClO), chloric acid (HOClO2) and perchloric acid (HOClO3), and their respective conjugate bases are hypochlorite, chlorite, chlorate and perchlorate ions. This system has partially fallen out of use, but survives in the common names of many chemical compounds: the modern literature contains few references to "ferric chloride" (instead calling it "iron(III) chloride"), but names like "potassium permanganate" (instead of "potassium manganate(VII)") and "sulfuric acid" abound.
Traditional naming
Simple ionic compounds
An ionic compound is named by its cation followed by its anion. See polyatomic ion for a list of possible ions.
For cations that take on multiple charges, the charge is written using Roman numerals in parentheses immediately following the element name. For example, Cu(NO3)2 is copper(II) nitrate, because the charge of two nitrate ions (NO3−) is 2 × −1 = −2, and since the net charge of the ionic compound must be zero, the Cu ion has a 2+ charge. This compound is therefore copper(II) nitrate. In the case of cations with a +4 oxidation state, the only acceptable format for the Roman numeral 4 is IV and not IIII.
The Roman numerals in fact show the oxidation number, but in simple ionic compounds (i.e., not metal complexes) this will always equal the ionic charge on the metal. For more details, see selected pages from the IUPAC rules for naming inorganic compounds.
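As a concrete illustration of this charge-balancing rule, the short Python sketch below derives the Roman numeral from the anion charges; the function name and the small anion table are purely illustrative and not part of any IUPAC software.

```python
# Illustrative sketch: infer the metal's Roman-numeral charge by balancing
# the anion charges, as in Cu(NO3)2 -> copper(II) nitrate.

ROMAN = {1: "I", 2: "II", 3: "III", 4: "IV", 5: "V", 6: "VI", 7: "VII"}

# Hypothetical mini-table of anion charges used only for this example.
ANION_CHARGE = {"nitrate": -1, "sulfate": -2, "chloride": -1, "oxide": -2}

def metal_oxidation_state(anion: str, anion_count: int, metal_count: int = 1) -> str:
    """Return the Roman numeral that balances the total anion charge."""
    total_negative = ANION_CHARGE[anion] * anion_count
    # The net charge of the compound must be zero, so the metal ions
    # carry the opposite charge, shared equally among them.
    per_metal = -total_negative // metal_count
    return ROMAN[per_metal]

print(metal_oxidation_state("nitrate", 2))   # II  -> copper(II) nitrate
print(metal_oxidation_state("oxide", 3, 2))  # III -> iron(III) oxide, Fe2O3
```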
List of common ion names
Monatomic anions:
chloride
sulfide
phosphide
Polyatomic ions:
ammonium
hydronium
nitrate
nitrite
hypochlorite
chlorite
chlorate
perchlorate
sulfite
sulfate
thiosulfate
hydrogen sulfite (or bisulfite)
hydrogen carbonate (or bicarbonate)
carbonate
phosphate
hydrogen phosphate
dihydrogen phosphate
chromate
dichromate
borate
arsenate
oxalate
cyanide
thiocyanate
permanganate
Hydrates
Hydrates are ionic compounds that have absorbed water. They are named as the ionic compound followed by a numerical prefix and -hydrate. The numerical prefixes used are listed below (see IUPAC numerical multiplier):
mono- (1)
di- (2)
tri- (3)
tetra- (4)
penta- (5)
hexa- (6)
hepta- (7)
octa- (8)
nona- (9)
deca- (10)
For example, CuSO4·5H2O is "copper(II) sulfate pentahydrate".
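The prefix-plus-hydrate pattern above is mechanical enough to express as a tiny Python sketch; the function and dictionary names here are illustrative only.

```python
# Illustrative sketch: build a hydrate name from the anhydrous ionic
# compound name and the number of water molecules, using the IUPAC
# numerical prefixes listed above.

HYDRATE_PREFIX = {
    1: "mono", 2: "di", 3: "tri", 4: "tetra", 5: "penta",
    6: "hexa", 7: "hepta", 8: "octa", 9: "nona", 10: "deca",
}

def hydrate_name(ionic_name: str, n_water: int) -> str:
    """e.g. hydrate_name('copper(II) sulfate', 5) -> 'copper(II) sulfate pentahydrate'"""
    return f"{ionic_name} {HYDRATE_PREFIX[n_water]}hydrate"

print(hydrate_name("copper(II) sulfate", 5))   # copper(II) sulfate pentahydrate
print(hydrate_name("sodium carbonate", 10))    # sodium carbonate decahydrate
```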
Molecular compounds
Inorganic molecular compounds are named with a prefix (see list above) before each element. The more electronegative element is written last and with an -ide suffix. For example, H2O (water) can be called dihydrogen monoxide. Organic molecules do not follow this rule. In addition, the prefix mono- is not used with the first element; for example, SO2 is sulfur dioxide, not "monosulfur dioxide". Sometimes prefixes are shortened when the ending vowel of the prefix "conflicts" with a starting vowel in the compound. This makes the name easier to pronounce; for example, CO is "carbon monoxide" (as opposed to "monooxide").
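A similar, purely illustrative sketch for binary molecular compounds applies the prefix rules just described, including the allowed "monoxide" elision; the helper names are hypothetical.

```python
# Illustrative sketch: compose a binary molecular compound name with
# multiplying prefixes, dropping "mono-" on the first element and using
# the allowed "monoxide" elision mentioned above.

PREFIX = {1: "mono", 2: "di", 3: "tri", 4: "tetra", 5: "penta",
          6: "hexa", 7: "hepta", 8: "octa", 9: "nona", 10: "deca"}

def binary_name(first: str, n_first: int, second_ide: str, n_second: int) -> str:
    """e.g. binary_name('sulfur', 1, 'oxide', 2) -> 'sulfur dioxide'"""
    left = first if n_first == 1 else PREFIX[n_first] + first
    right = PREFIX[n_second] + second_ide
    # General usage allows eliding the double vowel in "monooxide".
    right = right.replace("monooxide", "monoxide")
    return f"{left} {right}"

print(binary_name("sulfur", 1, "oxide", 2))    # sulfur dioxide
print(binary_name("carbon", 1, "oxide", 1))    # carbon monoxide
print(binary_name("nitrogen", 2, "oxide", 1))  # dinitrogen monoxide
```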
Common exceptions
The "a" of the penta- prefix is not dropped before a vowel. As the IUPAC Red Book 2005 page 69 states, "The final vowels of multiplicative prefixes should not be elided (although 'monoxide', rather than 'monooxide', is an allowed exception because of general usage)."
There are a number of exceptions and special cases that violate the above rules. Sometimes the prefix is left off the initial atom: I2O5 is known as iodine pentaoxide, but it should be called diiodine pentaoxide. N2O3 is called nitrogen sesquioxide (sesqui- means 1½).
The main oxide of phosphorus is called phosphorus pentaoxide. It should actually be diphosphorus pentaoxide, but it is assumed that there are two phosphorus atoms (P2O5), as they are needed in order to balance the oxidation numbers of the five oxygen atoms. However, people have known for years that the real form of the molecule is P4O10, not P2O5, yet it is not normally called tetraphosphorus decaoxide.
In writing formulas, ammonia is NH3 even though nitrogen is more electronegative (in line with the convention used by IUPAC as detailed in Table VI of the red book). Likewise, methane is written as CH4 even though carbon is more electronegative (Hill system).
Nomenclature of Inorganic Chemistry
Nomenclature of Inorganic Chemistry, commonly referred to by chemists as the Red Book, is a collection of recommendations on IUPAC nomenclature, published at irregular intervals by the IUPAC. The last full edition was published in 2005, in both paper and electronic versions.
See also
IUPAC nomenclature
IUPAC nomenclature of organic chemistry
List of inorganic compounds
Water of crystallization
IUPAC nomenclature of inorganic chemistry 2005 (the Red Book)
Nomenclature of Organic Chemistry (the Blue Book)
Quantities, Units and Symbols in Physical Chemistry (the Green Book)
Compendium of Chemical Terminology (the Gold Book)
Compendium of Analytical Nomenclature (the Orange Book)
References
External links
IUPAC website - Nomenclature
IUPAC (old site) Red Book
IUPAC (old site) Red Book - PDF (2005 Recommendations)
Recommendations 2000-Red Book II (incomplete)
IUPAC (old site) Nomenclature Books Series (commonly known as the "Colour Books")
ChemTeam Highschool Tutorial
Chemical nomenclature
Inorganic chemistry
Chemistry reference works | IUPAC nomenclature of inorganic chemistry | Chemistry | 2,056 |
33,025,450 | https://en.wikipedia.org/wiki/KT90 | The KT90 is a vacuum tube used in audio applications. Typically, it is used in hi-fi or electric guitar amplifier applications. KT90 was developed by Elektronska Industrija Niš (EI). KT90 is designed by Blagomir Bukumira, a leading engineer at EI.
Features
The KT90, or in full, "Kinkless Tetrode 90", is a beam power tetrode and features the same octal socket as its smaller variant, the KT88. It may therefore be used as a substitute, given appropriate re-biasing when used in push-pull configuration.
The KT90 is currently manufactured by Electro-Harmonix, who claim that, despite its different construction, it possesses similar sound characteristics to the EL34 valve. Semi-formal research has been conducted by U.K. supplier Watford Valves who have published a test report. (This research is described here as "semi-formal" because it consists primarily of listening evaluations which may be subjective, rather than electrical analyses of performance parameters in either numerical or graphical form.)
References
Vacuum tubes
Guitar amplification tubes | KT90 | Physics | 243 |
47,474,443 | https://en.wikipedia.org/wiki/Alfonso%20Farina | Alfonso Farina (born January 25, 1948) is an Italian electronic engineer and former industry manager. He is most noted for the development of the track while scan techniques for radars and generally for the development of a wide range of signal processing techniques used for sensors where tracking plays an essential role. He is author of about 1000 publications. His work was aimed to a synergistic cooperation between industry and academy.
Biography
Alfonso Farina was born in 1948 in Petrella Salto, a small town near Rieti. He obtained a doctoral (laurea) degree in electronic engineering in 1973 at the University La Sapienza in Rome. In 1974 he joined Selenia, a Finmeccanica company that later became Selex ES. There he held the role of director of the analysis of integrated systems unit and then chief engineer of the large business systems division. More recently, he was the senior VP CTO of the company and then senior advisor to the CTO. From 1979 to 1985 he was also professor ("incaricato") of radar techniques at the University of Naples.
He retired in October 2014 but, currently, works as a consultant.
Work
The activity of Alfonso Farina spans a wide range of topics in the area of radars and sensors. His pioneering work on track while scan, now widely used in all radars, was recounted in a classic set of two books
that, owing to their widespread relevance, have also been published in Russian and Chinese translations. A more recent publication by him also covers ideas and applications in adaptive radar signal processing.
He has also been the contributor to the article on ECCM, invited by Merrill Skolnik, in the second edition of the Radar Handbook (Ch. 9)
and the third (Ch. 24)
Together with Artenio Russo, he has generalised the well-known Swerling target fluctuation cases, which become special cases of the new formulation.
Together with Sergio Barbarossa, he introduced time-frequency distributions in the analysis of synthetic-aperture radar signals. The methods are useful, in particular, for the detection and imaging of objects moving on the Earth, observed from airborne or spaceborne synthetic aperture radars. The approach was later extended to multi-antenna systems, giving rise to space-time-frequency processing.
He is considered the "father" of Italian industrial PCL radar. From 2004 to 2014, he led the team of engineers in conceiving, designing and implementing successive generations of improved PCL radar systems, extensively tested over several years.
Together with Hernandez and Ristic, he extended the theory and calculation of Posterior Cramer-Rao Lower Bound (PCRLB) to the realistic case of detection probability less than 1 and probability of false alarm greater than 0, with practical applications to target tracking.
Together with Luigi Chisci and Giorgio Battistelli he has developed target tracking for radar systems.
In the last decade, he has applied his expertise in signal processing to the cyber security of integrated systems.
He was the organizer and general chairman of the 2008 IEEE-AESS Radar Conference held in Rome. This was the first time that the conference had been held outside the US since its inception in 1974.
From 2017 to 2023 he was the Chair of the Italy Section Chapter, IEEE AESS-10.
Since 2017 (a three-year term), he has been on the Editorial Board of the IEEE Signal Processing Magazine.
He is a Visiting Professor at University College London and Cranfield University in the UK.
Since 2014, he has worked as a consultant.
Recently, he gave an interview for the IEEE Aerospace and Electronic Systems Magazine, with Fulvio Gini hosting, recounting his professional achievements and more.
In October 2018 he was interviewed at Rai Storia for the "70° anniversario di Leonardo Company" ("70th anniversary of Leonardo Company").
He is active in research on quantum radar. Recently, he has been an associate editor of IEEE Aerospace and Electronic Systems Magazine for a special issue on quantum radar, published in two parts, together with Marco Frasca and Bhashyam Balaji.
Currently, he is ranked in the list of the top 2% of scientists in the world.
He is President of the Radar & Sensors Academy of Leonardo S.p.A. Electronic Division.
He is President of the Underwater and Sensor Systems Academy of Leonardo S.p.A. Electronic Division.
Awards and honors
Farina has been an IEEE Fellow since 2000 and an International Fellow of the Royal Academy of Engineering since 2005, the latter with the citation "Distinguished for outstanding and continuous innovation in the development of radar signal and data processing techniques and application of these findings in practical systems". He received the award from Prince Philip, Duke of Edinburgh. He has been a Fellow of the IET since 1997 and a Fellow of EURASIP since 2010. Since 2020, he has been a fellow member of the European Academy of Science.
In November 2020, he was named "Académico Correspondiente de la Real Academia de Ingeniería de España" (Corresponding Academician of the Royal Academy of Engineering of Spain).
He is on the Board of Governors of the IEEE Aerospace and Electronic Systems Society (2022-2024).
He is one of the IEEE Aerospace and Electronic Systems Society standing committee chairs, responsible for "Member Service: HISTORY".
He won the following awards:
Fred Nathanson Memorial Radar Award, 1987, with the motivation
For development of radar data processing techniques.
M. Barry Carlton Award by IEEE Aerospace and Electronic Systems Society in 2001, 2003 and 2013. This award recognizes the best paper published in the IEEE Transactions on Aerospace and Electronic Systems for the given year.
Honour of Maestro del Lavoro, with the decoration "Stella al Merito del Lavoro", presented to him by the President of the Italian Republic in recognition of his outstanding professional career, 2003.
First Prize Award for Innovation Technology of the Finmeccanica Group, 2004, as leader of the winning team, presented by the Italian Ministry of Instruction, University and Scientific Research.
2006: Annual European Group Technical Achievement Award 2006 by the EURASIP “for development and application of adaptive signal processing techniques in practical radar systems”.
IEEE Dennis J. Picard Medal for Radar Technologies and Applications, 2010, with the motivation
For continuous, innovative, theoretical and practical contributions to radar systems and adaptive signal processing techniques.
Co-recipient of Oscar Masi Award for the AULOS “green” radar by the Italian Association for Industrial Research (AIRI) (2012).
IET Achievement Medals, 2014, with the motivation
For outstanding contributions to radar system design, signal, data and image processing and data fusion.
IEEE Signal Processing Society Industrial Leader Award, 2017 (presented on 2018), with the motivation
For contributions to radar array processing and industrial leadership.
Honorary chair of IEEE RadarConf 2020, Florence.
2019 Christian Hülsmeyer Award from the German Institute of Navigation (DGON), with the motivation
In appreciation of his outstanding contribution to radar research and education.
2020 IEEE AESS Pioneer Award, with the motivation:
For pioneering contributions to the analysis, design, development, and experimentation of digital-based adaptive radar systems.
2023: International Member of the United States National Academy of Engineering (NAE), in recognition of distinguished contributions to engineering, "for contributions to the development and deployment of advanced radar systems and technology"
2020-2024: Member of the scientific committee of the IEEE Italy Chapter of the Signal Processing Society
22 March 2024: Award of the Research Doctorate Honoris Causa in "ICT Information and Communication Technologies" at the University of Palermo, Department of Engineering
10 July 2024: He gave the gala dinner speech at the ISIF-IEEE International Conference on Fusion 2024, Venice, Italy
References
In June 2023 Farina collected the list of titles of his first 1000 scientific publications in a file that is freely available
External links
Green Radar State of Art: theory, practice and way ahead. Plenary talk given at 2014 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) held in Florence (Italy).
The future of radar: the evolution of a technology with a long history - We talk with Alfonso Farina, one of the fathers of modern radar.
Biography in Engineering and Technology History Wiki.
Interview with Pat Hindle, Editor of Microwave Journal.
Radar Role: From the Underground to Outer Space (ICASSP 2020, Barcellona, Spain).
1948 births
Electronics engineers
Italian engineers
Systems engineers
Fellows of the IEEE
Fellows of the Royal Academy of Engineering
Living people
Academic staff of the University of Naples Federico II | Alfonso Farina | Engineering | 1,721 |
77,310,528 | https://en.wikipedia.org/wiki/Ceria%20based%20thermochemical%20cycles | A ceria based thermochemical cycle is a type of two-step thermochemical cycle that uses as oxygen carrier cerium oxides (/) for synthetic fuel production such as hydrogen or syngas. These cycles are able to obtain either hydrogen () from the splitting of water molecules (), or also syngas, which is a mixture of hydrogen () and carbon monoxide (), by also splitting carbon dioxide () molecules alongside water molecules. These type of thermochemical cycles are mainly studied for concentrated solar applications.
Types of cycles
These cycles are based on the two-step redox thermochemical cycle. In the first step, a metal oxide, such as ceria, is reduced by providing heat to the material, liberating oxygen. In the second step, a stream of steam oxidises the previously reduced material back to its starting state, thereby closing the cycle. Depending on the stoichiometry of the reactions, which is the relation between the reactants and products of the chemical reaction, these cycles can be classified into two categories.
Stoichiometric ceria cycle
The stoichiometric ceria cycle uses the cerium(IV) oxide (CeO2) and cerium(III) oxide (Ce2O3) metal oxide pair as the oxygen carrier. This cycle is composed of two steps:
A reduction step, to liberate oxygen (O2) from the material:
2 CeO2 → Ce2O3 + ½ O2
And an oxidation step, to split the water molecules into hydrogen (H2) and oxygen, and/or the carbon dioxide (CO2) molecules into carbon monoxide (CO) and oxygen:
The reaction for hydrogen production:
Ce2O3 + H2O → 2 CeO2 + H2
The reaction for carbon monoxide production:
Ce2O3 + CO2 → 2 CeO2 + CO
The reduction step is an endothermic reaction that takes place at temperatures around 2,300 K (2,027 °C) in order to ensure a sufficient reduction. In order to enhance the reduction of the material, low partial pressures of oxygen are required. To obtain these low partial pressures there are two main possibilities: either vacuum pumping the reactor chamber, or using a chemically inert sweep gas, such as nitrogen (N2) or argon (Ar).
On the other hand, the oxidation step is an exothermic reaction that can take place at a considerable range of temperatures, from 400 °C up to 1,000 °C. In this case, depending on the fuel to be produced, a stream of steam, carbon dioxide or a mixture of both is introduced to the reaction chamber for hydrogen, carbon monoxide or syngas production respectively. The temperature difference between the two steps presents a challenge for heat recovery, since the existing solid to solid heat exchangers are not highly efficient.
The thermal energy required to achieve these high temperatures is provided by concentrated solar radiation. Due to the high concentration ratio required to achieve this high temperatures, the main technologies used are concentrating solar towers (CST) or parabolic dishes.
The main disadvantage of the stoichiometric ceria cycle lies in the fact that the reduction reaction temperature of cerium(IV) oxide (CeO2) is in the same range as its melting temperature (1,687–2,230 °C), which in the end results in some melting and sublimation of the material and can produce reactor failures such as deposition on the window or sintering of the particles.
Non-stoichiometric ceria cycle
The non-stoichiometric ceria cycle uses only cerium(IV) oxide and, instead of totally reducing it to cerium(III) oxide, performs a partial reduction of it. The extent of this reduction is commonly expressed as the reduction extent and is indicated as δ. In this way, by partially reducing ceria, oxygen vacancies are created in the material. The two steps are formulated as such:
Reduction reaction:
CeO2 → CeO2−δ + (δ/2) O2
Oxidation reaction:
For hydrogen production:
CeO2−δ + δ H2O → CeO2 + δ H2
For carbon monoxide production:
CeO2−δ + δ CO2 → CeO2 + δ CO
The main advantage of this cycle is that the reduction temperature is lower, around 1,773 K (1,500 °C), which alleviates the high temperature demand on the materials and avoids certain problems such as sublimation or sintering. Temperatures above this would result in the full reduction of the material to cerium(III) oxide, which should be avoided.
In order to reduce the thermal losses of the cycle, the temperature difference between the reduction and oxidation chambers needs to be optimized. This results in partially oxidized states, rather than a full oxidation of the ceria. Due to this, the chemical reactions are commonly expressed considering two reduction extents, δox (after oxidation) and δred (after reduction):
Reduction reaction:
CeO2−δox → CeO2−δred + ((δred − δox)/2) O2
Oxidation reaction:
For hydrogen production:
CeO2−δred + (δred − δox) H2O → CeO2−δox + (δred − δox) H2
For carbon monoxide production:
CeO2−δred + (δred − δox) CO2 → CeO2−δox + (δred − δox) CO
The main disadvantage of these cycles is the low reduction extent, due to the low non-stoichiometry, which leaves fewer vacancies for the oxidation process and in the end translates into lower fuel production rates.
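To give a sense of the scale involved, the following rough Python sketch applies the oxidation stoichiometry above (one mole of hydrogen produced per mole of oxygen vacancies re-filled); the reduction-extent values used in the example are assumed for illustration only.

```python
# Illustrative sketch: specific hydrogen yield per cycle implied by the
# oxidation stoichiometry above (one mole of H2 per mole of oxygen
# vacancies re-filled). The delta values below are example assumptions.

M_CEO2 = 172.115  # g/mol, molar mass of CeO2

def h2_yield_per_kg_ceria(delta_red: float, delta_ox: float) -> float:
    """Moles of H2 produced per kg of ceria in one reduction-oxidation cycle."""
    delta_swing = delta_red - delta_ox          # usable non-stoichiometry swing
    moles_ceria_per_kg = 1000.0 / M_CEO2        # about 5.8 mol CeO2 per kg
    return delta_swing * moles_ceria_per_kg     # mol H2 per kg per cycle

# Example: a swing from delta_ox = 0.00 to delta_red = 0.05 (assumed values)
print(round(h2_yield_per_kg_ceria(0.05, 0.00), 3), "mol H2 per kg ceria per cycle")
```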
Due to the properties of ceria, other materials are being studied, mainly perovskites based on ceria, to improve the thermodynamic and chemical properties of the metal oxide.
Methane driven non-stoichiometric ceria cycle
Since the temperatures needed to achieve the reduction of the material are considerably high, the reduction of the cerium oxide can be enhanced by providing methane to the reaction. This significantly reduces the temperatures required to achieve the reduction of ceria, to between 800 and 1,000 °C, while also producing syngas in the reduction reactor. In this case, the reduction reaction goes as follows:
CeO2 + δ CH4 → CeO2−δ + δ CO + 2δ H2
The main disadvantages of this cycle are carbon deposition on the material, which eventually deactivates it after several cycles so that it must be replaced, and the need to acquire the methane feedstock.
Types of reactors
Depending on the type and topology of the reactors, the cycles will function either in continuous production or in batch production. There are two main types of reactors for these specific cycles:
Monolithic reactors
These types of reactors consist of a piece of solid material, shaped as a reticulated porous ceramic (RPC) foam in order to increase both the surface area and the solar radiation penetration. These reactors are shaped as cavity receivers in order to reduce the thermal losses due to reradiation. They usually have a quartz (fused silica) window to let the solar radiation into the cavity.
Since the metal oxide is a solid structure, both reactions must be done in the same reactor, which leads to a discontinuous production process, carrying out one step after the other. To avoid these stops in production, multiple reactors can be arranged to approximate a continuous production process. This is usually referred to as a batch process. The intention is to always have one or more reactors operating in the oxidation step at any given time, hence always generating hydrogen.
Some new reactor concepts are being studied, in which the RPCs can be moved from one reactor to another, in order to have one single reduction reactor.
Solid particles reactors
These types of reactors try to solve the discontinuity problem of the cycle by using solid particles of the metal oxide instead of solid structures. These particles can be moved from the reduction reactor to the oxidation reactor, which allows continuous production of fuel. Many types of reactors work with solid particles, from free-falling receivers to packed beds, fluidized beds and rotary kilns.
The main disadvantage of this approach is that, due to the high temperatures achieved, the solid particles are susceptible to sintering, a process in which small particles partially melt and stick to one another, creating bigger particles, which reduces their surface area and makes the transportation process more difficult.
See also
Thermochemical cycle
Solar fuel
Sulfur–iodine cycle
Hybrid sulfur cycle
References
External links
HYDROSOL project. Retrieved 07/07/2024
Sun to Liquid project Retrieved 11/07/2024
Chemical reactions
Hydrogen production
Cerium
Catalysis | Ceria based thermochemical cycles | Chemistry | 1,599 |
73,471,691 | https://en.wikipedia.org/wiki/Berkelium%20tetrafluoride | Berkelium tetrafluoride is a binary inorganic compound of berkelium and fluorine with the chemical formula .
Synthesis
Berkelium tetrafluoride may be formed by the fluorination of berkelium trioxide, dioxide, or trifluoride with elemental fluorine at elevated temperatures, for example:
2 BkF3 + F2 → 2 BkF4
BkO2 + 2 F2 → BkF4 + O2
Physical properties
Berkelium(IV) fluoride forms light brown crystals with a monoclinic crystal structure of the uranium tetrafluoride type. Cell parameters: a = 1.2396 nm, b = 1.0466 nm, c = 0.8118 nm, β = 126.33°.
Chemical properties
Berkelium tetrafluoride is reduced by lithium at elevated temperatures to metallic berkelium:
BkF4 + 4 Li → Bk + 4 LiF
References
Fluorides
Berkelium compounds
Actinide halides | Berkelium tetrafluoride | Chemistry | 170 |
43,112,814 | https://en.wikipedia.org/wiki/Silicon%20Photonics%20Cloud | Silicon Photonics Cloud (SiCloud) is an instructional web-based research tool for silicon photonics developed at UCLA under the National Science Foundation-funded CIAN research center.
Introduction
SiCloud provides instructional and research web-based tools. Such interactive learning tools offer two important benefits that enhance traditional teaching methods: they can be accessed by anyone from anywhere, and they engage the brain in a way different from merely reading, thereby enhancing and reinforcing the learning experience.
Silicon photonics is a platform for manufacturing low cost and high bandwidth communication components for data centers and distributed computing, storage and network systems. It has transitioned from research to industry with participation by most major semiconductor companies as well as myriad startups. Understanding this field may be challenging for researchers and students alike, as silicon photonics involves a wide range of disciplines, including materials science, semiconductor physics, electronics and waveguide optics. This field has been recognized by Forbes magazine as "The $100B Opportunity".
Features
This web-based calculator is an interactive analysis tool for the optical properties of silicon and related materials. It is designed to be a one-stop resource for students, researchers and design engineers. The first and most basic aspect of silicon photonics is the Material Parameters, which provide the foundation for the Device, Sub-System and System levels.
In the Material Parameters tab, one may study the physical properties of the materials commonly used in silicon photonics. SiCloud includes the common dielectrics and semiconductors for waveguide core, cladding, and photodetection, as well as metals for electrical contacts. In the Main Graph, one may examine several physical parameters of interest for each material, in different wavelength ranges, and choose between frequency and free-space wavelength for convenience. SiCloud also includes citations for the original data so that users may gather the raw data and verify its accuracy and the conditions under which it was obtained.
For silicon in particular, SiCloud includes a large number of parameters beyond refractive index and absorption coefficient, including the thermo-optic coefficient, Raman gain coefficient, Kerr coefficient, and two-photon absorption coefficient. One important consideration of a researcher is the optical loss in a given length of material, and so SiCloud provides a loss graph. Here, one may observe total material absorption, but also consider the reflection loss due to, e.g., coupling, for a variety of materials. With two facets we can even see Fabry-Perot resonances.
History
SiCloud was developed by UCLA graduate student Peter DeVore and a team of researchers at the Jalali-Lab. It is part of the educational effort funded by the Center for Integrated Access Networks (CIAN), an NSF Engineering Research Center. It debuted at the 2014 CIAN Annual Meeting in Tucson, Arizona, on May 14, 2014. SiCloud is a work in progress and its capability is being expanded.
References
University of California, Los Angeles
Silicon photonics | Silicon Photonics Cloud | Materials_science | 607 |
35,977,305 | https://en.wikipedia.org/wiki/Vitreous%20china | Vitreous china is an enamel coating that is applied to ceramics, particularly porcelain, after they have been fired, though the name can also refer to the finished piece as a whole. The coating makes the porcelain tougher, denser, and shinier, and it is a common choice for items such as toilets and sink basins.
History
Vitreous china's development tracks closely with that of other vitreous materials such as glass, owing to the similar production process in terms of the materials needed and the way they are prepared and fired. Because an enamel is essentially glass applied to cover a substrate or surface, its production differs from that of glass by only a few steps.
The earliest known objects to be covered with a glaze are glazed stones made for jewellery, having been manufactured in Egypt as early as 4000 BC, in Mesopotamia from 5500-4000 BC, in Europe from 1400 BC and in the Indus Valley from 4500-3500 BC. The first instance of applying an enamel to a substrate was in 3500 BP in Mycenae and Cyprus. The champlevé technique, which involves first placing a glass enamel powder on a substrate and then firing it, was the predecessor technique for applying vitreous china and was used by the Celts from the 1st century BC.
Uses
Vitreous china is used in a variety of household and sanitaryware items such as basins, toilets, bidets, urinals and bathtubs. Items that use vitreous china are usually ones that are best when kept clean and sanitary, with which a coating of vitreous china enamel helps. Those same vitreous china items also benefit from having stains and spots removed easily due to the nature of their use.
Vitreous china can be occasionally found applied to kitchen countertops and related fixtures. The low occurrence is due to vitreous china’s fragility when exposed to blunt force from crockery and other kitchen items. Instead, plastic and steel are examples of more common kitchen fixture materials.
Vitreous china can also be used for more aesthetic purposes. Items applied with a vitreous china enamel for this purpose include plates and other chinaware in china painting, and Fabergé eggs.
Structure
Vitreous china, like other enamels, is a glass-particulate composite, meaning it is glass (50-70 wt%) with non-silicate particles dispersed through it that give it different properties. The vitreous enamel mixture includes clay, which binds the mixture and gives the flexibility needed to shape it during firing; quartz, which reduces shrinkage; and feldspar, which increases the liquidity of the mixture when fired into its vitreous phase, ensuring low porosity in the final product.
After vitrification, vitreous china contains mullite, a crystal which forms as a result of reactions taking place in clay. Not all of the quartz and feldspar liquify during firing, and so some of it remains as “relict” (its pre-firing powder form) in the final vitrified product.
Physical properties
The general purpose of applying vitreous china enamel to a ceramic item like a washbasin is to provide protection for it, and it may be used secondarily for aesthetic purposes. In covering its porcelain substrate, vitreous china gives anti-corrosive properties and helps against weathering and heat. Its low porosity also prevents bacteria from entering the surface of the ceramic material, and so keeps them from building up, and means it absorbs less than 0.5% water. The porosity of vitreous china can be reduced by increasing its feldspar content.
Vitreous china is translucent, and this, along with its protective nature, has helped preserve artefacts such as jewellery in essentially the same state for thousands of years.
The viscosity of vitreous china in its liquid phase during firing depends on its constitutive ratio of glass to other particles – a higher quantity of particles results in higher viscosity. Vitreous china's density after vitrification ranges from 1.83 to 2.48 grams per cubic centimetre. Its Poisson ratio is 0.5. Its flexural strength is 400–800 kgf/cm².
Production
In most cases, vitreous china consists of a mix of clay, feldspar, flint and quartz sand. This mix is usually fired once at 1200–1300 °C for most applications, and twice fired for use in crockery with a first firing at 900–950 °C and a second firing at 1200–1250 °C. Crockery is fired twice to reduce its porosity. To make the mix more workable, water is usually added. The firing temperature chosen is important, as vitreous and ceramic bodies are less strong and more porous if under or over fired.
Creep is the unwanted distortion a vitreous body undergoes during firing, and is influenced by the substance's rheological properties. Such properties depend on the particles in vitreous china's glass mixture, which can vary in type, size (1–50 μm), distribution or shape. The amount of mullite in vitreous china, which is determined by the amount of clay in the starting mixture, is the primary determinant for creep rate during firing.
References
Ceramic materials
Plumbing
Porcelain | Vitreous china | Engineering | 1,106 |
13,464,676 | https://en.wikipedia.org/wiki/Pile%20%28abstract%20data%20type%29 | In computer science, a pile is an abstract data type for storing data in a loosely ordered way. There are two different usages of the term; one refers to an ordered double-ended queue, the other to an improved heap.
Ordered double-ended queue
The first version combines the properties of the double-ended queue (deque) and a priority queue and may be described as an ordered deque.
An item may be added to the head of the list if the new item is valued less than or equal to the current head or to the tail of the list if the new item is greater than or equal to the current tail. Elements may be removed from both the head and the tail.
Piles of this kind are used in the "UnShuffle sort" sorting algorithm.
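The insertion rule above lends itself to a compact illustration. The sketch below is a minimal, hypothetical Python rendering of such a pile and of a simple distribute-and-merge sort in the spirit of UnShuffle sort; it is not the patented structure or the published algorithm, and all names are illustrative.

```python
import heapq
from collections import deque

class Pile:
    """Ordered double-ended queue: an item may only be added at an end where it
    preserves head <= ... <= tail ordering (a minimal illustrative sketch)."""

    def __init__(self):
        self.items = deque()

    def try_add(self, value):
        """Add value at the head or tail if the ordering allows it; return True on success."""
        if not self.items:
            self.items.append(value)
            return True
        if value <= self.items[0]:
            self.items.appendleft(value)    # becomes the new head
            return True
        if value >= self.items[-1]:
            self.items.append(value)        # becomes the new tail
            return True
        return False                        # would have to go "inside" the pile

def pile_sort(data):
    """Distribute items into piles, then k-way merge the (already sorted) piles."""
    piles = []
    for x in data:
        for p in piles:
            if p.try_add(x):
                break
        else:
            fresh = Pile()
            fresh.try_add(x)
            piles.append(fresh)
    return list(heapq.merge(*(list(p.items) for p in piles)))

print(pile_sort([5, 1, 4, 2, 8, 3]))   # [1, 2, 3, 4, 5, 8]
```

The final merge works because each pile is kept sorted by construction, so the sort reduces to a k-way merge of sorted runs.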
Improved heap
The second version is a subject of patents and improves the heap data structure.
References
Abstract data types | Pile (abstract data type) | Mathematics | 192 |
31,084,540 | https://en.wikipedia.org/wiki/PSA%20HYbrid4 | HYbrid4 is PSA Peugeot-Citroën's in-house developed through-the-road (TTR) hybrid powertrain system, shared between the two manufacturers. It takes the form of a diesel engine powering the front wheels coupled with an electric motor powering the rear wheels to provide a 4WD hybrid with a short fully electric range. The system made its production debut on the Peugeot 3008 HYbrid4 in 2011, emitting 99 g of carbon dioxide per kilometer. The 3008 HYbrid4 returns combined fuel consumption of 3.8 litres/100 km, beating the smaller Toyota Prius. It also operates in four driver-selected modes: Auto, Sport, ZEV (pure-electric) and 4WD.
The first-generation HYbrid4 system was available from 2011 to 2017. Since 2020 there has been a new HYbrid/HYbrid4 system using plug-in technology, in either FWD or AWD form, with power outputs between 180 hp (308 1.6 HYbrid) and 360 hp (508 PSE).
Operation
The HYbrid4 powertrain combines a 120 kW/163 hp (DW10CTED4, jointly developed by PSA and Ford Motor Company) 2.0 litre HDi FAP EURO5 diesel engine with a 28 kW (37 hp) electric motor. The powertrain offers four working modes:
AUTO, in which the vehicle strives for the best fuel economy – throttle response is slow, the automated manual transmission shifts early, and the vehicle switches to ZEV mode as often as possible;
SPORT mode offers a combined power output of 200 hp (150 kW) and 500 Nm at speeds up to 120 km/h, throttle response is very aggressive and the engine revs up to 4000 RPM before upshifts;
4WD mode keeps the HDi engine running constantly and charging the high voltage battery to offer constant AWD with 60:40 split at speeds up to 120 km/h;
ZEV mode is available if the high voltage battery is at least 1/3 full. In this mode the vehicle moves only via the rear electric motor and performance is limited. ZEV mode works at speeds of up to ~60 km/h with moderate acceleration; the range on a full charge is about 2 km (~1.2 miles), and the A/C compressor is automatically switched off in this mode. If the driver presses the accelerator beyond a certain point, the HDi engine restarts automatically so as to offer better acceleration and the vehicle goes to AUTO mode.
Auto Start-Stop is part of the HYbrid4 system and is available in all modes except 4WD, in which the engine runs constantly so that the reversible alternator can top up the battery and provide constant rear drive. The engine automatically shuts down whenever possible (cruising downhill at speeds of up to 85 km/h when the battery is not fully charged, or cruising on a level surface at speeds of up to ~60 km/h); when coming to a stop, the HDi engine shuts down at speeds below 30 km/h. The Auto Start-Stop function is made possible by the Bosch SMG (Separate Motor-Generator) 138/80 (138 mm in diameter, 80 mm long) reversible starter-generator motor. Restarting can occur at any moment if the HYbrid4 control unit determines that additional power is needed, the battery needs to be topped up, or the A/C compressor needs to kick in. The engine also restarts when the battery is fully charged and the vehicle is going downhill, so as to offer additional engine braking; no fuel is used in this mode, but ZERO EMISSION is not displayed, as the HDi engine is rotating.
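The mode behaviour described above can be summarised as a small decision rule. The following sketch merely restates that description in code form; the 0.5 throttle threshold and the function names are invented for illustration and do not represent PSA's actual control software.

```python
def zev_available(battery_fraction, speed_kmh, throttle):
    """ZEV (pure-electric) drive as described above: battery at least one third full,
    speed up to ~60 km/h, and moderate accelerator input (0.5 is an invented threshold)."""
    return battery_fraction >= 1 / 3 and speed_kmh <= 60 and throttle < 0.5

def engine_should_run(mode, battery_fraction, speed_kmh, throttle):
    """Rough restatement of the mode behaviour described above; not PSA's controller."""
    if mode == "4WD":
        return True          # engine kept running to sustain AWD and battery charging
    if mode in ("ZEV", "AUTO") and zev_available(battery_fraction, speed_kmh, throttle):
        return False         # rear electric motor only
    return True              # SPORT, hard acceleration, low battery or high speed

print(engine_should_run("AUTO", battery_fraction=0.5, speed_kmh=40, throttle=0.2))  # False
print(engine_should_run("AUTO", battery_fraction=0.2, speed_kmh=40, throttle=0.2))  # True
```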
HV Battery
The high voltage battery consists of 42 packs of 4 D-size SANYO cells. The nominal voltage of the battery is 201.6 V, the nominal capacity is 1.1 kWh and the peak power output is 31 kW. The battery operates in the ~150–270 V range depending on the state of charge and acceleration/charging demand. To extend battery pack life the HCU (hybrid control unit) never charges the batteries over 90%, nor discharges them under 30%. The battery pack has an independent air cooling system which can operate at different flow rates depending on the cooling requirements of the unit. The cooling system can operate even if the vehicle is locked; operating noise and a warm draft can be felt at the left rear quarter panel of the car.
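A quick consistency check of these figures, assuming the 42 packs of 4 cells are all in series and the cells are nominal 1.2 V NiMH D cells (an assumption consistent with, but not stated in, the text):

```python
packs, cells_per_pack = 42, 4
cells = packs * cells_per_pack      # 168 cells in series
pack_voltage = 201.6                # V, nominal, from the figures above
print(pack_voltage / cells)         # 1.2 V per cell, the NiMH nominal value
print(1.1e3 / pack_voltage)         # ~5.5 Ah nominal capacity of the series string
```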
The HV battery typically lasts about 150,000–200,000 km or 6–10 years before starting to lose capacity noticeably. The vehicle can still be driven with a "bad" battery and does not lose the 4WD function; however, the EV range is significantly reduced, resulting in poorer fuel economy.
Rear electric drive
The rear electric drive was co-developed by PSA Peugeot-Citroën, Bosch and GKN. It has an integrated clutch mechanism that disengages the rear drive at speeds over 120 km/h to provide less drag and better fuel efficiency at highway speeds. Reverse gear is not mechanical; instead, the current through the electric motor is reversed.
Bosch SMG in HYbrid4
The Bosch SMG (“separate motor generator”) 180/120 (180 mm in diameter, 120 mm long) electric motor is used as the traction motor in the HYbrid4 system. The unit weighs only 32 kg and is a permanent magnet synchronous motor. In the HYbrid4 system it can deliver up to 27 kW of power and 200 Nm of torque to help provide lower fuel consumption in AUTO and ZEV mode, better grip in 4WD mode and enhanced acceleration in SPORT mode.
The SMG 180/120 is also used in the Fiat 500e as main traction motor and in the front axle of the Porsche 918 Spyder.
HYbrid4 incorporates another Bosch permanent magnet synchronous motor, the SMG 138/80 (138 mm in diameter, 80 mm long) in the engine compartment to serve as a high-voltage starter/alternator for the start-stop system. It is also responsible for supplying power to the rear-wheel drive and charging the high voltage battery.
Common problems and known faults
Some of the common problems with the HYbrid4 system include:
Jerking/stalling and slow shifts of the BMP6 piloted manual gearbox. This problem can easily be overcome by adapting the clutch and shifter mechanisms using the DiagBOX diagnostic tool.
Noise from the rear electric drive – the bearings of the reducer fail, as the oil is specified as "without servicing" in the technical documentation. In reality, though, it should be changed every 30,000–50,000 km depending on the driving style and conditions.
Loss of drive: the engine stays at idle no matter how hard the accelerator pedal is pressed. This is caused by infrequent oil changes in the robotized gearbox.
The HDi engine does not turn off and the message "ELECTRIC MODE CURRENTLY UNAVAILABLE" is shown. This is caused by an in-built timer, which is supposed to disable the start-stop function once the accessories belt has done 60,000 km since the last replacement. If the belt is physically OK – not stretched and without wear marks – the timer can be reset via DiagBOX; otherwise the belt should be changed.
The accessories belt snaps unexpectedly soon after having been replaced with a new one. This is caused by improper tensioning of the belt.
Quick discharge/charge of the battery: the SoC jumps from 3 to 0 bars or from 4 to 8 bars very rapidly. This is caused by dying NiMH cells in the HV battery. The battery should be disassembled, inspected and the faulty cells replaced with new or used ones in good condition. Furthermore, the software of the HV battery ECU should be updated to the latest version, as there are bugs in the earlier versions which lead to faster deterioration of the battery.
The bearings of the Bosch SMG 138/80 high-voltage start-stop alternator are known to fail, which causes the accessories belt to snap.
Usage
Production
2011–2016 Peugeot 3008 HYbrid4
2012–2016 Peugeot 508 RXH HYbrid4
2012–2016 Peugeot 508 HYbrid4
2012–2016 Citroën DS5 HYbrid4
Concept
2008 Peugeot RC HYbrid4
2009 Peugeot RCZ HYbrid4 concept
2010 Peugeot HR1 (petrol engine)
2010 Peugeot SR1
2011 Peugeot HX1
References
External links
Peugeot 3008 HYbrid4 Limited Edition
Peugeot
Citroën
Hybrid powertrain
Automotive technology tradenames
Automotive transmission technologies
Engine technology | PSA HYbrid4 | Technology | 1,736 |
37,087,918 | https://en.wikipedia.org/wiki/Electronic%20beam%20curing | Electronic Beam curing (EBC) is a surface curing process in the manufacture of high pressure laminate (HPL) boards. The process applies color to a single sheet of Kraft paper which is adhered to a HPL board in such a way that it will keep its color durably while remaining scratch-resistant. Unlike other HPL creation methods, EBC does not use heat.
The EBC machine used in the curing process gives the finished HPL boards a surface that is both color-fade resistant and has a high resistance to damage. The process works by first mixing color pastes to the desired color and then applying it to a single sheet of kraft paper. This substrate is then put in the EBC machine together with a protective foil. In the machine this sheet is then shot with electrons at such high velocity that the color impregnated paper hardens almost instantly. After being stored in a temperature-controlled room for a short duration the sheets are ready to be adhered to unfinished HPL boards in a process called dry forming.
References
Composite materials
Curing agents | Electronic beam curing | Physics | 223 |
856,798 | https://en.wikipedia.org/wiki/Push%E2%80%93pull%20output | A push–pull amplifier is a type of electronic circuit that uses a pair of active devices that alternately supply current to, or absorb current from, a connected load. This kind of amplifier can enhance both the load capacity and switching speed.
Push–pull outputs are present in TTL and CMOS digital logic circuits and in some types of amplifiers, and are usually realized by a complementary pair of transistors, one dissipating or sinking current from the load to ground or a negative power supply, and the other supplying or sourcing current to the load from a positive power supply.
A push–pull amplifier is more efficient than a single-ended "class-A" amplifier. The output power that can be achieved is higher than the continuous dissipation rating of either transistor or tube used alone and increases the power available for a given supply voltage. Symmetrical construction of the two sides of the amplifier means that even-order harmonics are cancelled, which can reduce distortion. DC current is cancelled in the output, allowing a smaller output transformer to be used than in a single-ended amplifier. However, the push–pull amplifier requires a phase-splitting component that adds complexity and cost to the system; use of center-tapped transformers for input and output is a common technique but adds weight and restricts performance. If the two parts of the amplifier do not have identical characteristics, distortion can be introduced as the two halves of the input waveform are amplified unequally. Crossover distortion can be created near the zero point of each cycle as one device is cut off and the other device enters its active region.
Push–pull circuits are widely used in many amplifier output stages. A pair of audion tubes connected in push–pull is described in Edwin H. Colpitts' US patent 1137384 granted in 1915, although the patent does not specifically claim the push–pull connection. The technique was well known at that time and the principle had been claimed in an 1895 patent predating electronic amplifiers. Possibly the first commercial product using a push–pull amplifier was the RCA Balanced amplifier released in 1924 for use with their Radiola III regenerative broadcast receiver. By using a pair of low-power vacuum tubes in push–pull configuration, the amplifier allowed the use of a loudspeaker instead of headphones, while providing acceptable battery life with low standby power consumption. The technique continues to be used in audio, radio frequency, digital and power electronics systems today.
Digital circuits
A digital use of a push–pull configuration is the output of TTL and related families. The upper transistor is functioning as an active pull-up, in linear mode, while the lower transistor works digitally. For this reason they are not capable of sourcing as much current as they can sink (typically 20 times less). Because of the way these circuits are drawn schematically, with two transistors stacked vertically, normally with a level shifting diode in between, they are called "totem pole" outputs.
A disadvantage of simple push–pull outputs is that two or more of them cannot be connected together, because if one tried to pull while another tried to push, the transistors could be damaged. To avoid this restriction, some push–pull outputs have a third state in which both transistors are switched off. In this state, the output is said to be floating (or, to use a proprietary term, tri-stated).
An alternative to push–pull output is a single switch that disconnects or connects the load to ground (called an open collector or open drain output), or a single switch that disconnects or connects the load to the power supply (called an open-emitter or open-source output).
Analog circuits
A conventional amplifier stage which is not push–pull is sometimes called single-ended to distinguish it from a push–pull circuit.
In analog push–pull power amplifiers the two output devices operate in antiphase (i.e. 180° apart). The two antiphase outputs are connected to the load in a way that causes the signal outputs to be added, but distortion components due to non-linearity in the output devices to be subtracted from each other; if the non-linearity of both output devices is similar, distortion is much reduced. Symmetrical push–pull circuits must cancel even order harmonics, like 2f, 4f, 6f and therefore promote odd order harmonics, like f, 3f, 5f when driven into the nonlinear range.
A push–pull amplifier produces less distortion than a single-ended one. This allows a class-A or AB push–pull amplifier to have less distortion for the same power as the same devices used in single-ended configuration. Distortion can occur at the moment the outputs switch: the "hand-off" is not perfect. This is called crossover distortion. Class AB and class B dissipate less power for the same output than class A; general distortion can be kept low by negative feedback, and crossover distortion can be reduced by adding a 'bias current' to smooth the hand-off.
A class-B push–pull amplifier is more efficient than a class-A power amplifier because each output device amplifies only half the output waveform and is cut off during the opposite half. It can be shown that the theoretical full power efficiency (AC power in load compared to DC power consumed) of a push–pull stage is approximately 78.5%. This compares with a class-A amplifier which has efficiency of 25% if directly driving the load and no more than 50% for a transformer coupled output. A push–pull amplifier draws little power with zero signal, compared to a class-A amplifier that draws constant power. Power dissipation in the output devices is roughly one-fifth of the output power rating of the amplifier. A class-A amplifier, by contrast, must use a device capable of dissipating several times the output power.
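The 78.5% figure is the ideal class-B limit of π/4 for a full-swing sine wave. The short numerical check below integrates the load power and the supply power drawn by the two half-cycle devices over one period; the normalised supply voltage and load resistance are arbitrary illustrative values.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 200001)        # one signal period (normalised units)
i_load = np.sin(2 * np.pi * t)           # full-swing sinusoidal load current, R = 1, Vsupply = 1

p_load = np.trapz(i_load**2, t)          # average power delivered to the load (0.5)
p_supply = np.trapz(np.abs(i_load), t)   # average power drawn from the 1 V rails (2/pi)

print(p_load / p_supply, np.pi / 4)      # both evaluate to ~0.785, i.e. ~78.5%
```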
The output of the amplifier may be direct-coupled to the load, coupled by a transformer, or connected through a dc blocking capacitor. Where both positive and negative power supplies are used, the load can be returned to the midpoint (ground) of the power supplies. A transformer allows a single polarity power supply to be used, but limits the low-frequency response of the amplifier. Similarly, with a single power supply, a capacitor can be used to block the DC level at the output of the amplifier.
Where bipolar junction transistors are used, the bias network must compensate for the negative temperature coefficient of the transistors' base to emitter voltage. This can be done by including a small value resistor between emitter and output. Also, the driving circuit can have silicon diodes mounted in thermal contact with the output transistors to provide compensation.
Push–pull transistor output stages
Categories include:
Transformer-output transistor power amplifiers
It is now very rare to use output transformers with transistor amplifiers, although such amplifiers offer the best opportunity for matching the output devices (with only PNP or only NPN devices required).
Totem pole push–pull output stages
Two matched transistors of the same polarity can be arranged to supply opposite halves of each cycle without the need for an output transformer, although in doing so the driver circuit often is asymmetric and one transistor will be used in a common-emitter configuration while the other is used as an emitter follower. This arrangement is less used today than during the 1970s; it can be implemented with few transistors (not so important today) but is relatively difficult to balance and to keep a low distortion.
Symmetrical push–pull
Each half of the output pair "mirror" the other, in that an NPN (or N-Channel FET) device in one half will be matched by a PNP (or P-Channel FET) in the other. This type of arrangement tends to give lower distortion than quasi-symmetric stages because even harmonics are cancelled more effectively with greater symmetry.
Quasi-symmetrical push–pull
In the past when good quality PNP complements for high power NPN silicon transistors were limited, a workaround was to use identical NPN output devices, but fed from complementary PNP and NPN driver circuits in such a way that the combination was close to being symmetrical (but never as good as having symmetry throughout). Distortion due to mismatched gain on each half of the cycle could be a significant problem.
Super-symmetric output stages
Employing some duplication in the whole driver circuit, to allow symmetrical drive circuits can improve matching further, although driver asymmetry is a small fraction of the distortion generating process. Using a bridge-tied load arrangement allows a much greater degree of matching between positive and negative halves, compensating for the inevitable small differences between NPN and PNP devices.
Square-law push–pull
The output devices, usually MOSFETs or vacuum tubes, are configured so that their square-law transfer characteristics (that generate second-harmonic distortion if used in a single-ended circuit) cancel distortion to a large extent. That is, as one transistor's gate-source voltage increases, the drive to the other device is reduced by the same amount and the drain (or plate) current change in the second device approximately corrects for the non-linearity in the increase of the first.
Push–pull tube (valve) output stages
Vacuum tubes (valves) are not available in complementary types (as are PNP/NPN transistors), so the tube push–pull amplifier has a pair of identical output tubes or groups of tubes with the control grids driven in antiphase. These tubes drive current through the two halves of the primary winding of a center-tapped output transformer. Signal currents add, while the distortion signals due to the non-linear characteristic curves of the tubes subtract. These amplifiers were first designed long before the development of solid-state electronic devices; they are still in use by both audiophiles and musicians who consider them to sound better.
Vacuum tube push–pull amplifiers usually use an output transformer, although Output-transformerless (OTL) tube stages exist (such as the SEPP/SRPP and the White Cathode Follower below). The phase-splitter stage is usually another vacuum tube but a transformer with a center-tapped secondary winding was occasionally used in some designs. Because these are essentially square-law devices, the comments regarding distortion cancellation mentioned above apply to most push–pull tube designs when operated in class A (i.e. neither device is driven to its non-conducting state).
A Single Ended Push–Pull (SEPP, SRPP or mu-follower) output stage, originally called the Series-Balanced amplifier (US patent 2,310,342, Feb 1943), is similar to a totem-pole arrangement for transistors in that two devices are in series between the power supply rails, but the input drive goes only to one of the devices, the bottom one of the pair; hence the (seemingly contradictory) Single-Ended description. The output is taken from the cathode of the top (not directly driven) device, which acts part way between a constant current source and a cathode follower while receiving some drive from the plate (anode) circuit of the bottom device. The drive to each tube therefore might not be equal, but the circuit tends to keep the current through the bottom device somewhat constant throughout the signal, increasing the power gain and reducing distortion compared with a true single-tube single-ended output stage.
The transformer-less circuit with two tetrode tubes dates back to 1933: J. W. Horton, "The use of a vacuum tube as a plate-feed impedance", Journal of the Franklin Institute, vol. 216, issue 6 (1933).
The White Cathode Follower (Patent 2,358,428, Sep. 1944 by E. L. C. White) is similar to the SEPP design above, but the signal input is to the top tube, acting as a cathode follower, but one where the bottom tube (in common cathode configuration) is fed (usually via a step-up transformer) from the current in the plate (anode) of the top device. It essentially reverses the roles of the two devices in SEPP. The bottom tube acts part way between a constant current sink and an equal partner in the push–pull workload. Again, the drive to each tube therefore might not be equal.
Transistor versions of the SEPP and White follower do exist, but are rare.
Ultra-linear push–pull
A so-called ultra-linear push–pull amplifier uses either pentodes or tetrodes with their screen grid fed from a percentage of the primary voltage on the output transformer. This gives efficiency and distortion that is a good compromise between triode (or triode-strapped) power amplifier circuits and conventional pentode or tetrode output circuits where the screen is fed from a relatively constant voltage source.
See also
Single-ended triode
Push–pull converter for more details on implementation
Open collector
References
Electronic circuits | Push–pull output | Engineering | 2,722 |
588,168 | https://en.wikipedia.org/wiki/SM%20EVM | SM EVM (СМ ЭВМ, abbreviation of Система Малых ЭВМ—literally System of Mini Computers) are several types of Soviet and Comecon minicomputers produced from 1975 through the 1980s.
Most types of SM EVM are clones of DEC PDP-11 and VAX. SM-1 and SM-2 are clones of Hewlett-Packard minicomputers.
The common operating systems for the PDP-11 clones are translated versions of RSX-11 (ОС РВ) for the higher spec models and RT-11 (РАФОС, ФОДОС) for lower spec models. Also available for the high-end PDP-11 clones is MOS, a clone of UNIX.
See also
SM-4
SM-1420
SM-1600
SM-1710
SM-1720
References
Computer-related introductions in 1975
Minicomputers
Soviet computer systems
PDP-11 | SM EVM | Technology | 202 |
38,996,720 | https://en.wikipedia.org/wiki/Thermal%20history%20of%20Earth | The thermal history of Earth involves the study of the cooling history of Earth's interior. It is a sub-field of geophysics. (Thermal histories are also computed for the internal cooling of other planetary and stellar bodies.) The study of the thermal evolution of Earth's interior is uncertain and controversial in all aspects, from the interpretation of petrologic observations used to infer the temperature of the interior, to the fluid dynamics responsible for heat loss, to material properties that determine the efficiency of heat transport.
Overview
Observations that can be used to infer the temperature of Earth's interior range from the oldest rocks on Earth to modern seismic images of the inner core size. Ancient volcanic rocks can be associated with a depth and temperature of melting through their geochemical composition. Using this technique and some geological inferences about the conditions under which the rock is preserved, the temperature of the mantle can be inferred. The mantle itself is fully convective, so that the temperature in the mantle is basically constant with depth outside the top and bottom thermal boundary layers. This is not quite true because the temperature in any convective body under pressure must increase along an adiabat, but the adiabatic temperature gradient is usually much smaller than the temperature jumps at the boundaries. Therefore, the mantle is usually associated with a single, or potential, temperature that refers to the mid-mantle temperature extrapolated along the adiabat to the surface. The potential temperature of the mantle is estimated to be about 1350 °C today. There is an analogous potential temperature of the core, but since there are no samples from the core its present-day temperature relies on extrapolating the temperature along an adiabat from the inner core boundary, where the iron solidus is somewhat constrained.
Thermodynamics
The simplest mathematical formulation of the thermal history of Earth's interior involves the time evolution of the mid-mantle and mid-core temperatures. To derive these equations one must first write the energy balance for the mantle and the core separately. They are,
Q_surf = Q_sec,m + Q_rad + Q_cmb
for the mantle, and
Q_cmb = Q_sec,c + Q_L + Q_G
for the core. Q_surf is the surface heat flow [W] at the surface of the Earth (and mantle), Q_sec,m = −M_m C_m (dT_m/dt) is the secular cooling heat from the mantle, and M_m, C_m, and T_m are the mass, specific heat, and temperature of the mantle. Q_rad is the radiogenic heat production in the mantle and Q_cmb is the heat flow from the core–mantle boundary. Q_sec,c = −M_c C_c (dT_c/dt) is the secular cooling heat from the core, and Q_L and Q_G are the latent and gravitational heat flow from the inner core boundary due to the solidification of iron.
Solving for dT_m/dt and dT_c/dt gives,
dT_m/dt = (Q_cmb + Q_rad − Q_surf) / (M_m C_m),
and,
dT_c/dt = (Q_L + Q_G − Q_cmb) / (M_c C_c).
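As an illustration of how such a system can be integrated, the sketch below steps the two balances forward in time with a simple parameterised-convection closure (Q_surf ∝ T_m^(1+β), Q_cmb proportional to the core–mantle temperature contrast, exponentially decaying radiogenic heating, and the latent and gravitational terms neglected). Every parameter value is an assumed, illustrative number, not a calibrated Earth model.

```python
import numpy as np

# Illustrative parameter values only -- not a calibrated Earth model.
M_m, C_m = 4.0e24, 1250.0        # mantle mass [kg] and specific heat [J/(kg K)]
M_c, C_c = 1.95e24, 800.0        # core mass [kg] and specific heat [J/(kg K)]
beta = 0.3                       # assumed cooling exponent of parameterised convection
A_surf = 36e12 / 1600.0 ** (1 + beta)   # scales Q_surf to ~36 TW at T_m = 1600 K
k_cmb = 8.0e9                    # assumed W per kelvin of core-mantle temperature contrast
Q_rad_init, tau = 60e12, 3.0e9   # initial radiogenic heating [W] and decay time [yr]
YEAR = 3.156e7                   # seconds per year

def evolve(T_m=1900.0, T_c=5000.0, t_end_yr=4.5e9, dt_yr=1.0e6):
    """Forward-Euler integration of the two energy balances given above."""
    for step in range(int(t_end_yr / dt_yr)):
        t_yr = step * dt_yr
        Q_rad = Q_rad_init * np.exp(-t_yr / tau)      # decaying radiogenic heating
        Q_surf = A_surf * T_m ** (1 + beta)           # parameterised convective heat loss
        Q_cmb = k_cmb * (T_c - T_m)                   # heat flow across the CMB
        dTm_dt = (Q_cmb + Q_rad - Q_surf) / (M_m * C_m)   # K/s, from the mantle balance
        dTc_dt = -Q_cmb / (M_c * C_c)                     # K/s, latent/gravitational terms ignored
        T_m += dTm_dt * dt_yr * YEAR
        T_c += dTc_dt * dt_yr * YEAR
    return T_m, T_c

print(evolve())   # mantle and core temperatures after 4.5 Gyr under these assumptions
```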
Thermal catastrophe
In 1862, Lord Kelvin calculated the age of the Earth at between 20 million and 400 million years by assuming that Earth had formed as a completely molten object, and determined the amount of time it would take for the near-surface to cool to its present temperature. Since uniformitarianism required a much older Earth, there was a contradiction. Eventually, the additional heat sources within the Earth were discovered, allowing for a much older age. This section is about a similar paradox in current geology, called the thermal catastrophe.
The thermal catastrophe of the Earth can be demonstrated by solving the above equations for the thermal evolution of the mantle. The catastrophe is defined as the point at which the mean mantle temperature exceeds the mantle solidus so that the entire mantle melts. Using the geochemically preferred Urey ratio and the geodynamically preferred cooling exponent, the mantle temperature reaches the mantle solidus (i.e. a catastrophe) in 1–2 Ga. This result is clearly unacceptable because geologic evidence for a solid mantle exists as far back as 4 Ga (and possibly further). Hence, the thermal catastrophe problem is the foremost paradox in the thermal history of the Earth.
New Core Paradox
The "New Core Paradox" posits that the new upward revisions to the empirically measured thermal conductivity of iron at the pressure and temperature conditions of Earth's core imply that the dynamo is thermally stratified at present, driven solely by compositional convection associated with the solidification of the inner core. However, wide spread paleomagnetic evidence for a geodynamo older than the likely age of the inner core (~1 Gyr) creates a paradox as to what powered the geodynamo prior to inner core nucleation. Recently it has been proposed that a higher core cooling rate and lower mantle cooling rate can resolve the paradox in part. However, the paradox remains unresolved.
Also, recent geochemical experiments have led to the proposal that radiogenic heat in the core is larger than previously thought. This revision, if true, would also alleviate issues with the core heat budget by providing an additional energy source back in time.
See also
Earth's inner core
Earth's magnetic field
Earth's structure
Geologic temperature record
List of periods and events in climate history
Paleothermometer
Radiative forcing
Timeline of glaciation
References
Further reading
Geophysics
Heat transfer | Thermal history of Earth | Physics,Chemistry | 1,008 |
75,447,097 | https://en.wikipedia.org/wiki/Vacuum%20%28journal%29 | Vacuum is a quarterly peer-reviewed scientific journal published by Elsevier. Founded in 1951, the journal covers fundamental research and technical advances in vacuum engineering, materials science and surface science. Its editor-in-chief is Lars Hultman (Linköping University).
According to the Journal Citation Reports, the journal has a 2023 impact factor of 3.8.
References
External links
Vacuum
Elsevier academic journals
English-language journals
Academic journals established in 1951
Quarterly journals
Plasma science journals | Vacuum (journal) | Physics,Materials_science | 99 |
58,936,195 | https://en.wikipedia.org/wiki/HD%2098219 | HD 98219, also named Hunahpú, is a subgiant star in the constellation Crater. It has a confirmed exoplanet. At around 4 billion years old, it is a star around 1.3 times as massive as the Sun that has cooled and expanded to 4.5 times the Sun's diameter, brightening to be around 11 times as luminous. The International Astronomical Union (IAU) gave Honduras the opportunity to name the star Hunahpú as part of NameExoWorlds. Hunahpú was one of the twin gods who became the Sun in K'iche' (Quiché) Mayan mythology.
Planetary system
A gas giant planet with a minimum mass almost double that of Jupiter was discovered as part of a radial velocity survey of subgiant stars at Keck Observatory. The International Astronomical Union (IAU) has named it Ixbalanqué, the twin brother of Hunahpú.
References
Crater (constellation)
Planetary systems with one confirmed planet
Durchmusterung objects
098219
55174
K-type subgiants | HD 98219 | Astronomy | 225 |
70,769,413 | https://en.wikipedia.org/wiki/Parasola%20misera | Parasola misera is a species of coprophilous fungus in the family Psathyrellaceae. It grows on the dung of goats and possibly on that of sheep.
References
Fungi described in 2001
Fungi of Greece
Psathyrellaceae
Fungus species | Parasola misera | Biology | 52 |
25,902,271 | https://en.wikipedia.org/wiki/Mesenchymal%E2%80%93epithelial%20transition | A mesenchymal–epithelial transition (MET) is a reversible biological process that involves the transition from motile, multipolar or spindle-shaped mesenchymal cells to planar arrays of polarized cells called epithelia. MET is the reverse process of epithelial–mesenchymal transition (EMT) and it has been shown to occur in normal development, induced pluripotent stem cell reprogramming, cancer metastasis and wound healing.
Introduction
Unlike epithelial cells – which are stationary and characterized by an apico-basal polarity with binding by a basal lamina, tight junctions, gap junctions, adherent junctions and expression of cell-cell adhesion markers such as E-cadherin – mesenchymal cells do not make mature cell-cell contacts, can invade through the extracellular matrix, and express markers such as vimentin, fibronectin, N-cadherin, Twist, and Snail. MET also plays a critical role in metabolic switching and epigenetic modifications. In general, epithelium-associated genes are upregulated and mesenchyme-associated genes are downregulated in the process of MET.
In development
During embryogenesis and early development, cells switch back and forth between different cellular phenotypes via MET and its reverse process, epithelial–mesenchymal transition (EMT). Developmental METs have been studied most extensively during somitogenesis and nephrogenesis, and in carcinogenesis during metastasis, but MET also occurs in cardiogenesis and foregut development. MET is an essential process in embryogenesis for gathering mesenchymal-like cells into cohesive structures. Although the mechanism of MET during the morphogenesis of various organs is quite similar, each process has a unique signaling pathway to induce changes in gene expression profiles.
Nephrogenesis
One example of this, the most well described of the developmental METs, is kidney ontogenesis. The mammalian kidney is primarily formed by two early structures: the ureteric bud and the nephrogenic mesenchyme, which form the collecting duct and nephrons respectively (see kidney development for more details). During kidney ontogenesis, a reciprocal induction of the ureteric bud epithelium and nephrogenic mesenchyme occurs. As the ureteric bud grows out of the Wolffian duct, the nephrogenic mesenchyme induces the ureteric bud to branch. Concurrently, the ureteric bud induces the nephrogenic mesenchyme to condense around the bud and undergo MET to form the renal epithelium, which ultimately forms the nephron. Growth factors, integrins, cell adhesion molecules, and protooncogenes, such as c-ret, c-ros, and c-met, mediate the reciprocal induction in metanephrons and consequent MET.
Somitogenesis
Another example of developmental MET occurs during somitogenesis. Vertebrate somites, the precursors of axial bones and trunk skeletal muscles, are formed by the maturation of the presomitic mesoderm (PSM). The PSM, which is composed of mesenchymal cells, undergoes segmentation by delineating somite boundaries (see somitogenesis for more details). Each somite is encapsulated by an epithelium, formerly mesenchymal cells that had undergone MET. Two Rho family GTPases – Cdc42 and Rac1 – as well as the transcription factor Paraxis are required for chick somitic MET.
Cardiogenesis
The development of the heart involves several rounds of EMT and MET. During development, the splanchnopleure undergoes EMT and produces endothelial progenitors, which then form the endocardium through MET. The pericardium is formed by sinus venosus mesenchymal cells that undergo MET. Quite similar processes also occur during regeneration of the injured heart. Injured pericardium undergoes EMT and is transformed into adipocytes or myofibroblasts, which induce arrhythmia and scarring. MET then leads to the formation of vascular and epithelial progenitors that can differentiate into vasculogenic cells, leading to regeneration of the injured heart.
Hepatogenesis
In cancer
While relatively little is known about the role MET plays in cancer when compared to the extensive studies of EMT in tumor metastasis, MET is believed to participate in the establishment and stabilization of distant metastases by allowing cancerous cells to regain epithelial properties and integrate into distant organs. Between these two states, cells can occur in an intermediate state, or so-called partial EMT.
In recent years, researchers have begun to investigate MET as one of many potential therapeutic targets in the prevention of metastases. This approach to preventing metastasis is known as differentiation-based therapy or differentiation therapy and it can be used for development of new anti-cancer therapeutic strategies.
In iPS cell reprogramming
A number of different cellular processes must take place in order for somatic cells to undergo reprogramming into induced pluripotent stem cells (iPS cells). iPS cell reprogramming, also known as somatic cell reprogramming, can be achieved by ectopic expression of Oct4, Klf4, Sox2, and c-Myc (OKSM). Upon induction, mouse fibroblasts must undergo MET to successfully begin the initiation phase of reprogramming. Epithelial-associated genes such as E-cadherin/Cdh1, Cldns −3, −4, −7, −11, Occludin (Ocln), Epithelial cell adhesion molecule (Epcam), and Crumbs homolog 3 (Crb3), were all upregulated before Nanog, a key transcription factor in maintaining pluripotency, was turned on. Additionally, mesenchymal-associated genes such as Snail, Slug, Zeb −1, −2, and N-cadherin were downregulated within the first 5 days post-OKSM induction. Addition of exogenous TGF-β1, which blocks MET, decreased iPS reprogramming efficiency significantly. These findings are all consistent with previous observations that embryonic stem cells resemble epithelial cells and express E-cadherin.
Recent studies have suggested that ectopic expression of Klf4 in iPS cell reprogramming may be specifically responsible for inducing E-cadherin expression by binding to promoter regions and the first intron of CDH1 (the gene encoding for E-cadherin).
See also
Epithelial–mesenchymal transition
References
Developmental biology
Oncology | Mesenchymal–epithelial transition | Biology | 1,436 |
45,604,191 | https://en.wikipedia.org/wiki/Penicillium%20grevilleicola | Penicillium grevilleicola is a species of the genus of Penicillium which was isolated from Grevillea ilicifolia.
References
grevilleicola
Fungi described in 2014
Fungus species | Penicillium grevilleicola | Biology | 43 |
95,646 | https://en.wikipedia.org/wiki/Dioptre | A dioptre (British spelling) or diopter (American spelling), symbol dpt or D, is a unit of measurement with dimension of reciprocal length, equivalent to one reciprocal metre, 1 m⁻¹. It is normally used to express the optical power of a lens or curved mirror, which is a physical quantity equal to the reciprocal of the focal length, expressed in metres. For example, a 3-dioptre lens brings parallel rays of light to focus at 1/3 metre. A flat window has an optical power of zero dioptres, as it does not cause light to converge or diverge. Dioptres are also sometimes used for other reciprocals of distance, particularly radii of curvature and the vergence of optical beams.
The main benefit of using optical power rather than focal length is that the thin lens formula has the object distance, image distance, and focal length all as reciprocals. Additionally, when relatively thin lenses are placed close together their powers approximately add. Thus, a thin 2.0-dioptre lens placed close to a thin 0.5-dioptre lens yields almost the same focal length as a single 2.5-dioptre lens.
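A small sketch of this reciprocal bookkeeping, using the Gaussian "real-is-positive" sign convention; the numbers simply reproduce the examples above.

```python
def image_distance(power_dioptre, object_distance_m):
    """Gaussian thin-lens equation, 1/f = 1/d_o + 1/d_i, written with power P = 1/f.

    Uses the 'real-is-positive' convention: object and image distances are both
    positive for a real object and a real image.
    """
    return 1.0 / (power_dioptre - 1.0 / object_distance_m)

# Powers of thin lenses in contact approximately add:
p_total = 2.0 + 0.5                     # dioptres; the example from the text above
print(p_total, 1.0 / p_total)           # 2.5 D -> equivalent focal length 0.4 m
print(image_distance(3.0, 1.0))         # a 3 D lens images an object 1 m away at 0.5 m
```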
Though the dioptre is based on the SI-metric system, it has not been included in the standard, so that there is no international name or symbol for this unit of measurement—within the international system of units, this unit for optical power would need to be specified explicitly as the inverse metre (m−1). However most languages have borrowed the original name and some national standardization bodies like DIN specify a unit name (dioptrie, dioptria, etc.). In vision care the symbol D is frequently used.
The idea of numbering lenses based on the reciprocal of their focal length in metres was first suggested by Albrecht Nagel in 1866. The term dioptre was proposed by French ophthalmologist Ferdinand Monoyer in 1872, based on earlier use of the term dioptrice by Johannes Kepler.
In vision correction
The fact that optical powers are approximately additive enables an eye care professional to prescribe corrective lenses as a simple correction to the eye's optical power, rather than doing a detailed analysis of the entire optical system (the eye and the lens). Optical power can also be used to adjust a basic prescription for reading. Thus an eye care professional, having determined that a myopic (nearsighted) person requires a basic correction of, say, −2 dioptres to restore normal distance vision, might then make a further prescription of 'add 1' for reading, to make up for lack of accommodation (ability to alter focus). This is the same as saying that −1 dioptre lenses are prescribed for reading.
In humans, the total optical power of the relaxed eye is approximately 60 dioptres. The cornea accounts for approximately two-thirds of this refractive power (about 40 dioptres) and the crystalline lens contributes the remaining one-third (about 20 dioptres). In focusing, the ciliary muscle contracts to reduce the tension or stress transferred to the lens by the suspensory ligaments. This results in increased convexity of the lens which in turn increases the optical power of the eye. The amplitude of accommodation is about 11 to 16 dioptres at age 15, decreasing to about 10 dioptres at age 25, and to around 1 dioptre above age 60.
Convex lenses have positive dioptric value and are generally used to correct hyperopia (farsightedness) or to allow people with presbyopia (the limited accommodation of advancing age) to read at close range. Over the counter reading glasses are rated at +1.00 to +4.00 dioptres. Concave lenses have negative dioptric value and generally correct myopia (nearsightedness). Typical glasses for mild myopia have a power of −0.50 to −3.00 dioptres. Optometrists usually measure refractive error using lenses graded in steps of 0.25 dioptres.
Curvature
The dioptre can also be used as a measurement of curvature equal to the reciprocal of the radius measured in metres. For example, a circle with a radius of 1/2 metre has a curvature of 2 dioptres. If the curvature of a surface of a lens is C and the index of refraction is n, the optical power is φ = (n − 1)C. If both surfaces of the lens are curved, consider their curvatures as positive toward the lens and add them. This gives approximately the right result, as long as the thickness of the lens is much less than the radius of curvature of one of the surfaces. For a mirror the optical power is φ = 2C.
Relation to magnifying power
The magnifying power V of a simple magnifying glass is related to its optical power φ by
V = 0.25 m × φ + 1.
This is approximately the magnification observed when a person with normal vision holds the magnifying glass close to his or her eye.
See also
Astigmatism
Dioptrics
Lens clock
Lensmeter
Optics
Optometry
Vertometer
References
Optics
Units of measurement
Non-SI metric units | Dioptre | Physics,Chemistry,Mathematics | 1,063 |
5,793,608 | https://en.wikipedia.org/wiki/Railway%20Technical%20Research%20Institute | The Railway Technical Research Institute, or RTRI, is the technical research company under the Japan Railways group of companies.
Overview
RTRI was established in its current form in 1986 just before Japanese National Railways (JNR) was privatised and split into separate JR group companies. It conducts research on everything related to trains, railways and their operation. It is funded by the government and private rail companies. It works both on developing new railway technology, such as magnetic levitation, and on improving the safety and economy of current technology.
Its research areas include earthquake detection and alarm systems, obstacle detection on level crossings, improving adhesion between train wheels and tracks, reducing energy usage, noise barriers and preventing vibrations.
RTRI is the main developer in the Japanese SCMaglev program.
Offices and test facilities
Main office
844 Shin-Kokusai Bldg. 3-4-1 Marunouchi, Chiyoda-ku, Tokyo 100-0005, Japan
Research facilities
Kunitachi Institute - 2-8-38 Hikari-cho, Kokubunji-shi, Tokyo, 185-8540, Japan
Wind Tunnel Technical Center, Maibara, Shiga
Shiozawa Snow Testing Station, Minami-Uonuma, Niigata
Hino Civil Engineering Testing Station, Hino, Tokyo
Gatsugi Anti-Salt Testing Station, Sanpoku, Niigata
Gauge Change Train
The RTRI is developing a variable gauge system, called the "Gauge Change Train", to allow Shinkansen trains to access lines of the original rail network.
Publications
Japan Railway & Technical Review
Quarterly Report of RTRI - Print: Online:
See also
British Rail Research Division
German Centre for Rail Traffic Research
Hydrail
References
External links
Organizations established in 1986
1986 establishments in Japan
Rail transport organizations based in Japan
Organizations based in Tokyo
Kokubunji, Tokyo
Railway infrastructure companies
Engineering research institutes
Japan Railway companies
Government-owned railway companies | Railway Technical Research Institute | Engineering | 385 |
776,713 | https://en.wikipedia.org/wiki/Unified%20field%20theory | In physics, a unified field theory (UFT) is a type of field theory that allows all fundamental forces and elementary particles to be written in terms of a single type of field. According to modern discoveries in physics, forces are not transmitted directly between interacting objects but instead are described and interpreted by intermediary entities called fields. Furthermore, according to quantum field theory, particles are themselves the quanta of fields. Examples of different fields in physics include vector fields such as the electromagnetic field, spinor fields whose quanta are fermionic particles such as electrons, and tensor fields such as the metric tensor field that describes the shape of spacetime and gives rise to gravitation in general relativity. Unified field theory attempts to organize these fields into a single mathematical structure.
For over a century, unified field theory has remained an open line of research. The term was coined by Albert Einstein, who attempted to unify his general theory of relativity with electromagnetism. Einstein attempted to create a classical unified field theory, rejecting quantum mechanics. Among other difficulties, this required a new explanation of particles as singularities or solitons instead of field quanta. Later attempts to unify general relativity with other forces incorporate quantum mechanics. The concepts of a "Theory of Everything" and a Grand Unified Theory are closely related to unified field theory, but differ by not requiring the basis of nature to be fields, and often by attempting to explain physical constants of nature. Additionally, Grand Unified Theories do not attempt to include the gravitational force and can therefore operate entirely within quantum field theory.
The goal of a unified field theory has led to a great deal of progress in theoretical physics.
Introduction
Unified field theory attempts to give a single elegant description of the following fields:
Forces
All four of the known fundamental forces are mediated by fields. In the Standard Model of particle physics, three of these result from the exchange of gauge bosons. These are:
Strong interaction: the interaction responsible for holding quarks together to form hadrons, and holding neutrons and also protons together to form atomic nuclei. The exchange particle that mediates this force is the gluon.
Electromagnetic interaction: the familiar interaction that acts on electrically charged particles. The photon is the exchange particle for this force.
Weak interaction: a short-range interaction responsible for some forms of radioactivity, that acts on electrons, neutrinos, and quarks. It is mediated by the W and Z bosons.
General relativity likewise describes gravitation as the result of the metric tensor field, which describes the shape of spacetime:
Gravitational interaction: a long-range attractive interaction that acts on all particles. In hypothetical quantum versions of GR, the postulated exchange particle has been named the graviton.
Matter
In the Standard Model, the "matter" particles (electrons, quarks, neutrinos, etc) are described as the quanta of spinor fields. Gauge boson fields also have quanta, such as photons for the electromagnetic field.
Higgs
The Standard Model has a unique fundamental scalar field, the Higgs field, the quanta of which are called Higgs bosons.
History
Classic theory
The first successful classical unified field theory was developed by James Clerk Maxwell. In 1820, Hans Christian Ørsted discovered that electric currents exerted forces on magnets, while in 1831, Michael Faraday made the observation that time-varying magnetic fields could induce electric currents. Until then, electricity and magnetism had been thought of as unrelated phenomena. In 1864, Maxwell published his famous paper on a dynamical theory of the electromagnetic field. This was the first example of a theory that was able to encompass previously separate field theories (namely electricity and magnetism) to provide a unifying theory of electromagnetism. By 1905, Albert Einstein had used the constancy of the speed-of-light in Maxwell's theory to unify our notions of space and time into an entity we now call spacetime. In 1915, he expanded this theory of special relativity to a description of gravity, general relativity, using a field to describe the curving geometry of four-dimensional (4D) spacetime.
In the years following the creation of the general theory, a large number of physicists and mathematicians enthusiastically participated in the attempt to unify the then-known fundamental interactions. Given later developments in this domain, of particular interest are the theories of Hermann Weyl of 1919, who introduced the concept of an (electromagnetic) gauge field in a classical field theory and, two years later, that of Theodor Kaluza, who extended General Relativity to five dimensions. Continuing in this latter direction, Oscar Klein proposed in 1926 that the fourth spatial dimension be curled up into a small, unobserved circle. In Kaluza–Klein theory, the gravitational curvature of the extra spatial direction behaves as an additional force similar to electromagnetism. These and other models of electromagnetism and gravity were pursued by Albert Einstein in his attempts at a classical unified field theory. By 1930 Einstein had already considered the Einstein-Maxwell–Dirac System [Dongen]. This system is (heuristically) the super-classical [Varadarajan] limit of (the not mathematically well-defined) quantum electrodynamics. One can extend this system to include the weak and strong nuclear forces to get the Einstein–Yang-Mills–Dirac System. The French physicist Marie-Antoinette Tonnelat published a paper in the early 1940s on the standard commutation relations for the quantized spin-2 field. She continued this work in collaboration with Erwin Schrödinger after World War II. In the 1960s Mendel Sachs proposed a generally covariant field theory that did not require recourse to renormalization or perturbation theory. In 1965, Tonnelat published a book on the state of research on unified field theories.
Modern progress
In 1963, American physicist Sheldon Glashow proposed that the weak nuclear force, electricity, and magnetism could arise from a partially unified electroweak theory. In 1967, the Pakistani physicist Abdus Salam and the American physicist Steven Weinberg independently revised Glashow's theory by having the masses for the W particle and Z particle arise through spontaneous symmetry breaking with the Higgs mechanism. This unified theory modelled the electroweak interaction as a force mediated by four particles: the photon for the electromagnetic aspect, a neutral Z particle, and two charged W particles for the weak aspect. As a result of the spontaneous symmetry breaking, the weak force becomes short-range and the W and Z bosons acquire masses of about 80.4 GeV/c² and 91.2 GeV/c², respectively. Their theory was first given experimental support by the discovery of weak neutral currents in 1973. In 1983, the Z and W bosons were first produced at CERN by Carlo Rubbia's team. For their insights, Glashow, Salam, and Weinberg were awarded the Nobel Prize in Physics in 1979. Carlo Rubbia and Simon van der Meer received the Prize in 1984.
After Gerardus 't Hooft showed the Glashow–Weinberg–Salam electroweak interactions to be mathematically consistent, the electroweak theory became a template for further attempts at unifying forces. In 1974, Sheldon Glashow and Howard Georgi proposed unifying the strong and electroweak interactions into the Georgi–Glashow model, the first Grand Unified Theory, which would have observable effects for energies much above 100 GeV.
Since then there have been several proposals for Grand Unified Theories, e.g. the Pati–Salam model, although none is currently universally accepted. A major problem for experimental tests of such theories is the energy scale involved, which is well beyond the reach of current accelerators. Grand Unified Theories make predictions for the relative strengths of the strong, weak, and electromagnetic forces, and in 1991 LEP determined that supersymmetric theories have the correct ratio of couplings for a Georgi–Glashow Grand Unified Theory.
Many Grand Unified Theories (but not Pati–Salam) predict that the proton can decay, and if this were to be seen, details of the decay products could give hints at more aspects of the Grand Unified Theory. It is at present unknown if the proton can decay, although experiments have determined a lower bound of 10³⁵ years for its lifetime.
Current status
Theoretical physicists have not yet formulated a widely accepted, consistent theory that combines general relativity and quantum mechanics to form a theory of everything. Trying to combine the graviton with the strong and electroweak interactions leads to fundamental difficulties and the resulting theory is not renormalizable. The incompatibility of the two theories remains an outstanding problem in the field of physics.
See also
Sheldon Glashow
Unification (physics)
References
Further reading
Jeroen van Dongen Einstein's Unification, Cambridge University Press (July 26, 2010)
Varadarajan, V.S. Supersymmetry for Mathematicians: An Introduction (Courant Lecture Notes), American Mathematical Society (July 2004)
External links
On the History of Unified Field Theories, by Hubert F. M. Goenner
Particle physics
Theories of gravity
Unsolved problems in physics | Unified field theory | Physics | 1,879 |
8,758,924 | https://en.wikipedia.org/wiki/W%20Mensae | W Mensae (W Men) is an unusual yellow supergiant star in the Large Magellanic Cloud in the southern constellation Mensa. It is an R Coronae Borealis variable and periodically decreases in brightness by several magnitudes.
W Men is very distant, being located in the neighboring galaxy Large Magellanic Cloud, where it lies on the southern metal-deficient edge. Despite its high luminosity, the star has a maximum apparent brightness of +13.8m, too dim to be visible in a small telescope. Its radius has been calculated to be 61 times that of the Sun.
The variability of W Men was discovered in 1927 by W. J. Luyten. It belongs to the very rare R Coronae Borealis class of variables which are often called "inverse novae" since they experience occasional very large drops in brightness. At minimum brightness, W Men has a photographic (blue) magnitude fainter than +18.3, making it undetectable on photographic plates at the time. The drop in brightness is less pronounced at longer wavelengths, and the overall luminosity of the star is thought to be largely unchanged. The variations are caused by condensation of dust which temporarily obscures the star. Short wavelengths of light are absorbed and re-emitted as infra-red. Many R CrB variables show small amplitude pulsations, and W Mensae has a pulsation period of approximately 67 days.
References
Stars in the Large Magellanic Cloud
Large Magellanic Cloud
Mensa (constellation)
R Coronae Borealis variables
F-type supergiants
Extragalactic stars
Mensae, W
J05262451-7111117 | W Mensae | Astronomy | 346 |
6,132,123 | https://en.wikipedia.org/wiki/Water%20slide%20decal | Water slide decals (or water transfer decals) are decals which rely on dextrose residue from the decal paper to bond the decal to a surface. A water-based adhesive layer can be added to the decal to create a stronger bond or may be placed between layers of lacquer to create a durable decal transfer. The paper also has a layer of glucose film added prior to the dextrose layer which gives it adhesive properties; the dextrose layer gives the decal the ability to slide off the paper and onto the substrate (lubricity).
Water slide decals are thinner than many other decorative techniques (such as vinyl stickers) and as they are printed, they can be produced to a very high level of detail. As such, they are popular in craft areas such as scale modeling, as well as for labeling DIY electronics devices, such as guitar pedals.
Previously, water slide decals were professionally printed and only available in supplied designs, but with the advent of printable decal paper for colour inkjet and laser printers, custom decals can now be produced by the hobbyist or small business.
References
Scale modeling
Adhesives | Water slide decal | Physics | 245 |
21,958,460 | https://en.wikipedia.org/wiki/International%20Journal%20of%20Fracture | The International Journal of Fracture is a scientific journal focused on fracture in materials science. Founded in 1965, it is published by Springer. The journal publishes original analytical, numerical and experimental contributions which provide improved understanding of the mechanisms of micro and macro fracture in all materials, and their engineering implications. The journal has an impact factor of 2.175 (2017).
References
External links
Official Website
Springer Science+Business Media
SpringerLink.com
English-language journals
Engineering journals
Academic journals established in 1965
Springer Science+Business Media academic journals | International Journal of Fracture | Materials_science | 105 |
30,041 | https://en.wikipedia.org/wiki/Technetium | Technetium is a chemical element; it has symbol Tc and atomic number 43. It is the lightest element whose isotopes are all radioactive. Technetium and promethium are the only radioactive elements whose neighbours in the sense of atomic number are both stable. All available technetium is produced as a synthetic element. Naturally occurring technetium is a spontaneous fission product in uranium ore and thorium ore (the most common source), or the product of neutron capture in molybdenum ores. This silvery gray, crystalline transition metal lies between manganese and rhenium in group 7 of the periodic table, and its chemical properties are intermediate between those of both adjacent elements. The most common naturally occurring isotope is 99Tc, in traces only.
Many of technetium's properties had been predicted by Dmitri Mendeleev before it was discovered; Mendeleev noted a gap in his periodic table and gave the undiscovered element the provisional name ekamanganese (Em). In 1937, technetium became the first predominantly artificial element to be produced, hence its name (from the Greek τεχνητός, technetos, meaning 'artificial', plus the -ium suffix).
One short-lived gamma ray–emitting nuclear isomer, technetium-99m, is used in nuclear medicine for a wide variety of tests, such as bone cancer diagnoses. The ground state of the nuclide technetium-99 is used as a gamma ray–free source of beta particles. Long-lived technetium isotopes produced commercially are byproducts of the fission of uranium-235 in nuclear reactors and are extracted from nuclear fuel rods. Because even the longest-lived isotope of technetium has a relatively short half-life (4.21 million years), the 1952 detection of technetium in red giants helped to prove that stars can produce heavier elements.
History
Early assumptions
From the 1860s through 1871, early forms of the periodic table proposed by Dmitri Mendeleev contained a gap between molybdenum (element 42) and ruthenium (element 44). In 1871, Mendeleev predicted this missing element would occupy the empty place below manganese and have similar chemical properties. Mendeleev gave it the provisional name eka-manganese (from eka, the Sanskrit word for one) because it was one place down from the known element manganese.
Early misidentifications
Many early researchers, both before and after the periodic table was published, were eager to be the first to discover and name the missing element. Its location in the table suggested that it should be easier to find than other undiscovered elements. This turned out not to be the case, due to technetium's radioactivity.
Irreproducible results
German chemists Walter Noddack, Otto Berg, and Ida Tacke reported the discovery of element 75 and element 43 in 1925, and named element 43 masurium (after Masuria in eastern Prussia, now in Poland, the region where Walter Noddack's family originated). This name caused significant resentment in the scientific community, because it was interpreted as referring to a series of victories of the German army over the Russian army in the Masuria region during World War I; as the Noddacks remained in their academic positions while the Nazis were in power, suspicions and hostility against their claim for discovering element 43 continued. The group bombarded columbite with a beam of electrons and deduced element 43 was present by examining X-ray emission spectrograms. The wavelength of the X-rays produced is related to the atomic number by a formula derived by Henry Moseley in 1913. The team claimed to detect a faint X-ray signal at a wavelength produced by element 43. Later experimenters could not replicate the discovery, and it was dismissed as an error. Still, in 1933, a series of articles on the discovery of elements quoted the name masurium for element 43. Some more recent attempts have been made to rehabilitate the Noddacks' claims, but they are disproved by Paul Kuroda's study of the amount of technetium that could have been present in the ores they studied: any such amount would have been far too small to detect by the Noddacks' methods.
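For orientation, a standard form of Moseley's relation for the Kα line (with the Rydberg constant R∞ and a screening constant of 1) indicates the wavelength region involved; the following estimate is the editor's illustration, not a figure from the original studies:
\frac{1}{\lambda_{K\alpha}} \approx \frac{3}{4} R_\infty (Z-1)^2 ;\qquad Z = 43:\ \frac{1}{\lambda} \approx 0.75 \times 1.097\times10^{7}\,\mathrm{m^{-1}} \times 42^{2} \approx 1.45\times10^{10}\,\mathrm{m^{-1}},\quad \lambda \approx 0.069\ \mathrm{nm}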
Official discovery and later history
The discovery of element 43 was finally confirmed in a 1937 experiment at the University of Palermo in Sicily by Carlo Perrier and Emilio Segrè. In mid-1936, Segrè visited the United States, first Columbia University in New York and then the Lawrence Berkeley National Laboratory in California. He persuaded cyclotron inventor Ernest Lawrence to let him take back some discarded cyclotron parts that had become radioactive. Lawrence mailed him a molybdenum foil that had been part of the deflector in the cyclotron.
Segrè enlisted his colleague Perrier to attempt to prove, through comparative chemistry, that the molybdenum activity was indeed from an element with the atomic number 43. In 1937, they succeeded in isolating the isotopes technetium-95m and technetium-97. University of Palermo officials wanted them to name their discovery panormium, after the Latin name for Palermo, Panormus. In 1947, element 43 was named after the Greek word τεχνητός (technetos), meaning 'artificial', since it was the first element to be artificially produced.
Segrè returned to Berkeley and met Glenn T. Seaborg. They isolated the metastable isotope technetium-99m, which is now used in some ten million medical diagnostic procedures annually.
In 1952, the astronomer Paul W. Merrill in California detected the spectral signature of technetium (specifically wavelengths of 403.1 nm, 423.8 nm, 426.2 nm, and 429.7 nm) in light from S-type red giants. The stars were near the end of their lives but were rich in the short-lived element, which indicated that it was being produced in the stars by nuclear reactions. That evidence bolstered the hypothesis that heavier elements are the product of nucleosynthesis in stars. More recently, such observations provided evidence that elements are formed by neutron capture in the s-process.
Since that discovery, there have been many searches in terrestrial materials for natural sources of technetium. In 1962, technetium-99 was isolated and identified in pitchblende from the Belgian Congo in very small quantities (about 0.2 ng/kg), where it originates as a spontaneous fission product of uranium-238. The natural nuclear fission reactor in Oklo contains evidence that significant amounts of technetium-99 were produced and have since decayed into ruthenium-99.
Characteristics
Physical properties
Technetium is a silvery-gray radioactive metal with an appearance similar to platinum, commonly obtained as a gray powder. The crystal structure of the bulk pure metal is hexagonal close-packed, while crystal structures of the nanodisperse pure metal are cubic. Nanodisperse technetium does not have a split NMR spectrum, whereas hexagonal bulk technetium has its Tc-99 NMR spectrum split into nine satellites. Atomic technetium has characteristic emission lines at wavelengths of 363.3 nm, 403.1 nm, 426.2 nm, 429.7 nm, and 485.3 nm. Unit cell parameters have been reported for orthorhombic Tc metal contaminated with carbon (a = 0.2805(4) nm, b = 0.4958(8) nm, c = 0.4474(5) nm for Tc–C with 1.38 wt% C, and a = 0.2815(4) nm, b = 0.4963(8) nm, c = 0.4482(5) nm for Tc–C with 1.96 wt% C). The metal form is slightly paramagnetic, meaning its magnetic dipoles align with external magnetic fields but assume random orientations once the field is removed. Pure, metallic, single-crystal technetium becomes a type-II superconductor below its critical temperature.
Below this temperature, technetium has a very high magnetic penetration depth, greater than any other element except niobium.
Chemical properties
Technetium is located in group 7 of the periodic table, between rhenium and manganese. As predicted by the periodic law, its chemical properties are intermediate between those of these two elements. Of the two, technetium more closely resembles rhenium, particularly in its chemical inertness and tendency to form covalent bonds. This is consistent with the tendency of period 5 elements to resemble their counterparts in period 6 more than period 4 due to the lanthanide contraction. Unlike manganese, technetium does not readily form cations (ions with net positive charge). Technetium exhibits nine oxidation states from −1 to +7, with +4, +5, and +7 being the most common. Technetium dissolves in aqua regia, nitric acid, and concentrated sulfuric acid, but not in hydrochloric acid of any concentration.
Metallic technetium slowly tarnishes in moist air and, in powder form, burns in oxygen. When reacting with hydrogen at high pressure, it forms the hydride TcH, and when reacting with carbon it forms TcC, with cell parameter 0.398 nm, as well as the nanodisperse low-carbon-content carbide with parameter 0.402 nm.
Technetium can catalyse the destruction of hydrazine by nitric acid, and this property is due to its multiplicity of valencies. This caused a problem in the separation of plutonium from uranium in nuclear fuel processing, where hydrazine is used as a protective reductant to keep plutonium in the trivalent rather than the more stable tetravalent state. The problem was exacerbated by the mutually enhanced solvent extraction of technetium and zirconium at the previous stage, and required a process modification.
Compounds
Pertechnetate and other derivatives
The most prevalent form of technetium that is easily accessible is sodium pertechnetate, Na[TcO4]. The majority of this material is produced by radioactive decay from [99MoO4]2−:
Pertechnetate ([TcO4]−) is only weakly hydrated in aqueous solutions, and it behaves analogously to the perchlorate anion; both are tetrahedral. Unlike permanganate ([MnO4]−), it is only a weak oxidizing agent.
Related to pertechnetate is technetium heptoxide. This pale-yellow, volatile solid is produced by oxidation of Tc metal and related precursors:
It is a molecular metal oxide, analogous to manganese heptoxide. It adopts a centrosymmetric structure with two types of Tc−O bonds with 167 and 184 pm bond lengths.
Technetium heptoxide hydrolyzes to pertechnetate and pertechnetic acid, depending on the pH:
HTcO4 is a strong acid. In concentrated sulfuric acid, [TcO4]− converts to the octahedral form TcO3(OH)(H2O)2, the conjugate base of the hypothetical triaquo complex [TcO3(H2O)3]+.
Other chalcogenide derivatives
Technetium forms a dioxide, disulfide, diselenide, and ditelluride. An ill-defined Tc2S7 forms upon treating pertechnetate with hydrogen sulfide. It thermally decomposes into the disulfide and elemental sulfur. Similarly, the dioxide can be produced by reduction of Tc2O7.
Unlike the case for rhenium, a trioxide has not been isolated for technetium. However, TcO3 has been identified in the gas phase using mass spectrometry.
Simple hydride and halide complexes
Technetium forms the nonahydride complex [TcH9]2−. The potassium salt is isostructural with that of the corresponding rhenium complex, [ReH9]2−. At high pressure, formation of TcH1.3 from the elements has also been reported.
The following binary (containing only two elements) technetium halides are known: TcF6, TcF5, TcCl4, TcBr4, TcBr3, α-TcCl3, β-TcCl3, TcI3, α-TcCl2, and β-TcCl2. The oxidation states range from Tc(VI) to Tc(II). Technetium halides exhibit different structure types, such as molecular octahedral complexes, extended chains, layered sheets, and metal clusters arranged in a three-dimensional network. These compounds are produced by combining the metal and halogen or by less direct reactions.
TcCl4 is obtained by chlorination of Tc metal or Tc2O7. Upon heating, TcCl4 gives the corresponding Tc(III) and Tc(II) chlorides.
The structure of TcCl4 is composed of infinite zigzag chains of edge-sharing TcCl6 octahedra. It is isomorphous to transition metal tetrachlorides of zirconium, hafnium, and platinum.
Two polymorphs of technetium trichloride exist, α- and β-TcCl3. The α polymorph is also denoted as Tc3Cl9. It adopts a confacial bioctahedral structure. It is prepared by treating the chloro-acetate Tc2(O2CCH3)4Cl2 with HCl. Like Re3Cl9, the structure of the α-polymorph consists of triangles with short M-M distances. β-TcCl3 features octahedral Tc centers, which are organized in pairs, as seen also for molybdenum trichloride. TcBr3 does not adopt the structure of either trichloride phase. Instead it has the structure of molybdenum tribromide, consisting of chains of confacial octahedra with alternating short and long Tc—Tc contacts. TcI3 has the same structure as the high temperature phase of TiI3, featuring chains of confacial octahedra with equal Tc—Tc contacts.
Several anionic technetium halides are known. The binary tetrahalides can be converted to the hexahalides [TcX6]2− (X = F, Cl, Br, I), which adopt octahedral molecular geometry. More reduced halides form anionic clusters with Tc–Tc bonds. The situation is similar for the related elements of Mo, W, Re. These clusters have the nuclearity Tc4, Tc6, Tc8, and Tc13. The more stable Tc6 and Tc8 clusters have prism shapes where vertical pairs of Tc atoms are connected by triple bonds and the planar atoms by single bonds. Every technetium atom makes six bonds, and the remaining valence electrons can be saturated by one axial and two bridging ligand halogen atoms such as chlorine or bromine.
Coordination and organometallic complexes
Technetium forms a variety of coordination complexes with organic ligands. Many have been well-investigated because of their relevance to nuclear medicine.
Technetium forms a variety of compounds with Tc–C bonds, i.e. organotechnetium complexes. Prominent members of this class are complexes with CO, arene, and cyclopentadienyl ligands. The binary carbonyl Tc2(CO)10 is a white volatile solid. In this molecule, two technetium atoms are bound to each other; each atom is surrounded by octahedra of five carbonyl ligands. The bond length between technetium atoms, 303 pm, is significantly larger than the distance between two atoms in metallic technetium (272 pm). Similar carbonyls are formed by technetium's congeners, manganese and rhenium. Interest in organotechnetium compounds has also been motivated by applications in nuclear medicine. Technetium also forms aquo-carbonyl complexes, one prominent complex being [Tc(CO)3(H2O)3]+, which are unusual compared to other metal carbonyls.
Isotopes
Technetium, with atomic number Z = 43, is the lowest-numbered element in the periodic table for which all isotopes are radioactive. The second-lightest exclusively radioactive element, promethium, has atomic number 61. Atomic nuclei with an odd number of protons are less stable than those with even numbers, even when the total number of nucleons (protons + neutrons) is even, and odd numbered elements have fewer stable isotopes.
The most stable radioactive isotopes are technetium-97 and technetium-98, each with a half-life of roughly 4.2 million years; current measurements of their half-lives give overlapping confidence intervals corresponding to one standard deviation and therefore do not allow a definite assignment of technetium's most stable isotope. The next most stable isotope is technetium-99, which has a half-life of 211,100 years. Thirty-four other radioisotopes have been characterized with mass numbers ranging from 86 to 122. Most of these have half-lives that are less than an hour, the exceptions being technetium-93 (2.73 hours), technetium-94 (4.88 hours), technetium-95 (20 hours), and technetium-96 (4.3 days).
The primary decay mode for isotopes lighter than technetium-98 (98Tc) is electron capture, producing molybdenum (Z = 42). For technetium-98 and heavier isotopes, the primary mode is beta-minus emission (the emission of an electron), producing ruthenium (Z = 44), with the exception that technetium-100 can decay both by beta emission and electron capture.
Technetium also has numerous nuclear isomers, which are isotopes with one or more excited nucleons. Technetium-97m (97mTc; "m" stands for metastability) is the most stable, with a half-life of 91 days and excitation energy 0.0965 MeV.
This is followed by technetium-95m (61 days, 0.03 MeV), and technetium-99m (6.01 hours, 0.142 MeV).
Technetium-99 (99Tc) is a major product of the fission of uranium-235 (235U), making it the most common and most readily available isotope of technetium. One gram of technetium-99 produces about 6.2 × 10^8 disintegrations per second (in other words, the specific activity of 99Tc is 0.62 GBq/g).
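The quoted specific activity follows directly from the half-life; the short calculation below is an illustrative check by the editor (constants rounded, variable names arbitrary), not material from the source:

import math

# Illustrative check of the specific activity of Tc-99 from its half-life.
HALF_LIFE_SECONDS = 211_100 * 3.156e7   # 211,100 years expressed in seconds
AVOGADRO = 6.022e23                     # atoms per mole
MOLAR_MASS_G = 98.9                     # approximate molar mass of Tc-99 in g/mol

decay_constant = math.log(2) / HALF_LIFE_SECONDS     # decay probability per atom per second
atoms_per_gram = AVOGADRO / MOLAR_MASS_G
activity_bq_per_gram = decay_constant * atoms_per_gram

print(f"{activity_bq_per_gram:.2e} Bq/g")  # ~6.3e8 Bq/g, i.e. about 0.6 GBq/g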
Occurrence and production
Technetium occurs naturally in the Earth's crust in minute concentrations of about 0.003 parts per trillion. Technetium is so rare because the half-lives of 97Tc and 98Tc are only a few million years. More than a thousand such periods have passed since the formation of the Earth, so the probability of survival of even one atom of primordial technetium is effectively zero. However, small amounts exist as spontaneous fission products in uranium ores. A kilogram of uranium contains an estimated 1 nanogram of technetium, equivalent to roughly ten trillion atoms.
Some red giant stars with the spectral types S-, M-, and N display a spectral absorption line indicating the presence of technetium. These red giants are known informally as technetium stars.
Fission waste product
In contrast to the rare natural occurrence, bulk quantities of technetium-99 are produced each year from spent nuclear fuel rods, which contain various fission products. The fission of a gram of uranium-235 in nuclear reactors yields 27 mg of technetium-99, giving technetium a fission product yield of 6.1%. Other fissile isotopes produce similar yields of technetium, such as 4.9% from uranium-233 and 6.21% from plutonium-239. An estimated 49,000 TBq (78 metric tons) of technetium was produced in nuclear reactors between 1983 and 1994, by far the dominant source of terrestrial technetium.
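As a rough consistency check of these figures (the editor's arithmetic, not a calculation from the source), complete fission of one gram of uranium-235 at the quoted 6.1% yield gives a mass of technetium-99 close to the stated value:
\frac{1\,\mathrm{g}}{235\,\mathrm{g/mol}} \times 0.061 \times 99\,\mathrm{g/mol} \approx 0.026\,\mathrm{g} \approx 26\ \mathrm{mg\ of\ ^{99}Tc},
in line with the quoted 27 mg per gram of fissioned uranium-235.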
Only a fraction of the production is used commercially.
Technetium-99 is produced by the nuclear fission of both uranium-235 and plutonium-239. It is therefore present in radioactive waste and in the nuclear fallout of fission bomb explosions. Its decay, measured in becquerels per amount of spent fuel, becomes the dominant contributor to nuclear waste radioactivity roughly 10^4 to 10^6 years after the creation of the nuclear waste. From 1945 to 1994, an estimated 160 TBq (about 250 kg) of technetium-99 was released into the environment during atmospheric nuclear tests.
The amount of technetium-99 from nuclear reactors released into the environment up to 1986 is on the order of 1000 TBq (about 1600 kg), primarily by nuclear fuel reprocessing; most of this was discharged into the sea. Reprocessing methods have reduced emissions since then, but as of 2005 the primary release of technetium-99 into the environment is by the Sellafield plant, which released an estimated 550 TBq (about 900 kg) from 1995 to 1999 into the Irish Sea.
From 2000 onwards the amount has been limited by regulation to 90 TBq (about 140 kg) per year.
Discharge of technetium into the sea resulted in contamination of some seafood with minuscule quantities of this element. For example, European lobster and fish from west Cumbria contain about 1 Bq/kg of technetium.
Fission product for commercial use
The metastable isotope technetium-99m is continuously produced as a fission product from the fission of uranium or plutonium in nuclear reactors:
^{238}_{92}U ->[\ce{sf}] ^{137}_{53}I + ^{99}_{39}Y + 2^{1}_{0}n
^{99}_{39}Y ->[\beta^-][1.47\,\ce{s}] ^{99}_{40}Zr ->[\beta^-][2.1\,\ce{s}] ^{99}_{41}Nb ->[\beta^-][15.0\,\ce{s}] ^{99}_{42}Mo ->[\beta^-][65.94\,\ce{h}] ^{99}_{43}Tc ->[\beta^-][211,100\,\ce{y}] ^{99}_{44}Ru
Because used fuel is allowed to stand for several years before reprocessing, all molybdenum-99 and technetium-99m has decayed by the time that the fission products are separated from the major actinides in conventional nuclear reprocessing. The liquid left after plutonium–uranium extraction (PUREX) contains a high concentration of technetium as pertechnetate ([TcO4]−), but almost all of it is technetium-99, not technetium-99m.
The vast majority of the technetium-99m used in medical work is produced by irradiating dedicated highly enriched uranium targets in a reactor, extracting molybdenum-99 from the targets in reprocessing facilities, and recovering at the diagnostic center the technetium-99m produced upon decay of molybdenum-99. Molybdenum-99 in the form of molybdate is adsorbed onto acid alumina (Al2O3) in a shielded column chromatograph inside a technetium-99m generator ("technetium cow", also occasionally called a "molybdenum cow"). Molybdenum-99 has a half-life of 67 hours, so short-lived technetium-99m (half-life: 6 hours), which results from its decay, is constantly being produced. The soluble pertechnetate can then be chemically extracted by elution using a saline solution. A drawback of this process is that it requires targets containing uranium-235, which are subject to the security precautions of fissile materials.
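The build-up of technetium-99m between elutions of such a generator follows the standard parent–daughter (Bateman) relation. The sketch below is the editor's illustration using the half-lives quoted above; it ignores the small fraction of molybdenum-99 decays that bypass the metastable state, and all variable names are arbitrary:

import math

# Illustrative Mo-99 -> Tc-99m build-up in a generator after a complete elution.
T_HALF_MO_H = 67.0    # Mo-99 half-life in hours (parent)
T_HALF_TC_H = 6.01    # Tc-99m half-life in hours (daughter)

lam_mo = math.log(2) / T_HALF_MO_H
lam_tc = math.log(2) / T_HALF_TC_H

def tc99m_activity(t_hours, parent_activity_at_elution=1.0):
    """Relative Tc-99m activity t hours after all Tc-99m was eluted (Bateman equation)."""
    growth = math.exp(-lam_mo * t_hours) - math.exp(-lam_tc * t_hours)
    return parent_activity_at_elution * lam_tc / (lam_tc - lam_mo) * growth

# Activity peaks roughly a day after elution, which is why generators
# are typically "milked" on a daily schedule.
for t in (6, 12, 24, 48):
    print(f"t = {t:2d} h: relative Tc-99m activity = {tc99m_activity(t):.2f}")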
Almost two-thirds of the world's supply comes from two reactors: the National Research Universal Reactor at Chalk River Laboratories in Ontario, Canada, and the High Flux Reactor at the Nuclear Research and Consultancy Group in Petten, Netherlands. All major reactors that produce technetium-99m were built in the 1960s and are close to the end of life. The two new Canadian Multipurpose Applied Physics Lattice Experiment (MAPLE) reactors were planned and built to produce 200% of the demand for technetium-99m, relieving all other producers of the need to build their own reactors. With the cancellation of the already tested reactors in 2008, the future supply of technetium-99m became problematic.
Waste disposal
The long half-life of technetium-99 and its potential to form anionic species creates a major concern for long-term disposal of radioactive waste. Many of the processes designed to remove fission products in reprocessing plants aim at cationic species such as caesium (e.g., caesium-137) and strontium (e.g., strontium-90). Hence the pertechnetate escapes through those processes. Current disposal options favor burial in continental, geologically stable rock. The primary danger with such practice is the likelihood that the waste will contact water, which could leach radioactive contamination into the environment. The anionic pertechnetate and iodide tend not to adsorb into the surfaces of minerals, and are likely to be washed away. By comparison plutonium, uranium, and caesium tend to bind to soil particles. Technetium could be immobilized by some environments, such as microbial activity in lake bottom sediments, and the environmental chemistry of technetium is an area of active research.
An alternative disposal method, transmutation, has been demonstrated at CERN for technetium-99. In this process, the technetium (technetium-99 as a metal target) is bombarded with neutrons to form the short-lived technetium-100 (half-life = 16 seconds) which decays by beta decay to stable ruthenium-100. If recovery of usable ruthenium is a goal, an extremely pure technetium target is needed; if small traces of the minor actinides such as americium and curium are present in the target, they are likely to undergo fission and form more fission products which increase the radioactivity of the irradiated target. The formation of ruthenium-106 (half-life 374 days) from the 'fresh fission' is likely to increase the activity of the final ruthenium metal, which will then require a longer cooling time after irradiation before the ruthenium can be used.
The actual separation of technetium-99 from spent nuclear fuel is a long process. During fuel reprocessing, it comes out as a component of the highly radioactive waste liquid. After sitting for several years, the radioactivity reduces to a level where extraction of the long-lived isotopes, including technetium-99, becomes feasible. A series of chemical processes yields technetium-99 metal of high purity.
Neutron activation
Molybdenum-99, which decays to form technetium-99m, can be formed by the neutron activation of molybdenum-98. When needed, other technetium isotopes are not produced in significant quantities by fission, but are manufactured by neutron irradiation of parent isotopes (for example, technetium-97 can be made by neutron irradiation of ruthenium-96).
Particle accelerators
The feasibility of technetium-99m production with the 22-MeV-proton bombardment of a molybdenum-100 target in medical cyclotrons following the reaction 100Mo(p,2n)99mTc was demonstrated in 1971. The recent shortages of medical technetium-99m reignited the interest in its production by proton bombardment of isotopically enriched (>99.5%) molybdenum-100 targets. Other techniques are being investigated for obtaining molybdenum-99 from molybdenum-100 via (n,2n) or (γ,n) reactions in particle accelerators.
Applications
Nuclear medicine and biology
Technetium-99m ("m" indicates that this is a metastable nuclear isomer) is used in radioactive isotope medical tests. For example, technetium-99m is a radioactive tracer that medical imaging equipment tracks in the human body. It is well suited to the role because it emits readily detectable 140 keV gamma rays, and its half-life is 6.01 hours (meaning that about 94% of it decays to technetium-99 in 24 hours). The chemistry of technetium allows it to be bound to a variety of biochemical compounds, each of which determines how it is metabolized and deposited in the body, and this single isotope can be used for a multitude of diagnostic tests. More than 50 common radiopharmaceuticals are based on technetium-99m for imaging and functional studies of the brain, heart muscle, thyroid, lungs, liver, gall bladder, kidneys, skeleton, blood, and tumors.
The longer-lived isotope, technetium-95m with a half-life of 61 days, is used as a radioactive tracer to study the movement of technetium in the environment and in plant and animal systems.
Industrial and chemical
Technetium-99 decays almost entirely by beta decay, emitting beta particles with consistent low energies and no accompanying gamma rays. Moreover, its long half-life means that this emission decreases very slowly with time. It can also be extracted to a high chemical and isotopic purity from radioactive waste. For these reasons, it is a National Institute of Standards and Technology (NIST) standard beta emitter, and is used for equipment calibration. Technetium-99 has also been proposed for optoelectronic devices and nanoscale nuclear batteries.
Like rhenium and palladium, technetium can serve as a catalyst. In processes such as the dehydrogenation of isopropyl alcohol, it is a far more effective catalyst than either rhenium or palladium. However, its radioactivity is a major problem in safe catalytic applications.
When steel is immersed in water, adding a small concentration (55 ppm) of potassium pertechnetate(VII) to the water protects the steel from corrosion, even at elevated temperature. For this reason, pertechnetate has been used as an anodic corrosion inhibitor for steel, although technetium's radioactivity poses problems that limit this application to self-contained systems. While other inhibiting anions (chromate, for example) can also inhibit corrosion, they require a concentration about ten times as high. In one experiment, a specimen of carbon steel was kept in an aqueous solution of pertechnetate for 20 years and was still uncorroded. The mechanism by which pertechnetate prevents corrosion is not well understood, but seems to involve the reversible formation of a thin surface layer (passivation). One theory holds that the pertechnetate reacts with the steel surface to form a layer of technetium dioxide which prevents further corrosion; the same effect explains how iron powder can be used to remove pertechnetate from water. The effect disappears rapidly if the concentration of pertechnetate falls below the minimum concentration or if too high a concentration of other ions is added.
As noted, the radioactive nature of technetium (3 MBq/L at the concentrations required) makes this corrosion protection impractical in almost all situations. Nevertheless, corrosion protection by pertechnetate ions was proposed (but never adopted) for use in boiling water reactors.
Precautions
Technetium plays no natural biological role and is not normally found in the human body. Technetium is produced in quantity by nuclear fission, and spreads more readily than many radionuclides. It appears to have low chemical toxicity. For example, no significant change in blood formula, body and organ weights, or food consumption could be detected for rats which ingested up to 15 μg of technetium-99 per gram of food for several weeks. In the body, technetium quickly converts to the stable pertechnetate ion ([TcO4]−), which is highly water-soluble and quickly excreted. The radiological toxicity of technetium (per unit of mass) is a function of the compound, the type of radiation for the isotope in question, and the isotope's half-life.
All isotopes of technetium must be handled carefully. The most common isotope, technetium-99, is a weak beta emitter; such radiation is stopped by the walls of laboratory glassware. The primary hazard when working with technetium is inhalation of dust; such radioactive contamination in the lungs can pose a significant cancer risk. For most work, careful handling in a fume hood is sufficient, and a glove box is not needed.
Notes
References
Bibliography
Further reading
External links
Chemical elements
Transition metals
Synthetic elements
Chemical elements predicted by Dmitri Mendeleev
Chemical elements with hexagonal close-packed structure | Technetium | Physics,Chemistry | 7,002 |
11,421,609 | https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20Me28S-Cm2645 | In molecular biology, Small nucleolar RNA Me28S-Cm2645 is a non-coding RNA (ncRNA) molecule which functions in the modification of other small nuclear RNAs (snRNAs). This type of modifying RNA is usually located in the nucleolus of the eukaryotic cell which is a major site of snRNA biogenesis. It is known as a small nucleolar RNA (snoRNA) and also often referred to as a guide RNA.
snoRNA Me28S-Cm2645 belongs to the C/D box class of snoRNAs which contain the conserved sequence motifs known as the C box (UGAUGA) and the D box (CUGA). Most of the members of the box C/D family function in directing site-specific 2'-O-methylation of substrate RNAs. It is predicted that this family directs 2'-O-methylation of 28S C-2645.
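As a simple illustration of how such conserved boxes can be located in a sequence, the sketch below scans an RNA string for the C-box and D-box motifs quoted above; the example sequence and all names are the editor's invention, not data from the article or any database:

import re

# Illustrative search for the conserved C box (UGAUGA) and D box (CUGA)
# in an RNA sequence; the sequence below is a made-up example.
C_BOX = "UGAUGA"
D_BOX = "CUGA"

def find_boxes(rna: str):
    """Return (position, motif) pairs for every C-box and D-box occurrence."""
    hits = []
    for motif in (C_BOX, D_BOX):
        for m in re.finditer(motif, rna):
            hits.append((m.start(), motif))
    return sorted(hits)

example = "GGUGAUGAAUUCCAAGGAAUGUCUGAUU"  # hypothetical snoRNA fragment
print(find_boxes(example))               # [(2, 'UGAUGA'), (22, 'CUGA')]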
References
External links
Small nuclear RNA | Small nucleolar RNA Me28S-Cm2645 | Chemistry | 208 |
76,109,895 | https://en.wikipedia.org/wiki/TK83 | The TK83 was a home computer produced by the Brazilian company Microdigital Eletrônica Ltda. and introduced in August 1982. By December 1984, it was no longer being advertised by Microdigital, being discontinued in 1985.
The TK83 was a clone of the Sinclair ZX81 and can, for all practical purposes, be considered a version of the TK82C with a reorganized memory map and the addition of the SLOW mode, which permitted the video display to remain visible during processing.
General information
The TK83 had the Zilog Z80A processor running at 3.25 MHz, 2 KB RAM (expandable to 64 KB) and 8 KB of ROM that included the BASIC interpreter.
The keyboard was made of layers of conductive (membrane) material and followed the Sinclair layout with 40 keys.
Video output was sent via a RF modulator to a TV set tuned at VHF channel 3, and featured black characters on a white background. The maximum resolution was 64 x 44 pixels, based on semigraphic characters useful for games and basic images (see ZX81 character set).
There was one expansion slot at the side of the machine, a cassette interface (data storage on tape at 300 to 4200 baud; audio cables were supplied with the computer for connection to a regular tape recorder), and a DIN joystick connector.
References
Microdigital Eletrônica
Computer-related introductions in 1982
Goods manufactured in Brazil
Products introduced in 1983
Sinclair ZX81 clones | TK83 | Technology | 311 |
13,474,685 | https://en.wikipedia.org/wiki/Ground-effect%20vehicle | A ground-effect vehicle (GEV), also called a wing-in-ground-effect (WIGE or WIG), ground-effect craft/machine (GEM), wingship, flarecraft, surface effect vehicle or ekranoplan (), is a vehicle that is able to move over the surface by gaining support from the reactions of the air against the surface of the earth or water. Typically, it is designed to glide over a level surface (usually over the sea) by making use of ground effect, the aerodynamic interaction between the moving wing and the surface below. Some models can operate over any flat area such as frozen lakes or flat plains similar to a hovercraft. The term Ground-Effect Vehicle originally referred to any craft utilizing ground effect, including what is known later as hovercraft, in descriptions of patents during the 1950s. However, this term is nowadays regarded as distinct from air-cushion vehicles or hovercraft. The definition of GEVs does not include racecars utilizing ground-effect for increasing downforce.
Design
A ground-effect vehicle needs some forward velocity to produce lift dynamically, and the principal benefit of operating a wing in ground effect is to reduce its lift-dependent drag. The basic design principle is that the closer the wing operates to an external surface such as the ground, when it is said to be in ground effect, the less drag it experiences.
An airfoil passing through air increases air pressure on the underside, while decreasing pressure across the top. The high and low pressures are maintained until they flow off the ends of the wings, where they form vortices which in turn are the major cause of lift-induced drag—normally a significant portion of the drag affecting an aircraft. The greater the span of a wing, the less induced drag created for each unit of lift and the greater the efficiency of the particular wing. This is the primary reason gliders have long wings.
Placing the same wing near a surface such as the water or the ground has the same effect as increasing the aspect ratio because the ground prevents wingtip vortices from expanding, but without having the complications associated with a long and slender wing, so that the short stubs on a GEV can produce just as much lift as the much larger wing on a transport aircraft, though it can do this only when close to the earth's surface. Once sufficient speed has built up, some GEVs may be capable of leaving ground effect and functioning as normal aircraft until they approach their destination. The distinguishing characteristic is that they are unable to land or take off without a significant amount of help from the ground effect cushion, and cannot climb until they have reached a much higher speed.
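To give a feel for the magnitude of the effect, one commonly cited empirical approximation (often attributed to McCormick) expresses the remaining fraction of induced drag as a function of the ratio of wing height h to span b; the formula and figures below are the editor's illustration, not data from this article:

# Illustrative estimate of how induced drag falls off in ground effect,
# using the common approximation sigma = (16*h/b)^2 / (1 + (16*h/b)^2),
# where h is wing height above the surface and b is wingspan.
def induced_drag_fraction(height_over_span: float) -> float:
    x = 16.0 * height_over_span
    return x * x / (1.0 + x * x)

for h_over_b in (0.05, 0.10, 0.25, 1.00):
    frac = induced_drag_fraction(h_over_b)
    print(f"h/b = {h_over_b:.2f}: induced drag is about {frac:.0%} of the free-air value")

On this estimate, flying at a height of one-tenth of the span leaves roughly 70% of the free-air induced drag, and flying at one-twentieth of the span roughly 40%.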
A GEV is sometimes characterized as a transition between a hovercraft and an aircraft, although this is not correct as a hovercraft is statically supported upon a cushion of pressurized air from an onboard downward-directed fan. Some GEV designs, such as the Russian Lun and Dingo, have used forced blowing under the wing by auxiliary engines to increase the high pressure area under the wing to assist the takeoff; however they differ from hovercraft in still requiring forward motion to generate sufficient lift to fly.
Although the GEV may look similar to the seaplane and share many technical characteristics, it is generally not designed to fly out of ground effect. It differs from the hovercraft in lacking low-speed hover capability in much the same way that a fixed-wing airplane differs from the helicopter. Unlike the hydrofoil, it does not have any contact with the surface of the water when in "flight". The ground-effect vehicle constitutes a unique class of transportation.
The Boston-based (United States) company REGENT proposed an electric-powered high-wing design with a standard hull for water operations, but also incorporated fore- and aft-mounted hydrofoil units designed to lift the craft out of the water during takeoff run, to facilitate lower liftoff speeds.
Wing configurations
Straight wing
Used by the Russian Rostislav Alexeyev for his ekranoplan. The wings are significantly shorter than those of comparable aircraft, and this configuration requires a high aft-placed horizontal tail to maintain stability. The pitch and altitude stability comes from the lift slope difference between a front low wing in ground-effect (commonly the main wing) and an aft, higher-located second wing nearly out of ground-effect (generally named a stabilizer).
Reverse-delta wing
Developed by Alexander Lippisch, this wing allows stable flight in ground-effect through self-stabilization. This is the main Class B form of GEV. Hanno Fischer later developed WIG craft based on the configuration, which were then transferred to multiple companies in Asia, thus becoming one of the "standards" in GEV design.
Tandem wings
Tandem wings can have three configurations:
A biplane-style type-1 utilising a shoulder-mounted main lift wing and belly-mounted sponsons similar to those on combat and transport helicopters.
A canard-style type-2 with a mid-size horizontal wing near the nose of the craft directing airflow under the main lift airfoil. This type-2 tandem design is a major improvement during takeoff, as it creates an air cushion to lift the craft above the water at a lower speed, thereby reducing water drag, which is the biggest obstacle to successful seaplane launches.
Two stubby wings as in the tandem-airfoil flairboat produced by Günther Jörg in Germany. His particular design is self-stabilizing longitudinally.
Advantages and disadvantages
Given similar hull size and power, and depending on its specific design, the lower lift-induced drag of a GEV, as compared to an aircraft of similar capacity, will improve its fuel efficiency and, up to a point, its speed. GEVs are also much faster than surface vessels of similar power, because they avoid drag from the water.
On the water the aircraft-like construction of GEVs increases the risk of damage in collisions with surface objects. Furthermore, the limited number of egress points make it more difficult to evacuate the vehicle in an emergency. According to WST, the builders of the WIG craft WSH-500, GEVs furthermore have the advantage of avoiding conflict with ocean currents by flying over them.
Since most GEVs are designed to operate from water, accidents and engine failure typically are less hazardous than in a land-based aircraft, but the lack of altitude control leaves the pilot with fewer options for avoiding collision, and to some extent that negates such benefits. Low altitude brings high-speed craft into conflict with ships, buildings and rising land, which may not be sufficiently visible in poor conditions to avoid. GEVs may be unable to climb over or turn sharply enough to avoid collisions, while drastic, low-level maneuvers risk contact with solid or water hazards beneath. Aircraft can climb over most obstacles, but GEVs are more limited.
In high winds, take-off must be into the wind, which takes the craft across successive lines of waves, causing heavy pounding, stressing the craft and creating an uncomfortable ride. In light winds, waves may be in any direction, which can make control difficult as each wave causes the vehicle to both pitch and roll. The lighter construction of GEVs makes their ability to operate in higher sea states less than that of conventional ships, but greater than the ability of hovercraft or hydrofoils, which are closer to the water surface.
Like conventional aircraft, greater power is needed for takeoff, and, like seaplanes, ground-effect vehicles must get on the step before they can accelerate to flight speed. Careful design, usually with multiple redesigns of hullforms, is required to get this right, which increases engineering costs. This obstacle is more difficult for GEVs with short production runs to overcome. For the vehicle to work, its hull needs to be stable enough longitudinally to be controllable yet not so stable that it cannot lift off the water.
The bottom of the vehicle must be formed to avoid excessive pressures on landing and taking off without sacrificing too much lateral stability, and it must not create too much spray, which damages the airframe and the engines. The Russian ekranoplans show evidence of fixes for these problems in the form of multiple chines on the forward part of the hull undersides and in the forward location of the jet engines.
Finally, limited utility has kept production levels low enough that it has been impossible to amortize development costs sufficiently to make GEVs competitive with conventional aircraft.
A 2014 study by students at NASA's Ames Research Center claims that use of GEVs for passenger travel could lead to cheaper flights, increased accessibility and less pollution.
Classification
One obstacle to GEV development is the classification and legislation to be applied. The International Maritime Organization has studied the application of rules based on the International Code of Safety for High-Speed Craft (HSC code) which was developed for fast ships such as hydrofoils, hovercraft, catamarans and the like. The Russian Rules for classification and construction of small type A ekranoplans is a document upon which most GEV design is based. However, in 2005, the IMO classified the WISE or GEV under the category of ships.
The International Maritime Organization recognizes three types of GEVs:
Type A: craft certified for operation only in ground effect;
Type B: craft certified to temporarily increase altitude to a limited height outside the influence of ground effect, but not exceeding 150 m above the surface;
Type C: craft certified for operation outside of ground effect, exceeding 150 m above the surface.
At the time of writing, those classes only applied to craft carrying 12 passengers or more, and (as of 2019) there was disagreement between national regulatory agencies about whether these vehicles should be classified, and regulated, as aircraft or as boats.
History
By the 1920s, the ground effect phenomenon was well-known, as pilots found that their airplanes appeared to become more efficient as they neared the runway surface during landing. In 1934 the US National Advisory Committee for Aeronautics issued Technical Memorandum 771, Ground Effect on the Takeoff and Landing of Airplanes, which was a translation into English of a summary of French research on the subject. The French author Maurice Le Sueur had added a suggestion based on this phenomenon: "Here the imagination of inventors is offered a vast field. The ground interference reduces the power required for level flight in large proportions, so here is a means of rapid and at the same time economic locomotion: Design an airplane which is always within the ground-interference zone. At first glance this apparatus is dangerous because the ground is uneven and the altitude called skimming permits no freedom of maneuver. But on large-sized aircraft, over water, the question may be attempted ..."
By the 1960s, the technology started maturing, in large part due to the independent contributions of Rostislav Alexeyev in the Soviet Union and German Alexander Lippisch, working in the United States. Alexeyev worked from his background as a ship designer whereas Lippisch worked as an aeronautical engineer. The influence of Alexeyev and Lippisch remains noticeable in most GEVs seen today.
Canada
It is said that the research hydrofoil HD-4 of Alexander Graham Bell derived part of its dynamic lift from its pair of wings operating in ground effect. However, it is doubtful whether the designer was aware of this effect, given the relative infancy of aerodynamics at the time.
Avro Canada investigated aircraft with a Coanda-effect propulsion system. Such jets were supposed to create an air cushion below the airframe that would allow them to hover close to the ground. In fact, for the only test aircraft built, this was the only mode in which it could operate, owing to stability issues when taking off. The designs were later further developed in the United States, while Convair may have been inspired by them to create a preliminary design of a large ocean-going ground-effect ship called the Hydroskimmer.
Soviet Union
Led by Alexeyev, the Soviet Central Hydrofoil Design Bureau was the center of ground-effect craft development in the USSR. The vehicle came to be known as an ekranoplan (Russian экраноплан, from экран 'screen' + план 'plane'), the Russian term for ground effect being literally 'screen effect'. The military potential for such a craft was soon recognized, and Alexeyev received support and financial resources from Soviet leader Nikita Khrushchev.
Some manned and unmanned prototypes were built, ranging up to eight tonnes in displacement. This led to the development of a 550-tonne military ekranoplan. The craft was dubbed the Caspian Sea Monster by U.S. intelligence experts, after a huge, unknown craft was spotted on satellite reconnaissance photos of the Caspian Sea area in the 1960s. With its short wings, it looked airplane-like in planform, but would probably have been incapable of conventional flight out of ground effect. Although it was designed to skim just above the sea surface, research flights found it to be most efficient at a somewhat greater height, where it reached its top speed.
The Soviet ekranoplan program continued with the support of Minister of Defence Dmitriy Ustinov. It produced the most successful ekranoplan so far, the 125-tonne A-90 Orlyonok. These craft were originally developed as high-speed military transports and were usually based on the shores of the Caspian Sea and Black Sea. The Soviet Navy ordered 120 Orlyonok-class ekranoplans, but this figure was later reduced to fewer than 30 vessels, with planned deployment mainly in the Black Sea and Baltic Sea fleets.
A few Orlyonoks served with the Soviet Navy from 1979 to 1992. In 1987, the 400-tonne Lun-class ekranoplan was built as an anti-ship missile launch platform. A second Lun, renamed Spasatel, was laid down as a rescue vessel, but was never finished. The two major problems that the Soviet ekranoplans faced were poor longitudinal stability and a need for reliable navigation.
Minister Ustinov died in 1984, and the new Minister of Defence, Marshal Sokolov, cancelled funding for the program. Only three operational Orlyonok-class ekranoplans (with revised hull design) and one Lun-class ekranoplan remained at a naval base near Kaspiysk.
Since the dissolution of the Soviet Union, ekranoplans have been produced by the Volga Shipyard in Nizhniy Novgorod. Smaller ekranoplans for non-military use have been under development. The CHDB had already developed the eight-seat Volga-2 in 1985, and Technologies and Transport is developing a smaller version called the Amphistar. Beriev proposed a large craft of the type, the Be-2500, as a "flying ship" cargo carrier, but nothing came of the project.
United States of America
During the 1950s, the US Navy investigated anti-submarine vessels operating on the ram effect, a product of ground effect. Such vessels were to use this effect to create an air cushion below the hulls that would allow hovering; where this was not possible, additional engines were to be used to blow air underneath the craft artificially. The project was designated RAM-2. Several other projects were proposed throughout the early Cold War, some using a similar mix of wings and lift engines while others were more akin to Russian types. More than a decade later, General Dynamics designed catamaran vessels exploiting ground effect and filed patents for them.
Germany
Lippisch Type and Hanno Fischer
In Germany, Lippisch was asked to build a very fast boat for American businessman Arthur A. Collins. In 1963 Lippisch developed the X-112, a revolutionary design with reversed delta wing and T-tail. This design proved to be stable and efficient in ground effect, and even though it was successfully tested, Collins decided to stop the project and sold the patents to the German company Rhein Flugzeugbau (RFB), which further developed the inverse delta concept into the X-113 and the six-seat X-114. These craft could be flown out of ground effect so that, for example, peninsulas could be overflown.
Hanno Fischer took over the works from RFB and created his own company, Fischer Flugmechanik, which eventually completed two models. The Airfisch 3 carried two persons, and the FS-8 carried six persons. The FS-8 was to be developed by Fischer Flugmechanik for a Singapore-Australian joint venture called Flightship. Powered by a V8 Chevrolet automobile engine rated at 337 kW, the prototype made its first flight in February 2001 in the Netherlands. The company no longer exists but the prototype craft was bought by Wigetworks, a company based in Singapore and renamed as AirFish 8. In 2010, that vehicle was registered as a ship in the Singapore Registry of Ships.
The University of Duisburg-Essen is supporting an ongoing research project to develop the Hoverwing.
Günther Jörg-type tandem-airfoil flairboat
German engineer Günther Jörg, who had worked on Alexeyev's first designs and was familiar with the challenges of GEV design, developed a GEV with two wings in a tandem arrangement, the Jörg-II. The third manned tandem-airfoil boat, named "Skimmerfoil", was developed during his consultancy period in South Africa. It was a simple, low-cost design: a first 4-seater tandem-airfoil flairboat constructed entirely of aluminium. The prototype was in the SAAF Port Elizabeth Museum from 4 July 2007 until 2013, and is now in private use. Pictures show the boat after some years outside the museum without protection against the sun.
Günther Jörg's consultancy was founded on his experience in the German aircraft industry from 1963, his work alongside Alexander Lippisch and Hanno Fischer, and a fundamental knowledge of wing-in-ground-effect physics, together with the results of basic tests under different conditions and with different designs that had begun in 1960. Over more than 30 years, Jörg built and tested 15 different tandem-airfoil flairboats in different sizes and made of different materials.
The following tandem-airfoil flairboat (TAF) types had been built after a previous period of nearly 10 years of research and development:
TAB VII-3: First manned tandem W.I.G type Jörg, being built at Technical University of Darmstadt, Akaflieg
TAF VII-5: Second manned tandem-airfoil Flairboat, 2 seater made of wood
TAF VIII-1: 2-seater tandem-airfoil flairboat built of glass-reinforced plastic (GRP) and aluminium. A small series of 6 flairboats was produced by the former Botec Company
TAF VIII-2: 4-seater tandem-airfoil Flairboat built of full aluminium (2 units) and built of GRP (3 units)
TAF VIII-3: 8-seater tandem-airfoil Flairboat built of aluminium combined with GRP parts
TAF VIII-4: 12-seater tandem-airfoil Flairboat built of aluminium combined with GRP parts
TAF VIII-3B: 6-seater tandem-airfoil flairboat under carbon fibre composite construction
Bigger concepts are: 25-seater, 32-seater, 60-seater, 80-seater and bigger up to the size of a passenger airplane.
1980-1999
Since the 1980s GEVs have been primarily smaller craft designed for the recreational and civilian ferry markets. Germany, Russia and the United States have provided most of the activity with some development in Australia, China, Japan, Korea and Taiwan. In these countries and regions, small craft with up to ten seats have been built. Other larger designs such as ferries and heavy transports have been proposed but have not been carried to completion.
Besides the development of appropriate design and structural configuration, automatic control and navigation systems have been developed. These include altimeters with high accuracy for low altitude flight and lesser dependence on weather conditions. "Phase radio altimeters" have become the choice for such applications beating laser altimeter, isotropic or ultrasonic altimeters.
With Russian consultation, the United States Defense Advanced Research Projects Agency (DARPA) studied the Aerocon Dash 1.6 wingship.
Universal Hovercraft developed a flying hovercraft, first flying a prototype in 1996. Since 1999, the company has offered plans, parts, kits and manufactured ground effect hovercraft called the Hoverwing.
2000-2019
Iran deployed three squadrons of Bavar 2 two-seat GEVs in September 2010. This GEV carries one machine gun and surveillance gear, and incorporates features to reduce its radar signature. In October 2014, satellite images showed the GEV in a shipyard in southern Iran. The GEV has two engines and no armament.
In Singapore, Wigetworks obtained certification from Lloyd's Register for entry into class. On 31 March 2011, AirFish 8-001 became one of the first GEVs to be flagged with the Singapore Registry of Ships, one of the largest ship registries. Wigetworks partnered with National University of Singapore's Engineering Department to develop higher capacity GEVs.
Burt Rutan in 2011 and Korolev in 2015 showed GEV projects.
In Korea, Wing Ship Technology Corporation developed and tested a 50-seat passenger GEV, the WSH-500, in 2013.
Estonian transport company Sea Wolf Express planned to launch passenger service in 2019 between Helsinki and Tallinn, a distance of 87 km taking only half an hour, using a Russian-built ekranoplan. The company ordered 15 ekranoplans with maximum speed of 185 km/h and capacity of 12 passengers, built by Russian RDC Aqualines.
2020-
In 2021 Brittany Ferries announced that they were looking into using REGENT (Regional Electric Ground Effect Naval Transport) ground effect craft "seagliders" for cross English Channel services. Southern Airways Express also placed firm orders for seagliders with intent to operate them along Florida's east coast.
Around mid-2022, the US Defense Advanced Research Projects Agency (DARPA) launched its Liberty Lifter project, with the goal of creating a low-cost seaplane that would use ground effect to extend its range. The program aims to carry 90 tons over long transoceanic distances and to operate at sea without ground-based maintenance, all using low-cost materials.
In May 2024, Ocean Glider announced a deal with UK-based investor MONTE to finance $145m of a $700m deal to begin operating 25 REGENT seagliders between destinations in New Zealand. The order includes 15 12-seater Viceroys and 10 100-seater Monarchs.
See also
Aerodynamically alleviated marine vehicle
Flying Platform
Ground effect (aerodynamics)
Ground-effect train
Hovercraft
List of ground-effect vehicles
Surface effect ship
Caspian Sea Monster
Footnotes
Notes
Citations
Bibliography
External links
Amphibious vehicles
Aircraft configurations
Ekranoplan
Soviet inventions | Ground-effect vehicle | Engineering | 4,633 |
53,207,054 | https://en.wikipedia.org/wiki/Carbohydrate%20Structure%20Database | Carbohydrate Structure Database (CSDB) is a free curated database and service platform in glycoinformatics, launched in 2005 by a group of Russian scientists from N.D. Zelinsky Institute of Organic Chemistry, Russian Academy of Sciences. CSDB stores published structural, taxonomical, bibliographic and NMR-spectroscopic data on natural carbohydrates and carbohydrate-related molecules.
Overview
The main data stored in CSDB are carbohydrate structures of bacterial, fungal, and plant origin. Each structure is assigned to an organism and is provided with the link(s) to the corresponding scientific publication(s), in which it was described. Apart from structural data, CSDB also stores NMR spectra, information on methods used to decipher a particular structure, and some other data.
CSDB provides access to several carbohydrate-related research tools:
Simulation of 1D and 2D NMR spectra of carbohydrates (GODDESS: glycan-oriented database-driven empirical spectrum simulation).
Automated NMR-based structure elucidation (GRASS: generation, ranking and assignment of saccharide structures).
Statistical analysis of structural feature distribution in glycomes of living organisms
Generation of optimized atomic coordinates for an arbitrary saccharide and subdatabase of conformation maps.
Taxon clustering based on similarities of glycomes (carbohydrate-based tree of life)
Glycosyltransferase subdatabase (GT-explorer)
History and funding
Until 2015, the Bacterial Carbohydrate Structure Database (BCSDB) and the Plant&Fungal Carbohydrate Structure Database (PFCSDB) existed in parallel. In 2015, they were joined into the single Carbohydrate Structure Database (CSDB). The development and maintenance of CSDB have been funded by the International Science and Technology Center (2005-2007), the Russian Federation President grant program (2005-2006), the Russian Foundation for Basic Research (2005-2007, 2012-2014, 2015-2017, 2018-2020), Deutsches Krebsforschungszentrum (short-term in 2006-2010), and the Russian Science Foundation (2018-2020).
Data sources and coverage
The main sources of CSDB data are:
Scientific publications indexed in the dedicated citation databases, including NCBI PubMed and Thomson Reuters Web of Science (approx. 18,000 records).
CCSD (CarbBank) database (approx. 3,000 records).
The data are selected and added to CSDB manually by browsing original scientific publications. The data originating from other databases are subject to error-correction and approval procedures.
As of 2017, the coverage of bacteria and archaea is ca. 80% of carbohydrate structures published in the scientific literature. The time lag between the publication of relevant data and their deposition into CSDB is about 18 months. Plants are covered up to 1997, and fungi up to 2012.
CSDB does not cover data from the Animalia domain, except unicellular metazoa. There are a number of dedicated databases on animal carbohydrates, e.g. UniCarbKB or GLYCOSCIENCES.de.
CSDB is reported as one of the biggest projects in glycoinformatics. It is employed in structural studies of natural carbohydrates and in glyco-profiling.
The content of CSDB has been used as a data source in other glycoinformatics projects.
Deposited objects
Molecular structures of glycans, glycopolymers and glycoconjugates: primary structure, aglycon information, polymerization degree and class of molecule. Structural scope includes molecules composed of residues (monosaccharides, alditols, amino acids, fatty acids etc.) linked by glycosidic, ester, amidic, ketal, phospho- or sulpho-diester bonds, in which at least one residue is a monosaccharide or its derivative.
Bibliography associated with structures: imprint data, keywords, abstracts, IDs in bibliographic databases
Biological context of structures: associated taxon, strain, serogroup, host organism, disease information. The covered domains are: prokaryotes, plants, fungi and selected pathogenic unicellular metazoa. The database contains only glycans originating from these domains or obtained by chemical modification of such glycans.
Assigned NMR spectra and experimental conditions.
Glycosyltransferases associated with taxons: gene and enzyme identifiers, full structures, donor and substrates, methods used to prove enzymatic activity, trustworthiness level.
References to other databases
Other data collected from original publications
Conformation maps of disaccharides derived from molecular dynamics simulations.
Interrelation with other databases
CSDB is cross-linked to other glycomics databases, such as MonosaccharideDB, Glycosciences.DE, NCBI PubMed, NCBI Taxonomy, NLM catalog, International Classification of Diseases 11, etc. Besides a native notation, CSDB Linear, structures are presented in multiple carbohydrate notations (SNFG, SweetDB, GlycoCT, WURCS, GLYCAM, etc.). CSDB is exportable as a Resource Description Framework (RDF) feed according to the GlycoRDF ontology.
External links
CSDB web site
CSDB usage examples
CSDB technical documentation
CSDB Linear (structure encoding notation)
Carbohydrate databases registered in NAR collection
Carbohydrate databases in the recent decade (lection)
References
Biochemistry databases
Carbohydrates
Glycomics | Carbohydrate Structure Database | Chemistry,Biology | 1,210 |
47,376,397 | https://en.wikipedia.org/wiki/TY%20Pyxidis | TY Pyxidis is an eclipsing binary star in the constellation Pyxis. The apparent magnitude ranges from 6.85 to 7.5 over 3.2 days.
The two components are both of spectral type G5IV, have a mass of 1.2 solar masses and revolve around each other every 3.2 days. Each star is around 2.2 times the diameter of the Sun.
The system is classified as either a RS Canum Venaticorum variable or a BY Draconis variable, stars that vary on account of prominent starspot activity, and lies 184 ± 5 light years away. The system emits X-rays, and analysing the emission curve over time led Pres and colleagues to conclude that there was a loop of material arcing between the two stars.
References
Pyxis
BY Draconis variables
RS Canum Venaticorum variables
Durchmusterung objects
044164
077137
G-type subgiants
Pyxidis, TY | TY Pyxidis | Astronomy | 210 |
75,751,222 | https://en.wikipedia.org/wiki/Imre%20Tak%C3%A1cs | Imre Takács is a Hungarian-Canadian environmental engineer and process engineer. He is a founder and CEO of Dynamita SARL, based in France, and developer of process simulators and dynamic models for wastewater treatment plants.
Takács has made contributions to environmental engineering, with a particular focus on biological and physico-chemical water treatment processes. He is known for the development of modeling and software solutions for water and wastewater plant control and has overseen projects aimed at implementing progressive technologies in full-scale wastewater treatment plants. He has authored book chapters for professional organizations and papers in peer-reviewed journals, and his paper on the dynamic process model for thickening and clarification was selected as one of the ten most influential papers published in the journal Water Research in the past 40 years. He is the recipient of the 2019 Fuhrman Medal for Outstanding Academic-Practice Collaboration from the International Water Association (IWA).
Takács has contributed to the development of industry process software, including GPS-X from Hydromantis and BioWin. Additionally, he introduced Sumo, a third-generation wastewater process modeling software.
Takács initiated the WWTmod (later WRRmod) conference series for modellers. He is the founder and first director of the MEGA workgroup in Municipal Resource Recovery Design Community (MRRDC) at WEF. He has been involved in various IWA groups, such as the Task Group for Good Modelling Practice (GMP) and Good Biofilm Reactor Modelling Practice, and has been serving on many scientific committees including the scientific committee for the IWA Specialised Conference on Design, Operation, and Economics of Large Wastewater Treatment Plants.
Education
Takács obtained his bachelor's degree in 1978 from Budapest University of Technology and Economics, specializing in Industrial Food Processing Engineering. In 1980, he earned a master's degree and completed his Doctor of Technology degree in 1986, both in Environmental Bioengineering from the same institution. Subsequently, he continued his doctoral studies, culminating in a PhD in Environmental Technology from Ghent University in 2008.
Career
Takács started his professional career in 1980 as a Project Engineer at the Water Quality Institute (VITUKI, Hungary), where he served until 1988 while also maintaining a position at VIZITERV in 1983. Following this, he assumed the role of Head of R&D at Hydromantis, and from 2002 to 2008, he served at EnviroSim Associates, followed by two years where he was assigned a managerial position within their European subsidiary office. In 2010, he founded Dynamita, a software and process modelling company and has since served as its CEO.
Takács held the role of Project Manager for numerous projects including for DCWATER's Blue Plains plant and HRSD's Nansemond plant at Norfolk.
Takács is a Water-Energy Nexus (WEX) fellow with the University of California at Irvine (UCI). He held a part-time professorship within the Geology department from 1994 to 2002. Prior to that he worked as a Research Engineer at McMaster University between 1988 and 1991.
Contributions
Takács has been involved with projects of environmental software development, process optimization, and advanced control systems. He played a key role in the development of environmental software packages like VNP, GPS-X, BioWin and SUMO. His work extended to the Blue Plains facility, where he devised characterization methods for optimizing carbon source dosing and anaerobic digestion modeling.
Research
Takács developed new concepts in process modeling including settling, chemical and biological phosphorus removal, side-stream treatment, carbon capture for energy recovery, biofilms, granules and granulation, equilibrium chemistry, natural and engineered precipitation, such as for nutrient recovery.
Wastewater treatment models
Takács' research on wastewater treatment modeling has emphasized the improvement of modeling techniques and data quality. He introduced a dynamic model for the clarification-thickening process, employing experimental data from various experiments. In 2008, he critically assessed various model concepts for nitrite modeling in processes like two-step denitrification, anaerobic ammonium oxidation, and phosphorus uptake, highlighting the need for further development. Furthermore, he authored a book chapter in Biological Wastewater Treatment: Principles, Modelling, and Design, which focused on final settling tanks to emphasize the practical aspects, design, and operation of phase separation units. He also contributed to a collaborative effort proposing a standardized notation system for naming state variables in biokinetic models, aiming for consistent rules across existing and future models. In a paper published for the Water Environment Research, his work involved the development of a phosphate complexation model which utilized geochemical reactions on hydrous ferric oxide (HFO) surfaces to comprehend the process of chemically mediated phosphate removal.
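The clarification-thickening model in the 1991 paper cited below is built around a double-exponential settling velocity function. The sketch below is a minimal illustration of that type of function; the default parameter values are typical values quoted in the wastewater-modeling literature and are assumptions here, not figures taken from this article.

```python
import math

def takacs_settling_velocity(X, v0=474.0, v0_max=250.0, r_h=5.76e-4,
                             r_p=2.86e-3, f_ns=2.28e-3, X_in=3000.0):
    """Double-exponential settling velocity (m/d) for solids concentration X (g/m^3).

    Illustrative parameter values; the non-settleable fraction f_ns of the
    feed concentration X_in defines the minimum attainable concentration.
    """
    X_min = f_ns * X_in                      # non-settleable solids concentration
    X_star = max(X - X_min, 0.0)             # settleable part of the local concentration
    v = v0 * (math.exp(-r_h * X_star) - math.exp(-r_p * X_star))
    return max(0.0, min(v0_max, v))          # clipped between 0 and the practical maximum

# Example: settling velocity at a layer concentration of 2000 g/m^3
print(round(takacs_settling_velocity(2000.0), 1))
```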
Takács has conducted research addressing the critical issue of data reconciliation in wastewater treatment modeling, offering an approach for obtaining reliable data sets for model-based studies. He investigated the sensitivity of nitrite transfer between aerobic and anaerobic ammonia oxidizers highlighting the significance of selecting an appropriate sludge retention time. Alongside fellow researchers, he introduced a dynamic physico-chemical model for chemical phosphorus removal in wastewater treatment. This model incorporated chemical equilibrium and physical precipitation reactions, effectively simulating observed bulk dynamics in terms of pH.
Verification and calibration of activated sludge models
Takács has conducted in-depth studies of Activated Sludge Models and its practical applications. In his research about the respirometric experiments for calibrating ASM1, he emphasized the importance of different methods for assessing component concentrations. His 2009 collaborative research examined the work of the Good Modelling Practice Task Group by investigating Activated Sludge Models users, their profiles, tools, procedures, and limitations to enhance modeling procedures. Subsequently, he compiled survey responses from model users in 2008, creating a database to identify common parameter changes, ranges, and typical values for ASM-type models. He reviewed Hélène Hauduc's research, where she introduced a method to verify activated sludge models by tracking errors through stoichiometry examination. These findings led him to develop SUMO at Dynamita.
Having participated in the IWA Task Group for GMP, Takács co-authored the book Guidelines for Using Activated Sludge Models in 2012, presenting the establishment of a global framework for wastewater treatment using activated sludge models. He authored a book chapter for Activated Sludge - 100 Years and Counting, which delves into the status, history, and advancements of the extensively used activated sludge process in wastewater treatment.
Chemical phosphorus removal and carbon footprint reduction
Takács' paper on the development of a dynamic mathematical model for activated sludge wastewater treatment demonstrated the model's incorporation of the Langmuir isotherm to simulate powdered activated carbon addition. Following verification through both batch and continuous experiments, the extended model was applied in an in-situ full-scale implementation at the Nitrochemical Works. Alongside Leiv Rieger and Hansruedi Siegrist, he offered alternatives to expanding reactor volumes by conducting research based on a case study of aeration control algorithms at three wastewater treatment plants and proposed advanced process control as a solution to reduce energy use and carbon footprint.
Integrated biological systems
Takács has investigated the role of integrated biological systems in wastewater treatment processes and their modeling applications. In one of his earliest studies, he evaluated the adaptability of existing models from conventional activated sludge systems to PAC-fed systems, emphasizing the positive impact of PAC on bacterial activity, organic adsorption, and sludge settleability. He also addressed filamentous bulking in activated sludge systems and devised a dynamic mathematical model to simulate the population dynamics of floc-formers and filaments within the microenvironment of the activated sludge floc.
Awards and honors
2019 – Fuhrman Medal for Outstanding Academic-Practice Collaboration, International Water Association
Selected articles
Takács, I., Patry, G. G., & Nolasco, D. (1991). A dynamic model of the clarification-thickening process. Water research, 25(10), 1263–1271.
Vanrolleghem, P. A., Spanjers, H., Petersen, B., Ginestet, P., & Takacs, I. (1999). Estimating (combinations of) Activated Sludge Model No. 1 parameters and components by respirometry. Water Science and Technology, 39(1), 195–214.
Rieger, L., Gillot, S., Langergraber, G., Ohtsuki, T., Shaw, A., Takacs, I., & Winkler, S. (2012). Guidelines for using activated sludge models. IWA publishing.
Wett, B., Omari, A., Podmirseg, S. M., Han, M., Akintayo, O., Gómez Brandón, M., ... & O'Shaughnessy, M. (2013). Going for mainstream deammonification from bench to full scale for maximized resource efficiency. Water science and technology, 68(2), 283–289.
Sin, G., Kaelin, D., Kampschreur, M. J., Takacs, I., Wett, B., Gernaey, K. V., ... & van Loosdrecht, M. C. (2008). Modelling nitrite in wastewater treatment systems: a discussion of different modelling concepts. Water science and technology, 58(6), 1155–1171.
References
Environmental engineers
Hungarian engineers
Budapest University of Technology and Economics alumni
Ghent University alumni
Living people
Year of birth missing (living people) | Imre Takács | Chemistry,Engineering | 2,000 |
31,688,526 | https://en.wikipedia.org/wiki/Alkyl-lysophospholipids | Alky-lysophospholipids (ALPs) are synthetic analogs of lysophosphatidylcholines (LPCs), also called lysolecithins. They are synthesized by replacing the acyl-group within the LPC with an alkyl group. In contrast to LPCs, ALPs are metabolically very stable.
ALPs are being studied for their potential antineoplastic (anti-cancer) and immune-modulating effects. Their anti-tumor effects are due to modulation of intracellular signalling pathways, inducing apoptosis. They are highly selective, sparing healthy cells. Several examples, including edelfosine, miltefosine, and perifosine, are under development as drugs against cancer and other diseases.
References
Experimental cancer drugs
Lipids | Alkyl-lysophospholipids | Chemistry | 169 |
71,389,617 | https://en.wikipedia.org/wiki/C11H20O4 | {{DISPLAYTITLE:C11H20O4}}
The molecular formula C11H20O4 (molar mass: 216.27 g/mol) may refer to:
Neopentyl glycol diglycidyl ether
Diethyl diethylmalonate | C11H20O4 | Chemistry | 61 |
73,967,005 | https://en.wikipedia.org/wiki/Matsaev%27s%20theorem | Matsaev's theorem is a theorem from complex analysis, which characterizes the order and type of an entire function.
The theorem was proven in 1960 by Vladimir Igorevich Matsaev.
Matsaev's theorem
Let f(z), with z = re^(iθ), be an entire function which is bounded from below as follows:
log |f(z)| ≥ -C r^ρ / |sin θ|^s,
where C > 0, ρ > 1 and s ≥ 0.
Then f is of order ρ and has finite type.
References
Theorems in complex analysis | Matsaev's theorem | Mathematics | 77 |
72,147,813 | https://en.wikipedia.org/wiki/Cyclic%20glycine-proline | Cyclic glycine-proline (cGP) is a small neuroactive peptide that belongs to a group of bioactive 2,5-diketopiperazines (2,5-DKPs) and is also known as cyclo-glycine-proline. cGP is a neutral, stable naturally occurring compound and is endogenous to the human body; found in human plasma, breast milk and cerebrospinal fluid. DKPs are bioactive compounds often found in foods. Cyclic dipeptides such as 2,5 DKPs are formed by the cyclisation of two amino acids of linear peptides produced in heated or fermented foods. The bioactivity of cGP is a property of functional foods and presents in several matrices of foods including blackcurrants.
cGP is a metabolite of the hormone insulin-like growth factor-1 (IGF-1). It has a cyclic structure, a lipophilic nature, and is enzymatically stable, which makes it a more favorable candidate for manipulating the binding-release process between IGF-1 and its binding protein, thereby normalizing IGF-1 function.
IGF-1 family
Insulin-like growth factor-1 (IGF-1) is a hormone that is structurally very similar to insulin and mediates the effects of growth hormone (GH) thus affecting metabolism, regeneration, and overall development. The GH-IGF-1 signaling pathway is crucial in the process of vascular remodeling and angiogenesis, i.e., the process of building new blood vessels and thus, helps in maintaining blood circulation in the body. In the brain, IGF-1 is abundant in various cells and regions and research over the years, suggest an imperative role of IGF-1 activity in neurodevelopment making it critical in learning and memory.
The IGF-1 family comprises
IGF-1,
IGF receptors (IGF-1R) and
IGF binding proteins (IGFBP).
The therapeutic applications of IGF-1 are limited due to its poor central uptake and potential side-effects. IGF-1 that is not bound to its binding protein has a very short half-life and is cleaved by enzymes to form the tripeptide glycine-proline-glutamate (GPE). GPE is itself enzymatically unstable, with a plasma half-life of less than 4 minutes, and is further cleaved to produce the final product, cyclic glycine-proline (cGP).
Biological Role of cGP
The hepatic production of IGF-1 is controlled by the growth hormone (GH)-IGF-1 axis. The majority of circulating IGF-1 is not bioavailable because of its affinity for, and binding to, IGF-binding proteins (IGFBP), mainly IGFBP3. IGF-1 bioactivity is therefore tightly regulated through reversible binding with IGFBP3. It is this binding-release process that determines the amount of bioavailable IGF-1 in circulation. IGF-1 that is not bound is cleaved into an N-terminal tripeptide, glycine-proline-glutamate (GPE), and des-N-IGF-1; GPE is then metabolized to cyclic glycine-proline (cGP).
Unbound IGF-1, cleaved at the N-terminal, can be metabolized through a series of downstream enzymatic reactions to cGP. The N-terminal is the binding site of IGF-1, which allows cGP to retain the same binding affinity to IGFBP3 and thus regulate the bioavailability of IGF-1 through competitive binding with IGFBP3. An increase in cGP would increase its competitive advantage and thus increase the amount of circulating, and therefore bioavailable, IGF-1.
Research shows that cGP can normalize IGF-1 function under pathophysiological conditions of increased or diminished IGF-1 bioactivity.
In vitro studies show that cGP promoted the activity of IGF-1 when insufficient and inhibited the activity of IGF-1 when in excess.
Uses
A recently published review in the journal Marine Drugs provides an overview of cGP sources and biological effects. Biologically, cGP is most strongly associated with cognitive benefits; however, it also has a role in other biological functions, as outlined below.
Cognition
Vascular health is critical in maintaining cognitive function. IGF-1 plays an essential role in vascular remodelling of the brain and supports cognitive retention. Circulating IGF-1 levels tend to decline with age, and this reduction appears to be a major contributor to cognitive impairment in older populations.
Low or deficient IGF-1 levels can be normalized by cGP, restoring its vascular function. Studies evaluating cGP, IGF-1 and IGFBP3 levels found that cGP concentration and the cGP/IGF-1 molar ratio were positively associated with cognition, suggesting that older people with higher plasma cGP concentration (and cGP/IGF-1 molar ratio) have better memory and cognitive retention.
Hypertension
IGF-1 plays a critical role in energy metabolism with deficient IGF-1 levels being implicated in obesity and hypertension.
Stroke
The role of IGF-1 in supporting recovery from stroke, a condition of vascular origin, has been reported. A study in 34 stroke patients reported that patients with a higher plasma concentration of cGP made a better recovery within 3 months than those with lower cGP levels. Furthermore, patients with higher cGP levels also showed fewer neurological deficits.
Therapeutic Potential
Excessive IGF-1 activity promotes tumorigenesis while reduced IGF-1 activity is linked with diseases such as Alzheimer's and Parkinson's. cGP normalises the autocrine function of IGF-1 under pathological conditions and when there are low levels of cGP in the human body, IGF-1 regulation is compromised. Therefore, it is reasonable to assume that treatment with exogenous cGP could assist with improving IGF-1 implicated health benefits.
References
Neuropeptides
Diketopiperazines
Heterocyclic compounds with 2 rings | Cyclic glycine-proline | Chemistry | 1,328 |
37,493,271 | https://en.wikipedia.org/wiki/Tyntec | Tyntec (or , as spelled by the company) is a global application-to-person messaging operator, cloud communications provider, and a US Inter-Carrier Vendor incorporated in London, UK.
History
Tyntec was founded by entrepreneurs Dr. Ralph Eric Kunz and Thorsten Trapp in 2002.
In 2017, it regrouped its regional operations, Tyntec Limited in the UK, Tyntec GmbH in Germany, Tyntec Inc. in the US, and Tyntec Pte Ltd in Singapore, under a new holding company, Tyntec Group Limited, based in London.
Funding
In June 2008, the founders sold a minority share of the company to HarbourVest, an independent global alternative investment firm. In December 2010, tyntec received investment from Iris Capital, a pan-European growth fund specializing in technology, media and telecommunications.
In 2016, the management of tyntec acquired the company from HarbourVest Partners and Iris Capital, backed by Cipio Partners, a Germany-based private equity firm.
Technology
tyntec has built and developed a scalable, proprietary, patent-protected technology infrastructure that is installed at the operator level, which gives it direct operator-level connectivity to the GSM network.
The signal routing and delivery platform, housed in the technical operations center in Dortmund, is the core of tyntec's messaging platform. It is designed to be scalable and to handle high volumes of traffic without service degradation. It also provides several interfaces, which are available across all networks.
tyntec's direct access to the global mobile network through its agreements with operators means that the company can directly reach the subscriber's handset. This is particularly important in crisis situations, when networks are often overloaded or go down.
Awards
In 2010, tyntec was awarded the Red Herring 100 Europe Award. In 2011, it was awarded the Red Herring Global 100 Award for its solution. In the same year it also won the Internet Telephony Product of the Year Award.
Competitors
Competitors are Sinch, Soprano Design, Twilio, Infobip, Vonage (Nexmo), Clickatell, and BICS (TeleSign).
External links
References
Mobile telecommunication services | Tyntec | Technology | 438 |
51,630,259 | https://en.wikipedia.org/wiki/NGC%20230 | NGC 230 is a spiral galaxy located in the constellation Cetus. It was discovered in 1886 by Francis Leavenworth.
References
0230
Spiral galaxies
Cetus
002539 | NGC 230 | Astronomy | 36 |
66,657,037 | https://en.wikipedia.org/wiki/Acremoniella%20atra | Acremoniella atra (A. atra) is a species of fungus with unknown family.
A. atra has been reported in the rhizosphere of multiple plants, particularly wheat. It has a cosmopolitan distribution.
References
Hypocreales
Fungus species | Acremoniella atra | Biology | 58 |
8,114,777 | https://en.wikipedia.org/wiki/Muscle%20hypertrophy | Muscle hypertrophy or muscle building involves a hypertrophy or increase in size of skeletal muscle through a growth in size of its component cells. Two factors contribute to hypertrophy: sarcoplasmic hypertrophy, which focuses more on increased muscle glycogen storage; and myofibrillar hypertrophy, which focuses more on increased myofibril size. It is the primary focus of bodybuilding-related activities.
Hypertrophy stimulation
A range of stimuli can increase the volume of muscle cells. These changes occur as an adaptive response that serves to increase the ability to generate force or resist fatigue in anaerobic conditions.
Strength training
Strength training (resistance training) causes neural and muscular adaptations which increase the capacity of an athlete to exert force through voluntary muscular contraction: After an initial period of neuro-muscular adaptation, the muscle tissue expands by creating sarcomeres (contractile elements) and increasing non-contractile elements like sarcoplasmic fluid.
Muscular hypertrophy can be induced by progressive overload (a strategy of progressively increasing resistance or repetitions over successive bouts of exercise to maintain a high level of effort). However, the precise mechanisms are not clearly understood; the currently accepted theory is that it arises through a combination of mechanical tension, metabolic stress, and muscle damage, although there is insufficient evidence that metabolic stress has any significant effect on hypertrophy outcomes.
Muscular hypertrophy plays an important role in competitive bodybuilding and strength sports like powerlifting, American football, and Olympic weightlifting.
Anaerobic training
The best approach to specifically achieve muscle growth remains controversial (as opposed to focusing on gaining strength, power, or endurance); it is generally considered that consistent anaerobic strength training will produce hypertrophy over the long term, in addition to its effects on muscular strength and endurance. Muscular hypertrophy can be increased through strength training and other short-duration, high-intensity anaerobic exercises. Lower-intensity, longer-duration aerobic exercise generally does not result in very effective tissue hypertrophy; instead, endurance athletes enhance storage of fats and carbohydrates within the muscles, as well as neovascularization.
Temporary swelling
During a workout, increased blood flow to metabolically active areas causes muscles to temporarily increase in size. This phenomenon is referred to as transient hypertrophy, or more commonly known as being "pumped up" or getting "a pump." About two hours after a workout and typically for seven to eleven days, muscles swell due to an inflammation response as tissue damage is repaired. Longer-term hypertrophy occurs due to more permanent changes in muscle structure.
Hirono et al. explained the causes of muscle swelling: "Muscle swelling occurs as a result of the following:
(a) resistance exercise can increase phosphocreatine and hydrogen ion accumulations due to blood lactate and growth hormone production, and
(b) the high lactate and hydrogen ion concentrations may accelerate water uptake in muscle cells according to cell permeability because the molecular weights of the lactate and hydrogen ions are smaller than that of muscle glycogen."
Factors affecting hypertrophy
Biological factors (such as DNA and sex), nutrition, and training variables can affect muscle hypertrophy.
Individual differences in genetics account for a substantial portion of the variance in existing muscle mass. A classical twin study design (similar to those of behavioral genetics) estimated that about 53% of the variance in lean body mass is heritable, along with about 45% of the variance in muscle fiber proportion.
During puberty in males, hypertrophy occurs at an increased rate. Natural hypertrophy normally stops at full growth in the late teens. As testosterone is one of the body's major growth hormones, on average, males find hypertrophy much easier (on an absolute scale) to achieve than females, and, on average, have about 60% more muscle mass than women. Taking additional testosterone, as in anabolic steroids, will increase results. It is also considered a performance-enhancing drug, the use of which can cause competitors to be suspended or banned from competitions. Testosterone is also a medically regulated substance in most countries, making it illegal to possess without a medical prescription. Anabolic steroid use can cause testicular atrophy, cardiac arrest, and gynecomastia.
In the long term, a positive energy balance, when more calories are consumed rather than burned, is helpful for anabolism and therefore muscle hypertrophy. An increased requirement for protein can help elevate protein synthesis, which is seen in athletes training for muscle hypertrophy. Protein intakes up to 1.62 grams per kilogram of body weight a day help increase gains in strength and muscle size from resistance training.
Training variables, in the context of strength training, such as frequency, intensity, and total volume also directly affect the increase of muscle hypertrophy. A gradual increase in all of these training variables will yield muscular hypertrophy.
Changes in protein synthesis and muscle cell biology associated with stimuli
Protein synthesis
The message filters down to alter the pattern of gene expression. The additional contractile proteins appear to be incorporated into existing myofibrils (the chains of sarcomeres within a muscle cell). There appears to be some limit to how large a myofibril can become: at some point, they split. These events appear to occur within each muscle fiber. That is, hypertrophy results primarily from the growth of each muscle cell rather than an increase in the number of cells. Skeletal muscle cells are however unique in the body in that they can contain multiple nuclei, and the number of nuclei can increase.
Cortisol decreases amino acid uptake by muscle tissue, and inhibits protein synthesis. The short-term increase in protein synthesis that occurs subsequent to resistance training returns to normal after approximately 28 hours in adequately fed male youths. Another study determined that muscle protein synthesis was elevated even 72 hours following training.
A small study performed on young and elderly subjects found that ingestion of 340 grams of lean beef (90 g protein) did not increase muscle protein synthesis any more than ingestion of 113 grams of lean beef (30 g protein). In both groups, muscle protein synthesis increased by 50%. The study concluded that more than 30 g protein in a single meal did not further enhance the stimulation of muscle protein synthesis in the young and elderly. However, this study did not examine protein synthesis in relation to training; therefore conclusions from this research are controversial. A 2018 review of the scientific literature concluded that for the purpose of building lean muscle tissue, a minimum of 1.6 g protein per kilogram of body weight is required, which can for example be divided over 4 meals or snacks and spread out over the day.
It is not uncommon for bodybuilders to advise a protein intake as high as 2–4 g per kilogram of bodyweight per day. However, scientific literature has suggested this is higher than necessary, as protein intakes greater than 1.8 g per kilogram of body weight showed to have no greater effect on muscle hypertrophy. A study carried out by American College of Sports Medicine (2002) put the recommended daily protein intake for athletes at 1.2–1.8 g per kilogram of body weight. Conversely, Di Pasquale (2008), citing recent studies, recommends a minimum protein intake of 2.2 g/kg "for anyone involved in competitive or intense recreational sports who wants to maximize lean body mass but does not wish to gain weight. However athletes involved in strength events (..) may need even more to maximize body composition and athletic performance. In those attempting to minimize body fat and thus maximize body composition, for example in sports with weight classes and in bodybuilding, it's possible that protein may well make up over 50% of their daily caloric intake."
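As a simple illustration of how these per-kilogram recommendations translate into daily amounts, the following sketch computes targets for a hypothetical 80 kg trainee; the body mass and the selection of recommendations are assumptions for the example only.

```python
def daily_protein_targets(body_mass_kg):
    """Daily protein targets (grams) implied by the intakes cited above.

    The g/kg figures come from the text; the 80 kg body mass used in the
    example below is an arbitrary illustration.
    """
    recommendations = {
        "resistance training minimum (1.6 g/kg)": 1.6,
        "upper bound with no extra benefit (1.8 g/kg)": 1.8,
        "ACSM athlete range (1.2-1.8 g/kg)": (1.2, 1.8),
        "Di Pasquale minimum (2.2 g/kg)": 2.2,
    }
    results = {}
    for label, g_per_kg in recommendations.items():
        if isinstance(g_per_kg, tuple):
            results[label] = tuple(round(v * body_mass_kg, 1) for v in g_per_kg)
        else:
            results[label] = round(g_per_kg * body_mass_kg, 1)
    return results

for label, grams in daily_protein_targets(80).items():
    print(f"{label}: {grams} g/day")
```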
Microtrauma
Microtrauma is tiny damage to the muscle fibers. The precise relation between microtrauma and muscle growth is not entirely understood yet.
One theory is that microtrauma plays a significant role in muscle growth. When microtrauma occurs (from weight training or other strenuous activities), the body responds by overcompensating, replacing the damaged tissue and adding more, so that the risk of repeat damage is reduced. Damage to these fibers has been theorized as the possible cause for the symptoms of delayed onset muscle soreness (DOMS), and is why progressive overload is essential to continued improvement, as the body adapts and becomes more resistant to stress.
However, other work examining the time course of changes in muscle protein synthesis and their relationship to hypertrophy showed that damage was unrelated to hypertrophy. In fact, in one study the authors showed that it was not until the damage subsided that protein synthesis was directed to muscle growth.
Myofibrillar vs. sarcoplasmic hypertrophy
In the bodybuilding and fitness community, and even in some academic books, skeletal muscle hypertrophy is described as being one of two types: sarcoplasmic or myofibrillar. According to this hypothesis, during sarcoplasmic hypertrophy the volume of sarcoplasmic fluid in the muscle cell increases with no accompanying increase in muscular strength, whereas during myofibrillar hypertrophy the actin and myosin contractile proteins increase in number and add to muscular strength as well as a small increase in the size of the muscle. Sarcoplasmic hypertrophy is thought to be greater in the muscles of bodybuilders, because studies suggest it produces a greater increase in muscle size, while myofibrillar hypertrophy increases overall muscular strength, making it more dominant in Olympic weightlifters. These two forms of adaptation rarely occur completely independently of one another; one can experience a large increase in fluid with a slight increase in proteins, a large increase in proteins with a small increase in fluid, or a relatively balanced combination of the two.
In sports
Examples of increased muscle hypertrophy are seen in various professional sports, mainly strength related sports such as boxing, olympic weightlifting, mixed martial arts, rugby, professional wrestling and various forms of gymnastics. Athletes in other more skill-based sports such as basketball, baseball, ice hockey, and football may also train for increased muscle hypertrophy to better suit their position of play. For example, a center (basketball) may want to be bigger and more muscular to better overpower their opponents in the low post. Athletes training for these sports train extensively not only in strength but also in cardiovascular and muscular endurance training.
Pathology
Some neuromuscular diseases result in true hypertrophy of one or more skeletal muscles, confirmed by MRI or muscle biopsy. As this muscle hypertrophy is not the result of resistance training or heavy manual labour, it is described as giving a pseudoathletic appearance.
As muscle hypertrophy is a response to strenuous anaerobic activity, ordinary everyday activity would become strenuous in diseases that result in premature muscle fatigue (neural or metabolic), or disrupt the excitation-contraction coupling in muscle, or cause repetitive or sustained involuntary muscle contractions (fasciculations, myotonia, or spasticity). In lipodystrophy, an abnormal deficit of subcutaneous fat accentuates the appearance of the muscles, though the muscles are quantifiably hypertrophic (possibly due to a metabolic abnormality).
Diseases that result in true muscle hypertrophy include, but are not limited to, select: muscular dystrophies, metabolic myopathies, endocrine myopathies, congenital myopathies, non-dystrophic myotonias and pseudomyotonias, denervation, spasticity, and lipodystrophy. The muscle hypertrophy may persist throughout the course of the disease, or may later atrophy, or become pseudohypertrophic (muscle atrophy with infiltration of fat or other tissue). For instance, Duchenne and Becker muscular dystrophy may start as true muscle hypertrophy, but later develop into pseudohypertrophy.
See also
Anabolism
Colorado Experiment
Davis' law
Follistatin
Lean body mass
Muscle atrophy
Muscle dystrophy
Myostatin
Pseudoathletic appearance
Pseudohypertrophy
References
Further reading
Muscular system
Tissues (biology)
Physiology
Exercise physiology
Bodybuilding | Muscle hypertrophy | Biology | 2,565 |
31,387,138 | https://en.wikipedia.org/wiki/Thallane | Thallane (systematically named trihydridothallium) is an inorganic compound with the empirical chemical formula . It has not yet been obtained in bulk, hence its bulk properties remain unknown. However, molecular thallane has been isolated in solid gas matrices. Thallane is mainly produced for academic purposes.
TlH3 is the simplest thallane. Thallium is the heaviest stable member of the group 13 metals; the stability of group 13 hydrides decreases with increasing atomic number. This is commonly attributed to poor overlap of the metal valence orbitals with the 1s orbital of hydrogen. Despite encouraging early reports, it is unlikely that a bulk thallane species has been isolated. Thallanes have been observed only in matrix isolation studies; the infrared spectrum was obtained in the gas phase by laser ablation of thallium in the presence of hydrogen gas. This study confirmed aspects of ab initio calculations conducted by Schwerdtfeger, which indicated the similar stability of thallanes and indiganes. There has not been a confirmed isolation of a thallium hydride complex to date.
History
In 2004, American chemist Lester Andrews synthesised thallane for the first time. This reaction sequence consisted of atomisation of thallium, followed by cryogenic co-deposition with hydrogen, and concluded with shortwave ultraviolet irradiation.
References
Metal hydrides
Thallium(III) compounds
Substances discovered in the 2000s | Thallane | Chemistry | 298 |
38,380,954 | https://en.wikipedia.org/wiki/Biotransducer | A biotransducer is the recognition-transduction component of a biosensor system. It consists of two intimately coupled parts; a bio-recognition layer and a physicochemical transducer, which acting together converts a biochemical signal to an electronic or optical signal. The bio-recognition layer typically contains an enzyme or another binding protein such as antibody. However, oligonucleotide sequences, sub-cellular fragments such as organelles (e.g. mitochondria) and receptor carrying fragments (e.g. cell wall), single whole cells, small numbers of cells on synthetic scaffolds, or thin slices of animal or plant tissues, may also comprise the bio-recognition layer. It gives the biosensor selectivity and specificity. The physicochemical transducer is typically in intimate and controlled contact with the recognition layer. As a result of the presence and biochemical action of the analyte (target of interest), a physico-chemical change is produced within the biorecognition layer that is measured by the physicochemical transducer producing a signal that is proportionate to the concentration of the analyte. The physicochemical transducer may be electrochemical, optical, electronic, gravimetric, pyroelectric or piezoelectric. Based on the type of biotransducer, biosensors can be classified as shown to the right.
Electrochemical biotransducers
Electrochemical biosensors contain a biorecognition element that selectively reacts with the target analyte and produces an electrical signal that is proportional to the analyte concentration. In general, there are several approaches that can be used to detect electrochemical changes during a biorecognition event and these can be classified as follows: amperometric, potentiometric, impedance, and conductometric.
Amperometric
Amperometric transducers detect change in current as a result of electrochemical oxidation or reduction. Typically, the bioreceptor molecule is immobilized on the working electrode (commonly gold, carbon, or platinum). The potential between the working electrode and the reference electrode (usually Ag/AgCl) is fixed at a value and then current is measured with respect to time. The applied potential is the driving force for the electron transfer reaction. The current produced is a direct measure of the rate of electron transfer. The current reflects the reaction occurring between the bioreceptor molecule and analyte and is limited by the mass transport rate of the analyte to the electrode.
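As an illustration of the mass-transport limit, the steady-state diffusion-limited current is often approximated as i = nFADC/δ. The sketch below evaluates this textbook relation; the electrode area, diffusion coefficient, analyte concentration and diffusion-layer thickness are assumed values, not figures taken from this article.

```python
F = 96485.0  # Faraday constant, C/mol

def diffusion_limited_current(n, area_cm2, D_cm2_s, conc_mol_cm3, delta_cm):
    """Steady-state mass-transport-limited current i = n*F*A*D*C/delta (amperes).

    All parameter values used in the example are illustrative assumptions.
    """
    return n * F * area_cm2 * D_cm2_s * conc_mol_cm3 / delta_cm

# Example: 2-electron oxidation, 0.07 cm^2 electrode, D = 6.7e-6 cm^2/s,
# 1 mM analyte (1e-6 mol/cm^3), 0.02 cm diffusion layer
i = diffusion_limited_current(2, 0.07, 6.7e-6, 1e-6, 0.02)
print(f"{i * 1e6:.2f} microamps")
```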
Potentiometric
Potentiometric sensors measure a potential or charge accumulation of an electrochemical cell. The transducer typically comprises an ion selective electrode (ISE) and a reference electrode. The ISE features a membrane that selectively interacts with the charged ion of interest, causing the accumulation of a charge potential compared to the reference electrode. The reference electrode provides a constant half-cell potential that is unaffected by analyte concentration. A high impedance voltmeter is used to measure the electromotive force or potential between the two electrodes when zero or no significant current flows between them. The potentiometric response is governed by the Nernst equation in that the potential is proportional to the logarithm of the concentration of the analyte.
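The logarithmic Nernstian response can be illustrated with a short calculation; the zero of potential and the sample activities used below are illustrative assumptions.

```python
import math

R = 8.314      # gas constant, J/(mol*K)
F = 96485.0    # Faraday constant, C/mol

def ise_potential(activity, z=1, E0=0.0, T=298.15):
    """Nernst-type response of an ion-selective electrode vs. its reference.

    E = E0 + (R*T)/(z*F) * ln(a): the logarithmic dependence described above.
    E0 and the activities in the example are illustrative assumptions.
    """
    return E0 + (R * T) / (z * F) * math.log(activity)

# A tenfold change in activity of a monovalent ion shifts the potential by ~59 mV
slope_mV = (ise_potential(1e-3) - ise_potential(1e-4)) * 1000
print(f"{slope_mV:.1f} mV per decade")
```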
Impedance
Electrochemical impedance spectroscopy (EIS) involves measuring resistive and capacitive changes caused by a biorecognition event. Typically, a small amplitude sinusoidal electrical stimulus is applied, causing current to flow through the biosensor. The frequency is varied over a range to obtain the impedance spectrum. The resistive and capacitive components of impedance are determined from in phase and out of phase current responses. Typically, a conventional three-electrode system is made specific to the analyte by immobilizing a biorecognition element to the surface. A voltage is applied and the current is measured. The interfacial impedance between the electrode and solution changes as a result of the analyte binding. An impedance analyzer can be used to control and apply the stimulus as well as measure the impedance changes.
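The decomposition into resistive and capacitive components is often illustrated with an equivalent-circuit model. The sketch below assumes a simplified Randles-type circuit (solution resistance in series with a charge-transfer resistance in parallel with a double-layer capacitance); both the circuit choice and the parameter values are illustrative assumptions rather than details from this article.

```python
import cmath

def randles_impedance(freq_hz, R_s=100.0, R_ct=5_000.0, C_dl=1e-6):
    """Impedance (ohms, complex) of a simplified Randles-type circuit.

    R_s in series with (R_ct parallel to C_dl); all values are assumptions.
    A real biosensor spectrum is fitted to whichever circuit suits the system.
    """
    omega = 2 * cmath.pi * freq_hz
    Z_c = 1 / (1j * omega * C_dl)                 # double-layer capacitor impedance
    Z_parallel = (R_ct * Z_c) / (R_ct + Z_c)      # R_ct parallel with C_dl
    return R_s + Z_parallel

for f in (1, 100, 10_000):
    Z = randles_impedance(f)
    print(f"{f:>6} Hz: |Z| = {abs(Z):8.1f} ohm, phase = {cmath.phase(Z) * 180 / cmath.pi:6.1f} deg")
```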
Conductometry
Conductometric sensing involves measuring the change in conductive properties of the sample solution or a medium. The reaction between the biomolecule and analyte changes the ionic species concentration, leading to a change in the solution electrical conductivity or current flow. Two metal electrodes are separated at a certain distance and an AC potential is applied across the electrodes, causing a current flow between the electrodes. During a biorecognition event the ionic composition changes, using an ohmmeter the change in conductance can be measured.
Optical biotransducers
Optical biotransducers, used in optical biosensors for signal transduction, use photons in order to collect information about analyte. These are highly sensitive, highly specific, small in size and cost effective.
The detection mechanism of optical biotransducer depends upon the enzyme system that converts analyte into products which are either oxidized or reduced at the working electrode.
The evanescent field detection principle is most commonly used as the transduction principle in optical biosensor systems. It is one of the most sensitive detection methods and enables the detection of fluorophores exclusively in close proximity to the optical fiber.
FET-based electronic biotransducers
Electronic biosensing offers significant advantages over optical, biochemical and biophysical methods, in terms of high sensitivity and new sensing mechanisms, high spatial resolution for localized detection, facile integration with standard wafer-scale semiconductor processing and label-free, real-time detection in a nondestructive manner [6].
Devices based on field-effect transistors (FETs) have attracted great attention because they can directly translate the interactions between target biological molecules and the FET surface into readable electrical signals. In a FET, current flows along the channel which is connected to the source and the drain. The channel conductance between the source and the drain is switched on and off by gate electrode that is capacitively coupled through a thin dielectric layer [6].
In FET-based biosensors, the channel is in direct contact with the environment, and this gives better control over the surface charge. This improves the sensitivity of surface FET-based biosensors as biological events occurring at the channel surface could result in the surface potential variation of the semiconductor channel and then modulate the channel conductance. In addition to ease of on-chip integration of device arrays and the cost-effective device fabrication, the surface ultrasensitivity of FET-based biosensors makes it an attractive alternative to existing biosensor technologies[6].
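As a deliberately simplified illustration of how a surface event can modulate channel conductance, the sketch below uses the textbook square-law MOSFET model and represents a binding event as a small shift in the effective threshold voltage; real bioFETs require more detailed electrostatic models, and all numbers here are assumptions.

```python
def drain_current(V_gs, V_ds, V_t, k=2e-4):
    """Square-law MOSFET drain current (amperes), with k = mu*Cox*W/L (A/V^2).

    A biorecognition event at the channel surface is represented only as a
    shift of the effective threshold voltage V_t; all values are illustrative.
    """
    if V_gs <= V_t:
        return 0.0                                    # cut-off
    if V_ds < V_gs - V_t:                             # triode region
        return k * ((V_gs - V_t) * V_ds - V_ds**2 / 2)
    return 0.5 * k * (V_gs - V_t) ** 2                # saturation

baseline = drain_current(V_gs=1.0, V_ds=0.1, V_t=0.40)
after_binding = drain_current(V_gs=1.0, V_ds=0.1, V_t=0.45)  # assumed 50 mV threshold shift
print(f"relative current change: {(after_binding - baseline) / baseline:.1%}")
```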
Gravimetric/Piezoelectric biotransducers
Gravimetric biosensors use the basic principle of a response to a change in mass. Most gravimetric biosensors use thin piezoelectric quartz crystals, either as resonating crystals (QCM), or as bulk/surface acoustic wave (SAW) devices. In the majority of these the mass response is inversely proportional to the crystal thickness. Thin polymer films are also used, in which biomolecules can be added to the surface with known surface mass. Acoustic waves can be projected to the thin film to produce an oscillatory device, which then follows an equation that is nearly identical to the Sauerbrey equation used in the QCM method. Biomolecules such as proteins or antibodies can bind, and the change in mass gives a measurable signal proportional to the presence of the target analyte in the sample.
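For the QCM case, the Sauerbrey relation links the frequency shift to the added mass. The sketch below evaluates it for an assumed 5 MHz crystal with a 1 cm^2 active area; these values and the example frequency shift are illustrative assumptions.

```python
import math

def sauerbrey_mass_change(delta_f_hz, f0_hz=5e6, area_cm2=1.0,
                          rho_q=2.648, mu_q=2.947e11):
    """Mass change (grams) from a QCM frequency shift via the Sauerbrey equation.

    delta_f = -2 * f0^2 * delta_m / (A * sqrt(rho_q * mu_q)), where rho_q (g/cm^3)
    and mu_q (g cm^-1 s^-2) are the density and shear modulus of quartz.
    The crystal frequency and active area are illustrative assumptions.
    """
    return -delta_f_hz * area_cm2 * math.sqrt(rho_q * mu_q) / (2 * f0_hz**2)

# A -20 Hz shift on a 5 MHz crystal corresponds to roughly 0.35 micrograms
dm = sauerbrey_mass_change(-20.0)
print(f"{dm * 1e6:.2f} micrograms")
```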
Pyroelectric biotransducers
Pyroelectric biosensors generate an electric current as a result of a temperature change. This differential induces a polarization in the substance, producing a dipole moment in the direction of the temperature gradient. The result is a net voltage across the material. This net voltage can be calculated by the following equation.
where V = Voltage,
ω = angular frequency of the modulated incident,
P = pyroelectric coefficient,
L = film thickness,
ε = film dielectric constant,
A = area of film,
r = resistance of the film,
C = capacitance of the film,
τE = electrical time constant of the detector output.
See also
Biosensor
DNA field-effect transistor
Biointerface
Electrochemiluminescence
Bioelectronics
Nanobiotechnology
References
Biosensors
Biotechnology
Molecular biology | Biotransducer | Chemistry,Biology | 1,794 |
64,304,060 | https://en.wikipedia.org/wiki/Vector%20bornology | In mathematics, especially functional analysis, a bornology on a vector space over a field where has a bornology ℬ, is called a vector bornology if makes the vector space operations into bounded maps.
Definitions
Prerequisites
A bornology on a set X is a collection ℬ of subsets of X that satisfy all the following conditions:
ℬ covers X; that is, X is the union of the sets in ℬ;
ℬ is stable under inclusions; that is, if B ∈ ℬ and A ⊆ B, then A ∈ ℬ;
ℬ is stable under finite unions; that is, if B1, ..., Bn ∈ ℬ, then B1 ∪ ... ∪ Bn ∈ ℬ.
Elements of the collection ℬ are called ℬ-bounded sets, or simply bounded sets if ℬ is understood.
The pair (X, ℬ) is called a bounded structure or a bornological set.
A base or fundamental system of a bornology ℬ is a subset ℬ0 of ℬ such that each element of ℬ is a subset of some element of ℬ0. Given a collection S of subsets of X, the smallest bornology containing S is called the bornology generated by S.
If (X, ℬ) and (Y, 𝒞) are bornological sets then their product bornology on X × Y is the bornology having as a base the collection of all sets of the form B × C, where B ∈ ℬ and C ∈ 𝒞.
A subset of X × Y is bounded in the product bornology if and only if its images under the canonical projections onto X and Y are both bounded.
If (X, ℬ) and (Y, 𝒞) are bornological sets then a function f : X → Y is said to be a locally bounded map or a bounded map (with respect to these bornologies) if it maps ℬ-bounded subsets of X to 𝒞-bounded subsets of Y; that is, if f(B) is 𝒞-bounded for every B ∈ ℬ.
If in addition f is a bijection and its inverse is also bounded, then f is called a bornological isomorphism.
Vector bornology
Let X be a vector space over a field K, where K has a bornology.
A bornology ℬ on X is called a vector bornology if it is stable under vector addition, scalar multiplication, and the formation of balanced hulls (i.e. if the sum of two bounded sets is bounded, etc.).
If X is a vector space and ℬ is a bornology on X, then the following are equivalent:
ℬ is a vector bornology;
finite sums and balanced hulls of ℬ-bounded sets are ℬ-bounded;
the scalar multiplication map K × X → X, defined by (s, x) ↦ sx, and the addition map X × X → X, defined by (x, y) ↦ x + y, are both bounded when their domains carry their product bornologies (i.e. they map bounded subsets to bounded subsets).
A vector bornology is called a convex vector bornology if it is stable under the formation of convex hulls (i.e. the convex hull of a bounded set is bounded).
A vector bornology is called separated if the only bounded vector subspace of X is the 0-dimensional trivial space {0}.
Usually, K is either the real or complex numbers, in which case a vector bornology ℬ on X will be called a convex vector bornology if ℬ has a base consisting of convex sets.
Characterizations
Suppose that X is a vector space over the field K of real or complex numbers and ℬ is a bornology on X.
Then the following are equivalent:
ℬ is a vector bornology;
addition and scalar multiplication are bounded maps;
the balanced hull of every element of ℬ is an element of ℬ, and the sum of any two elements of ℬ is again an element of ℬ.
Bornology on a topological vector space
If X is a topological vector space then the set of all bounded subsets of X forms a vector bornology on X called the von Neumann bornology of X, the usual bornology, or simply the bornology of X.
In any locally convex topological vector space X, the set of all closed bounded disks forms a base for the usual bornology of X.
Unless indicated otherwise, it is always assumed that the real or complex numbers are endowed with the usual bornology.
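For concreteness, the usual bornology of a normed space can be written out explicitly; the following formulation is a standard textbook example and is not taken from the sources cited in this article.

```latex
% Usual (von Neumann) bornology of a normed space (X, \|\cdot\|):
% the norm-bounded sets, with the closed balls as a base.
\[
  \mathcal{B} \;=\; \Bigl\{\, B \subseteq X \;:\; \sup_{b \in B} \|b\| < \infty \,\Bigr\},
  \qquad
  \text{base: } \bigl\{\, \{x \in X : \|x\| \le n\} \;:\; n \in \mathbb{N} \,\bigr\}.
\]
% This is a convex vector bornology: sums, scalar multiples, balanced hulls and
% convex hulls of norm-bounded sets are again norm-bounded, by the triangle
% inequality and homogeneity of the norm.
```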
Topology induced by a vector bornology
Suppose that X is a vector space over the field of real or complex numbers and ℬ is a vector bornology on X.
Let 𝒩 denote the collection of all subsets of X that are convex, balanced, and bornivorous.
Then 𝒩 forms a neighborhood basis at the origin for a locally convex topological vector space topology.
Examples
Locally convex space of bounded functions
Let K be the real or complex numbers (endowed with their usual bornologies), let (X, ℬ) be a bounded structure, and let LB(X, K) denote the vector space of all locally bounded K-valued maps on X.
For every B ∈ ℬ, let p_B(f) := sup { |f(x)| : x ∈ B } for all f ∈ LB(X, K); this defines a seminorm on LB(X, K).
The locally convex topological vector space topology on LB(X, K) defined by the family of seminorms { p_B : B ∈ ℬ } is called the topology of uniform convergence on bounded sets.
This topology makes LB(X, K) into a complete space.
Bornology of equicontinuity
Let X be a topological space, let K be the real or complex numbers, and let C(X, K) denote the vector space of all continuous K-valued maps on X.
The set of all equicontinuous subsets of C(X, K) forms a vector bornology on C(X, K).
See also
Bornivorous set
Bornological space
Bornology
Space of linear maps
Ultrabornological space
Citations
Bibliography
Topological vector spaces | Vector bornology | Mathematics | 864 |
435,420 | https://en.wikipedia.org/wiki/Anaerobic%20respiration | Anaerobic respiration is respiration using electron acceptors other than molecular oxygen (O2). Although oxygen is not the final electron acceptor, the process still uses a respiratory electron transport chain.
In aerobic organisms undergoing respiration, electrons are shuttled to an electron transport chain, and the final electron acceptor is oxygen. Molecular oxygen is an excellent electron acceptor. Anaerobes instead use less-oxidizing substances such as nitrate (), fumarate (), sulfate (), or elemental sulfur (S). These terminal electron acceptors have smaller reduction potentials than O2. Less energy per oxidized molecule is released. Therefore, anaerobic respiration is less efficient than aerobic.
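The energetic difference between electron acceptors can be illustrated with the relation ΔG°' = -nFΔE°'. The sketch below uses representative textbook midpoint potentials at pH 7 (assumed values, not taken from this article) to compare the free energy released when two electrons from NADH are passed to different terminal acceptors.

```python
F = 96.485  # Faraday constant, kJ per (volt * mol of electrons)

# Representative standard biochemical reduction potentials at pH 7 (volts).
E_donor_NADH = -0.32
acceptor_couples = {
    "O2 / H2O (aerobic)":     +0.82,
    "NO3- / NO2- (nitrate)":  +0.42,
    "fumarate / succinate":   +0.03,
    "SO4^2- / HS- (sulfate)": -0.22,
}

# Free energy released per pair of electrons transferred from NADH:
# delta_G = -n * F * (E_acceptor - E_donor), with n = 2.
for name, E_acc in acceptor_couples.items():
    delta_G = -2 * F * (E_acc - E_donor_NADH)
    print(f"{name:24s} {delta_G:7.1f} kJ/mol")
```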
As compared with fermentation
Anaerobic cellular respiration and fermentation generate ATP in very different ways, and the terms should not be treated as synonyms. Cellular respiration (both aerobic and anaerobic) uses highly reduced chemical compounds such as NADH and FADH2 (for example produced during glycolysis and the citric acid cycle) to establish an electrochemical gradient (often a proton gradient) across a membrane. This results in an electrical potential or ion concentration difference across the membrane. The reduced chemical compounds are oxidized by a series of respiratory integral membrane proteins with sequentially increasing reduction potentials, with the final electron acceptor being oxygen (in aerobic respiration) or another chemical substance (in anaerobic respiration). A proton motive force drives protons down the gradient (across the membrane) through the proton channel of ATP synthase. The resulting current drives ATP synthesis from ADP and inorganic phosphate.
Fermentation, in contrast, does not use an electrochemical gradient but instead uses only substrate-level phosphorylation to produce ATP. The electron acceptor NAD+ is regenerated from NADH formed in oxidative steps of the fermentation pathway by the reduction of oxidized compounds. These oxidized compounds are often formed during the fermentation pathway itself, but may also be external. For example, in homofermentative lactic acid bacteria, NADH formed during the oxidation of glyceraldehyde-3-phosphate is oxidized back to NAD+ by the reduction of pyruvate to lactic acid at a later stage in the pathway. In yeast, acetaldehyde is reduced to ethanol to regenerate NAD+.
There are two important anaerobic microbial methane formation pathways, through carbon dioxide / bicarbonate (HCO3-) reduction (respiration) or acetate fermentation.
Ecological importance
Anaerobic respiration is a critical component of the global nitrogen, iron, sulfur, and carbon cycles through the reduction of the oxyanions of nitrogen, sulfur, and carbon to more-reduced compounds. The biogeochemical cycling of these compounds, which depends upon anaerobic respiration, significantly impacts the carbon cycle and global warming. Anaerobic respiration occurs in many environments, including freshwater and marine sediments, soil, subsurface aquifers, deep subsurface environments, and biofilms. Even environments that contain oxygen, such as soil, have micro-environments that lack oxygen due to the slow diffusion characteristics of oxygen gas.
An example of the ecological importance of anaerobic respiration is the use of nitrate as a terminal electron acceptor, or dissimilatory denitrification, which is the main route by which fixed nitrogen is returned to the atmosphere as molecular nitrogen gas. The denitrification process is also very important in host-microbe interactions. Like mitochondria in oxygen-respiring microorganisms, some single-cellular anaerobic ciliates use denitrifying endosymbionts to gain energy. Another example is methanogenesis, a form of carbon-dioxide respiration, that is used to produce methane gas by anaerobic digestion. Biogenic methane can be a sustainable alternative to fossil fuels. However, uncontrolled methanogenesis in landfill sites releases large amounts of methane into the atmosphere, acting as a potent greenhouse gas. Sulfate respiration produces hydrogen sulfide, which is responsible for the characteristic 'rotten egg' smell of coastal wetlands and has the capacity to precipitate heavy metal ions from solution, leading to the deposition of sulfidic metal ores.
Economic relevance
Dissimilatory denitrification is widely used in the removal of nitrate and nitrite from municipal wastewater. An excess of nitrate can lead to eutrophication of waterways into which treated water is released. Elevated nitrite levels in drinking water can lead to problems due to its toxicity. Denitrification converts both compounds into harmless nitrogen gas.
Specific types of anaerobic respiration are also critical in bioremediation, which uses microorganisms to convert toxic chemicals into less-harmful molecules to clean up contaminated beaches, aquifers, lakes, and oceans. For example, toxic arsenate or selenate can be reduced to less toxic compounds by various anaerobic bacteria via anaerobic respiration. The reduction of chlorinated chemical pollutants, such as vinyl chloride and carbon tetrachloride, also occurs through anaerobic respiration.
Anaerobic respiration is useful in generating electricity in microbial fuel cells, which employ bacteria that respire solid electron acceptors (such as oxidized iron) to transfer electrons from reduced compounds to an electrode. This process can simultaneously degrade organic carbon waste and generate electricity.
Examples of electron acceptors in respiration
See also
Hydrogenosomes and mitosomes
Anaerobic digestion
Microbial fuel cell
Standard electrode potential (data page)
Table of standard reduction potentials for half-reactions important in biochemistry
Lithotrophs
Further reading
References
Anaerobic digestion
Biodegradation
Cellular respiration
Anaerobic respiration | Anaerobic respiration | Chemistry,Engineering,Biology | 1,221 |
18,095,448 | https://en.wikipedia.org/wiki/Epitome%20%28data%20processing%29 | An epitome, in data processing, is a condensed digital representation of the essential statistical properties of ordered datasets such as matrices that represent images, audio signals, videos or genetic sequences. Although much smaller than the data, the epitome contains many of its smaller overlapping parts with much less repetition and with some level of generalization. As such, it can be used in tasks such as data mining, machine learning and signal processing.
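As a rough illustration of the "many overlapping parts" idea, the sketch below (not from the article, and covering only the data-gathering step rather than a full epitome model) collects overlapping patches from an image array; an epitome would then summarize this highly redundant patch collection in a much smaller array.

```python
import numpy as np

def overlapping_patches(image: np.ndarray, size: int = 8, stride: int = 4) -> np.ndarray:
    """Collect overlapping square patches from a 2-D image array."""
    h, w = image.shape
    patches = [
        image[i:i + size, j:j + size]
        for i in range(0, h - size + 1, stride)
        for j in range(0, w - size + 1, stride)
    ]
    return np.stack(patches)

# Toy usage: a 64x64 image yields hundreds of small overlapping parts --
# far more raw numbers than the image itself.  The epitome's job is to
# compress this redundancy into one small, shared representation.
image = np.random.default_rng(0).random((64, 64))
patches = overlapping_patches(image)
print(patches.shape)   # (225, 8, 8)
```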
The first use of epitomic analysis was with image textures for the purposes of image parsing. Epitomes have also been used in video processing to replace, remove or superresolve imagery.
Epitomes are also being investigated as tools for vaccine design.
See also
Image processing
Video imprint (computer vision)
References
Data processing
Image processing | Epitome (data processing) | Technology | 160 |
67,805,581 | https://en.wikipedia.org/wiki/Eddy%20saturation%20and%20eddy%20compensation | Eddy saturation and eddy compensation are phenomena found in the Southern Ocean. Both are limiting processes in which the extra momentum imparted by strong westerlies goes into increased eddy activity rather than enhancing the respective mean currents. Whereas eddy saturation affects the Antarctic Circumpolar Current (ACC), eddy compensation influences the associated Meridional Overturning Circulation (MOC).
In recent decades wind stresses in the Southern Ocean have increased, partly due to greenhouse gases and ozone depletion in the stratosphere. The ACC and MOC play an important role in the global climate, affecting the stratification of the ocean and the uptake of heat, carbon dioxide and other passive tracers. Addressing how these increased zonal winds affect the MOC and the ACC will help to determine whether these uptakes will change in the future, which could have a serious impact on the carbon cycle. This remains an important and critical research topic.
Formation of eddies and overturning response
Dynamics in the Southern Ocean are dominated by two cells with opposing rotation, each forced by surface buoyancy fluxes. Via isopycnals, tracers are transferred from the deep to the surface. Isopycnal slopes are key in determining the depth of the global pycnocline and where water masses outcrop. Isopycnals therefore play an important role in the interaction with the atmosphere. In the Southern Ocean it is thought that isopycnals are steepened by wind forcing, while baroclinic eddies act to flatten them.
The westerly winds, which make the ACC flow eastward, induce a clockwise-rotating Eulerian meridional circulation via Ekman dynamics, also known as the Deacon cell. This circulation acts to overturn the isopycnals, enhancing the buoyancy forcing and therefore increasing the mean flow.
Although the ACC is very close to geostrophic balance, when frontal jets reach sufficiently high velocities, geostrophic turbulence (chaotic motion of fluid that remains close to hydrostatic and geostrophic balance) arises. Through this geostrophic turbulence, potential energy stored in the fronts of the jet streams is converted into eddy kinetic energy (EKE), which leads to the formation of mesoscale eddies. Surface EKE has increased in recent decades, as shown by satellite altimetry. The relation between increased wind stress and EKE is assumed to be near-linear, which explains the limited sensitivity of the ACC transport.
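EKE is conventionally computed from velocity anomalies relative to the time mean, EKE = ½(u′² + v′²). The sketch below is a generic illustration (not a reproduction of any study cited here), assuming gridded velocity fields with time as the leading axis.

```python
import numpy as np

def eddy_kinetic_energy(u: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Time-mean eddy kinetic energy per unit mass, 0.5 * (u'^2 + v'^2),
    where primes are deviations from the time mean (axis 0 is time)."""
    u_prime = u - u.mean(axis=0, keepdims=True)
    v_prime = v - v.mean(axis=0, keepdims=True)
    return 0.5 * (u_prime ** 2 + v_prime ** 2).mean(axis=0)

# Toy usage with synthetic daily surface velocities (m/s) on a lat-lon grid:
rng = np.random.default_rng(1)
u = 0.10 + 0.05 * rng.standard_normal((365, 40, 80))   # time, lat, lon
v = 0.05 * rng.standard_normal((365, 40, 80))
print(eddy_kinetic_energy(u, v).mean())   # domain-mean EKE in m^2/s^2
```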
In areas where stratification is very weak, the formation of eddies is often associated with barotropic instabilities. Where stratification is more substantial, baroclinic instabilities (misalignment of isobars and isopycnals) are the main cause of eddy formation. Eddies tend to flatten isopycnals (surfaces of equal buoyancy), which slows down the mean flow. Due to these instabilities, a counterclockwise-rotating eddy-induced circulation is formed, which partially counteracts the Eulerian meridional circulation.
The balance between the two overturning circulations determines the residual overturning. This residual flow is assumed to be directed along mean buoyancy surfaces in the interior but to have a diapycnal component in the mixed layer.
Eddy saturation
The Southern Ocean contains a system of ocean currents which together form the Antarctic Circumpolar Current (ACC). These ocean currents are subject to strong westerly winds that are jointly responsible for driving the zonal transport of the ACC. In recent decades a positive trend has been seen in the Southern Annular Mode (SAM) index, which measures the zonal pressure difference between the latitudes of 40°S and 65°S, indicating that the zonal winds over the Southern Ocean have strengthened. Studies indicate that the zonal transport of the ACC is relatively insensitive to these increases in wind stress. This behaviour can also be seen in the isopycnal slopes (surfaces of equal buoyancy), which show a limited response despite the intensification of the westerly winds. The increased momentum (due to enhanced wind stress) is instead diverted into the oceanic mesoscale and transferred to the bottom of the ocean rather than into the horizontal mean flow. This flattens isopycnals (reducing the buoyancy forcing) and therefore slows down the mean flow. This phenomenon is known as eddy saturation.
Eddy compensation
Alongside the ACC there is the associated Meridional Overturning Circulation (MOC), which is also mainly driven by wind forcing. The insensitivity of the mean current to the strengthening wind forcing can also be seen in the MOC. This near-independence of the MOC from increases in wind stress is referred to as eddy compensation. Eddy compensation would be perfect if the Ekman transport were exactly balanced by the eddy-induced transport.
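A highly idealized sketch of the compensation idea follows: the residual transport is the sum of the wind-driven Ekman transport and an opposing eddy-induced transport. The compensation fraction used here is an arbitrary illustrative number, not a measured value, and real compensation is depth-dependent.

```python
# Idealized eddy compensation: residual = Ekman + eddy-induced transport.
# 'compensation' is an assumed fraction of the Ekman transport cancelled
# by eddies; in reality this fraction is uncertain and depth-dependent.
RHO = 1025.0    # seawater density, kg/m^3
F_COR = -1e-4   # Coriolis parameter in the Southern Ocean, 1/s (negative)

def residual_transport(tau_x: float, compensation: float = 0.7) -> float:
    ekman = -tau_x / (RHO * F_COR)        # meridional Ekman transport, m^2/s
    eddy_induced = -compensation * ekman  # opposing eddy-induced transport
    return ekman + eddy_induced

for tau in (0.10, 0.15, 0.20):            # strengthening westerlies, N/m^2
    print(f"tau = {tau:.2f} N/m^2 -> residual = {residual_transport(tau):.3f} m^2/s")
```

With a fixed compensation fraction the residual still grows in step with the wind; full compensation would require the eddy-induced transport to keep pace with the Ekman transport, which is what the eddy-resolving model studies discussed below try to quantify.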
There is a widespread belief that the sensitivities of the transport in the ACC and MOC are dynamically linked. However, eddy saturation and eddy compensation are distinct dynamical mechanisms, and the occurrence of one does not necessarily entail the occurrence of the other. It is hypothesized that the lack of a dynamical link between eddy saturation and eddy compensation is a consequence of the depth dependence of the cancellation between the Eulerian circulation and the eddy-induced circulation. Currently it is assumed that the ACC is fully eddy saturated but only partially eddy compensated. The degree of eddy compensation in the Southern Ocean is currently unknown.
Models and sensitivity
Eddy permitting and eddy resolving models are used to examine the effect of eddy saturation and eddy compensation in the ACC. In these models resolution is of great importance. Ocean observations do not have a high enough resolution to fully estimate the degree of eddy saturation and eddy compensation. Idealized studies show that the MOC response is more sensitive to model resolution than the ACC transport. A general conclusion in such numerical models is that southward eddy transport in combination with enhanced westerlies results in an increase in EKE.
References
See also
Currents of the Southern Ocean
Physical oceanography
Fluid dynamics
Water waves | Eddy saturation and eddy compensation | Physics,Chemistry,Engineering | 1,299 |
61,158,619 | https://en.wikipedia.org/wiki/C3H5Br | {{DISPLAYTITLE:C3H5Br}}
The molecular formula C3H5Br may refer to:
Allyl bromide
Bromocyclopropane | C3H5Br | Chemistry | 38 |
53,850,913 | https://en.wikipedia.org/wiki/Evaluative%20conditioning | Evaluative conditioning is defined as a change in the valence of a stimulus that is due to the pairing of that stimulus with another positive or negative stimulus. The first stimulus is often referred to as the conditioned stimulus and the second stimulus as the unconditioned stimulus. A conditioned stimulus becomes more positive when it has been paired with a positive unconditioned stimulus and more negative when it has been paired with a negative unconditioned stimulus. Evaluative conditioning thus refers to attitude formation or change toward an object due to that object's mere co-occurrence with another object.
Evaluative conditioning is a form of classical conditioning, as first described by Ivan Pavlov, in that it involves a change in the responses to the conditioned stimulus that results from pairing the conditioned stimulus with an unconditioned stimulus. Whereas classical conditioning can refer to a change in any type of response, evaluative conditioning concerns only a change in the evaluative responses to the conditioned stimulus, that is, a change in the liking of the conditioned stimulus.
A classic example of the formation of attitudes through conditioning is the 1958 experiment by Staats and Staats. Subjects first were asked to learn a list of words that were presented visually, and were tested on their learning of the list. They then did the same with a list of words presented orally, all of which set the stage for the critical phase of the experiment, which was portrayed as an assessment of subjects' ability to learn via both visual and auditory channels at once. During this phase, subjects were exposed visually to a set of nationality names, specifically Dutch and Swedish. Approximately one second after the nationality appeared on the screen, the experimenter announced a word aloud. Most of these latter words, none of which were repeated, were neutral (e.g., chair, with, twelve). Included, however, were a few positive words (e.g., gift, sacred, happy) and a few negative words (e.g., bitter, ugly, failure). These words were systematically paired with the two nationalities, serving as conditioned stimuli, such that one always appeared with positive words and the other with negative words. Thus, the conditioning trials were embedded within a stream of visually presented nationality names and orally presented words. When the conditioning phase was completed, the subjects were first asked to recall the words that had been presented visually and then to evaluate them, presumably because how they felt about those words might have affected their learning. The conditioning was successful. The nationality that had been paired with the more positive unconditioned stimuli was rated as more pleasant than the one paired with the negative unconditioned stimuli.
References
Notes
Sources
Experimental psychology
Behavioral concepts
History of psychology
Behaviorism
Learning | Evaluative conditioning | Biology | 552 |
55,127,738 | https://en.wikipedia.org/wiki/NGC%204647 | NGC 4647 is an intermediate spiral galaxy estimated to be around 63 million light-years away in the constellation of Virgo. It was discovered by astronomer William Herschel on March 15, 1784. NGC 4647 is listed along with Messier 60 as being part of a pair of galaxies called Arp 116; their designation in Halton Arp's Atlas of Peculiar Galaxies. The galaxy is located on the outskirts of the Virgo Cluster.
Interaction with Messier 60
In optical images, the two galaxies' disks overlap. This has suggested an ongoing interaction; however, images do not reveal any signs of the star formation that would have been caused by a tidal interaction between the two galaxies. Studies of Hubble images of the two galaxies made in 2012 indicate that tidal interactions between the two have only just begun.
Interstellar medium of NGC 4647
The gas in NGC 4647 has been mildly disturbed. The galaxy's location in the Virgo Cluster suggests that it might have suffered an effect known as ram-pressure stripping caused by the intracluster medium. Another explanation may be hot gas in the halo of Messier 60. The hot gas in Messier 60 may have increased the pressure of gas on the eastern side of NGC 4647 through either ram-pressure stripping or a bow-shock between the two galaxies causing the observed asymmetry of gas in the galaxy. The difficulty is that the galaxies would have to be so close that tidal forces from Messier 60 would cause the disk of NGC 4647 to get ripped apart.
Supernovae
Two supernovae have been observed in NGC 4647:
SN 1979A (type unknown, mag. 15) was discovered by Givi N. Kimeridze on 25 January 1979.
SN 2022hrs (type Ia, mag. 15) was discovered by Kōichi Itagaki on 16 April 2022.
See also
List of NGC objects (4001–5000)
NGC 4567 and NGC 4568
References
External links
Intermediate spiral galaxies
Overlapping galaxies
Interacting galaxies
Virgo (constellation)
4647
42816
7896
116
Astronomical objects discovered in 1784
Virgo Cluster | NGC 4647 | Astronomy | 432 |
56,234,594 | https://en.wikipedia.org/wiki/Vanadium%28II%29%20sulfate | Vanadium(II) sulfate describes a family of inorganic compounds with the formula VSO4(H2O)x where 0 ≤ x ≤ 7. The hexahydrate is most commonly encountered. It is a violet solid that dissolves in water to give air-sensitive solutions of the aquo complex. The salt is isomorphous with [Mg(H2O)6]SO4. Compared to the V–O bond length of 191 pm in [V(H2O)6]3+, the V–O distance is 212 pm in the [V(H2O)6]SO4. This nearly 10% elongation reflects the effect of the lower charge, hence weakened electrostatic attraction.
The heptahydrate has also been crystallized. The compound is prepared by electrolytic reduction of vanadyl sulfate in sulfuric acid. The crystals also feature [V(H2O)6]2+ centers but with an extra water of crystallization. The salt is isomorphous with ferrous sulfate heptahydrate. A related salt is vanadous ammonium sulfate, (NH4)2V(SO4)2·6H2O, a Tutton's salt isomorphous with ferrous ammonium sulfate.
References
Vanadium(II) compounds
Sulfates | Vanadium(II) sulfate | Chemistry | 288 |
70,633,465 | https://en.wikipedia.org/wiki/Vivo%20X80 | Vivo X80 is a line of Android-based smartphones developed and manufactured by Vivo. It features a Zeiss co-engineered imaging system.
Notes
References
Android (operating system) devices
Vivo smartphones
Mobile phones introduced in 2022 | Vivo X80 | Technology | 48 |
71,099,887 | https://en.wikipedia.org/wiki/International%20Union%20of%20District%2050%2C%20Allied%20and%20Technical%20Workers%20of%20the%20United%20States%20and%20Canada | The International Union of District 50, Allied and Technical Workers of the United States and Canada, was a labor union representing workers in the energy and chemical industries, and in uranium mining.
The union's origins lay in the foundation of the Massachusetts Council of Utility Workers by workers at the Everett Coke-Oven Plant in 1933. The union began representing workers in a variety of utilities, and in neighboring states, becoming the New England Council of Utility Workers in 1934, and the National Council of Gas and By-Product Coke Workers in 1935. In 1936, it affiliated to the United Mine Workers of America (UMW), which designated it as its District 50, lower numbers being reserved for geographical districts of coal miners. After several name changes, in 1941, it became District 50, United Mine Workers of America.
The district grew rapidly, and soon became larger than the remaining districts of the UMW put together. In 1961, it received organizational but not financial independence. This led it into disputes with the remainder of the UMW, particularly when it advocated for nuclear power plants. In March 1968, it was expelled from the UMW, adopting its final name in 1970. At this time, it had around 200,000 members, and was led by president Ellwood Moffett. On August 9, 1972, it merged into the United Steelworkers of America.
References
Chemical industry trade unions
Energy industry trade unions
Trade unions established in 1968
Trade unions disestablished in 1972 | International Union of District 50, Allied and Technical Workers of the United States and Canada | Chemistry | 297 |
75,735,875 | https://en.wikipedia.org/wiki/Transverse%20arch | In architecture, a transverse arch is an arch in a vaulted building that goes across the barrel vault. A series of transverse arches resting on top of the columns on the sides of the nave was typical in the churches of Romanesque architecture (common since Carolingian times). By analogy, the term is also used to describe the transverse ribs of a groined vault and for any crosswise arch in modern buildings. An arch that runs in the transverse direction but carries an exposed wall on top, dividing the vault into compartments, is called a diaphragm arch.
In historical buildings, the transverse arches provide support for purlins and roof ridge beams. They also subdivide the nave into bays. The springings of the arch are typically pinned to the supports using wooden or steel ties, but the bulk of the lateral thrust is taken up by the abutments.
See also
Separating arch, an arch parallel to the sides of the nave
References
Sources
Arches and vaults | Transverse arch | Engineering | 195 |
4,192,777 | https://en.wikipedia.org/wiki/History%20of%20the%20World%20Wide%20Web | The World Wide Web ("WWW", "W3" or simply "the Web") is a global information medium that users can access via computers connected to the Internet. The term is often mistakenly used as a synonym for the Internet, but the Web is a service that operates over the Internet, just as email and Usenet do. The history of the Internet and the history of hypertext date back significantly further than that of the World Wide Web.
Tim Berners-Lee invented the World Wide Web while working at CERN in 1989. He proposed a "universal linked information system" using several concepts and technologies, the most fundamental of which was the connections that existed between information. He developed the first web server, the first web browser, and a document formatting language, called Hypertext Markup Language (HTML). After publishing the markup language in 1991, and releasing the browser source code for public use in 1993, many other web browsers were soon developed, with Marc Andreessen's Mosaic (later Netscape Navigator) being particularly easy to use and install, and often credited with sparking the Internet boom of the 1990s. It was a graphical browser which ran on several popular office and home computers, bringing multimedia content to non-technical users by including images and text on the same page.
Websites for use by the general public began to emerge in 1993–94. This spurred competition in server and browser software, highlighted in the browser wars, which were initially dominated by Netscape Navigator and Internet Explorer. Following the complete removal of commercial restrictions on Internet use by 1995, commercialization of the Web amidst macroeconomic factors led to the dot-com boom and bust in the late 1990s and early 2000s.
The features of HTML evolved over time, leading to HTML version 2 in 1995, HTML3 and HTML4 in 1997, and HTML5 in 2014. The language was extended with advanced formatting in Cascading Style Sheets (CSS) and with programming capability by JavaScript. AJAX programming delivered dynamic content to users, which sparked a new era in Web design, styled Web 2.0. The use of social media, becoming commonplace in the 2010s, allowed users to compose multimedia content without programming skills, making the Web ubiquitous in everyday life.
Background
The underlying concept of hypertext as a user interface paradigm originated in projects in the 1960s, from research such as the Hypertext Editing System (HES) by Andries van Dam at Brown University, IBM Generalized Markup Language, Ted Nelson's Project Xanadu, and Douglas Engelbart's oN-Line System (NLS). Both Nelson and Engelbart were in turn inspired by Vannevar Bush's microfilm-based memex, which was described in the 1945 essay "As We May Think". Other precursors were FRESS and Intermedia. Paul Otlet's project Mundaneum has also been named as an early 20th-century precursor of the Web.
In 1980, Tim Berners-Lee, at the European Organization for Nuclear Research (CERN) in Switzerland, built ENQUIRE, as a personal database of people and software models, but also as a way to experiment with hypertext; each new page of information in ENQUIRE had to be linked to another page.
When Berners-Lee built ENQUIRE, the ideas developed by Bush, Engelbart, and Nelson did not influence his work, since he was not aware of them. However, as Berners-Lee began to refine his ideas, the work of these predecessors would later help to confirm the legitimacy of his concept.
During the 1980s, many packet-switched data networks emerged based on various communication protocols (see Protocol Wars). One of these standards was the Internet protocol suite, which is often referred to as TCP/IP. As the Internet grew through the 1980s, many people realized the increasing need to be able to find and organize files and use information. By 1985, the Domain Name System (upon which the Uniform Resource Locator is built) came into being. Many small, self-contained hypertext systems were created, such as Apple Computer's HyperCard (1987).
Berners-Lee's contract in 1980 was from June to December, but in 1984 he returned to CERN in a permanent role, and considered its problems of information management: physicists from around the world needed to share data, yet they lacked common machines and any shared presentation software. Shortly after Berners-Lee's return to CERN, TCP/IP protocols were installed on Unix machines at the institution, turning it into the largest Internet site in Europe. In 1988, the first direct IP connection between Europe and North America was established and Berners-Lee began to openly discuss the possibility of a web-like system at CERN. He was inspired by a book, Enquire Within upon Everything. Many online services existed before the creation of the World Wide Web, such as for example CompuServe, Usenet and bulletin board systems.
1989–1991: Origins
CERN
While working at CERN, Tim Berners-Lee became frustrated with the inefficiencies and difficulties posed by finding information stored on different computers. On 12 March 1989, he submitted a memorandum, titled "Information Management: A Proposal", to the management at CERN. The proposal used the term "web" and was based on "a large hypertext database with typed links". It described a system called "Mesh" that referenced ENQUIRE, the database and software project he had built in 1980, with a more elaborate information management system based on links embedded as text: "Imagine, then, the references in this document all being associated with the network address of the thing to which they referred, so that while reading this document, you could skip to them with a click of the mouse." Such a system, he explained, could be referred to using one of the existing meanings of the word hypertext, a term that he says was coined in the 1950s. Berners-Lee notes the possibility of multimedia documents that include graphics, speech and video, which he terms hypermedia.
Although the proposal attracted little interest, Berners-Lee was encouraged by his manager, Mike Sendall, to begin implementing his system on a newly acquired NeXT workstation. He considered several names, including Information Mesh, The Information Mine or Mine of Information, but settled on World Wide Web. Berners-Lee found an enthusiastic supporter in his colleague and fellow hypertext enthusiast Robert Cailliau who began to promote the proposed system throughout CERN. Berners-Lee and Cailliau pitched Berners-Lee's ideas to the European Conference on Hypertext Technology in September 1990, but found no vendors who could appreciate his vision.
Berners-Lee's breakthrough was to marry hypertext to the Internet. In his book Weaving The Web, he explains that he had repeatedly suggested to members of both technical communities that a marriage between the two technologies was possible. But, when no one took up his invitation, he finally assumed the project himself. In the process, he developed three essential technologies:
a system of globally unique identifiers for resources on the Web and elsewhere, the universal document identifier (UDI), later known as uniform resource locator (URL);
the publishing language Hypertext Markup Language (HTML);
the Hypertext Transfer Protocol (HTTP).
With help from Cailliau he published a more formal proposal on 12 November 1990 to build a "hypertext project" called World Wide Web (abbreviated "W3") as a "web" of "hypertext documents" to be viewed by "browsers" using a client–server architecture. The proposal was modelled after the Standard Generalized Markup Language (SGML) reader Dynatext by Electronic Book Technology, a spin-off from the Institute for Research in Information and Scholarship at Brown University. The Dynatext system, licensed by CERN, was considered too expensive and had an inappropriate licensing policy for use in the general high energy physics community, namely a fee for each document and each document alteration.
At this point HTML and HTTP had already been in development for about two months and the first web server was about a month from completing its first successful test. Berners-Lee's proposal estimated that a read-only Web would be developed within three months and that it would take six months to achieve "the creation of new links and new material by readers, [so that] authorship becomes universal" as well as "the automatic notification of a reader when new material of interest to him/her has become available".
By December 1990, Berners-Lee and his work team had built all the tools necessary for a working Web: the HyperText Transfer Protocol (HTTP), the HyperText Markup Language (HTML), the first web browser (named WorldWideWeb, which was also a web editor), the first web server (later known as CERN httpd) and the first web site (https://info.cern.ch/) containing the first web pages that described the project itself was published on 20 December 1990. The browser could access Usenet newsgroups and FTP files as well. A NeXT Computer was used by Berners-Lee as the web server and also to write the web browser.
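The three core technologies can still be exercised with a few lines of modern code. The sketch below (using Python's standard library, which of course postdates 1990) parses a URL, issues an HTTP GET request, and prints the start of the returned HTML; it assumes network access and that the historic first-website address is still served.

```python
from urllib.parse import urlparse
from http.client import HTTPSConnection

url = urlparse("https://info.cern.ch/")                    # URL: scheme, host, path
conn = HTTPSConnection(url.hostname, timeout=10)
conn.request("GET", url.path or "/")                       # HTTP: request/response protocol
response = conn.getresponse()
html = response.read().decode("utf-8", errors="replace")   # HTML: the document format
print(response.status, response.reason)
print(html[:200])                                          # first bytes of the markup
conn.close()
```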
Working with Berners-Lee at CERN, Nicola Pellow developed the first cross-platform web browser, the Line Mode Browser.
1991–1994: The Web goes public, early growth
Initial launch
In January 1991, the first web servers outside CERN were switched on. On 6 August 1991, Berners-Lee published a short summary of the World Wide Web project on the newsgroup alt.hypertext, inviting collaborators.
Paul Kunz from the Stanford Linear Accelerator Center (SLAC) visited CERN in September 1991, and was captivated by the Web. He brought the NeXT software back to SLAC, where librarian Louise Addis adapted it for the VM/CMS operating system on the IBM mainframe as a way to host the SPIRES-HEP database and display SLAC's catalog of online documents. This was the first web server outside of Europe and the first in North America.
The World Wide Web had several differences from other hypertext systems available at the time. The Web required only unidirectional links rather than bidirectional ones, making it possible for someone to link to another resource without action by the owner of that resource. It also significantly reduced the difficulty of implementing web servers and browsers (in comparison to earlier systems), but in turn, presented the chronic problem of link rot.
Early browsers
The WorldWideWeb browser ran only on the NeXTSTEP operating system. This shortcoming was discussed in January 1992, and alleviated in April 1992 by the release of Erwise, an application developed at the Helsinki University of Technology, and in May by ViolaWWW, created by Pei-Yuan Wei, which included advanced features such as embedded graphics, scripting, and animation. ViolaWWW was originally an application for HyperCard. Both programs ran on the X Window System for Unix. In 1992, the first tests between browsers on different platforms were concluded successfully between buildings 513 and 31 at CERN, between browsers on the NeXT station and the X11-ported Mosaic browser. ViolaWWW became the recommended browser at CERN. To encourage use within CERN, Bernd Pollermann put the CERN telephone directory on the web—previously users had to log onto the mainframe in order to look up phone numbers. The Web was successful at CERN and spread to other scientific and academic institutions.
Students at the University of Kansas adapted an existing text-only hypertext browser, Lynx, to access the web in 1992. Lynx was available on Unix and DOS, and some web designers, unimpressed with glossy graphical websites, held that a website not accessible through Lynx was not worth visiting.
In these earliest browsers, images opened in a separate "helper" application.
From Gopher to the WWW
In the early 1990s, Internet-based projects such as Archie, Gopher, Wide Area Information Servers (WAIS), and the FTP Archive list attempted to create ways to organize distributed data. Gopher was a document browsing system for the Internet, released in 1991 by the University of Minnesota. Invented by Mark P. McCahill, it became the first commonly used hypertext interface to the Internet. While Gopher menu items were examples of hypertext, they were not commonly perceived in that way. In less than a year, there were hundreds of Gopher servers. It offered a viable alternative to the World Wide Web in the early 1990s and the consensus was that Gopher would be the primary way that people would interact with the Internet. However, in 1993, the University of Minnesota declared that Gopher was proprietary and would have to be licensed.
In response, on 30 April 1993, CERN announced that the World Wide Web would be free to anyone, with no fees due, and released their code into the public domain. This made it possible to develop servers and clients independently and to add extensions without licensing restrictions. Coming two months after the announcement that the server implementation of the Gopher protocol was no longer free to use, this spurred the development of various browsers which precipitated a rapid shift away from Gopher. By releasing Berners-Lee's invention for public use, CERN encouraged and enabled its widespread use.
Early websites intermingled links for both the HTTP web protocol and the Gopher protocol, which provided access to content through hypertext menus presented as a file system rather than through HTML files. Early Web users would navigate either by bookmarking popular directory pages or by consulting updated lists such as the NCSA "What's New" page. Some sites were also indexed by WAIS, enabling users to submit full-text searches similar to the capability later provided by search engines.
After 1993 the World Wide Web saw many advances to indexing and ease of access through search engines, which often neglected Gopher and Gopherspace. As its popularity increased through ease of use, incentives for commercial investment in the Web also grew. By the middle of 1994, the Web was outcompeting Gopher and the other browsing systems for the Internet.
NCSA
The National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana–Champaign (UIUC) established a website in November 1992. After Marc Andreessen, a student at UIUC, was shown ViolaWWW in late 1992, he began work on Mosaic with another UIUC student Eric Bina, using funding from the High-Performance Computing and Communications Initiative, a US-federal research and development program initiated by US Senator Al Gore. Andreessen and Bina released a Unix version of the browser in February 1993; Mac and Windows versions followed in August 1993. The browser gained popularity due to its strong support of integrated multimedia, and the authors' rapid response to user bug reports and recommendations for new features. Historians generally agree that the 1993 introduction of the Mosaic web browser was a turning point for the World Wide Web.
Before the release of Mosaic in 1993, graphics were not commonly mixed with text in web pages, and the Web was less popular than older protocols such as Gopher and WAIS. Mosaic could display inline images and submit forms for Windows, Macintosh and X-Windows. NCSA also developed HTTPd, a Unix web server that used the Common Gateway Interface to process forms and Server Side Includes for dynamic content. Both the client and server were free to use with no restrictions. Mosaic was an immediate hit; its graphical user interface allowed the Web to become by far the most popular protocol on the Internet. Within a year, web traffic surpassed Gopher's. Wired declared that Mosaic made non-Internet online services obsolete, and the Web became the preferred interface for accessing the Internet.
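The Common Gateway Interface is simple enough to sketch: the server places request data in environment variables (and on standard input for POST bodies), runs an external program, and relays whatever it prints back to the browser. The example below is a generic illustration written in Python; period CGI scripts were more commonly written in C or Perl.

```python
#!/usr/bin/env python3
# Minimal CGI sketch: the web server sets QUERY_STRING (among other variables),
# executes this script, and relays its standard output -- headers, a blank
# line, then the body -- to the client.
import os
from urllib.parse import parse_qs

params = parse_qs(os.environ.get("QUERY_STRING", ""))
name = params.get("name", ["world"])[0]

print("Content-Type: text/html")   # response header
print()                            # blank line terminates the headers
print(f"<html><body><h1>Hello, {name}!</h1></body></html>")
```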
Early growth
The World Wide Web enabled the spread of information over the Internet through an easy-to-use and flexible format. It thus played an important role in popularising use of the Internet. Although the two terms are sometimes conflated in popular use, World Wide Web is not synonymous with Internet. The Web is an information space containing hyperlinked documents and other resources, identified by their URIs. It is implemented as both client and server software using Internet protocols such as TCP/IP and HTTP.
In keeping with its origins at CERN, early adopters of the Web were primarily university-based scientific departments or physics laboratories such as SLAC and Fermilab. By January 1993 there were fifty web servers across the world. By October 1993 there were over five hundred servers online, including some notable websites.
Practical media distribution and streaming media over the Web was made possible by advances in data compression, due to the impractically high bandwidth requirements of uncompressed media. Following the introduction of the Web, several media formats based on discrete cosine transform (DCT) were introduced for practical media distribution and streaming over the Web, including the MPEG video format in 1991 and the JPEG image format in 1992. The high level of image compression made JPEG a good format for compensating slow Internet access speeds, typical in the age of dial-up Internet access. JPEG became the most widely used image format for the World Wide Web. A DCT variation, the modified discrete cosine transform (MDCT) algorithm, led to the development of MP3, which was introduced in 1991 and became the first popular audio format on the Web.
In 1992 the Computing and Networking Department of CERN, headed by David Williams, withdrew support of Berners-Lee's work. A two-page email sent by Williams stated that the work of Berners-Lee, with the goal of creating a facility to exchange information such as results and comments from CERN experiments to the scientific community, was not the core activity of CERN and was a misallocation of CERN's IT resources. Following this decision, Tim Berners-Lee left CERN for the Massachusetts Institute of Technology (MIT), where he continued to develop HTTP.
The first Microsoft Windows browser was Cello, written by Thomas R. Bruce for the Legal Information Institute at Cornell Law School to provide legal information, since access to Windows was more widespread amongst lawyers than access to Unix. Cello was released in June 1993.
1994–2004: Open standards, going global
The rate of web site deployment increased sharply around the world, and fostered development of international standards for protocols and content formatting. Berners-Lee continued to stay involved in guiding web standards, such as the markup languages to compose web pages, and he advocated his vision of a Semantic Web (sometimes known as Web 3.0) based around machine-readability and interoperability standards.
World Wide Web Conference
In May 1994, the first International WWW Conference, organized by Robert Cailliau, was held at CERN; the conference has been held every year since.
World Wide Web Consortium
The World Wide Web Consortium (W3C) was founded by Tim Berners-Lee after he left the European Organization for Nuclear Research (CERN) in September/October 1994 in order to create open standards for the Web. It was founded at the Massachusetts Institute of Technology Laboratory for Computer Science (MIT/LCS) with support from the Defense Advanced Research Projects Agency (DARPA), which had pioneered the Internet. A year later, a second site was founded at INRIA (a French national computer research lab) with support from the European Commission; and in 1996, a third continental site was created in Japan at Keio University.
W3C comprised various companies that were willing to create standards and recommendations to improve the quality of the Web. Berners-Lee made the Web available freely, with no patent and no royalties due. The W3C decided that its standards must be based on royalty-free technology, so that they could be easily adopted by anyone. Netscape and Microsoft, in the middle of a browser war, initially ignored the W3C and added elements to HTML ad hoc (e.g., blink and marquee). In 1995, however, Netscape and Microsoft agreed to abide by the W3C's standards.
The W3C published the standard for HTML 4 in 1997, which included Cascading Style Sheets (CSS), giving designers more control over the appearance of web pages without the need for additional HTML tags. The W3C could not enforce compliance so none of the browsers were fully compliant. This frustrated web designers who formed the Web Standards Project (WaSP) in 1998 with the goal of cajoling compliance with standards. A List Apart and CSS Zen Garden were influential websites that promoted good design and adherence to standards. Nevertheless, AOL halted development of Netscape and Microsoft was slow to update IE. Mozilla and Apple both released browsers that aimed to be more standards compliant (Firefox and Safari), but were unable to dislodge IE as the dominant browser.
Commercialization, dot-com boom and bust, aftermath
As the Web grew in the mid-1990s, web directories and primitive search engines were created to index pages and allow people to find things. Commercial use restrictions on the Internet were lifted in 1995 when NSFNET was shut down.
In the US, the online service America Online (AOL) offered their users a connection to the Internet via their own internal browser, using a dial-up Internet connection. In January 1994, Yahoo! was founded by Jerry Yang and David Filo, then students at Stanford University. Yahoo! Directory became the first popular web directory. Yahoo! Search, launched the same year, was the first popular search engine on the World Wide Web. Yahoo! became the quintessential example of a first mover on the Web.
Online shopping began to emerge with the launch of Amazon's shopping site by Jeff Bezos in 1995 and eBay by Pierre Omidyar the same year.
By 1994, Marc Andreessen's Netscape Navigator superseded Mosaic in popularity, holding the position for some time. Bill Gates outlined Microsoft's strategy to dominate the Internet in his Tidal Wave memo in 1995. With the release of Windows 95 and the popular Internet Explorer browser, many public companies began to develop a Web presence. At first, people mainly anticipated the possibilities of free publishing and instant worldwide information. By the late 1990s, the directory model had given way to search engines, corresponding with the rise of Google Search, which developed new approaches to relevancy ranking. Directory features, while still commonly available, became after-thoughts to search engines.
Netscape had a very successful IPO, valuing the company at $2.9 billion despite its lack of profits, and helped trigger the dot-com bubble. Increasing familiarity with the Web led to the growth of direct Web-based commerce (e-commerce) and instantaneous group communications worldwide. Many dot-com companies, displaying products on hypertext webpages, were added to the Web. Over the next five years, over a trillion dollars was raised to fund thousands of startups consisting of little more than a website.
During the dot-com boom, many companies vied to create a dominant web portal in the belief that such a website would best be able to attract a large audience that in turn would attract online advertising revenue. While most of these portals offered a search engine, they were not interested in encouraging users to find other websites and leave the portal and instead concentrated on "sticky" content. In contrast, Google was a stripped-down search engine that delivered superior results. It was a hit with users who switched from portals to Google. Furthermore, with AdWords, Google had an effective business model.
AOL bought Netscape in 1998. In spite of their early success, Netscape was unable to fend off Microsoft. Internet Explorer and a variety of other browsers almost completely replaced it.
Faster broadband internet connections replaced many dial-up connections from the beginning of the 2000s.
With the bursting of the dot-com bubble, many web portals either scaled back operations, floundered, or shut down entirely. AOL disbanded Netscape in 2003.
Web server software
Web server software was developed to allow computers to act as web servers. The first web servers supported only static files, such as HTML (and images), but now they commonly allow embedding of server side applications. Web framework software enabled building and deploying web applications. Content management systems (CMS) were developed to organize and facilitate collaborative content creation. Many of them were built on top of separate content management frameworks.
After Robert McCool joined Netscape, development on the NCSA HTTPd server languished. In 1995, Brian Behlendorf and Cliff Skolnick created a mailing list to coordinate efforts to fix bugs and make improvements to HTTPd. They called their version of HTTPd "Apache". Apache quickly became the dominant server on the Web. After adding support for modules, Apache allowed developers to handle web requests with a variety of languages including Perl, PHP and Python. Together with Linux and MySQL, it became known as the LAMP platform.
Following the success of Apache, the Apache Software Foundation was founded in 1999 and produced many open source web software projects in the same collaborative spirit.
Browser wars
After graduating from UIUC, Andreessen and Jim Clark, former CEO of Silicon Graphics, met and formed Mosaic Communications Corporation in April 1994 to develop the Mosaic Netscape browser commercially. The company later changed its name to Netscape, and the browser was developed further as Netscape Navigator, which soon became the dominant web client. They also released the Netsite Commerce web server which could handle SSL requests, thus enabling e-commerce on the Web. SSL became the standard method to encrypt web traffic. Navigator 1.0 also introduced cookies, but Netscape did not publicize this feature. Netscape followed up with Navigator 2 in 1995 introducing frames, Java applets and JavaScript. In 1998, Netscape made Navigator open source and launched Mozilla.
Microsoft licensed Mosaic from Spyglass and released Internet Explorer 1.0 in 1995, followed by IE2 later the same year. IE2 added features pioneered at Netscape such as cookies, SSL, and JavaScript. The browser wars became a competition for dominance when Explorer was bundled with Windows. This led to the United States v. Microsoft Corporation antitrust lawsuit.
IE3, released in 1996, added support for Java applets, ActiveX, and CSS. At this point, Microsoft began bundling IE with Windows. IE3 managed to increase Microsoft's share of the browser market from under 10% to over 20%. IE4, released the following year, introduced Dynamic HTML setting the stage for the Web 2.0 revolution. By 1998, IE was able to capture the majority of the desktop browser market. It would be the dominant browser for the next fourteen years.
Google released their Chrome browser in 2008 with the first JIT JavaScript engine, V8. Chrome overtook IE to become the dominant desktop browser in four years, and overtook Safari to become the dominant mobile browser in two. At the same time, Google open sourced Chrome's codebase as Chromium.
Ryan Dahl used Chromium's V8 engine in 2009 to power an event driven runtime system, Node.js, which allowed JavaScript code to be used on servers as well as browsers. This led to the development of new software stacks such as MEAN. Thanks to frameworks such as Electron, developers can bundle up node applications as standalone desktop applications such as Slack.
Acer and Samsung began selling Chromebooks, cheap laptops running ChromeOS and capable of running web apps, in 2011. Over the next decade, more companies offered Chromebooks. Chromebooks outsold macOS devices in 2020, making ChromeOS the second most popular desktop operating system in the world.
Other notable web browsers emerged including Mozilla's Firefox, Opera's Opera browser and Apple's Safari.
Web 1.0
Web 1.0 is a retronym referring to the first stage of the World Wide Web's evolution, from roughly 1989 to 2004. According to Graham Cormode and Balachander Krishnamurthy, "content creators were few in Web 1.0 with the vast majority of users simply acting as consumers of content". Personal web pages were common, consisting mainly of static pages hosted on ISP-run web servers, or on free web hosting services such as Tripod and the now-defunct GeoCities.
Some common design elements of a Web 1.0 site include:
Static pages rather than dynamic HTML.
Content provided from the server's filesystem rather than a relational database management system (RDBMS).
Pages built using Server Side Includes or Common Gateway Interface (CGI) instead of a web application written in a dynamic programming language such as Perl, PHP, Python or Ruby.
The use of HTML 3.2-era elements such as frames and tables to position and align elements on a page. These were often used in combination with spacer GIFs.
Proprietary HTML extensions, such as the <blink> and <marquee> tags, introduced during the first browser war.
Online guestbooks.
GIF buttons, graphics (typically 88×31 pixels in size) promoting web browsers, operating systems, text editors and various other products.
HTML forms sent via email. Support for server side scripting was rare on shared servers during this period. To provide a feedback mechanism for web site visitors, mailto forms were used. A user would fill in a form, and upon clicking the form's submit button, their email client would launch and attempt to send an email containing the form's details. The popularity and complications of the mailto protocol led browser developers to incorporate email clients into their browsers.
Terry Flew, in his third edition of New Media, described the differences between Web 1.0 and Web 2.0 as a
Flew believed these factors formed the trends that resulted in the onset of the Web 2.0 "craze".
2004–present: The Web as platform, ubiquity
Web 2.0
Web pages were initially conceived as structured documents based upon HTML. They could include images, video, and other content, although the use of media was initially relatively limited and the content was mainly static. By the mid-2000s, new approaches to sharing and exchanging content, such as blogs and RSS, rapidly gained acceptance on the Web. The video-sharing website YouTube launched the concept of user-generated content. As new technologies made it easier to create websites that behaved dynamically, the Web attained greater ease of use and gained a sense of interactivity which ushered in a period of rapid popularization. This new era also brought into existence social networking websites, such as Friendster, MySpace, Facebook, and Twitter, and photo- and video-sharing websites such as Flickr and, later, Instagram which gained users rapidly and became a central part of youth culture. Wikipedia's user-edited content quickly displaced the professionally-written Microsoft Encarta. The popularity of these sites, combined with developments in the technology that enabled them, and the increasing availability and affordability of high-speed connections made video content far more common on all kinds of websites. This new media-rich model for information exchange, featuring user-generated and user-edited websites, was dubbed Web 2.0, a term coined in 1999 by Darcy DiNucci and popularized in 2004 at the Web 2.0 Conference. The Web 2.0 boom drew investment from companies worldwide and saw many new service-oriented startups catering to a newly "democratized" Web.
JavaScript made the development of interactive web applications possible. Web pages could run JavaScript and respond to user input, but they could not interact with the network. Browsers could submit data to servers via forms and receive new pages, but this was slow compared to traditional desktop applications. Developers that wanted to offer sophisticated applications over the Web used Java or nonstandard solutions such as Adobe Flash or Microsoft's ActiveX.
Microsoft added a little-noticed feature called XMLHttpRequest to Internet Explorer in 1999, which enabled a web page to communicate with the server while remaining visible. Developers at Oddpost used this feature in 2002 to create the first Ajax application, a webmail client that performed as well as a desktop application. Ajax apps were revolutionary. Web pages evolved beyond static documents to full-blown applications. Websites began offering APIs in addition to webpages. Developers created a plethora of Ajax apps including widgets, mashups and new types of social apps. Analysts called it Web 2.0.
Browser vendors improved the performance of their JavaScript engines and dropped support for Flash and Java. Traditional client server applications were replaced by cloud apps. Amazon reinvented itself as a cloud service provider.
The use of social media on the Web has become ubiquitous in everyday life. The 2010s also saw the rise of streaming services, such as Netflix.
In spite of the success of Web 2.0 applications, the W3C forged ahead with their plan to replace HTML with XHTML and represent all data in XML. In 2004, representatives from Mozilla, Opera, and Apple formed an opposing group, the Web Hypertext Application Technology Working Group (WHATWG), dedicated to improving HTML while maintaining backward compatibility. For the next several years, websites did not transition their content to XHTML; browser vendors did not adopt XHTML2; and developers eschewed XML in favor of JSON. By 2007, the W3C conceded and announced they were restarting work on HTML and in 2009, they officially abandoned XHTML. In 2019, the W3C ceded control of the HTML specification, now called the HTML Living Standard, to WHATWG.
Microsoft rewrote their Edge browser in 2021 to use Chromium as its code base in order to be more compatible with Chrome.
Security, censorship and cybercrime
The increasing use of encrypted connections (HTTPS) enabled e-commerce and online banking. Nonetheless, the 2010s saw the emergence of various controversial trends, such as internet censorship and the growth of cybercrime, including web-based cyberattacks and ransomware.
Mobile
Early attempts to allow wireless devices to access the Web used simplified formats such as i-mode and WAP. Apple introduced the iPhone in 2007, the first smartphone with a full-featured browser. Other companies followed suit, and in 2011 smartphone sales overtook PCs. Since 2016, most visitors have accessed websites with mobile devices, which has led to the adoption of responsive web design.
Apple, Mozilla, and Google have taken different approaches to integrating smartphones with modern web apps. Apple initially promoted web apps for the iPhone, but then encouraged developers to make native apps. Mozilla announced Web APIs in 2011 to allow webapps to access hardware features such as audio, camera or GPS. Frameworks such as Cordova and Ionic allow developers to build hybrid apps. Mozilla released a mobile OS designed to run web apps in 2012, but discontinued it in 2015.
Google announced specifications for Accelerated Mobile Pages (AMP), and progressive web applications (PWA) in 2015. AMPs use a combination of HTML, JavaScript, and Web Components to optimize web pages for mobile devices; and PWAs are web pages that, with a combination of web workers and manifest files, can be saved to a mobile device and opened like a native app.
Web 3.0 and Web3
The extension of the Web to facilitate data exchange was explored as an approach to create a Semantic Web (sometimes called Web 3.0). This involved using machine-readable information and interoperability standards to enable context-understanding programs to intelligently select information for users. Continued extension of the Web has focused on connecting devices to the Internet, coined Intelligent Device Management. As Internet connectivity becomes ubiquitous, manufacturers have started to leverage the expanded computing power of their devices to enhance their usability and capability. Through Internet connectivity, manufacturers are now able to interact with the devices they have sold and shipped to their customers, and customers are able to interact with the manufacturer (and other providers) to access a lot of new content.
This phenomenon has led to the rise of the Internet of Things (IoT), where modern devices are connected through sensors, software, and other technologies that exchange information with other devices and systems on the Internet. This creates an environment where data can be collected and analyzed instantly, providing better insights and improving the decision-making process. Additionally, the integration of AI with IoT devices continues to improve their capabilities, allowing them to predict customer needs and perform tasks, increasing efficiency and user satisfaction.
Web3 (sometimes also referred to as Web 3.0) is an idea for a decentralized Web based on public blockchains, smart contracts, digital tokens and digital wallets.
Beyond Web 3.0
The next generation of the Web is often termed Web 4.0, but its definition is not clear. According to some sources, it is a Web that involves artificial intelligence, the internet of things, pervasive computing, ubiquitous computing and the Web of Things among other concepts. According to the European Union, Web 4.0 is "the expected fourth generation of the World Wide Web. Using advanced artificial and ambient intelligence, the internet of things, trusted blockchain transactions, virtual worlds and XR capabilities, digital and real objects and environments are fully integrated and communicate with each other, enabling truly intuitive, immersive experiences, seamlessly blending the physical and digital worlds".
Historiography
Historiography of the Web poses specific challenges, including disposable data, missing links, lost content and archived websites, which have consequences for web historians. Sites such as the Internet Archive aim to preserve content.
See also
Fediverse
History of email
History of hypertext
History of the Internet
History of telecommunication
History of web syndication technology
List of websites founded before 1995
Webring
Online services before the World Wide Web
Minitel
NABU Network
Quantum Link / AOL
CompuServe
GEnie
Usenet
Bulletin board system
Prestel
Scrapbook
:Category:Pre–World Wide Web online services
References
Further reading
External links
Web History: first 30 years
"A Little History of the World Wide Web: from 1945 to 1995", Dan Connolly, W3C, 2000
"The World Wide Web: Past, Present and Future", Tim Berners-Lee, August 1996
The History of the Web
Web Development History
A Brief(ish) History of the Web Universe, Brian Kardell
Web History Community Group, W3C
The history of the Web, W3C
info.cern.ch, the first website
World Wide Web
World Wide Web
World Wide Web | History of the World Wide Web | Technology | 7,962 |
71,493,287 | https://en.wikipedia.org/wiki/Time%20in%20Mauritania | Time in Mauritania is given by a single time zone, denoted as Greenwich Mean Time (GMT; UTC+00:00). Mauritania shares this time zone with several other countries, including fourteen in western Africa. Mauritania does not observe daylight saving time (DST).
IANA time zone database
In the IANA time zone database, Mauritania is given one zone in the file zone.tab – Africa/Nouakchott, which is an alias of Africa/Abidjan. The zone.tab entry for Mauritania lists the country code "MR" (the country's ISO 3166-1 alpha-2 code) alongside the zone name Africa/Nouakchott.
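The offset behaviour described above can be illustrated with a short Python sketch (standard-library zoneinfo, available from Python 3.9); the dates are arbitrary examples chosen from both halves of the year:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Africa/Nouakchott is Mauritania's zone.tab entry (country code MR);
# in the IANA database it is maintained as a link to Africa/Abidjan.
nouakchott = ZoneInfo("Africa/Nouakchott")

# Mauritania uses GMT (UTC+00:00) all year and observes no DST,
# so the offset is the same in January and July.
for month in (1, 7):
    dt = datetime(2024, month, 1, 12, 0, tzinfo=nouakchott)
    print(dt.isoformat(), dt.tzname(), dt.utcoffset())
```

Because the identifier is an alias, substituting "Africa/Abidjan" reports the same offsets.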
See also
Time in Africa
List of time zones by country
References
External links
Current time in Mauritania at Time.is
Time in Mauritania at TimeAndDate.com
Time by country
Geography of Mauritania
Time in Africa | Time in Mauritania | Physics | 206 |
4,439,177 | https://en.wikipedia.org/wiki/Violent%20disorder | Violent disorder is a statutory offence in England and Wales. It is created by section 2(1) of the Public Order Act 1986. Sections 2(1) to (4) of that Act provide:
(1) Where 3 or more persons who are present together use or threaten unlawful violence and the conduct of them (taken together) is such as would cause a person of reasonable firmness present at the scene to fear for their personal safety, each of the persons using or threatening unlawful violence is guilty of violent disorder.
(2) It is immaterial whether the 3 or more use or threaten unlawful violence simultaneously.
(3) No person of reasonable firmness need actually be, or be likely to be, present at the scene.
(4) Violent disorder may be committed in private as well as in public places.
"3 or more persons"
See the following cases:
R v Mahroof [1988] 88 Cr App R 317, CA
R v Fleming and Robinson [1989] Crim LR 658, CA
R v McGuigan and Cameron [1991] Crim LR 719, CA
"Violence"
This word is defined by section 8.
Mens rea
For the mens rea, see section 6(2).
Indictment
As to particularisation, see R v Mahroof [1988], 88 Cr App R 317, CA.
Alternative verdict
See sections 7(3) and (4).
Arrest
Before 1 January 2006 this offence was classified as an arrestable offence by virtue of section 24(1)(b) of the Police and Criminal Evidence Act 1984. See now sections 24 and 24A of that Act, as substituted by the Serious Organised Crime and Police Act 2005.
Mode of trial and sentence
Violent disorder is triable either way. A person guilty of violent disorder is liable on conviction on indictment to imprisonment for a term not exceeding five years, or to a fine, or to both, or, on summary conviction, to imprisonment for a term not exceeding six months, or to a fine not exceeding the statutory maximum, or to both.
The following cases are relevant:
R v Tomlinson, 157 JP 695, CA
R v Hebron and Spencer, 11 Cr App R (S) 226, [1989] Crim LR 839, CA
R v Watson & others (1990) 12 Cr App R (S) 477
R v Tyler and others, 96 Cr App R 332, [1993] Crim LR 60, CA
R v Green [1997] 2 Cr App R (S) 191
R v Chapman (2002) 146 SJ
R v Rees [2006] 2 Cr App R (S) 20
References
Fraser Simpson (2006). Blackstone's Police Manual, Volume 4: General Police Duties. Oxford University Press. p. 246.
Legal terminology
Violence
English laws | Violent disorder | Biology | 574 |
9,215,935 | https://en.wikipedia.org/wiki/Traymore%20Hotel | The Traymore Hotel was a resort in Atlantic City, New Jersey. Begun as a small boarding house in 1879, the hotel expanded and became one of the city's premier resorts. As Atlantic City's popularity as a resort town declined during the 1950s and 1960s, so did the Traymore's. By the early 1970s the hotel was abandoned and severely run down. It was imploded and demolished between April and May 1972, a full four years before New Jersey voters approved the referendum that legalized gambling in Atlantic City.
Beginnings
Like most of the pre-casino Atlantic City resorts, the Traymore went through several incarnations. It started off as a modest ten-room wooden cottage boarding house located at Illinois Avenue and the Boardwalk. The name "Traymore" came from the hotel's steadiest customer, "Uncle Al Harvey", a rich Marylander who had named his estate "Traymore" after his home town in Ireland.
The first hotel was rather flimsy, as it was destroyed by a severe winter storm on January 10, 1884. It was quickly rebuilt and enlarged. When rebuilt, the owners made the hotel stronger and more modern, adding indoor plumbing and bathrooms. They also added a spacious lawn between the hotel and the Boardwalk that proved to be valuable when a September 1889 storm struck the city. The lawn protected the hotel from any serious damage. The hotel's modern appointments led to it becoming very popular. It stayed open year-round, and by 1898 it grew into the city's largest hotel with over 450 rooms. By 1906 the Traymore's owner, Daniel White, hired the firm of Price and McLanahan to construct a new tower which brought the hotel right up to the boardwalk.
Expansion
By 1914, the Traymore, which had been the city's most popular hotel, now had stiff competition from the Marlborough-Blenheim Hotel, located across from the Traymore on Ohio Avenue and the Boardwalk. Owner Josiah White III, Daniel White's half brother, had contracted the services of Price and McLanahan to build an extension to his Marlborough House which had opened in 1902. The result was the modern Blenheim hotel which was one of the first hotels constructed using reinforced concrete.
Built during the autumn and winter of 1914–15, White contracted with Price and McLanahan to replace the existing wooden-frame Traymore with a massive concrete structure that would rival the Marlborough-Blenheim. Price's Traymore was built directly behind the 1906 tower, and was designed to take advantage of its ocean views: hotel wings jutted out further from the central tower toward Pacific Avenue, thus affording more guests ocean views. The new Traymore opened in time for the 1915 season, and was a success. Built with tan brick and capped by yellow-tiled domes, the Traymore instantly became the city's architectural showpiece when it opened in June 1915. The hotel was such a success that White commissioned a 25-story additional tower to be built, but was unable to secure funding for the project due to World War I.
The Traymore catered to an upscale clientele, and was described in 1924 as "the Taj Mahal of Atlantic City," decades before Donald Trump opened a casino resort with that name.
The Traymore featured four faucets in every bathtub: hot and cold city water, hot and cold ocean water. There was a fifth faucet in the sink for ice water.
The Traymore was leased by the US Military during World War II, as part of Army Air Force Basic Training Center No. 7. The forty-seven Atlantic City resort hotels taken over by the United States Military were collectively dubbed "Camp Boardwalk". The Traymore was operated jointly with the adjacent Chalfonte-Haddon Hall Hotel as the England General Hospital, which opened on April 28, 1944. The hospital was named for Lt. Col. Thomas Marcus England, who had worked with Walter Reed researching yellow fever in Cuba in 1900. The Traymore served as the convalescent reconditioning section of the hospital. The last patients left the hospital in June 1946 and the Traymore was returned to its owners and reopened soon after.
The Traymore Hotel Outdoor and Indoor Swimming Pools were built 1954 to designs by architect Samuel Juster of New York City.
Demise and present status
The hotel remained popular well into the 1950s, but as Atlantic City declined in the 1960s, the Traymore did as well. The availability of home air conditioning and swimming pools, coupled with inexpensive and frequent airline services to destinations in Florida and the Caribbean, led to the decline of Atlantic City as the premier ocean resort. By the early 1970s, the hotel was defunct and was causing its owners large financial deficits. It was decided to demolish the hotel, despite a campaign to save the architectural landmark.
On April 27, 1972 the hotel experienced the first of four planned controlled implosions implemented by Jack Loizeaux. By May 1972 the hotel was completely demolished. For a time, the once-famous hotel held the Guinness World Record for largest controlled demolition—with a capacity of nearly , the Traymore was the largest (though not highest) structure yet demolished. The spectacle is captured in the 1980 film Atlantic City.
Footage of the demolition also appears at the beginning of the 1974 Walt Disney film Herbie Rides Again, in which Alonzo Hawk demolishes numerous buildings.
Caesars Atlantic City purchased the land in the late 1970s and utilized it as a parking lot. The casino intended to develop a hotel there; however, the plan did not materialize. In 2006, Pinnacle Entertainment announced that it had purchased the Traymore site and the adjacent Sands Atlantic City casino hotel. Pinnacle demolished the Sands and planned to develop a new casino on the combined parcels. Harsh economic times later caused Pinnacle to delay construction of the new resort. In February 2010, the company announced that it had canceled its construction plans and would instead seek to sell the land. Most of the Traymore site remains a parking lot.
Popular culture
Traymore Hotel is one of the locations featured in Grace Livingston Hill's 1911 novel Aunt Crete's Emancipation.
It can be seen in several exterior scenes of the 1972 Bob Rafelson film The King of Marvin Gardens, which was shot in Atlantic City only a few months before the building was demolished.
Footage of the Traymore's demolition features in the opening of Louis Malle's 1980 film Atlantic City. Nevertheless, the Traymore was demolished in 1972 for financial reasons and not in anticipation of legalized gambling (a 1974 referendum to allow casinos throughout the state was not approved by New Jersey voters). Gambling was legalized four years after the demolition in 1976 with Resorts International being the first legal casino to open in 1978.
The HBO drama Boardwalk Empire used the Atlantic City skyline, circa 1920, as the back drop for the series opening titles, including both the Traymore and the famed Marlborough-Blenheim Hotel.
See also
List of tallest buildings in Atlantic City
References
Further reading
Includes numerous reproductions of architectural renderings and construction photographs.
Hotel buildings completed in 1915
Skyscraper hotels in Atlantic City, New Jersey
Demolished hotels in New Jersey
Hotels established in 1879
Buildings and structures demolished by controlled implosion
Buildings and structures demolished in 1972
1972 disestablishments in New Jersey
Former skyscraper hotels
Former National Register of Historic Places in New Jersey | Traymore Hotel | Engineering | 1,489 |
9,878,560 | https://en.wikipedia.org/wiki/Factual%20relativism | Factual relativism (also called epistemic relativism, epistemological relativism, alethic relativism, and cognitive relativism) argues that truth is relative. According to factual relativism, facts used to justify claims are understood to be relative and subjective to the perspective of those proving or falsifying the proposition.
This form of relativism has its own particular problem, what Maurice Mandelbaum in 1962 termed the "self-excepting fallacy." Largely because of the self-excepting fallacy, few authors in the philosophy of science accept alethic cognitive relativism.
Viewpoints
One school of thought compares scientific knowledge to the mythology of other cultures, arguing that it is merely our society's set of myths based on societal assumptions. Paul Feyerabend's comments in Against Method that "The similarities between science and myth are indeed astonishing" and "First-world science is one science among many" (from the introduction to the Chinese edition) are sometimes cited, although it is not clear whether Feyerabend meant them to be taken entirely seriously.
The strong program in the sociology of science is (in the words of founder David Bloor) "impartial with respect to truth and falsity". Elsewhere, Bloor and Barry Barnes have said "For the relativist [such as us] there is no sense attached to the idea that some standards or beliefs are really rational as distinct from merely locally accepted as such." In France, Bruno Latour has claimed that "Since the settlement of a controversy is the cause of Nature's representation, not the consequence, we can never use the outcome—Nature—to explain how and why a controversy has been settled."
Yves Winkin, a Belgian professor of communications, responded to a popular trial in which two witnesses gave contradicting testimony by telling the newspaper Le Soir that "There is no transcendent truth. [...] It is not surprising that these two people, representing two very different professional universes, should each set forth a different truth. Having said that, I think that, in this context of public responsibility, the commission can only proceed as it does."
The philosopher of science Gérard Fourez wrote, "What one generally calls a fact is an interpretation of a situation that no one, at least for the moment, wants to call into question."
British archaeologist Roger Anyon told The New York Times that "science is just one of many ways of knowing the world... The Zuni's world view is just as valid as the archeological viewpoint of what prehistory is about."
According to the Stanford Encyclopedia of Philosophy, "Relativism has been, in its various guises, both one of the most popular and most reviled philosophical doctrines of our time. Defenders see it as a harbinger of tolerance and the only ethical and epistemic stance worthy of the open-minded and tolerant. Detractors dismiss it for its alleged incoherence and uncritical intellectual permissiveness."
Related views and criticism
Larry Laudan's book Science and Relativism outlines the various philosophical points of view on the subject in the form of a dialogue.
Cognitive relativism has been criticized by both analytic philosophers and scientists.
See also
Aesthetic relativism
Alternative facts
Cultural relativism
Moral relativism
Notes
References
Maria Baghramian, Relativism, London: Routledge, 2004,
Ernest Gellner, Relativism and the Social Sciences, Cambridge University Press, 1985,
Nelson Goodman, Ways of Worldmaking. Indianapolis: Hackett, 1978, , Paperback
Barry Barnes, David Bloor, "Relativism, Rationalism and the Sociology of Knowledge". In Martin Hollis, Steven Lukes (eds.). Rationality and Relativism. MIT, 1982
Jack W. Meiland, Michael Krausz, Relativism, Cognitive and Moral, Notre Dame: University of Notre Dame Press, 1982,
Diederick Raven, Lieteke van Vucht Tijssen, Jan de Wolf, Cognitive Relativism and Social Science, 1992,
Markus Seidel, Epistemic Relativism: A Constructive Critique, Basingstoke: Palgrave Macmillan, 2014,
External links
Westacott, E. Cognitive Relativism, 2006, Internet Encyclopedia of Philosophy
Westacott, E. Relativism, 2005, Internet Encyclopedia of Philosophy
Relativism
Epistemological theories
Social epistemology
Internalism and externalism | Factual relativism | Technology | 927 |
4,397,993 | https://en.wikipedia.org/wiki/Decoupling%20capacitor | In electronics, a decoupling capacitor is a capacitor used to decouple (i.e. prevent electrical energy from transferring to) one part of a circuit from another. Noise caused by other circuit elements is shunted through the capacitor, reducing its effect on the rest of the circuit. For higher frequencies, an alternative name is bypass capacitor as it is used to bypass the power supply or other high-impedance component of a circuit.
Discussion
Active devices of an electronic system (e.g. transistors, integrated circuits, vacuum tubes) are connected to their power supplies through conductors with finite resistance and inductance. If the current drawn by an active device changes, the voltage drop from the power supply to the device will also change due to these impedances. If several active devices share a common path to the power supply, changes in the current drawn by one element may produce voltage changes large enough to affect the operation of others – voltage spikes or ground bounce, for example – so the change of state of one device is coupled to others through the common impedance to the power supply. A decoupling capacitor provides a bypass path for transient currents, instead of flowing through the common impedance (Don Lancaster, TTL Cookbook, Howard W. Sams, 1975, pp. 23–24).
The decoupling capacitor works as the device's local energy storage. The capacitor is placed between the power line and the ground of the circuit to which current is to be provided. According to the capacitor current–voltage relation
$i(t) = C \, \frac{dv(t)}{dt},$
a voltage drop between a power line and the ground results in a current drawn out from the capacitor to the circuit. When capacitance is large enough, sufficient current is supplied to maintain an acceptable range of voltage drop. The capacitor stores a small amount of energy that can compensate for the voltage drop in the power supply conductors to the capacitor. To reduce undesired parasitic equivalent series inductance, small and large capacitors are often placed in parallel, adjacent to individual integrated circuits (see § Placement).
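As a rough, illustrative calculation of the relation above (not a design rule), the following Python sketch estimates the capacitance needed for a local capacitor to supply a transient current for a given time while the rail sags no more than an allowed amount; the example numbers are arbitrary:

```python
def required_decoupling_capacitance(transient_current_a, duration_s, allowed_droop_v):
    """Capacitance in farads needed to supply `transient_current_a` for
    `duration_s` with no more than `allowed_droop_v` of voltage droop,
    from the capacitor relation i = C * dv/dt  =>  C = i * dt / dv."""
    return transient_current_a * duration_s / allowed_droop_v

# Example: a 0.5 A load transient lasting 10 ns with at most 50 mV of droop.
c_farads = required_decoupling_capacitance(0.5, 10e-9, 0.05)
print(f"required capacitance: {c_farads * 1e9:.0f} nF")  # prints 100 nF
```

In practice the chosen value is also influenced by the capacitor's parasitic inductance and resistance, discussed below.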
In digital circuits, decoupling capacitors also help prevent radiation of electromagnetic interference from relatively long circuit traces due to rapidly changing power supply currents.
Decoupling capacitors alone may not suffice in such cases as a high-power amplifier stage with a low-level pre-amplifier coupled to it. Care must be taken in the layout of circuit conductors so that heavy current at one stage does not produce power supply voltage drops that affect other stages. This may require re-routing printed circuit board traces to segregate circuits, or the use of a ground plane to improve the stability of power supply.
Decoupling
A bypass capacitor is often used to decouple a subcircuit from AC signals or voltage spikes on a power supply or other line. A bypass capacitor can shunt energy from those signals, or transients, past the subcircuit to be decoupled, right to the return path. For a power supply line, a bypass capacitor from the supply voltage line to the power supply return (neutral) would be used.
High frequencies and transient currents can flow through a capacitor to circuit ground instead of to the harder path of the decoupled circuit, but DC cannot go through the capacitor and continues to the decoupled circuit.
Another kind of decoupling is stopping a portion of a circuit from being affected by switching that occurs in another portion of the circuit. Switching in subcircuit A may cause fluctuations in the power supply or other electrical lines, but you do not want subcircuit B, which has nothing to do with that switching, to be affected. A decoupling capacitor can decouple subcircuits A and B so that B doesn't see any effects of the switching.
Switching subcircuits
In a subcircuit, switching will change the load current drawn from the source. Typical power supply lines show inherent inductance, which results in a slower response to changes in current. The supply voltage will drop across these parasitic inductances for as long as the switching event occurs. This transient voltage drop would be seen by other loads as well if the inductance between two loads is much lower compared to the inductance between the loads and the output of the power supply.
To decouple other subcircuits from the effect of the sudden current demand, a decoupling capacitor can be placed in parallel with the subcircuit, across its supply voltage lines. When switching occurs in the subcircuit, the capacitor supplies the transient current. Ideally, by the time the capacitor runs out of charge, the switching event has finished, so that the load can draw full current at normal voltage from the power supply and the capacitor can recharge. The best way to reduce switching noise is to design a PCB as a giant capacitor by sandwiching the power and ground planes across a dielectric material.
Sometimes parallel combinations of capacitors are used to improve response. This is because real capacitors have parasitic inductance, which causes the impedance to deviate from that of an ideal capacitor at higher frequencies.
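The effect of parasitic inductance can be made concrete with a simple series R–L–C model of a real capacitor. The sketch below uses hypothetical values for equivalent series resistance (ESR) and inductance (ESL), not figures from any particular part, and prints the impedance magnitude around the self-resonant frequency:

```python
import math

def capacitor_impedance_ohms(freq_hz, c_f, esr_ohms, esl_h):
    """Impedance magnitude of a real capacitor modelled as a series R-L-C."""
    w = 2.0 * math.pi * freq_hz
    reactance = w * esl_h - 1.0 / (w * c_f)
    return math.hypot(esr_ohms, reactance)

# Hypothetical 100 nF ceramic capacitor with 10 mOhm ESR and 1 nH ESL.
c, esr, esl = 100e-9, 0.010, 1e-9
f_res = 1.0 / (2.0 * math.pi * math.sqrt(esl * c))  # self-resonant frequency
print(f"self-resonance near {f_res / 1e6:.1f} MHz")
for f in (1e6, f_res, 100e6):
    z = capacitor_impedance_ohms(f, c, esr, esl)
    print(f"{f / 1e6:7.1f} MHz: |Z| = {z * 1000:.1f} mOhm")
```

Below the self-resonant frequency the part behaves capacitively and above it inductively, which is why a small ceramic capacitor is often combined in parallel with a larger bulk capacitor, as noted under Placement.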
Transient load decoupling
Transient load decoupling as described above is needed when there is a large load that gets switched quickly. The parasitic inductance in every (decoupling) capacitor may limit the suitable capacity and influence the appropriate type if switching occurs very fast.
Logic circuits tend to do sudden switching (an ideal logic circuit would switch from low voltage to high voltage instantaneously, with no middle voltage ever observable). So logic circuit boards often have a decoupling capacitor close to each logic IC connected from each power supply connection to a nearby ground. These capacitors decouple every IC from every other IC in terms of supply voltage dips.
These capacitors are often placed at each power source as well as at each analog component in order to ensure that the supplies are as steady as possible. Otherwise, an analog component with a poor power supply rejection ratio (PSRR) will copy fluctuations in the power supply onto its output.
In these applications, the decoupling capacitors are often called bypass capacitors to indicate that they provide an alternate path for high-frequency signals that would otherwise cause the normally steady supply voltage to change. Those components that require quick injections of current can bypass the power supply by receiving the current from the nearby capacitor. Hence, the slower power supply connection is used to charge these capacitors, and the capacitors actually provide large quantities of high-availability current.
Placement
A transient load decoupling capacitor is placed as close as possible to the device requiring the decoupled signal. This minimizes the amount of line inductance and series resistance between the decoupling capacitor and the device. The longer the conductor between the capacitor and the device, the more inductance is present.
Since capacitors differ in their high-frequency characteristics, decoupling ideally involves the use of a combination of capacitors. For example in logic circuits, a common arrangement is ~100 nF ceramic per logic IC (multiple ones for complex ICs), combined with electrolytic or tantalum capacitor(s) up to a few hundred μF per board or board section.
Example uses
Older printed circuit boards typically used through-hole decoupling capacitors, whereas modern boards typically have tiny surface-mount capacitors.
See also
Ceramic capacitor
Equivalent series inductance
Equivalent series resistance
Film capacitor
E-series of preferred numbers
References
External links
Choosing and Using Bypass Capacitors – application note from Intersil
Decoupling – decoupling guide for various frequencies by Henry W. Ott
Power Supply Noise Reduction – how to design effective supply bypassing and decoupling networks by Ken Kundert
ESR and Bypass Capacitor Self Resonant Behavior: How to Select Bypass Caps – article written by Douglas Brooks
Circuit Board Decoupling Information – decoupling guidelines for various types of circuit boards
Basic Principles of Signal Integrity – Altera whitepaper
Bypass Capacitors, an Interview With Todd Hubing – by Douglas Brooks
Capacitors | Decoupling capacitor | Physics | 1,749 |
4,628,618 | https://en.wikipedia.org/wiki/Making%20the%20Most%20of%20the%20Micro | Making the Most of the Micro is a TV series broadcast in 1983 as part of the BBC's Computer Literacy Project. It followed the earlier series The Computer Programme. Unlike its predecessor, Making the Most of the Micro delved somewhat deeper into the technicalities and uses that microcomputers could be put to, once again mainly using the BBC Micro in the studio for demonstration purposes. The series was followed by Micro Live.
Presenters
Ian McNaught-Davis (known as 'Mac') was once again the anchorman but Chris Serle and Gill Nevill were absent, instead various experts were brought in as required to demonstrate some of the more technical aspects of the microcomputers and their uses. John Coll was the main technical 'bod' (he had also written the User Guide for the BBC Micro along with other manuals) and Ian Trackman also featured - he wrote most of the software that was used for demonstrating certain features of the microcomputer, not only for this series but also The Computer Programme and Computers in Control. The programme also featured location reports to demonstrate various practical and business uses of microcomputers.
The title and incidental music was by Roger Limb of the BBC Radiophonic Workshop.
Programmes
The series was split into 10 programmes, each about 25 minutes long and dealing with a particular subject area. They were as follows (original airdates in brackets):
The Versatile Machine (10 January 1983)
Getting Down to BASIC (17 January 1983)
Strings and Things (24 January 1983)
Introducing Graphics (31 January 1983)
Keeping a Record (7 February 1983)
Getting Down to Business (14 February 1983)
Sounds Interesting (21 February 1983)
Everything Under Control (28 February 1983)
Moving Pictures (7 March 1983)
At the End of the Line (14 March 1983)
See also
Micro Men
The Computer Programme
Computers in Control
Micro Live
External links
Ian Trackman's web site
BBC Two original programming
Computer science education in the United Kingdom
Computer television series
1983 British television series debuts
1983 British television series endings
British English-language television shows | Making the Most of the Micro | Technology | 418 |
13,629,244 | https://en.wikipedia.org/wiki/Listeria%20Hfq%20binding%20LhrA | Listeria Hfq binding LhrA is a ncRNA that was identified by screening for RNA molecules which co-immunoprecipitated with the RNA chaperone Hfq. This RNA is transcribed from a region overlapping with a predicted protein of unknown function (Lmo2257) and is located between a putative intracellular protease and a putative protein of the ribulose-phosphate 3 epimerase family. It is highly expressed in the stationary growth phase but the function is unknown. It is proposed to be a regulatory RNA which controls gene expression at the post transcriptional level by binding the target mRNA in an Hfq dependent fashion. This RNA molecule appears to be conserved amongst Listeria species but has not been identified in other bacterial species.
See also
Listeria Hfq binding LhrC
References
External links
Non-coding RNA | Listeria Hfq binding LhrA | Chemistry | 178 |
7,741,384 | https://en.wikipedia.org/wiki/Split%20nut | A split nut is a nut that is split lengthwise into two pieces (opposed halves) so that its female thread may be opened and closed over the male thread of a bolt or leadscrew. This allows the nut, when open, to move along the screw without the screw turning (or, vice versa, to allow the screw to pass through the nut without turning). Then, when the nut is closed, it resumes the normal movement of a nut on a screw (in which axial travel is linked to rotational travel).
A split nut assembly is often used in positioning systems, for example in the leadscrew of a lathe. It is one of the machine elements that makes single-point threading practical on manual (non-CNC) lathes. The very earliest screw-cutting lathes (in the late 18th and early 19th centuries) did not have them, but within a few decades, split nuts were common on lathes.
The two halves of the nut have chamfered ends (60° to the axis), which helps the threads to find engagement during the closing action. Usually, the screw and nut are also oiled for lubrication. Such provisions prolong the service life of the threads by minimizing wear.
Split nuts work best with trapezoidal threads.
Split nuts may not engage and disengage with multi-start threads due to the overlapping leads.
References
Nuts (hardware) | Split nut | Engineering | 294 |
62,897,234 | https://en.wikipedia.org/wiki/Penis%20clamp | Penis clamp is an external penis compression device that treats male urinary incontinence. Incontinence clamps for men are applied to compress the urethra to compensate for the malfunctioning of the natural urinary sphincter, preventing leakage from the bladder with minimal restriction of blood flow.
Description
These devices are crafted to block or compress the urethra, thus preventing urine leakage. They are applied externally and are typically user-friendly and comfortable to use. Compression devices may vary in shape and size, but they are generally made of flexible and soft materials that adapt to the anatomical contour. Some models come with adjustable settings to customize the level of compression according to individual needs. Models of urethral clamping devices date back to the 1920s. They are most commonly made from stainless steel and plastic on the outer surface and silicone or rubber on the inner surface. They are usually applied as a cost-effective solution to urinary incontinence.
Types of devices
Cunningham Penile Clamp: This type of clamp is placed around the penis. It helps to stop urine leakage by compressing the urethra, the tube through which urine exits. This is achieved by controlled squeezing, preventing urine from escaping.
Flexible Uriclak Device: Another type of clamp that is also placed on the penis. It functions similarly to the Cunningham penile clamp by compressing the urethra, but in this case, it does not have closures. It is flexible and comfortably adapts to halt urine flow.
Advantages and benefits
Non-invasive: These devices do not require surgery or invasive procedures, making them an attractive option for individuals looking to avoid surgical interventions.
Customization: Penile clamps allow for personalized adjustments to cater to individual needs and preferences.
Independence and Freedom: By offering greater control over urinary incontinence, incontinence devices help individuals maintain an active lifestyle and engage in various activities without worries.
Effectiveness: Many users have experienced a significant reduction in urine leaks and an improvement in their quality of life after using penile devices.
Risks
Usually, these devices are safe and effective. None of the penile compression devices cause sustained irritation or impaired blood flow, and patients generally recover well around 40 minutes after the device is removed. It is recommended to de-clamp these devices at a regular interval of four hours.
In the instruction manuals, manufacturers allow continuous use of penis clamps and recommend repositioning them every 2-3 hours (each time after urination).
Customers should not buy this type of health product on non-specialized websites, as patients may incur serious risks. Urinary clamps imported directly from Asia may not have passed the necessary medical checks.
See also
Urinary incontinence management
Artificial urinary sphincter (AUS)
Urinary catheterization
Intermittent catheterisation
References
Urology
Urological conditions
Medical devices
Urologic procedures | Penis clamp | Biology | 618 |
16,265,386 | https://en.wikipedia.org/wiki/Kepler%20scientific%20workflow%20system | Kepler is a free software system for designing, executing, reusing, evolving, archiving, and sharing scientific workflows.
Kepler's facilities provide process and data monitoring, provenance information, and high-speed data movement. Workflows in general, and scientific workflows in particular, are directed graphs where the nodes represent discrete computational components, and the edges represent paths along which data and results can flow between components.
In Kepler, the nodes are called 'Actors' and the edges are called 'channels'. Kepler includes a graphical user interface for composing workflows in a desktop environment, a runtime engine for executing workflows within the GUI and independently from a command-line, and a distributed computing option that allows workflow tasks to be distributed among compute nodes in a computer cluster or computing grid. The Kepler system principally targets the use of a workflow metaphor for organizing computational tasks that are directed towards particular scientific analysis and modeling goals. Thus, Kepler scientific workflows generally model the flow of data from one step to another in a series of computations that achieve some scientific goal.
Scientific workflow
A scientific workflow is the process of combining data and processes into a configurable, structured set of steps that implement semi-automated computational solutions to a scientific problem. Scientific workflow systems often provide graphical user interfaces to combine different technologies along with efficient methods for using them, and thus increase the efficiency of the scientists.
Access to scientific data
Kepler provides direct access to scientific data that has been archived in many of the commonly used data archives. For example, Kepler provides access to data stored in the Knowledge Network for Biocomplexity (KNB) Metacat server and described using Ecological Metadata Language. Additional data sources that are supported include data accessible using the DiGIR protocol, the OPeNDAP protocol, GridFTP, JDBC, SRB, and others.
Models of Computation
Kepler differs from many of the other bioinformatics workflow management systems in that it separates the structure of the workflow model from its model of computation, such that different models for the computation of the workflow can be bound to a given workflow graph. Kepler inherits several common models of computation from the Ptolemy system, including Synchronous Data Flow (SDF), Continuous Time (CT), Process Network (PN), and Dynamic Data Flow (DDF), among others.
Hierarchical workflows
Kepler supports hierarchy in workflows, which allows complex tasks to be composed of simpler components. This feature allows workflow authors to build re-usable, modular components that can be saved for use across many different workflows.
Workflow semantics
Kepler provides a model for the semantic annotation of workflow components using terms drawn from an ontology. These annotations support many advanced features, including improved search capabilities, automated workflow validation, and improved workflow editing.
Sharing workflows
Kepler components can be shared by exporting the workflow or component into a Kepler Archive (KAR) file, which is an extension of the JAR file format from Java. Once a KAR file is created, it can be emailed to colleagues, shared on web sites, or uploaded to the Kepler Component Repository. The Component Repository is centralized system for sharing Kepler workflows that is accessible via both a web portal and a web service interface. Users can directly search for and utilize components from the repository from within the Kepler workflow composition GUI.
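Because a KAR file extends the JAR format, which is itself a ZIP archive, its contents can be listed with ordinary archive tools. The following Python sketch relies only on that property; the file name is a placeholder and the internal layout of a KAR is not described here:

```python
import zipfile

def list_kar_contents(kar_path):
    """Print the entries of a Kepler Archive (KAR) file.

    KAR extends the JAR format, and JAR files are ZIP archives, so the
    standard zipfile module can open them.
    """
    with zipfile.ZipFile(kar_path) as kar:
        for info in kar.infolist():
            print(f"{info.file_size:>10} bytes  {info.filename}")

# Hypothetical file name, for illustration only.
# list_kar_contents("MyWorkflow.kar")
```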
Provenance
Provenance is a critical concept in scientific workflows, since it allows scientists to understand the origin of their results, to repeat their experiments, and to validate the processes that were used to derive data products. In order for a workflow to be reproduced, provenance information must be recorded that indicates where the data originated, how it was altered, and which components and what parameter settings were used. This will allow other scientists to re-conduct the experiment, confirming the results.
Little support exists in current systems to allow end-users to query provenance information in scientifically meaningful ways, in particular when advanced workflow execution models go beyond simple DAGs (as in process networks).
Kepler history
The Kepler Project was created in 2002 by members of the Science Environment for Ecological Knowledge (SEEK) project and the Scientific Data Management (SDM) project. The project was founded by researchers at the National Center for Ecological Analysis and Synthesis (NCEAS) at the University of California, Santa Barbara and the San Diego Supercomputer Center at the University of California, San Diego. Kepler extends Ptolemy II, which is a software system for modeling, simulation, and design of concurrent, real-time, embedded systems developed at UC Berkeley. Collaboration on Kepler quickly grew as members of various scientific disciplines realized the benefits of scientific workflows for analysis and modeling and began contributing to the system. As of 2008, Kepler collaborators come from many science disciplines, including ecology, molecular biology, genetics, physics, chemistry, conservation science, oceanography, hydrology, library science, computer science, and others.
Kepler is a workflow orchestration engine in which workflows are composed of components called actors, making computational work easier to organize and automate.
See also
Apache Taverna
Discovery Net
VisTrails
LONI Pipeline
Bioinformatics workflow management systems
DataONE Investigator Toolkit
References
External links
Kepler Project website
Kepler Component Repository
Ptolemy II project website
Knowledge Network for Biocomplexity (KNB) Data archive
List of software tools related to workflows on the DataONE website
Workflow applications
Bioinformatics software
Free and open-source software
Software using the BSD license
Free software programmed in Java (programming language) | Kepler scientific workflow system | Biology | 1,149 |
30,419,251 | https://en.wikipedia.org/wiki/%CE%91-Neoendorphin | α-Neoendorphin is an endogenous opioid peptide with a decapeptide structure and the amino acid sequence Tyr-Gly-Gly-Phe-Leu-Arg-Lys-Tyr-Pro-Lys.
α-Neoendorphin is a neuropeptide. Its precursor is prodynorphin (proenkephalin B). Researchers and anatomists have not yet studied the distribution of α-neoendorphin in humans in detail. However, some studies support the presence of α-neoendorphin immunoreactive fibers throughout the human brainstem. According to a study by Duque, Ewing, Arturo Mangas, Pablo Salinas, Zaida Díaz-cabiale, José Narváez, and Rafael Coveñas, α-neoendorphin immunoreactive fibers can be found in the caudal part of the solitary nucleus and in the caudal and gelatinosa parts of the spinal trigeminal nucleus, while only a low density was found in the central grey matter of the medulla.
See also
β-Neoendorphin
References
Opioid peptides
Decapeptides | Α-Neoendorphin | Chemistry,Biology | 262 |
58,455,439 | https://en.wikipedia.org/wiki/Small%20planet%20radius%20gap | The small planet radius gap (also called the Fulton gap, photoevaporation valley, or Sub-Neptune Desert) is an observed scarcity of planets with radii between 1.5 and 2 times Earth's radius, likely due to photoevaporation-driven mass loss. A bimodality in the Kepler exoplanet population was first observed in 2011 and attributed to the absence of significant gas atmospheres on close-in, low-mass planets. This feature was noted as possibly confirming an emerging hypothesis that photoevaporation could drive atmospheric mass loss. This would lead to a population of bare, rocky cores with smaller radii at small separations from their parent stars, and planets with thick hydrogen- and helium-dominated envelopes with larger radii at larger separations. The bimodality in the distribution was confirmed with higher-precision data in the California-Kepler Survey in 2017, which was shown to match the predictions of the photoevaporative mass-loss hypothesis later that year.
Despite the implication of the word 'gap', the Fulton gap does not actually represent a range of radii completely absent from the observed exoplanet population, but rather a range of radii that appear to be relatively uncommon. As a result, 'valley' is often used in place of 'gap'. The specific term "Fulton gap" is named for Benjamin J. Fulton, whose doctoral thesis included precision radius measurements that confirmed the scarcity of planets between 1.5 and 2 Earth radii, for which he won the Robert J. Trumpler Award, although the existence of this radius gap had been noted along with its underlying mechanisms as early as 2011, 2012 and 2013.
Within the photoevaporation model of Owen and Wu, the radius gap arises as planets with H/He atmospheres that double the core's radius are the most stable to atmospheric mass-loss. Planets with atmospheres larger than this are vulnerable to erosion and their atmospheres evolve towards a size that doubles the core's radius. Planets with smaller atmospheres undergo runaway loss, leaving them with no H/He dominated atmosphere.
Other possible explanations
Runaway gas accretion by larger planets.
Observational bias favoring easier detection of hot ocean planets with extended steam atmospheres.
See also
References
Exoplanetology
Planetary science
Radii | Small planet radius gap | Astronomy | 481 |
34,974,817 | https://en.wikipedia.org/wiki/C19H18O6 | The molecular formula C19H18O6 (molar mass: 342.34 g/mol, exact mass: 342.1103 u) may refer to:
Zapotin, a flavone
Decarboxylated 8,5'-diferulic acid, a diferulic acid | C19H18O6 | Chemistry | 83 |
39,154,794 | https://en.wikipedia.org/wiki/Tunable%20resistive%20pulse%20sensing | Tunable resistive pulse sensing (TRPS) is a single-particle technique used to measure the size, concentration and zeta potential of particles as they pass through a size-tunable nanopore.
The technique adapts the principle of resistive pulse sensing, which monitors current flow through an aperture, combined with the use of tunable nanopore technology, allowing the passage of ionic current and particles to be regulated by adjusting the pore size. The addition of the tunable nanopore allows for the measurement of a wider range of particle sizes and improves accuracy.
Technique
Particles crossing a nanopore are detected one at a time as a transient change in the ionic current flow, which is denoted as a blockade event with its amplitude denoted as the blockade magnitude. As blockade magnitude is proportional to particle size, accurate particle sizing can be achieved after calibration with a known standard. This standard is composed of particles of a known size and concentration. For TRPS, carboxylated polystyrene particles are often used.
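A minimal sketch of the calibration idea is given below in Python. It assumes the common resistive-pulse approximation that blockade magnitude scales with particle volume, so relative diameters follow a cube-root law; the scaling exponent is left as a parameter because the exact calibration model depends on the instrument, and all numbers are invented examples:

```python
def calibrated_diameter_nm(blockade_sample, blockade_cal, diameter_cal_nm,
                           exponent=1.0 / 3.0):
    """Estimate a particle diameter from its blockade magnitude, relative to a
    calibration particle of known diameter.

    exponent = 1/3 corresponds to blockade magnitude proportional to particle
    volume (a common resistive-pulse approximation); other calibration models
    may use a different exponent.
    """
    return diameter_cal_nm * (blockade_sample / blockade_cal) ** exponent

# Invented example: 200 nm carboxylated polystyrene calibration particles,
# and a sample particle whose blockade is 35% of the calibration blockade.
print(f"{calibrated_diameter_nm(0.35, 1.0, 200.0):.0f} nm")  # ~141 nm
```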
Nanopore-based detection allows particle-by-particle assessment of complex mixtures. By selecting an appropriately sized nanopore and adjusting its stretch, the nanopore size can be optimized for particle size and improve measurement accuracy.
Adjustments to nanopore stretch, in combination with a fine-control of pressure and voltage allow TRPS to determine sample concentration and to accurately derive individual particle zeta potential in addition to particle size information.
Applications
TRPS was developed by Izon Science Limited, producer of commercially available nanopore-based particle characterization systems. Izon Science Limited currently sell one TRPS device, known as the "Exoid". Previous devices include the "qNano", the "qNano Gold" and the "qViron". These systems have been applied to measure a wide range of biological and synthetic particle types including viruses and nanoparticles. TRPS has been applied in both academic and industrial research fields, including:
Drug delivery research (e.g. lipid nanoparticles and liposomes)
Extracellular vesicles such as exosomes
Virology and vaccine production
Biomedical diagnostics
Microfluidics
References
Nanotechnology
Nanoparticles | Tunable resistive pulse sensing | Materials_science,Engineering | 448 |
53,754,179 | https://en.wikipedia.org/wiki/Zindoxifene | Zindoxifene (INN; former developmental code names D-16726, NSC-341952) is a nonsteroidal selective estrogen receptor modulator (SERM) that was under development in the 1980s and early 1990s for the treatment of breast cancer but was not marketed. It showed estrogenic-like activity in preclinical studies and failed to demonstrate effectiveness as a treatment for breast cancer in clinical trials. Zindoxifene was the lead compound of the distinct 2-phenylindole class of SERMs, and the marketed SERM bazedoxifene was derived from the major active metabolite of zindoxifene, D-15414. Zindoxifene was first described in 1984.
References
External links
Zindoxifene - AdisInsight
Abandoned drugs
Acetate esters
Hormonal antineoplastic drugs
Indoles
Selective estrogen receptor modulators | Zindoxifene | Chemistry | 193 |
759,298 | https://en.wikipedia.org/wiki/Linear%20density | Linear density is the measure of a quantity of any characteristic value per unit of length. Linear mass density (titer in textile engineering, the amount of mass per unit length) and linear charge density (the amount of electric charge per unit length) are two common examples used in science and engineering.
The term linear density or linear mass density is most often used when describing the characteristics of one-dimensional objects, although linear density can also be used to describe the density of a three-dimensional quantity along one particular dimension. Just as density is most often used to mean mass density, the term linear density likewise often refers to linear mass density. However, this is only one example of a linear density, as any quantity can be measured in terms of its value along one dimension.
Linear mass density
Consider a long, thin rod of mass $M$ and length $L$. To calculate the average linear mass density, $\bar{\lambda}_m$, of this one dimensional object, we can simply divide the total mass, $M$, by the total length, $L$:
$\bar{\lambda}_m = \frac{M}{L}$
If we describe the rod as having a varying mass (one that varies as a function of position along the length of the rod, $l$), we can write:
$m = m(l)$
Each infinitesimal unit of mass, $dm$, is equal to the product of its linear mass density, $\lambda_m$, and the infinitesimal unit of length, $dl$:
$dm = \lambda_m \, dl$
The linear mass density can then be understood as the derivative of the mass function with respect to the one dimension of the rod (the position along its length, $l$):
$\lambda_m = \frac{dm}{dl}$
The SI unit of linear mass density is the kilogram per meter (kg/m).
Linear density of fibers and yarns can be measured by many methods. The simplest one is to measure a length of material and weigh it. However, this requires a large sample and masks the variability of linear density along the thread, and is difficult to apply if the fibers are crimped or otherwise cannot lie flat when relaxed. If the density of the material is known and the fibers can be measured individually and have a simple shape, a more accurate method is direct imaging of the fiber with a scanning electron microscope to measure the diameter and calculate the linear density from it. Finally, linear density can be measured directly with a vibroscope: the sample is tensioned between two hard points, mechanical vibration is induced, and the fundamental frequency is measured.
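For the vibroscope method, the standard vibrating-string relation f = (1/(2L))·sqrt(T/μ) connects the measured fundamental frequency f of a sample of length L under tension T to its linear mass density μ. The Python sketch below simply inverts that relation; the numeric values are illustrative only:

```python
def linear_density_from_vibroscope(tension_n, length_m, fundamental_hz):
    """Linear mass density in kg/m from the vibrating-string relation
    f = (1/(2L)) * sqrt(T/mu)  =>  mu = T / (4 * L**2 * f**2)."""
    return tension_n / (4.0 * length_m**2 * fundamental_hz**2)

# Illustrative values: a 0.02 m fibre sample under 0.01 N tension
# resonating at a fundamental frequency of 1000 Hz.
mu = linear_density_from_vibroscope(0.01, 0.02, 1000.0)
print(f"{mu:.2e} kg/m = {mu * 1e6:.2f} tex")  # 1 tex = 1 g/km = 1 mg/m
```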
Linear charge density
Consider a long, thin wire of charge $Q$ and length $L$. To calculate the average linear charge density, $\bar{\lambda}_q$, of this one dimensional object, we can simply divide the total charge, $Q$, by the total length, $L$:
$\bar{\lambda}_q = \frac{Q}{L}$
If we describe the wire as having a varying charge (one that varies as a function of position along the length of the wire, $l$), we can write:
$q = q(l)$
Each infinitesimal unit of charge, $dq$, is equal to the product of its linear charge density, $\lambda_q$, and the infinitesimal unit of length, $dl$:
$dq = \lambda_q \, dl$
The linear charge density can then be understood as the derivative of the charge function with respect to the one dimension of the wire (the position along its length, $l$):
$\lambda_q = \frac{dq}{dl}$
Notice that these steps were exactly the same ones we took before to find $\lambda_m$.
The SI unit of linear charge density is the coulomb per meter (C/m).
Other applications
In drawing or printing, the term linear density also refers to how densely or heavily a line is drawn.
The most famous abstraction of linear density is the probability density function of a single random variable.
Units
Common units include:
kilogram per meter (using SI base units)
ounce (mass) per foot
ounce (mass) per inch
pound (mass) per yard: used in the North American railway industry for the linear density of rails
pound (mass) per foot
pound (mass) per inch
tex, a unit of measure for the linear density of fibers, defined as the mass in grams per 1,000 meters
denier, a unit of measure for the linear density of fibers, defined as the mass in grams per 9,000 meters
decitex (dtex), a unit for the linear density of fibers, defined as the mass in grams per 10,000 meters
See also
Density
Area density
Columnar density
Paper density
Linear number density
References
Density
Length | Linear density | Physics,Mathematics | 826 |
962,148 | https://en.wikipedia.org/wiki/Beehive%20Cluster | The Beehive Cluster (also known as Praesepe (Latin for "manger", "cot" or "crib"), M44, NGC 2632, or Cr 189), is an open cluster in the constellation Cancer. One of the nearest open clusters to Earth, it contains a larger population of stars than other nearby bright open clusters, holding around 1,000 stars. Under dark skies, the Beehive Cluster looks like a small nebulous object to the naked eye, and has been known since ancient times. Classical astronomer Ptolemy described it as a "nebulous mass in the breast of Cancer". It was among the first objects that Galileo studied with his telescope.
Age and proper motion coincide with those of the Hyades, suggesting they may share similar origins. Both clusters also contain red giants and white dwarfs, which represent later stages of stellar evolution, along with many main sequence stars.
Distance to M44 is often cited to be between 160 and 187 parsecs (520–610 light years), but the revised Hipparcos parallaxes (2009) for Praesepe members and the latest infrared color-magnitude diagram favors an analogous distance of 182 pc. There are better age estimates of around 600 million years (compared to about 625 million years for the Hyades). The diameter of the bright inner cluster core is about 7.0 parsecs (23 light years).
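The quoted distances follow from the elementary parallax relation d [pc] = 1/p [arcsec]. The small Python check below (illustrative only) converts the distance estimates above into light-years and into the corresponding annual parallax:

```python
PC_TO_LY = 3.2616  # approximate light-years per parsec

def parallax_mas(distance_pc):
    """Annual parallax in milliarcseconds for a distance in parsecs (d = 1/p)."""
    return 1000.0 / distance_pc

for d_pc in (160.0, 182.0, 187.0):
    print(f"{d_pc:5.0f} pc ~ {d_pc * PC_TO_LY:5.0f} ly, "
          f"parallax ~ {parallax_mas(d_pc):.2f} mas")
```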
At 1.5° across, the cluster easily fits within the field of view of binoculars or low-powered small telescopes. Regulus, Castor, and Pollux are guide stars.
History
In 1609, Galileo first telescopically observed the Beehive and was able to resolve it into 40 stars. Charles Messier added it to his famous catalog in 1769 after precisely measuring its position in the sky. As with the Orion Nebula and the Pleiades cluster, Messier's inclusion of the Beehive has been noted as curious, as most of Messier's objects were much fainter and more easily confused with comets. Another possibility is that Messier simply wanted to have a larger catalog than his scientific rival Lacaille, whose 1755 catalog contained 42 objects, and so he added some well-known bright objects to boost his list. Wilhelm Schur, as director of the Göttingen Observatory, drew a map of the cluster in 1894.
Ancient Greeks and Romans saw this object as a manger from which two donkeys, the adjacent stars Asellus Borealis and Asellus Australis, are eating; these are the donkeys that Dionysos and Silenus rode into battle against the Titans.
Hipparchus (c.130 BC) refers to the cluster as Nephelion ("Little Cloud") in his star catalog. Claudius Ptolemy's Almagest includes the Beehive Cluster as one of seven "nebulae" (four of which are real), describing it as "The Nebulous Mass in the Breast (of Cancer)". Aratus (c.260–270 BC) calls the cluster Achlus or "Little Mist" in his poem Phainomena.
Johann Bayer showed the cluster as a nebulous star on his Uranometria atlas of 1603, and labeled it Epsilon. The letter is now applied specifically to the brightest star of the cluster Epsilon Cancri, of magnitude 6.29.
This perceived nebulous object is in the Ghost (Gui Xiu), the 23rd lunar mansion of ancient Chinese astrology. Ancient Chinese skywatchers saw this as a ghost or demon riding in a carriage and likened its appearance to a "cloud of pollen blown from willow catkins". It was also known by the somewhat less romantic name of Jishi qi (積屍氣, also transliterated Tseih She Ke), the "Exhalation of Piled-up Corpses". It is also known simply as Jishi (積屍), "cumulative corpses".
Morphology and composition
Like many star clusters of all kinds, Praesepe has experienced mass segregation. This means that bright massive stars are concentrated in the cluster's core, while dimmer and less massive stars populate its halo (sometimes called the corona). The cluster's core radius is estimated at 3.5 parsecs (11.4 light years); its half-mass radius is about 3.9 parsecs (12.7 light years); and its tidal radius is about 12 parsecs (39 light years). However, the tidal radius also includes many stars that are merely "passing through" and not bona fide cluster members.
Altogether, the cluster contains at least 1000 gravitationally bound stars, for a total mass of about 500–600 Solar masses. A recent survey counts 1010 high-probability members, of which 68% are M dwarfs, 30% are Sun-like stars of spectral classes F, G, and K, and about 2% are bright stars of spectral class A. Also present are five giant stars, four of which have spectral class K0 III and the fifth G0 III.
So far, eleven white dwarfs have been identified, representing the final evolutionary phase of the cluster's most massive stars, which originally belonged to spectral type B. Brown dwarfs, however, are rare in this cluster, probably because they have been lost by tidal stripping from the halo. A brown dwarf has been found in the eclipsing binary system AD 3116.
The cluster has a visual brightness of magnitude 3.7. Its brightest stars are blue-white and of magnitude 6 to 6.5. 42 Cancri is a confirmed member.
Planets
In September 2012, two planets which orbit separate stars were discovered in the Beehive Cluster. The finding was significant for being the first planets detected orbiting stars like Earth's Sun that were situated in stellar clusters. Planets had previously been detected in such clusters, but not orbiting stars like the Sun.
The planets have been designated Pr0201 b and Pr0211 b. The 'b' at the end of their names indicates that the bodies are planets. The discoveries are what have been termed hot Jupiters, massive gas giants that, unlike the planet Jupiter, orbit very close to their parent stars.
The announcement describing the planetary finds, written by Sam Quinn as the lead author, was published in the Astrophysical Journal Letters. Quinn's team worked with David Latham of the Harvard–Smithsonian Center for Astrophysics, utilizing the Smithsonian Astrophysical Observatory's Fred Lawrence Whipple Observatory.
In 2016 additional observations found a second planet in the Pr0211 system, Pr0211 c. This made Pr0211 the first multi-planet system to be discovered in an open cluster.
The Kepler space telescope, in its K2 mission, discovered planets around several more stars in the Beehive Cluster. The stars K2-95, K2-100, K2-101, K2-102, K2-103, and K2-104 host a single planet each, and K2-264 has a two-planet system.
See also
List of Messier objects
Cancer (Chinese astronomy)
List of open clusters
Messier object
New General Catalogue
Open cluster family
Open cluster remnant
References
External links
M44 Photo detail Dark Atmospheres
Messier 44, SEDS Messier pages
NightSkyInfo.com – M44, the Beehive Cluster
Praesepe (M44) at Constellation Guide
Cancer (constellation)
Orion–Cygnus Arm
Open clusters
Beehive Cluster
NGC objects
Astronomical objects known since antiquity
Dionysus
Silenus | Beehive Cluster | Astronomy | 1,558 |
24,326,721 | https://en.wikipedia.org/wiki/C25H31NO3 | The molecular formula C25H31NO3 (molar mass: 393.52 g/mol, exact mass: 393.2304 u) may refer to:
HT-0712, also known as IPL-455903
Testosterone nicotinate
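The molar mass given above can be recomputed from conventional atomic weights; the snippet below is a simple illustrative check (the atomic-weight values are standard reference figures, not taken from this entry, and the final digit depends on which convention is used).

```python
# Recompute the molar mass of C25H31NO3 from conventional atomic weights
# (IUPAC reference values; the last digit depends on the convention chosen).
atomic_weight = {"C": 12.0107, "H": 1.00794, "N": 14.0067, "O": 15.9994}  # g/mol
composition = {"C": 25, "H": 31, "N": 1, "O": 3}

molar_mass = sum(atomic_weight[el] * n for el, n in composition.items())
print(f"molar mass ≈ {molar_mass:.2f} g/mol")   # ≈ 393.52 g/mol
```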
Molecular formulas | C25H31NO3 | Physics,Chemistry | 75 |
1,643,572 | https://en.wikipedia.org/wiki/Jos%C3%A9%20Leite%20Lopes | José Leite Lopes (October 28, 1918 – June 12, 2006) was a Brazilian theoretical physicist who worked in the field of quantum field theory and particle physics.
Life
Leite Lopes began his university studies in 1935, enrolling in industrial chemistry at the Chemistry School of Pernambuco. In 1937, while presenting a paper to a scientific conference in Rio de Janeiro, the young student met Brazilian physicist Mário Schenberg, who later introduced him in São Paulo to Italian physicists Luigi Fantappiè and Gleb Wataghin. All three were doing research in physics at the then recently created University of São Paulo, amid a climate of great intellectual excitement that served as a breeding ground for the bright young generation that would become the élite of Brazilian physics, including César Lattes, Oscar Sala, Roberto Salmeron, Jayme Tiomno and Marcelo Damy de Souza Santos. Encouraged to study physics by what he saw, Leite Lopes moved to Rio de Janeiro after his graduation in 1939. He took the entrance examinations to the National Faculty of Philosophy of the University of Brazil in 1940 and graduated with a bachelor's degree in physics in 1942. Accepting an invitation from Carlos Chagas Filho, Leite Lopes began working that same year at the Institute of Biophysics of the Federal University of Rio de Janeiro, but soon moved to the University of São Paulo to take up graduate studies in quantum mechanics with his teacher, friend and sponsor, Mário Schenberg. His main work during this time was on the calculation of Dirac's radiation field of electrons.
In 1944, Leite Lopes got an American fellowship to study at Princeton University, in New Jersey, United States, under Josef-Maria Jauch. There, he had the opportunity to learn and work with giants of theoretical physics, such as Albert Einstein, Wolfgang Pauli and John von Neumann, despite the fact that most of the faculty was absent, involved in the Manhattan Project (the development of the first atomic bombs). In 1946, he finished his doctoral dissertation, on the topic of the influence of the recoil of heavy particles on the nuclear potential energy, and returned to Rio de Janeiro. He accepted the interim chair of Theoretical and Superior Physics at the University of Brazil, and started to lecture on quantum mechanics and quantum theory of radiation. In 1948 he was confirmed as chairman after presenting a thesis on the theory of nuclear forces.
Together with César Lattes, a young physicist from São Paulo who had achieved international fame through his co-discovery of a new kind of nuclear particle, the pion (pi-meson), Leite Lopes was instrumental in creating, in January 1949 in Rio de Janeiro, the Centro Brasileiro de Pesquisas Físicas (Brazilian Center for Research in Physics, CBPF), a research center in theoretical physics (the first in Latin America), maintained by funds from the Confederação Nacional de Indústrias (Brazilian Confederation of Industries), then presided over by Euvaldo Lodi. In the same year Leite Lopes was invited by Robert Oppenheimer to spend another year of study at the Institute for Advanced Study in Princeton, where he attended lectures by Richard Feynman, Victor Weisskopf and Paul Dirac. In 1957 he again visited the US on a fellowship at the California Institute of Technology, at the invitation of Richard Feynman.
In 1969, the Brazilian military regime stripped him of his political rights, along with several other professors, supposedly on the basis of his participation in a "communist conspiracy". He was summarily dismissed from the very center he had created and went into voluntary exile in the United States (at Carnegie Mellon University), but after evidence of US collaboration with the 1964 military coup became apparent he moved to the Université Louis Pasteur in Strasbourg, France. From 1974 to 1978, Leite Lopes was a full professor at the Université Louis Pasteur, serving as director of its Division of High Energy and as vice-director of the Centre de Recherches Nucléaires, part of the Centre National de la Recherche Scientifique (CNRS). He returned to Brazil in 1986 as director of the Centro Brasileiro de Pesquisas Físicas. He was also an honorary president of the Brazilian Society for the Advancement of Science.
Among many international and national honors and prizes, Leite Lopes received the 1999 UNESCO Science Prize and the Great Cross of the Brazilian Order of Scientific Merit.
Works
Leite Lopes is internationally recognized for his many contributions to theoretical physics, particularly in the following areas:
Prediction of the existence of neutral vector bosons (the Z0 boson) in 1958, by devising an equation that showed the analogy between the weak nuclear interactions and electromagnetism. Steven Weinberg, Sheldon Glashow and Abdus Salam later built on such results in developing electroweak unification, for which they were awarded the Nobel Prize in Physics in 1979.
The vector dominance model in nuclear electroweak interactions
Nuclear shell structure in photonuclear reactions
Construction of the Fock relativistic space
Meson pseudoscalar potential in deuteron theory
Scalar meson pairs
Models of lepton and quark structures
External links
José Leite Lopes Virtual Library
Published papers. José Leite Lopes Virtual Library, CNPq, Brazil.
1918 births
2006 deaths
Brazilian physicists
Brazilian nuclear physicists
Particle physicists
Members of the Brazilian Academy of Sciences
Recipients of the Great Cross of the National Order of Scientific Merit (Brazil)
UNESCO Science Prize laureates
University of São Paulo alumni
People associated with Federal University of Rio de Janeiro
Institute for Advanced Study visiting scholars
Presidents of the Brazilian Physical Society | José Leite Lopes | Physics | 1,166 |
13,968,216 | https://en.wikipedia.org/wiki/Cattle%20chute | A cattle chute (North America) or cattle race (Australia, British Isles and New Zealand), also called a run or alley, is a narrow corridor built for cattle that separates them from the rest of the herd and allows handlers and veterinarians to provide medical care or restrain the animal for other procedures. A conventional cattle chute consists of parallel panels or fences with a space between them that is slightly wider than one animal so they are unable to turn around. Cattle chutes gently restrain the animal using a squeeze mechanism. The chute is connected to an alley, forming the animals into a queue that only allows them to go forward. Cattle tubs or a BudBox can also be used to help with animal flow and maintain low-stress cattle handling principles. It is used for routine husbandry activities such as drafting (sorting) or loading animals via ramp or loading chute into a vehicle; placing them one at a time in a cattle crush (variations also called a squeeze chute or standing stock) for examination, marking or veterinary treatment. They are also used at packing plants to move animals into a crush designed for slaughter.
Overview
An experimental humane design of cattle handling system, by Temple Grandin, gradually narrows so that cattle have ample time to form the queue, and curves to encourage cattle to move forward in a controlled manner (see photo). It uses the principles of animal science research and animal behavior to encourage cattle flow.
Calves (and other smaller animals such as sheep) can turn around in an adult cattle handling system, so a narrower race is required for proper handling. For this reason, the width of some cattle chutes is adjustable to accommodate different-sized animals.
Cattle chutes may be portable, or may be a permanent fixture with a concrete walkway; both manual and hydraulic options exist. Portable chutes may be made of steel, iron or aluminum, but modern permanent ones are usually of steel or iron (sometimes timber or even concrete), usually set in concrete, with solid or railed sides and a non-slip floor. Anti-bruise chutes have no sharp edges and instead use pipe with rounded profiles such as oval rails; alternatively, the sides may be sheeted with iron or steel, which improves livestock movement and also prevents injuries from animals getting their legs or heads caught between the rails. Cattle chutes with concrete floors have the flooring made wider than the race itself to prevent hooves catching between the bottom rail and the edge of the concrete. The concrete is also not smooth like that on city sidewalks but roughened to give the animals more traction and prevent slipping and injury. The lower parts of the chute have removable side panels that provide access points in the event of an animal becoming cast (stuck after falling down) or otherwise caught, so that it can be freed to prevent injury. Some cattle chutes also have a veterinarian access point, allowing the animal to be inspected safely.
The length of the cattle chute is usually determined by the size of the herd – a longer one requires less penning-up of a larger herd. Longer cattle chutes with alleys may be curved to improve the movement and forward flow of the animals towards the chute. However, alleys beyond a certain length tend to cause trouble with the flow of the animals into the loading or cattle chute. A walkway may be provided on the outside of the alleys and chute, on one or both sides, to let handlers handle, examine or treat animals more easily from above, while adhering to the best practices of livestock handling outlined in low-stress cattle handling principles and in the research of Temple Grandin.
There are gates at the start and end of the race to regulate the movement of animals. The entrance is from a small funnel-shaped or semi-circular holding pen (or crowding tub or BudBox), where a gate is used to move cattle into the chute. Hybrid versions of this model are also available and prevent cattle from turning around in the box. The gates are usually arranged so the cattle handler cannot become trapped or injured by the cattle. Most systems allow extra gates to be added so the system can be further adapted. This is achieved in several ways:
using a sliding gate operated from outside the alleys and chute, commonly found between the exit of the crowding tub and the entrance to the chute, in the middle of the alley itself, or at the end of the alley at the entrance to the chute;
unlatching exit gates by a remotely operated cord;
or for a holding pen gate which uses a self-locking brake-latch that will lock if animals push back on it but can be pushed forward by the handler. The latch is pulled back to unlock the gate so that it can be opened for another batch of livestock waiting to be moved in behind the previous one. This holding pen gate can swing through an angle of 180° to 300°; newer models can swing through 360°.
The exit from the chute may be through a head gate, which swings or slides to open one or another of several exits for separating animals into various groups.
Calf chute or race/cradle
A calf race and cradle, or calf chute in North America, makes calf branding and castration much quicker and cleaner. The calf is forced into a chute or crush similar to a full-sized one, except that it is pushed to one side and cradled in place by steel bars. The bars do not touch the belly region of the calf; they bear only on the neck and in front of the stifle. One side of the crush is then tipped 90°, exposing the side of the calf to be branded or examined. Calf cradles are available in temporary or permanent styles like those described above. The steel transportable race and table cradle, as shown in the photo, are very popular in Australia and New Zealand, but are also found in North America. Full-size versions are also used in North America for safely trimming the hooves of cattle.
References
External links
Beef cattle yards for less than 100 head (5th ed), New South Wales Department of Agriculture, 2003
Circular cattle yard - 250 head capacity (2nd ed), New South Wales Department of Agriculture, 1999
Cattle Handling Facilities: Department of Agriculture and Environmental Affairs KwaZulu-Natal
Beef cattle yards: Queensland Government Department of Primary Industries and Fisheries
Animal equipment
Cattle
Buildings and structures used to confine animals
Livestock herding equipment | Cattle chute | Biology | 1,307 |
10,029,655 | https://en.wikipedia.org/wiki/Responsive%20architecture | Responsive architecture is an evolving field of architectural practice and research. Responsive architectures are those that measure actual environmental conditions (via sensors) to enable buildings to adapt their form, shape, color or character responsively (via actuators).
Responsive architectures aim to refine and extend the discipline of architecture by improving the energy performance of buildings with responsive technologies (sensors / control systems / actuators) while also producing buildings that reflect the technological and cultural conditions of our time.
Responsive architectures distinguish themselves from other forms of interactive design by incorporating intelligent and responsive technologies into the core elements of a building's fabric. For example, by incorporating responsive technologies into the structural systems of buildings, architects can tie the shape of a building directly to its environment. This enables architects to reconsider the way they design and construct space while striving to advance the discipline, rather than applying patchworks of intelligent technologies to an existing vision of "building".
History
The common definition of responsive architecture, as described by many authors, is a class of architecture or building that demonstrates an ability to alter its form, to continually reflect the environmental conditions that surround it.
The term responsive architecture was introduced by Nicholas Negroponte, who first conceived of it during the late 1960s when spatial design problems were being explored by applying cybernetics to architecture. Negroponte proposes that responsive architecture is the natural product of the integration of computing power into built spaces and structures, and that better performing, more rational buildings are the result. Negroponte also extends this mixture to include the concepts of recognition, intention, contextual variation, and meaning into computing and its successful (ubiquitous) integration into architecture. This cross-fertilization of ideas lasted for about eight years. Several important theories resulted from these efforts, but today Nicholas Negroponte’s contributions are the most obvious to architecture. His work moved the field of architecture in a technical, functional, and actuated direction.
Since Negroponte’s contribution, new works of responsive architecture have also emerged, but as aesthetic creations—rather than functional ones. The works of Diller & Scofidio (Blur), dECOi (Aegis Hypo-Surface), and NOX (The Freshwater Pavilion, NL) are all classifiable as types of responsive architecture. Each of these works monitors fluctuations in the environment and alters its form in response to these changes. The Blur project by Diller & Scofidio relies upon the responsive characteristics of a cloud to change its form while blowing in the wind. In the work of dECOi, responsiveness is enabled by a programmable façade, and finally in the work of NOX, a programmable audio–visual interior.
All of these works depend upon the abilities of computers to continuously calculate and join digital models that are programmable, to the real world and the events that shape it.
Finally, an account of the development of responsive systems and their history with respect to recent architectural theory can be found in Tristan d'Estree Sterk's opening keynote address (ACADIA 2009), entitled "Thoughts for Gen X— Speculating about the Rise of Continuous Measurement in Architecture".
Current work
While a considerable amount of time and effort has been spent on intelligent homes in recent years, the emphasis here has been mainly on developing computerized systems and electronics to adapt the interior of the building or its rooms to the needs of residents. Research in the area of responsive architecture has had far more to do with the building structure itself, its ability to adapt to changing weather conditions and to take account of light, heat and cold. This could theoretically be achieved by designing structures consisting of rods and strings which would bend in response to wind, distributing the load in much the same way as a tree. Similarly, windows would respond to light, opening and closing to provide the best lighting and heating conditions inside the building.
This line of research, known as actuated tensegrity, relies on changes in structures controlled by actuators which in turn are driven by computerized interpreters of the real world conditions.
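As a rough illustration of the sense–interpret–actuate loop such systems rely on, the sketch below shows a toy controller that stiffens a pneumatically actuated structure in proportion to measured wind load; the sensor, pressure limit and design wind speed are hypothetical placeholders, not values from any project described here.

```python
import random

MAX_PRESSURE_BAR = 6.0   # hypothetical working pressure of a pneumatic muscle
DESIGN_WIND_MS = 25.0    # hypothetical wind speed requiring full stiffening

def read_wind_speed():
    """Stand-in for an anemometer reading in m/s (random values for the sketch)."""
    return random.uniform(0.0, 30.0)

def set_actuator_pressure(bar):
    """Stand-in for commanding a pneumatic actuator in the tensegrity frame."""
    print(f"actuator pressure -> {bar:.1f} bar")

def control_step():
    # Interpret the measured condition and derive an actuation response:
    # stiffen the structure in proportion to wind load, up to the actuator limit.
    wind = read_wind_speed()
    demand = min(wind / DESIGN_WIND_MS, 1.0)
    set_actuator_pressure(demand * MAX_PRESSURE_BAR)

if __name__ == "__main__":
    for _ in range(5):   # a real controller would run this loop continuously
        control_step()
```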
Climate adaptive building shells (CABS) can be identified as a sub-domain of responsive architecture, with special emphasis on dynamic features in facades and roofs. CABS can repeatedly and reversibly change some of their functions, features or behavior over time in response to changing performance requirements and variable boundary conditions, with the aim of improving overall building performance.
Some key contributors
Tristan Sterk of The Bureau For Responsive Architecture and The School of the Art Institute of Chicago and Robert Skelton of UCSD in San Diego are working together on actuated tensegrity, experimenting with pneumatically controlled rods and wires which change the shape of a building in response to sensors both outside and inside the structure. Their goal is to limit and reduce the impact of buildings on natural environments.
MIT's Kinetic Design Group has been developing the concept of intelligent kinetic systems which are defined as "architectural spaces and objects that can physically re-configure themselves to meet changing needs." They draw on structural engineering, embedded computation and adaptable architecture. The objective is to demonstrate that energy use and the environmental quality of buildings could be rendered more efficient and affordable by making use of a combination of these technologies.
Daniel Grünkranz of the University of Applied Arts Vienna is currently undertaking PhD research in the field of Phenomenology as it applies to Responsive Architectures and Technologies.
Depicted left: A full scale actuated tensegrity prototype built from cast aluminium, stainless steel components and pneumatic muscles (pneumatic muscles provided by Shadow Robotics UK) by Tristan d'Estree Sterk and The Office for Robotic Architectural Media (2003). These types of structural systems use variable and controllable rigidity to provide architects and engineers with systems that have a controllable shape. As a form of ultra-lightweight structure these systems offer a primary method for reducing the embodied energy used in construction processes.
Bibliography
Sterk, T.: 'Thoughts for Gen X— Speculating about the Rise of Continuous Measurement in Architecture' in Sterk, Loveridge, Pancoast "Building A Better Tomorrow" Proceedings of the 29th annual conference of the Association of Computer Aided Design in Architecture, The Art Institute of Chicago, 2009.
Beesley, Philip; Hirosue, Sachiko; Ruxton, Jim; Trankle, Marion; Turner, Camille: Responsive Architectures: Subtle Technologies, Riverside Architectural Press, 2006, 239 p.
Bullivant, Lucy, 'Responsive Environments: architecture, art and design', V&A Contemporary, 2006. London:Victoria and Albert Museum. A detailed analysis of the emergence of responsive environments as a multidisciplinary phenomenon, nurtured by museums, arts agencies and resulting from self-initiated activities by practitioners working in different cultural contexts.
Bullivant, Lucy, 'Interactive Design Environments'. London: AD/John Wiley & Sons, 2007. The follow-up to '4dspace', '4dsocial' is similarly a group of essays by different authors. It accents the creative role of museums in incubating new practices, terminology in this field, and the impact of interactive media installations in public spaces with a social message.
Bullivant, Lucy, '4dspace: Interactive Design Environments'. London: AD/John Wiley & Sons, 2005. An in-depth, multi-author investigation of the factors leading to and shaping the evolution of this hybrid field, featuring international practitioners.
Negroponte, N.: Soft Architecture Machines, Cambridge, MA: MIT Press, 1975. 239 p.
See also
Climate adaptive building shells
Four-dimensional product
List of home automation topics
Responsive computer-aided design
References
External links
"Interactive Architecture Lab - Research Group at the Bartlett, University College London"
Réalisations.net - Design firm
WIRED article on Responsive Architecture
The Economist article on Intelligent and Responsive Buildings
Hoberman Associates - Transformable Design
DesignIntelligence article on Adaptive Structures
"AIA’s Multidisciplinary Innovation panel draws packed house"
"It’ll Take a Team to Design a Sustainable Future"
Article on Responsive Cities
"Responsive Facade`s case study"
Architectural design
Sustainable architecture
Building engineering
Home automation | Responsive architecture | Technology,Engineering,Environmental_science | 1,673 |
4,662,960 | https://en.wikipedia.org/wiki/Intermittency | In dynamical systems, intermittency is the irregular alternation of phases of apparently periodic and chaotic dynamics (Pomeau–Manneville dynamics), or different forms of chaotic dynamics (crisis-induced intermittency).
Experimentally, intermittency appears as long periods of almost periodic behavior interrupted by chaotic behavior. As control variables change, the chaotic behavior becomes more frequent until the system is fully chaotic. This progression is known as the intermittency route to chaos.
Pomeau and Manneville described three routes to intermittency where a nearly periodic system shows irregularly spaced bursts of chaos. These (type I, II and III) correspond to the approach to a saddle-node bifurcation, a subcritical Hopf bifurcation, or an inverse period-doubling bifurcation. In the apparently periodic phases the behaviour is only nearly periodic, slowly drifting away from an unstable periodic orbit. Eventually the system gets far enough away from the periodic orbit to be affected by chaotic dynamics in the rest of the state space, until it gets close to the orbit again and returns to the nearly periodic behaviour. Since the time spent near the periodic orbit depends sensitively on how closely the system entered its vicinity (in turn determined by what happened during the chaotic period) the length of each phase is unpredictable.
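A minimal numerical sketch of type-I intermittency uses the logistic map just below the tangent (saddle-node) bifurcation of its period-3 window at r = 1 + √8; the parameter offset and laminar-phase threshold below are illustrative choices rather than values from the literature.

```python
import math

# Type-I intermittency in the logistic map x_{n+1} = r*x*(1 - x), with r just
# below the tangent (saddle-node) bifurcation of the period-3 window at
# r = 1 + sqrt(8) ≈ 3.8284.  The offset 2e-4 is an illustrative choice.
r = 1 + math.sqrt(8) - 2e-4
x = 0.5
orbit = []
for _ in range(20000):
    x = r * x * (1 - x)
    orbit.append(x)

# Crude laminar-phase detector: during the nearly period-3 ("laminar") phases
# the orbit almost repeats every three steps, so |x_{n+3} - x_n| stays small.
threshold = 1e-3          # illustrative choice
laminar_lengths, run = [], 0
for n in range(len(orbit) - 3):
    if abs(orbit[n + 3] - orbit[n]) < threshold:
        run += 1
    elif run:
        laminar_lengths.append(run)
        run = 0

if laminar_lengths:
    print(f"{len(laminar_lengths)} laminar phases, "
          f"mean length {sum(laminar_lengths) / len(laminar_lengths):.0f} steps, "
          f"longest {max(laminar_lengths)} steps")
```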
Another kind, on-off intermittency, occurs when a previously transversally stable chaotic attractor with dimension less than the embedding space begins to lose stability. Near unstable orbits within the attractor, trajectories can escape into the surrounding space, producing a temporary burst before returning to the attractor.
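On-off intermittency can likewise be sketched with the randomly driven logistic map often used as a minimal model, in which the invariant state x = 0 loses transverse stability once the parameter exceeds e; the specific parameter and burst threshold below are illustrative choices.

```python
import math
import random

# Minimal model of on-off intermittency: the randomly driven logistic map
# x_{n+1} = a * z_n * x_n * (1 - x_n), with z_n drawn uniformly from [0, 1].
# The invariant state x = 0 has transverse Lyapunov exponent ln(a) - 1, so it
# loses stability for a > e; just past that point the trajectory spends long
# quiet ("off") stretches near zero, punctuated by bursts ("on").
a = 1.02 * math.e        # illustrative choice, slightly past the blowout point
x = 1e-6
steps = 200_000
burst_steps = 0
for _ in range(steps):
    x = a * random.random() * x * (1 - x)
    if x > 0.1:          # illustrative burst threshold
        burst_steps += 1

print(f"fraction of time in bursts: {burst_steps / steps:.3%}")
```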
In crisis-induced intermittency a chaotic attractor suffers a crisis, where two or more attractors cross the boundaries of each other's basin of attraction. As an orbit moves through the first attractor it can cross over the boundary and become attracted to the second attractor, where it will stay until its dynamics moves it across the boundary again.
Intermittent behaviour is commonly observed in fluid flows that are turbulent or near the transition to turbulence. In highly turbulent flows, intermittency is seen in the irregular dissipation of kinetic energy and the anomalous scaling of velocity increments. Understanding and modeling atmospheric flow and turbulence under such conditions are further complicated by “turbulence intermittency,” which manifests as periods of strong turbulent activity interspersed in a more quiescent airflow. It is also seen in the irregular alternation between turbulent and non-turbulent fluid that appear in turbulent jets and other turbulent free shear flows. In pipe flow and other wall bounded shear flows, there are intermittent puffs that are central to the process of transition from laminar to turbulent flow. Intermittent behavior has also been experimentally demonstrated in circuit oscillators and chemical reactions.
See also
Pomeau–Manneville scenario
Crisis (dynamical systems)
Turbulent flow
Fluorescence intermittency (blinking) of organic molecules and colloidal quantum dots (nanocrystals)
References
Dynamical systems | Intermittency | Physics,Mathematics | 620 |