Columns: id (int64) · url (string) · text (string) · source (string) · categories (list) · token_count (int64) · subcategories (list)
60,128,647
https://en.wikipedia.org/wiki/Neepa%20Maitra
Neepa T. Maitra is a theoretical physicist and was a professor of physics at Hunter College of the City University of New York and the Graduate Center of the City University of New York. She now works as a professor at Rutgers, in the field of theoretical chemical physics. She is best known for her contributions to theoretical chemistry and chemical physics, especially the development of accurate functionals in time-dependent density functional theory and correlated electron-ion dynamics. Early life and education Maitra was born in September 1972, raised in New Zealand, and completed her bachelor's degree in physics at the University of Otago. She went on to earn her Ph.D. in physics at Harvard University in the lab of Eric "Rick" Heller, and completed postdoctoral research at the University of California, Berkeley, and Rutgers University. Maitra is currently in the Department of Physics at Rutgers University-Newark. Research projects and publications Time-dependent density functional theory (TDDFT) investigates the properties of various functionals and has a wide range of implications and applications. The exact factorization approach is a way to explore the numerical stability of equations and to improve understanding of exact potentials and equations. Polaritonic chemistry is a field that arose from manipulating molecules with light; Maitra's group has been investigating these phenomena through an extension of the exact factorization approach. Presentations At the CECAM workshop on Triggering Out-of-Equilibrium Dynamics in Molecular Systems in Lausanne, Switzerland in March 2023, Maitra participated and gave an invited talk remotely. In August 2023, Maitra presented remotely at the Progress in Non-Equilibrium Green's Function Workshop in Örebro, Sweden, discussing recent work on perspectives on TDDFT beyond linear response. Awards Maitra received an NSF CAREER Award for her work in Theoretical and Computational Chemistry. Maitra received an NSF Award for her work on Molecules in Classical and Quantized Fields. Maitra was elected a Fellow of the American Physical Society in 2024 for "fundamental contributions to the development of time-dependent density functional theory, identifying rigorous properties of the time-dependent exchange-correlation functional, and seminal work on the correlated motion of electrons and nuclei beyond the Born-Oppenheimer approximation." (From APS) References New Zealand physicists Hunter College faculty University of Otago alumni Harvard University alumni Living people Theoretical physicists Women physicists 1972 births
Neepa Maitra
[ "Physics" ]
475
[ "Theoretical physics", "Theoretical physicists" ]
60,129,076
https://en.wikipedia.org/wiki/Amalie%20Frischknecht
Amalie L. Frischknecht is an American theoretical polymer physicist at Sandia National Laboratories in Albuquerque, New Mexico. She was elected a fellow of the American Physical Society (APS) in 2012 for "her outstanding contributions to the theory of ionomers and nanocomposites including the development and application of density functional theory to polymers". Her research focuses on understanding the structure, phase behavior, and self-assembly of polymer systems, such as complex fluids, polymer nanocomposites, lipid bilayer assemblies, and ionomers. Education Frischknecht graduated from Pomona College in Claremont, California, with a Bachelor of Arts (B.A.) in physics and mathematics in 1992. She moved to the University of California, Santa Barbara (UCSB) for her graduate studies, where she received a Ph.D. in physics in 1998. At UCSB, she worked under the supervision of James S. Langer. Frischknecht's thesis research was on the phase separation of binary fluids in shear flow. Career and research After graduating with her PhD, Frischknecht went to work at ExxonMobil Research & Engineering Co. as a postdoctoral fellow. She stayed there from 1998 until 2000 and worked on polymer rheology with Scott Milner, who is now a professor of physics at Pennsylvania State University. They investigated the dynamics of polymer melts made of star-shaped polymers, branched polymers in which several chains are linked together via a central core. They also studied the diffusion of linear polymers. In 2000, Frischknecht moved to Sandia National Laboratories, working first as a postdoctoral fellow and then becoming a permanent member of staff. To understand the behavior of polymers, she relies mostly on molecular modeling techniques such as density functional theory and molecular dynamics simulations. Notable works include simulations of ionic polymers (polymers that contain ions chemically bound to their structure) to determine the structures they form. She has also studied the rheology of polymer-nanoparticle blends, finding that when the blend is placed on a substrate a first-order phase transition occurs that expels the polymer from the surface, causing the particles to form a monolayer. Committees Frischknecht serves as the Chair of the Division of Polymer Physics (DPOLY) at the American Physical Society (APS), a position that runs from 2020 to 2021. She previously served as Chair-Elect from 2019 to 2020, and as a member-at-large for DPOLY from 2013 to 2015. She chaired the 2018 Gordon Research Conference (GRC) on Polymer Physics, titled "New Developments in Hierarchical Structure and Dynamics of Polymers." The theme of the conference was new experimental, simulation, and theoretical developments in polymer physics. Notable publications A. L. Frischknecht and K. I. Winey, "The Evolution of Acidic and Ionic Aggregates in Ionomers during Microsecond Simulations," J. Chem. Phys. 150, 064901 (2019). E. G. Sorte, B. A. Paren, C. G. Rodriquez, C. Fujimoto, C. Poirier, L. J. Abbott, N. A. Lynd, K. I. Winey, A. L. Frischknecht, and T. M. Alam, "Impact of Hydration and Sulfonation on the Morphology and Ionic Conductivity of Sulfonated Poly(phenylene) Proton Exchange Membranes," Macromolecules 52, 857-876 (2019). J. P. Koski and A. L. Frischknecht, "Fluctuation Effects on the Brush Structure of Mixed Brush Nanoparticles in Solution," ACS Nano 12, 1664 (2018). L. J. Abbott and A. L.
Frischknecht, "Nanoscale Structure and Morphology of Sulfonated Polyphenylenes via Atomistic Simulations," Macromolecules 50, 1184 (2017). L. R. Middleton, J. D. Tarver, J. Cordaro, M. Tyagi, C. L. Soles, A. L. Frischknecht, and K. I. Winey, "Heterogeneous Chain Dynamics and Aggregate Lifetimes in Precise Acid-Containing Polyethylenes: Experiment and Simulations," Macromolecules 49, 9176-9185 (2016) [featured on front cover]. K. M. Salerno, A. L. Frischknecht, and M. J. Stevens, "Charged nanoparticle attraction in multivalent salt solution: A classical-fluids density functional theory and molecular dynamics study," J. Phys. Chem. B 120, 5927-5937 (2016). C. L. Ting, R. J. Composto, and A. L. Frischknecht, "Orientational control of polymer grafted nanorods," Macromolecules 49, 1111-1119 (2016). C. K. Simocko, A. L. Frischknecht, and D. L. Huber, "Phase behavior of ternary polymer brushes," ACS Macro Lett. 5, 149-153 (2016). C. F. Buitrago, D. S. Bolintineanu, M. E. Seitz, K. L. Opper, K. B. Wagener, M. J. Stevens, A. L. Frischknecht, and K. I. Winey, "Direct comparisons of X-ray scattering and atomistic molecular dynamics simulations for precise acid copolymers and ionomers," Macromolecules 48, 1210 (2015). C. L. Ting, M. J. Stevens, and A. L. Frischknecht, "Structure and dynamics of coarse-grained ionomer melts in an external electric field," Macromolecules 48, 809 (2015). See all publications on Google Scholar: https://scholar.google.com/citations?user=z1YWynYAAAAJ&hl=en References American women physicists Polymer physics Fellows of the American Physical Society American physicists Pomona College alumni University of California, Santa Barbara alumni
Amalie Frischknecht
[ "Chemistry", "Materials_science" ]
1,333
[ "Polymer physics", "Polymer chemistry" ]
57,179,860
https://en.wikipedia.org/wiki/Geochemical%20Ocean%20Sections%20Study
The Geochemical Ocean Sections Study (GEOSECS) was a global survey of the three-dimensional distributions of chemical, isotopic, and radiochemical tracers in the ocean. A key objective was to investigate the deep thermohaline circulation of the ocean, using chemical tracers, including radiotracers, to establish the pathways taken by this circulation. Expeditions undertaken during GEOSECS took place in the Atlantic Ocean from July 1972 to May 1973, in the Pacific Ocean from August 1973 to June 1974, and in the Indian Ocean from December 1977 to March 1978. Measurements included those of physical oceanographic quantities such as temperature, salinity, pressure and density; chemical and biological quantities such as total inorganic carbon, alkalinity, nitrate, phosphate, silicic acid, oxygen and apparent oxygen utilisation (AOU); and radiochemical and isotopic quantities such as carbon-13, carbon-14 and tritium. See also Global Ocean Data Analysis Project (GLODAP) Joint Global Ocean Flux Study (JGOFS) World Ocean Atlas (WOA) World Ocean Circulation Experiment (WOCE) References External links GEOSECS data, International Research Institute for Climate and Society GEOSECS data, Ocean Data View Rivers of the Sea: The Story of GEOSECS, 1975 documentary about the study Biological oceanography Carbon Chemical oceanography Oceanography Physical oceanography
Geochemical Ocean Sections Study
[ "Physics", "Chemistry", "Environmental_science" ]
284
[ "Hydrology", "Applied and interdisciplinary physics", "Oceanography", "Chemical oceanography", "Physical oceanography", "Geochemistry stubs" ]
57,180,939
https://en.wikipedia.org/wiki/List%20of%20WWII%20Maybach%20engines
This is an incomplete list of gasoline engines designed by Maybach AG, manufactured by Maybach and other firms under licence, and fitted in various German tanks and half-tracks before and during World War II. Until the mid-1930s, German military vehicle manufacturers could source their power plants from a variety of engine makers; by October 1935 the design and manufacture of almost all tank and half-track engines was concentrated in one company, Maybach AG, located in Friedrichshafen on Lake Constance. The firm designed and made a wide range of 4-, 6-, and 12-cylinder engines from 2.5 to 23 litres; these powered the basic chassis designs for approximately ten tank types (including tank hunters and assault guns), six half-track artillery tractor designs, plus two series of derived armoured personnel carriers. Maybach also designed a number of gearboxes fitted to these vehicles, made under licence by other manufacturers. Friedrichshafen was also home to the Zahnradfabrik (ZF) factory, which made gearboxes for Panzer III, IV, and Panther tanks. Both Maybach and ZF (and Dornier) were originally subsidiaries of Luftschiffbau Zeppelin GmbH, which also had a factory in the town. Maybach used various combinations of factory letter codes (discussed below) which specified the particular ancillaries to be supplied with each engine variant: the same basic model could be fitted in a number of vehicles, according to the original manufacturer's design requirements. For example, the basic 3.8 and 4.2 litre straight-6 engines (the NL38 and HL42) fitted in various half-tracks could be supplied in at least 9 different configurations, although every component was to be found in a single unified parts list. However, as the war progressed, a number of problems hampered the German armaments production effort. The factory's inability to manufacture enough complete engines, as well as a huge range of spare parts, meant that there was often a lack of both. Conflicts between the civilian Reich Ministry of Armaments and Munitions and the German Army led to a failure to set up an adequate distribution system, and consequent severe shortages of serviceable combat vehicles. In April 1944 an Allied bombing raid put the Maybach factory out of action for several months, and destroyed the ZF gearbox factory. By the end of the war Maybach had produced over 140,000 engines and 30,000 semi-automatic transmissions for the German Wehrmacht. Maybach history, 1935–1945 In order to rationalise Germany's military vehicle production, sweeping changes were made to its entire automotive industry. The re-organisation was overseen by the head of Wa.Prüf. 6 (Weapons Inspectorate 6, responsible for tanks, armoured vehicles and motorized equipment) of the Heereswaffenamt (HWA). By late October 1935, Maybach had been designated the sole designer and manufacturer of tank and half-track engines for the entire Wehrmacht, with production later outsourced to other firms including its subsidiary Nordbau (Norddeutsche Motorenbau GmbH) in the south-eastern Berlin suburb of Niederschöneweide beside the River Spree. Maybach AG made very few complete parts of its engines itself; almost everything was bought in from other suppliers. Its main activity was precision machining of the castings and forgings of its own design, made by outside manufacturers, and producing complete assembled engines on a separate assembly line. Completely finished crankshafts were supplied by an outside firm in Remscheid-Hasten.
In addition, machined pistons (Mahle KG), piston rings, roller and ball bearings, fuel pumps, carburettors (Solex), and complete electrical equipment (Bosch) were acquired as finished parts from outside sources. Although a steady supply of spare parts is essential to an army in the field, the production of complete engines always took priority over providing spares. According to Albert Speer, Hitler himself never realised this importance: "One of his worst failings was that he simply did not understand the necessity for supplying the armies with sufficient spare parts." Germany never achieved the industrial capacity needed to keep its military vehicles running efficiently: when the Russian campaign got underway, the deficiencies of the armaments industry and the organisation of maintenance depots became obvious. The German armed forces suffered from continual shortages of spare parts for tanks and half-tracks until the end of the war. When the first Tiger I tanks arrived in Russia in autumn 1942, there was only one spare engine and one transmission for every 10 tanks. A critical lack of spare parts meant that most of them were out of commission within a short period, sometimes for weeks on end. Despite various attempts at re-organisation, friction between the distribution systems of the German Army (das Heer) and the civilian Ministry of Armaments (and from 1944 the 'Rüstungsstab') often led to confrontation and inefficiency. Some of this can be blamed on Karl-Otto Saur of the Ministry of Armaments, whose ruthless drive for greater overall production figures tended to override testing and durability concerns, and the manufacture of enough spare parts. According to Stieler von Heydekampf, president of the Panzer Kommission from 1943, German tank production was at a major disadvantage throughout the war because the main firms involved were heavy equipment manufacturers. It would have been more effective if the programme had been given to Ford Germany and Opel (owned by General Motors) because of their real mass production experience, but this was not done because of their American associations. Maybach's monopoly on engine production proved to be the bottleneck in German tank production. From 1942, after the German invasion of the Soviet Union, Maybach started dispersing its manufacturing activities, licensing eight other firms to manufacture its engines. Adler Werke in Frankfurt/Main built the HL42 from January 1942; Saurer Werke in Vienna, Krauss-Maffei (Munich), and Borgward in Bremen were licensed to build the HL62 and HL64; Maschinenfabrik Bahn Bedarf (MBB) in Nordhausen made the HL109, and also the HL120 (along with Maybach's subsidiary Nordbau in Berlin and MAN in Nürnberg); and Auto Union in Chemnitz (Siegmar Werke) made HL230s, having tooled up from October 1943 to March 1944. Henschel & Sohn in Kassel made large quantities of major components for Maybach in 1943–1944: 2,434 crankshafts, 1,850 crank cases, 32,121 connecting rods and 8,196 'closing covers' (undefined; maybe valve covers or possibly cylinder heads). From August 1943 Maybach also organised 11 of its own dispersal machining sites, located from a few miles away to some 60 miles distant; the finished parts were then sent to a designated factory for assembly. These precautions allowed manufacture of complete engines to take place away from Friedrichshafen.
On Hitler's orders in March 1944, the extensive cellars below the town of Leitmeritz (now Litoměřice, Czech Republic) on the river Elbe were to be used for the anticipated assembly of HL120 and HL230 tank engines, in case a manufacturing plant were bombed. Despite these precautions, by late 1943 there was still a severe shortage of spare tank engines. Rather than concentrate on proven designs, Maybach continued to bring out new, relatively untested models; the wide variety of engine types seriously hampered efforts to fix the multiple defects which Maybach engines developed under combat conditions. The extreme difficulty of stocking so many spares at the front, several thousand kilometres away from the factory, swiftly led to vehicles being unserviceable for combat. Because the armaments industry was already working at full capacity, it was not possible to completely replace obsolete models with new versions. Instead, the number of tank models and types within each series issued to the field forces increased steadily, which only made the maintenance and repair situation worse. Severely damaged tanks from the Russian front were initially shipped back to Germany, or to the Nibelungenwerk or the Vienna Arsenal for repair; but the prospect of inevitable delays often meant that vehicles were instead cannibalised at the front for parts. Often when a new engine was delivered, there was little left except the hull of the tank it was intended for. Nevertheless, the maintenance crews did their best, often retrieving knocked-out tanks under considerable difficulties. As the war progressed, new Maybach engines tended to be rushed into production without adequate testing and improvement. As a result, they were viewed as unreliable (although this would be expected of any undeveloped engine). All 325 new Panther tanks delivered to Russia in early 1943 had to be returned because of serious defects in the steering; they were underpowered by the HL210 P30 engine, and its replacement, the HL230 P30 (which did not arrive until late 1943), suffered from over-heating, fires in the engine compartment and blown gaskets. By way of comparison, the Soviet Army used a single basic engine (the V-12 diesel Kharkiv V-2) to power the majority of its tanks, with a few modifications: starting with the BT-7M and its successor the T-34, producing its rated power at 1,800 rpm in 1939; the SU-85 and SU-100; the KV-1 and KV-2 (600 hp with supercharging in 1939); and the IS-2, ISU-122 and ISU-152 and the T-10. Maybach did not produce an acceptable more powerful engine until late 1943, with the HL230 P30. Starting in March 1944, a series of Allied precision and area bombing raids put the Maybach factory out of action for several months. Those of 27/28 April and 20 July especially inflicted heavy damage on the plant. However, engine production continued at the various dispersed machining sites and manufacturers. If the various firms making Maybach motors under licence had not been in a position to continue producing engines, the German Army's entire tank program would have been seriously jeopardised. Although the German Army used various combat vehicles appropriated from other countries, they continued to be powered by their original engines. Maybach engines were fitted to the German fighting vehicles for which they had been designed. General design All Maybach engines for AFVs which reached series production were gasoline four-stroke water-cooled designs. The firm's managing director, Dr.
Karl Maybach (son of the founder Wilhelm Maybach), had stated that "he was born water cooled and wanted to die water cooled." Before the war the fuel industry had indicated that petroleum was going to be easier to produce than synthetic diesel, and development of gasoline engines was therefore favoured. By around 1943 the situation had turned around, but by then it was too late to change. Dr. Ferdinand Porsche had consistently pushed for air-cooled diesels, but his organisation's designs never functioned satisfactorily. The twin large Porsche gasoline V-10 engines slated for the Tiger (P) likewise never worked satisfactorily, and two over-worked Maybach HL120s were fitted instead to drive the electric generators and final drive motors in the subsequent Ferdinand. A number of Maybach motors shared the same basic design but had different engine sizes, the larger engines having bigger cylinders to increase the capacity. Similar engine designs had shared parts lists, e.g. the NL38 and HL42; the HL57 and HL62; and the HL108 and HL120. The 6-cylinder Maybach engines used a single Solex 40 JFF II down-draught carburetor, and earlier V-12s used two. Later V-12s used Solex 52 JFFs. A hand-cranked inertia starter (Schwungkraftanlasser) was fitted to the V-12 engines to supplement the Bosch 24V electric starter motor (powered by two 12V batteries) in cold weather. Nomenclature Introduction Maybach used a series of letter codes and numbers to identify specific engine models, namely: NL / HL – performance TU / TR – lubrication K – clutch R / RR – V-belt drive for compressor and/or radiator fans M – magneto ignition Although these codes usually indicate what ancillary equipment was fitted at the factory (e.g. the HL42 TUKRRM and the HL57 TR), there are some exceptions, discussed below. The individual engine number and its capacity, the model type, and year of manufacture are hand-stamped on each crankcase. On 6-cylinder models with schnapper magneto ignition, this information is found on the magneto housing, e.g. MOTOR Nr 730192 4198 cc M. HL42 TUKRM 1943. And on the HL210, stamped at the top end of the crankcase above the flywheel cover: Mot. Nr. 46302 HL210P45 Performance NL = Normalleistung (normal performance motor) HL = Hochleistung (high performance motor) This is followed (without a space) by the approximate engine capacity (e.g. HL42 = approx. 4.2 litres). Compared to the NL motors, the HL (high performance) series had a higher compression ratio, which increased the power output. This advantage was somewhat lost when a mandatory requirement to run on lower-quality OZ 74 (74 octane) gasoline after October 1938 necessitated lowering the compression ratio of the HL series, achieved by fitting shorter pistons and a new cylinder head. This may partially explain the similar power outputs of engines with different capacities, shown in the table further below. Lubrication TR = Trockensumpfschmierung (dry sump lubrication), generally fitted to tanks (because of low ground clearance) and to the Sd.Kfz. 10 and 250 half-tracks. There is no sump below the crankcase: the engine oil is contained in a reservoir on one side. On later V-12s there is a tunnel through the oil reservoir, through which the hand crank for the inertia starter passes, operated from the outside rear of the vehicle. In a number of cases, especially the dry sump tank engines (e.g.
the HL108 TR), this is the complete designation of an engine: in other words, there is no factory-fitted clutch (K) attached to the engine; no extra drive belts driving a compressor (R) and/or dual fans (RR) on custom pulleys; ignition is achieved by a magneto driven off the camshaft rather than by one fitted in its own housing (M); and no specific vehicular installation (P, S, or Z) is implied. TU = Tiefer Unterteil ('deep lower part', i.e. wet sump), only fitted to some half-tracks. The sump generally has an inverted triangle shape, bolted to the underside of the crankcase housing. Most of the TU (wet sump) type engines were installed in the half-track artillery tractors Sd.Kfz. 6, 7, 8, 9 and 11, and were fitted with some or all of the ancillaries (K, R, or M). There appear, nevertheless, to be exceptions. For example, the HL57 TU was apparently only installed in some versions of the Sd.Kfz. 7, which was in fact fitted with a factory clutch, integral compressor and magneto. The extra equipment was fitted as standard and the extra letter codes were not included in the model number. In addition, 'T' by itself has no meaning; it is always directly followed by either R or U, but 'R' in this position should not be confused with an (R) signifying a V-belt drive for a compressor (see below). Furthermore, in some sources engines may be referred to simply as e.g. "a Maybach HL 120 of 300 metric horsepower", which indicates that further information is needed to identify the particular model number. Transmission K = Kupplung or Kupplungsgehäuse (clutch housing): a clutch is attached directly to the flywheel end of the crankshaft, generally driving a manual gearbox with 4 forward speeds and 1 reverse, plus a high/low reduction gearbox, giving 8 forward and 2 reverse ratios (4+1 x2). This type of transmission was fitted to all the half-tracks with a TU-type engine, and to early Panzer Is. The transmission could also have a rear power take-off (PTO) shaft fitted to power a winch, or turntables for either a gun or a crane, as on e.g. the Sd.Kfz. 9/1. The Sd.Kfz. 10 had a unique arrangement with a conventional clutch attached to the engine driving a pre-selector Maybach 'Variorex' VG 102 128H gearbox. See also § Compressor below. If there is no factory-fitted clutch (K), this indicates a tank engine (except early Panzer Is). Instead, a horizontal cardan shaft connects the flywheel to a separate gearbox next to the driver. This could be a pneumatically controlled, pre-selector Maybach-Variorex (e.g. certain Panzer IIIs and Stug III); or a synchromesh ZF 'Aphon' (e.g. later Panzer III and IVs); or a hydraulically controlled Maybach-Olvar (e.g. Tiger I and II). A 10-speed Maybach-Variorex SRG 328 145 gearbox was fitted in Panzer IIIs Ausf. E–G, operated by vacuum pressure generated by a compressor (R) - see next section. The main clutch is integral to the gearbox housing. Other tank gearboxes included the synchromesh ZF Aphon SSG 5x and 7x series gearboxes (the SSG 75 fitted in early Panzer IV had five forward gears and one reverse; the 76 and 77 had six forward and one reverse). The main clutch (LA 120 HD) was bolted to the gearbox on the SSG 75, and incorporated into the main housing in the 77. The SSG 77 gearbox replaced the mechanically vulnerable Variorex in the Stug III Ausf. C. Bigger tank engines (e.g. the HL230) used a hydraulically controlled Maybach-Olvar gearbox such as the Olvar OG 40 12 16 (8 forward gears, 4 reverse), fitted to Tiger Is and IIs.
Some half-track gearboxes also included a power take-off shaft (PTO) driving an external winch. Compressor R = Riemenantrieb für Luftpresser (V-belt drive for air compressor), driven at the radiator end by a pulley with an extra groove. Most of the half-track engines had a compressor fitted, to power various types of equipment (discussed below). On some engines (e.g. the NL38 TUK) the compressor was an integral part of the engine, driven by internal gears and mounted on top of the cam cover at the flywheel end. The compressor is not specifically indicated in the model number. In similar fashion, on the HL 57 TU and 62 TUK the compressor was located in a gear-driven housing next to the clutch on the inlet side. On other models, the compressor was an external belt-driven ancillary denoted by an (R) in the model number (e.g. HL38 TUKR); it was mounted on one or other side of the engine, driven by an extra V-belt at the radiator end. Thus the lack of an (R) in the model number does not necessarily mean that a compressor was not fitted. The compressor was used to power various types of equipment, including: Sd.Kfz. 10 and 250 – Variorex VG 102 128H pre-selector gearbox Sd.Kfz. 11 and 251 – air brakes on towed equipment (e.g. Pak 40 anti-tank gun) Sd.Kfz. 6–9 – pneumatic foot/parking brake + towed equipment (e.g. 15 cm sIG 33 towed by the Sd.Kfz. 7) Panzer III Ausf. E–G, and Stug III Ausf. A (only 20 made) – Maybach Variorex SRG 328 145 pre-selector gearbox On certain Panzer IIIs and Stug IIIs, and on the Sd.Kfz. 10 with its derivative the Sd.Kfz. 250, the compressor provided the (reverse) pressure for a pneumatically operated pre-selector gearbox. The air inlet of the compressor is connected to the system, not the outlet: the compressor works "in reverse" to create a vacuum. To shift gears, the pre-selector lever is set in the desired position or slot, and when the next gear is needed, the clutch pedal is depressed for about one second. This opens a valve inside the Variorex gearbox, which operates specific vacuum-actuated pistons attached to selector forks: these move dog clutches, which select the desired gearing. After about one second the driver releases the clutch pedal with the desired gear semi-automatically engaged, with minimum effort on the driver's part. KR = Clutch and compressor: production versions of the Demag half-tracks, the Sd.Kfz. 10 (manufacturer type D7) and Sd.Kfz. 250 (D7p), were fitted with a Maybach SRG semi-automatic gearbox, type VG 102 128H, with 7 forward and 3 reverse gears. Although they worked on the same vacuum principle as the bigger tank pre-selector gearboxes (e.g. the Variorex SRG 328 145 installed in Panzer III Ausf. E–G), these gearbox types had no integral clutch, and were much smaller than those fitted to tanks. The drive passed through a standard clutch attached to the engine via a cardan shaft into the gearbox: depressing and releasing the clutch pedal simultaneously disengaged the main clutch and actuated the vacuum pistons to engage the pre-selected gear ratio. KRR = Clutch, compressor, and extra belt drives for radiator fans: fitted to a number of Sd.Kfz. 251 variants, which had a different radiator from the unarmored Sd.Kfz. 11 on which they were based. A triple V-belt pulley mounted at the top of the engine also drove the twin cooling fans mounted directly between the engine and the radiator. Ignition All Maybach engines used a Bosch 12-volt magneto for the ignition.
There were two main types: Driven off the camshaft (or the camshaft pinion), located at the top of the engine at the flywheel end. This type of magneto can often be identified by a circular, slightly domed cover, and a tubular duct (sometimes corrugated) which fed the ignition leads out of sight behind an engine cover plate. This type of installation was part of the standard specification and not included in the model letters (e.g. HL98 TUK). This applies to some 6-cylinder models and some V-12s. On the HL210 the magnetos are separately located above the ends of the camshafts, and on the HL230 they are centrally installed between the cylinder heads. M = impulse magneto ignition. Some 6-cylinder models had this type of magneto in its own housing, driven off the starter ring on the flywheel, located on the right-hand side. This type of installation is indicated with an (M) in the model number, e.g. HL42 TUKRM. A number of engines of the same basic design were first fitted with the camshaft-driven type and later with the impulse type (e.g. HL62 TR/TRM, HL120 TR/TRM). The HL120 TRM Ausführung "A" used in the Panzer III and Stug III used a single schnapper-type magneto serving all 12 cylinders, located in the V of the cylinder block at the radiator end. Most models were also fitted with a belt-driven Bosch generator for charging the two 12-volt batteries for the 24-volt electric starter motor, and for 12-volt lighting, etc. On 4- and 6-cylinder engines the generator was usually connected by a short drive shaft to the separate belt-driven coolant pump, located close to the cylindrical oil cooler. Installation P = Panzerkampfwageneinbau (tank installation?) Z = Zerstörereinbau (tank destroyer installation?) S = Schleppereinbau (military tractor installation?) These letters were only used on some models, e.g. HL42 TRKMS, HL45 Z, HL157 P. The HL230 P30 and P45 appear to fall into this category, being named according to their original project specification: the HL230 P30 was designed to be fitted in the Panther, whose prototype was the 30-ton class VK30.02; and the HL230 P45 went in the Tiger, whose final 45-ton class prototype was numbered VK45.01. Examples NL38 TRKM = Normal performance 3.8 litre, dry sump, clutch, schnapper magneto (Panzer I Ausf. B) HL42 TUKRRM = High performance 4.2 litre, wet sump, clutch, belt-driven compressor, twin radiator fans, schnapper magneto (Sd.Kfz. 251) HL62 TR = High performance 6.2 litre, dry sump, no clutch (K), no external compressor (R), camshaft-driven magneto (no M) (some Panzer II) HL108 TUKRM = High performance 10.8 litre, wet sump, clutch, belt-driven compressor, schnapper magneto (Sd.Kfz. 9) HL120 TRM = High performance 12.0 litre, dry sump, no clutch (K), schnapper magneto (Panzer III) Lists of Maybach engines Between 1934 and 1950, Maybach designed approximately 100 different types of HL engines, of which about 70 reached at least bench testing. Some were 'proof of concept' single-cylinder designs. Many of these engines were the direct result of orders for an engine of a specific power and physical size, originating from Wa.Prüf. 6 ('Weapons Testing [division] 6', responsible for tanks, armoured vehicles and motorized equipment) of the Heereswaffenamt. Fewer than twenty of these basic designs were actually manufactured as quantity series production engines, and are shown in the first table. Many of these engines were manufactured in their thousands by Maybach and its licensed manufacturers.
The second table lists Maybach engines which, although fully functioning, were only made in small quantities and often assigned to projects in the VK series ("research/experimental fighting vehicle"). Others in the second list were intended for tanks and other AFVs which never even left the drawing board, the so-called 'Paper Panzers' such as the Entwicklung ("development") series. Table 1: Maybach WWII engines which reached series production Table 2: Maybach research/test/experimental engines made in small quantities (under 100) Development of the HL210 and HL230 A proposed replacement for the Panzer IV had been considered since around 1937. What became the Tiger tank went through a series of specifications, with the final revision (VK 4501) being made in May 1941. Only a month later, the German armies invading Russia encountered the superior T-34 and KV-1: by December 1941 a specification for a 30-ton medium tank (which became the Panther) had been proposed as an immediate response to the Soviet tank threat. Development of the two tanks continued simultaneously: the Tiger prototype was demonstrated to Hitler on his birthday in April 1942, and the first of two Panther prototypes was ready in August 1942. The weight of the Tiger had increased considerably since its inception, and although it was now substantially heavier than the Panther medium tank, Maybach proposed fitting almost exactly the same 21-litre V-12 650 hp engine in both tanks. To save weight, the cylinder block was cast in aluminium alloy, with cast iron liners. The pistons were made of low-expansion aluminium-silicon alloy with a Si content of nearly 20%. The engine for the original 30-ton Panther project was the Maybach HL210 P30, while the 45-ton specification for the Tiger received the HL210 P45. The main visible difference was the arrangement of the coolant ducts exiting the cylinder heads, since the Panther and Tiger had different flows through their radiators. Quantity series production of the PzKpfw VI Tiger (Ausf. H) with the HL210 P45 engine began in August 1942, and it is possible that production of the Panther's HL210 P30 began at much the same time. The first battalions to be equipped with the Tigers were the 502nd Heavy Panzer Battalion on the Eastern Front near Leningrad, and the 501st Heavy Panzer Battalion, which was sent to Tunisia. Unfortunately, it swiftly became apparent that the Tiger was seriously underpowered, and the rush into production of the new engines meant that the inevitable design defects had not been ironed out. Nevertheless, when the new Tigers arrived in Russia, there was only one spare engine and one transmission for every 10 tanks. A critical lack of spare parts meant that most of them were out of commission within a short period. The first PzKpfw V Panthers (Ausf. D) were similarly ill-fated; series production began in January 1943, but when they arrived in Russia in the spring the faults (including the steering and leaking engine gaskets) were so egregious that the entire batch had to be returned to Germany. A special plant for rebuilding the Panthers was established near Berlin. A report by Oberstleutnant Reinhold, attached to the 4th Panzer Army during Operation Citadel in July 1943, stated: "Mechanical Deficiencies: The cause for motor failures is still not known. It is possibly traceable to the short run-in time and unskilled drivers. Motors were over-revved. This caused overheating and broken connecting rods. In many cases fuel pumps failed.
The pump seals leaked and pump membranes were defective. Leaks in oil line and fuel line connections increased the danger of fire." Another report, from Oberstleutnant Mildebrath for Heinz Guderian, the Generalinspekteur der Panzertruppen, in September 1943, about the 96 Panthers of the 2nd Battalion (Abteilung) of the 23rd Panzer Regiment, part of the 23rd Panzer Division: As before, the troops are still excited about the tactical capabilities of the Panther, but deeply disappointed that the majority of the Panthers can't engage in combat due to a miserable motor and other mechanical weaknesses. They would gladly give up some speed, if automotive reliability could be gained. Until the same automotive reliability as the Panzer III and IV is achieved, the Abteilung must be provided with extra repair parts, especially motors and final drives, and the necessary equipment and personnel to perform maintenance and repairs. At Kursk, 5–13 July 1943, 25 engines failed within 9 days (these would probably have been HL210 P30s); faults included piston rod bearing damage, broken con rods, damaged pistons, tears (cracks) in the cylinder sleeves, burnt cylinder head gaskets, and water in the exhaust. There was also high oil consumption, and spark plugs oiled up. Fuel lines were not sealed properly, leading to fires in the engine compartment. Final drives were too weak and had a high failure rate. The main clutch was fine except when used for towing, and the gearbox also functioned without problems; it always seems to have worked well, with very few problems ever reported. The running gear also functioned well. In the meantime, Maybach re-designed the HL210, replacing the alloy cylinder block with a traditional cast-iron one. Although there was no space for a physically larger engine, the cylinders were capable of being bored out without compromising the engine's integrity. The cast-iron HL230 engines weighed considerably more than the aluminium-block HL210. The new 23-litre HL230 engines were installed from May 1943 in the latest production Panthers as the P30, and in Tigers as the P45. Although they produced 700 PS at 3,000 rpm, from November 1943 they were governed at the factory to 2,500 rpm to increase engine life, which limited them to the same 650 PS as the HL210. Despite all the changes, the up-engined Panther Ausf. A with the HL230 P30 (which did not arrive in Russia until late 1943) suffered from over-heating, fires in the engine compartment and blown head gaskets. The head gasket problem was solved in August 1943 by pressing copper rings into grooves to seal the head. A new design of piston was fitted to the HL230 P45 which reduced the compression ratio slightly. In November 1943 a governor was installed in the HL230 P45 which limited the maximum revs to 2,500 rpm, and hence the maximum speed under full load. Some new and rebuilt motors from October had faulty bearings installed, causing frequent failures; improved bearings were installed in new HL230 P45s from January 1944. As a result of these improvements the Panther became much more reliable. In the Nachrichtenblatt der Panzertruppen ('Newssheet of the Panzer Troops') for March 1944, Guderian could include the combat report of an unnamed Panther battalion (possibly 1/1st Panzer Regiment) which had travelled an average of 700 kilometres per tank, with only 11 engines needing replacement.
And in a situation report to Hitler in late June 1944 on the Battle for Normandy, he commented on the Panther's propensity to catch fire, and the mismatch between the durability of the engine and the transmission: "However, the Panther burns astonishingly quickly. The lifespan of the Panther's engine (1400 to 1500 kilometers) is considerably higher than that of the Panther's final drives. A solution is urgently needed!" Such a solution was never found. A French post-war report, The Panther (1947), stated that although the engine could last for up to 1,500 km (averaging 1,000 km), the final drives only had a fatigue life of 150 km. The engine could be replaced in 8 hours by a trained mechanic (Unteroffizier) and 8 men with a tripod beam crane or Bergepanther. Maybach did not separate the production statistics of the 210 from the 230. Altogether, production of both types amounted to 153 in 1942, 4,346 in 1943, and 1,785 HL230s up to April 1944. In late April 1944 an Allied bombing raid put the Maybach factory out of action for six months. Production was transferred to the Auto Union factory in Chemnitz, which delivered 219 HL230 engines to Henschel in 1944. A total of 4,366 HL230s for Panthers and Tigers were delivered from April 1944 to 1945. Identifying HL210 and HL230 types HL210: three air filters; magnetos are located separately at the end of each camshaft; on the oil cooler side the oil filter sits at a relatively upright angle, approx. 70°. HL230: two air filters; magnetos are located centrally in a twin housing between the cylinder heads; the oil filter sits at approx. 45°. P30: the twin cast iron hot coolant ducts are symmetrical and visually similar, with separate feeds to the l.h. and r.h. radiators. P45: the coolant ducts are siamesed into a single pipe leading to the r.h. radiator. Despite their similar appearances, the P30 and P45 versions had numerous small differences. The 230 P30 could be swapped with the P45 from a Tiger, but 105 separate parts needed to be removed from the P45 and replaced by 107 parts from the P30. According to the head of Henschel's design office in 1945, the assembly shop felt that the engine layout of the P30 version of the HL230 had much better attributes and was better developed for assembly work than the HL230 P45 fitted to the Tiger Ausf. E. HL234 Maybach continued to develop increasingly powerful 4-stroke water-cooled gasoline-powered engines during the war. One such engine, which never reached series production, was the HL234, a development of the HL230. The intention was to develop a fuel-injected and supercharged engine, but only the fuel injection mechanism (by Bosch) was working by the end of the war. The engine displaced approximately 23.4 litres; the un-supercharged version was capable of developing 850 PS at 2,800 rpm, with maximum torque at 1,750 rpm, and 900 PS at 3,000 rpm. Only a few pilot fuel-injection engines were built. The fuel-injected and supercharged version (one engine completed) was expected to deliver around 1,200 PS. The main supercharger was to have been driven by its own twin-cylinder supercharged 1-litre engine of 70 PS mounted in the V of the HL234 (where the carburetors were located in a normally-aspirated engine), but this part of the design was never completed. By April 1943 the crankshaft bearings and connecting rods from the HL230 had also been strengthened, and the direct fuel injection system was working, but the supercharger was not yet fully developed.
Other improvements over the HL230 included water-cooled spark plugs, an improved intake manifold for better airflow, and an improved exhaust manifold. Instead of coil-type valve springs, the HL234 used much stronger Belleville washers, which reduced valve opening times. Problems with rubber seals and copper [head] gaskets were solved by adopting designs used in the Rolls-Royce Merlin engine. The first HL234 was planned to be delivered in early 1945 to the Kummersdorf proving ground and was proposed in January 1945 as an upgraded power plant for the Tiger II, but had not yet been tested in a tank by that date. It was also proposed for the Panther II at a later prototype stage, but the project was discontinued. Similarly, the E.50/E.75 tank series for which the engine was also intended was never built before the war's end, with only development of individual components taking place. Maybach also developed a smaller 12-litre version along similar lines to the HL234. It weighed 600 kg, developing 500 PS without a supercharger and 700 PS at 3,800 rpm supercharged, but like so many other German war-time projects, it never came to fruition. DSO8 An exception to Maybach's detailed naming system described above is the Maybach DSO8 V-12 engine fitted to early Sd.Kfz. 8s. It was derived from the DS7 (Double-Six, 7 litres) fitted in the Maybach Zeppelin luxury car from 1929, a 7.0-litre (6,971 cc) V12 engine that produced 150 horsepower at 2,800 rpm, and from the later 8-litre DS8 (bore × stroke = 92 × 100 mm; 7,977 cc, 486 cubic inches), which developed 200 bhp (149 kW; 203 PS) at 3,200 rpm. The engine block and pistons were made of light aluminium alloy with cast iron liners. A 1938 Maybach Zeppelin DS8, also fitted with a Maybach Variorex vacuum-shift eight-speed gearbox (both the first 8-speed and the first 8-speed manual gearbox), sold at auction in 2012 for 1.3 million euros. John Milsom mentions two versions of the DSO8: one with a power output of 150 bhp fitted to the prototype DB ZD5 as early as 1931, and one of 200 bhp found in the early production Sd.Kfz. 8 (DB s 7) from 1934 to 1936. A DSO8 developing 155 PS at 2,600 rpm was also recommended for export models of the Panzer III MKA ("mittlerer Kampfwagen für Ausland") in August 1937, since the proposed 200 PS Maybach HL76 was "slow to come into production", and may never have reached series production at all. The DSO8 also powered three Swedish Stridsvagn m/31 prototypes in the early 1930s. A 150 hp DSO8 is also found in the Strv FM/31 Landsverk L-30 dating from 1931; examples of both are preserved in the Arsenalen Försvarsfordonsmuseum in Strängnäs, central Sweden. Half-tracks German WWII half-track prime mover numbering may appear not to be strictly logical: the two smallest vehicles were introduced after most of the larger artillery tractors were in production. In ascending order of engine size, and therefore towing capacity, they were designed to tow the following: Sd.Kfz. 10 (1-ton), 3.7 cm PaK 36 & 5 cm PaK 38, and SP 2 cm Flak 30 Sd.Kfz. 11 (3-ton), 7.5 cm Pak 40 & 41, 10.5 cm leFH 18 and 15 cm sIG 33, 7.5 cm Flak L/60, standard and Nebelwerfer ammunition trailers Sd.Kfz. 6 (5-ton), 10.5 cm leFH 18, 7.5 cm Flak L/60. Mainly used as engineer/Pioneer equipment and personnel carrier Sd.Kfz. 7 (8-ton), 8.8 cm Flak, 10 cm K.18, 15 cm sFH 18, 15 cm Kanone 18 (2 separate loads); SP for 3.7 cm Flak & 2 cm Flakvierling Sd.Kfz. 8 (12-ton), 10.5 cm FlaK 38, 17 cm Kanone 18 and 21 cm Mörser 18 (2 separate loads) Sd.Kfz.
9 (18-ton), 24 cm Kanone 3 (5 separate loads), 35.5 cm Mörser (7 separate loads), 6 or 10-ton crane, or tank recovery As Maybach designed new, more powerful engines, each of these vehicle types received between two and four different engine models during production of the later batches. There remained the necessity of producing either spare parts or complete new engines just to keep the older vehicles running. See also Maybach HL230 GT 101, BMW-based turboshaft engine project for German AFVs Maybach I and II, high command bunkers near Berlin References Notes Citations Bibliography External links Photo gallery of various Maybach engine types at Fahrzeuge der Wehrmacht (in German), including NL38 TR, HL42 TRKM, HL54 TUKRM, HL62 TUK, HL85 TUKRM, HL90, HL108 TUKRM, HL120, HL230 P30 & P45, and fuel-injection HL295 fitted in post-war AMX-50 prototype Engines
List of WWII Maybach engines
[ "Physics", "Technology" ]
9,182
[ "Physical systems", "Machines", "Engines" ]
57,182,965
https://en.wikipedia.org/wiki/Archaeal%20initiation%20factors
Archaeal initiation factors are proteins that are used during the translation step of protein synthesis in archaea. The principal functions these proteins perform include ribosomal RNA/mRNA recognition, delivery of the initiator Met-tRNAiMet (methionine-bound tRNAi) to the 40S ribosomal subunit, and proofreading of the initiation complex. Conservation of archaeal initiation factors Of the three domains of life (archaea, eukaryotes, and bacteria), the number of archaeal translation initiation factors (TIFs) lies somewhere between that of eukaryotes and bacteria; eukaryotes have the largest number of TIFs, while bacteria, having streamlined the process, have only three. Archaeal initiation factors also show traits of both eukaryotic and bacterial initiation factors. Two core TIFs, IF1/IF1A and IF2/IF5B, are conserved across the three domains of life. There is also a semi-universal TIF, SUI1, found in all archaea and eukaryotes but only in certain bacterial species (YciH). In archaea and eukaryotes, this TIF helps correct identification of the initiation codon, while its function in bacteria is unknown. Shared between archaea and eukaryotes alone, the archaeal a/eIF2 (a trimer) and aIF6 are conserved in eukaryotes as the eIF2 (trimer) and eIF6 TIFs. Archaea may also carry homologs of eukaryotic eIF2B (the GTP-exchange factor for eIF2). However, only the α subunit is definitively identified, so it probably does not act as a GTP-exchange factor in archaea. There is also a homolog of eIF4A, but it does not seem to participate in translation initiation. List of initiation factors aIF1: SUI1 (eIF1) homolog. aIF1A: IF1/eIF1A homolog. Plays a role in occupying the ribosomal A site, helping the unambiguous placement of tRNAi in the P site of the large ribosomal subunit. aIF2: Trimeric, eIF2 homolog. Binds to the 40S small subunit of the ribosome to help guide the start of translation of mRNA into proteins. Can substitute for eIF2. aIF5A: EF-P/eIF5A homolog. Contains hypusine, just like the eukaryotic one. Actually an elongation factor. aIF5B: IF2/eIF5B homolog. Joins the small and large ribosomal subunits to form the complete (monomeric) mRNA-bound ribosome in the late stages of initiation. aIF6: eIF6 homolog. Keeps the two ribosomal subunits apart. References Proteins Prokaryotes
Archaeal initiation factors
[ "Chemistry", "Biology" ]
631
[ "Biomolecules by chemical classification", "Tree of life (biology)", "Prokaryotes", "Molecular biology", "Proteins", "Microorganisms" ]
57,184,567
https://en.wikipedia.org/wiki/Arlo%20Technologies
Arlo Technologies is an American company that makes wireless surveillance cameras. Prior to an initial public offering (IPO) on the New York Stock Exchange in August 2018, Arlo was a brand of such products by Netgear, which retained majority control after the IPO. According to the company, as of January 2022 it had shipped 21.6 million devices and had 5.82 million registered accounts and 877,000 paid accounts. History On February 6, 2018, Netgear announced that its board of directors had unanimously approved the separation of its Arlo business from Netgear. During the second quarter of 2018, Netgear's Arlo unit became a holding of Arlo Technologies, Inc. Netgear issued less than 20% of the Arlo common stock in the IPO, allowing it to retain majority control. The CEO of Arlo is Matthew McRae, who joined Netgear in October 2017 as senior vice president of strategy. Products Arlo makes products such as the Arlo Security Camera, as well as portable and baby-monitoring cameras. Arlo cameras are designed to save energy by using a low-power standby mode. Manufacturing Arlo manufacturing is outsourced to Foxconn and Pegatron. References External links 2018 initial public offerings Companies listed on the New York Stock Exchange Companies based in San Jose, California American companies established in 2018 Home automation companies Netgear Arlo
Arlo Technologies
[ "Technology" ]
301
[ "Netgear", "Home automation", "Wireless networking", "Home automation companies" ]
57,185,095
https://en.wikipedia.org/wiki/Prix%20Francoeur
The Prix Francoeur, or Francoeur Prize, was an award granted by the Institut de France (Académie des Sciences, Fondation Francoeur) to authors of works useful to the progress of pure and applied mathematics. Preference was given to young scholars or to geometers not yet established. It was established in 1882 and has since been discontinued. Prize winners 1882–1888 — Émile Barbier 1889–1890 — Maximilien Marie 1891–1892 — Augustin Mouchot 1893 — Guy Robin 1894 — J. Collet 1895 — Jules Andrade 1896 — Alphonse Valson 1897 — Guy Robin 1898 — Aimé Vaschy 1899 — Le Cordier 1900 — Edmond Maillet 1901 — Léonce Laugel 1902–1904 — Émile Lemoine 1905 — Xavier Stouff 1906–1912 — Émile Lemoine 1913–1914 — A. Claude 1915 — Joseph Marty 1916 — René Gateaux 1917 — Henri Villat 1918 — Paul Montel 1919 — Georges Giraud 1920–1921 — René Baire 1922 — Louis Antoine 1923 — Gaston Bertrand 1924 — Ernest Malo 1925 — Georges Valiron 1926 — Gaston Julia 1927 — Georges Cerf 1928 — Szolem Mandelbrojt 1929 — Paul Noaillon 1930 — Eugène Fabry 1931 — Jacques Herbrand 1932 — Henri Milloux 1933 — Paul Mentre 1934 — Jean Favard 1935 — André Weil 1936 — Claude Chevalley 1937 — Jean Leray 1938 — Jean Dieudonné 1939 — Marcel Brelot 1940 — Charles Ehresmann 1941 — Paul Vincensini 1942 — Paul Dubreil 1943 — René de Possel 1944 — No award 1945 — No award 1946 — Laurent Schwartz 1952 — No award 1957 — Jean-Pierre Serre 1962 — Jean-Louis Koszul 1967 — Jacques Neveu 1972 — Pierre Gabriel 1977 — Jean-Claude Tougeron 1982 — François Laudenbach 1987 — Jean-Louis Loday 1992 — Georges Skandalis See also List of mathematics awards References Mathematics awards French awards 1882 establishments in France
Prix Francoeur
[ "Technology" ]
408
[ "Science and technology awards", "Mathematics awards" ]
55,295,163
https://en.wikipedia.org/wiki/Lithium%20hexafluorogermanate
Lithium hexafluorogermanate is the inorganic compound with the formula Li2GeF6. It forms an off-white, deliquescent solid powder. When exposed to moisture, it readily hydrolyses, releasing hydrogen fluoride and germanium tetrafluoride gases. Reactions and applications Lithium hexafluorogermanate can be dissolved in a solution of hydrogen fluoride, forming a precipitate of lithium fluoride. It can be used as a densification aid in the sintering of gadolinium oxysulfide, and as a lithium salt additive in a lithium-ion battery electrolyte. References Lithium salts Fluorometallates Germanium(IV) compounds
Lithium hexafluorogermanate
[ "Chemistry" ]
153
[ "Lithium salts", "Salts" ]
55,295,948
https://en.wikipedia.org/wiki/1%2C1-Dimethyldiborane
1,1-Dimethyldiborane is the organoboron compound with the formula (CH3)2B(μ-H)2BH2. A pair of related 1,2-dimethyldiboranes are also known. It is a colorless gas that ignites in air. Formation The methylboranes were first prepared by H. I. Schlesinger and A. O. Walker in the 1930s. Methylboranes are formed by the reaction of diborane and trimethylborane. This reaction produces four different products of methyl-for-hydrogen substitution on diborane: 1-methyldiborane, 1,1-dimethyldiborane, 1,1,2-trimethyldiborane, and 1,1,2,2-tetramethyldiborane. Tetramethyllead reacts with diborane in a 1,2-dimethoxyethane solvent at room temperature to make a range of methyl-substituted diboranes, ending at trimethylborane but including 1,1-dimethyldiborane and trimethyldiborane. The other products of the reaction are hydrogen gas and lead metal. Other methods to form methyldiboranes include heating trimethylborane with hydrogen. Alternatively, trimethylborane reacts with borohydride salts in the presence of hydrogen chloride, aluminium chloride, or boron trichloride. If the borohydride is sodium borohydride, then methane is a side product; if the metal is lithium, then no methane is produced. Dimethylchloroborane and methyldichloroborane are also produced as gaseous products. When Cp2Zr(CH3)2 reacts with borane dissolved in tetrahydrofuran, a borohydro group inserts into the zirconium-carbon bond, and methyldiboranes are produced. In ether, dimethylcalcium reacts with diborane to produce dimethyldiborane and calcium borohydride: Ca(CH3)2 + 2 B2H6 → Ca(BH4)2 + B2H4(CH3)2 1,2-Dimethyldiborane slowly converts on standing to 1,1-dimethyldiborane. Gas chromatography can be used to determine the amounts of the methylboranes in a mixture. The order in which they elute is diborane, monomethyldiborane, trimethylborane, 1,1-dimethyldiborane, 1,2-dimethyldiborane, trimethyldiborane, and finally tetramethyldiborane. Selected properties 1,1-Dimethyldiborane has a dipole moment of 0.87 D. The predicted heat of formation is ΔH°f = −31 kcal/mol for the liquid and −25 kcal/mol for the gas. The heat of vaporisation was measured at 5.5 kcal/mol. Reactions At −78.5 °C, methyldiborane disproportionates slowly, first to diborane and 1,1-dimethyldiborane. In solution, methylborane is more stable against disproportionation than dimethylborane. 2 MeB2H5 → 1,1-Me2B2H4 + B2H6, K = 2.8 (Me = CH3) 3 [1,1-Me2B2H4] → 2 Me3B2H3 + B2H6, K = 0.00027 Trimethyldiborane partially disproportionates over a period of hours at room temperature to yield tetramethyldiborane and 1,2-dimethyldiborane; over a period of weeks, 1,1-dimethyldiborane appears as well. Gentle oxidation of 1,1-dimethyldiborane at 80 °C yields 2,5-dimethyl-1,3,4-trioxadiborolane, a volatile liquid that contains a ring of two boron and three oxygen atoms. An intermediate in this reaction is two molecules of dimethylboryl hydroperoxide, (CH3)2BOOH (CAS 41557-62-5). When methyldiborane is oxidised at around 150 °C, a similar substance, methyltrioxadiborolane, is produced. At the same time dimethyltrioxadiborolane and trimethylboroxine are also formed, along with hydrocarbons, diborane, hydrogen, and dimethoxyborane (dimethyl methylboronic ester). References Alkylboranes Gases
1,1-Dimethyldiborane
[ "Physics", "Chemistry" ]
1,022
[ "Statistical mechanics", "Gases", "Phases of matter", "Matter" ]
55,296,319
https://en.wikipedia.org/wiki/Decamethylsilicocene
Decamethylsilicocene, (C5Me5)2Si, is a group 14 sandwich compound. It is an example of a main-group cyclopentadienyl complex; these molecules are related to metallocenes but contain p-block elements as the central atom. It is a colorless, air sensitive solid that sublimes under vacuum. Synthesis The first synthesis of decamethylsilicocene was reported by Jutzi and coworkers in 1986. It involved reduction of bis(pentamethylcyclopentadienyl)silicon(IV) dichloride with two equivalents of sodium naphthalenide to generate decamethylsilicocene, naphthalene, and sodium chloride. Generation of the sterically crowded bis(pentamethylcyclopentadienyl)silicon(IV) dichloride required several steps, beginning with double deprotonation of (C5Me4H)2SiCl2 using tert-butyllithium, followed by treatment of the resultant (C5Me4Li)2SiCl2 with methyl iodide. Decamethylsilicocene is soluble in aprotic solvents such as hexane, benzene, and chlorinated solvents. Molecular weight determinations show that decamethylsilicocene exists as a monomer in benzene. The 1H NMR spectrum shows one sharp signal, and the 13C{1H} NMR spectrum shows two signals, one for the ring carbons and one for the methyl group carbons, consistent with the proposed averaged five-fold symmetric structure in solution and η5 coordination of the pentamethylcyclopentadienyl groups. A recent synthesis directly forms decamethylsilicocene through salt metathesis from an N-heterocyclic carbene-stabilized silylene. This synthetic route avoids the synthesis of the bis(pentamethylcyclopentadienyl)silicon(IV) dichloride starting material. In this synthesis, the NHC-stabilized silylene –2,6– was treated with the potassium salt of pentamethylcyclopentadiene at , followed by extraction of decamethylsilicocene into hexane at to remove the NHC and KCl byproducts. Structure and bonding The X-ray crystallographically determined structure of decamethylsilicocene contains two isomers in a 2:1 ratio. The major isomer adopts a Cs geometry reminiscent of a bent metallocene, with the cyclopentadienyl planes forming an angle of about 25° and the methyl groups staggered. In this isomer, the lone pair on silicon is described as stereochemically active and the distance from the silicon atom to each Cp* centroid is 2.12 Å. The minor isomer adopts a D5d geometry, the same as decamethylferrocene, with the cyclopentadienyl rings parallel to one another and the methyl groups staggered. The distance from the silicon atom to each Cp* centroid is 2.11 Å. The presence of two isomers is thought to be due to packing effects. Computational studies carried out on the parent silicocene, (C5H5)2Si, reveal a very small (~4 kJ/mol) energetic change upon distorting the molecule from the D5d geometry to either a C2v (bent, hydrogen atoms eclipsed) or Cs (bent, hydrogen atoms staggered) geometry. A qualitative molecular orbital diagram predicts that the HOMO would have silicon(3s)-cyclopentadienyl antibonding character and the LUMO would have silicon(3p)-cyclopentadienyl antibonding character. NBO calculations are consistent with the predictions from a qualitative molecular orbital diagram, showing antibonding character between the silicon and the cyclopentadienyl ligands in both the HOMO and the LUMO. Calculated NBO valence orbital occupation numbers suggest that significant bonding occurs between the cyclopentadienyl ligands and the silicon 3s, 3px and 3py orbitals.
In comparison with the carbocene congener, silicon is calculated to bond more strongly to the cyclopentadienyl ligands due to the greater radial extension of the 3p orbitals compared to 2p orbitals. Additionally, the energetic separation between the 3s and 3p orbitals is greater than for the 2s and 2p orbitals, leading to less s-p mixing, which decreases the favorability of distortion to a silylene geometry in which each cyclopentadienyl ligand is bound η1 to the silicon atom. Atoms in molecules (AIM) calculations are consistent with this view. A plot of the Laplacian of the electron density between the central silicon atom and one cyclopentadienyl carbon shows less localization of the charge towards the central atom as compared to equivalent calculations for carbocene. Reactivity Decamethylsilicocene reacts with aldehydes and ketones to give products with a silicon(IV) central atom and a carbon-carbon bond formed between two equivalents of the aldehyde or ketone. The two resultant alkoxides are coordinated to the silicon atom to form a five-membered ring. The coordination of the cyclopentadienyl ring changes from η5 to η1 over the course of these reactions. Similar changes in the hapticity of the pentamethylcyclopentadienyl rings occur when decamethylsilicocene reacts with carbon-nitrogen triple bonds. With organic cyanates and thiocyanates, carbon-carbon bond formation occurs and the resultant organic fragment is coordinated to the silicon atom through two anionic nitrogens. Decamethylsilicocene reacts with inorganic cyanides such as BrCN and Me3SiCN through oxidative addition to form a silicon(IV) product with a cyanide ligand along with either a Br or Me3Si ligand. Decamethylsilicocene can be protonated using strong acids such as . Upon protonation, one equivalent of pentamethylcyclopentadiene is eliminated to produce the pentamethylcyclopentadienylsilicon(II) cation with a . The pentamethylcyclopentadienylsilicon(II) cation reacts with a variety of cyclopentadienyl salts to produce substituted silicocenes. Silicocene derivatives synthesized this way include (Me5C5)((i-Pr)5C5)Si, ((Me5C5)(1,3,4-Me3H2C5)Si and (Me5C5)(H5C5)Si. The latter compound is stable at but begins to decompose at . Additionally, the pentamethylcyclopentadienylsilicon(II) cation can react with metal precursors to generate complexes with metal-silicon multiple bonds. References Sandwich compounds Silicon compounds
Decamethylsilicocene
[ "Chemistry" ]
1,484
[ "Organometallic chemistry", "Sandwich compounds" ]
55,297,477
https://en.wikipedia.org/wiki/Phosphirenium%20ion
Phosphirenium ions () are a series of organophosphorus compounds containing unsaturated three-membered ring phosphorus(V) heterocycles, in which σ*-aromaticity is believed to be present. Many of the salts containing phosphirenium ions have been isolated and characterized by NMR spectroscopy and X-ray crystallography. Synthesis The first series of phosphirenium ions was synthesized by reacting alkynes with methyl- or phenylphosphonous dichloride and aluminum trichloride. These reactions may be regarded as formal addition of "RClP+" to alkynes. [2+1]-Cycloaddition reactions between phosphaalkynes and chlorocarbene give phosphirenes, which serve as starting materials for the generation of phosphirenium species. Treatment of diphenylphosphine oxide with diphenylacetylene affords phosphirenium species. Phosphirenium ions can also be obtained from the reaction between phosphiranes and alkynes, where "RClP+" is formally transferred from alkenes to alkynes. Characterizations In the literature, 31P NMR spectra of phosphirenium ions show upfield shifts (−57.3 ppm when R1 = R2 = Y1 = CH3, Y2 = Cl). Large coupling constants J are also found in 1H NMR, and are comparable to those found in cyclopropenium ions. The first phosphirenium ion characterized by X-ray crystallography has the following structural formula: In the refined crystal structure, the average phosphorus–cyclic carbon distance has been found to be 1.731(12) Å, roughly corresponding to a bond order of 1.5. For comparison, typical single- and double-bond P–C distances are 1.86 Å and 1.68 Å, respectively. Reactivity Reminiscent of π-ligand exchange in coordination compounds, a phosphirenium ion may undergo alkyne exchange with other alkynes to give a mixture of phosphirenium species in equilibrium. Kinetically, elimination of alkyne from the cation is suggested to be the rate-determining step. In addition, the three-membered ring of the phosphirenium ion may be broken. Successive reactions with a suitable nucleophile are able to proceed at the electrophilic phosphorus atom. With the presence of an alkyne: With the presence of water or alcohol: Electrophilic B(C6F5)3 readily reacts with phosphinylalkynes at room temperature to give phosphirenium-borate zwitterions as intermediates, which then generate carbon-phosphorus σ bond activation products at higher temperature. The products are of interest to materials science. The dotted line in the product indicates a weak interaction between the boron and phosphorus atoms (see frustrated Lewis pair). σ*-Aromaticity A qualitative molecular orbital (MO) diagram of a phosphirenium ion can be obtained by linear combination of orbitals from a fragment and a bent RC=CR fragment. A low-lying σ* orbital from the former with ungerade symmetry interacts with both π and π* orbitals of the latter, creating a 2π-Hückel system, analogous to the one in the cyclopropenium ion. This effect has been named σ*-aromaticity. It is noteworthy that, unlike the case of the cyclopropenium ion, interaction between the filled σ orbital of the fragment and the π orbitals also leads to some degree of antiaromatic character. Therefore, the net 3-center conjugative effect is a combination of both a σ* stabilizing contribution and a σ destabilizing contribution. The electronegativity of each substituent on phosphorus plays a role, as more electron-donating ones give greater degrees of antiaromatic sigma destabilization.
This has been confirmed by Natural Population Analysis (NPA), where the energy changes of the reactions below were calculated with the interactions between the C–C double bond and phosphorus both turned on and off by manipulating Fock matrix elements: Destabilization energies were the differences between corresponding reactions: Destabilization energy = energy (1) − energy (2) Destabilization energies with different Y groups follow the order Y = F > OH > Cl > NH2 > Br > I > CH3 > H. This series is in accordance with the trend of electronegativity of the ligand atoms. Natural bond orbital (NBO) analysis provides possible Lewis structures of a molecule and has been carried out to assess the structure of . Similar to the aromatic cyclopropenium ion, the phosphorus analog shows a resonance between the structure with a carbon-carbon double bond (1, 72.02%) and the ones with a carbon-phosphorus double bond (3a and 3b, 7.88% combined). In addition, the ring-opened forms 2a and 2b together carry a combined weight of 9.08%. References Organophosphorus compounds Cations Quaternary phosphonium compounds
Phosphirenium ion
[ "Physics", "Chemistry" ]
1,078
[ "Matter", "Functional groups", "Organic compounds", "Organophosphorus compounds", "Cations", "Ions" ]
55,300,055
https://en.wikipedia.org/wiki/Jaswant%20Singh%E2%80%93Bhattacharji%20stain
Jaswant Singh–Bhattacharji stain, commonly referred to as JSB stain, is a rapid staining method for detection of malaria. It is useful for the diagnosis of malaria in thick smear samples of blood. The JSB stain is commonly used throughout India, but rarely used in other countries. Composition The JSB stain consists of two solutions which are used in sequence to stain various parts of the sample. The first solution consists of methylene blue, potassium dichromate, and sulfuric acid diluted in water. This solution is heated for several hours to oxidize the methylene blue. The second solution is eosin dissolved in water. See also Giemsa stain Wright stain References Microscopy Microbiology techniques Laboratory techniques Histopathology Histotechnology Staining dyes Staining Romanowsky stains
Jaswant Singh–Bhattacharji stain
[ "Chemistry", "Biology" ]
173
[ "Staining", "Microbiology techniques", "nan", "Microscopy", "Cell imaging", "Histopathology" ]
55,300,439
https://en.wikipedia.org/wiki/Corey-Pauling%20rules
In biochemistry, the Corey-Pauling rules are a set of three basic statements that govern the secondary structure of proteins, in particular, the CO-NH peptide link. They were originally proposed by Robert Corey and Linus Pauling. The rules are as follows: The atoms in a peptide link all lie on the same plane. The nitrogen, hydrogen, and oxygen atoms in a hydrogen bond are approximately in a straight line. The carbon-oxygen and nitrogen-hydrogen groups are all involved in hydrogen bonding. References Molecular geometry
Corey-Pauling rules
[ "Physics", "Chemistry" ]
105
[ "Molecular geometry", "Molecules", "Stereochemistry", "Stereochemistry stubs", "Matter" ]
55,305,101
https://en.wikipedia.org/wiki/Rope%20drive
A rope drive is a form of belt drive, used for mechanical power transmission. Rope drives use a number of circular section ropes, rather than a single flat or V-belt. Multiple rope drive The first multiple rope drive was a 9-rope drive of 200 bhp produced by Combe Barbour for their Falls Foundry, Belfast, in 1863. James Combe experimented first with circular ropes laid from leather strips, then from manila hemp. The idea of using rope drives had arisen from his earlier, 1856, experiments in using a rope drive together with an expanding vee pulley, an arrangement anticipating the later Van Doorne or Variomatic transmission. Combe Barbour were makers of textile machinery and differential speed gearing was often needed as part of the spinning process, where one shaft could be smoothly adjusted to run slightly faster or slower than another. Usage Rope drives were most widely used for power transmission in mills and factories, where a single mill engine would have a large rope drive to each floor, where lineshafts across each floor distribute power to the individual machines. These multiple rope drives replaced the earlier technique of a vertical wrought iron shaft with bevel gears at each floor. They remained in use for as long as mills were driven by central steam engines, rather than individual electric motors. Some were used with early electric motors, where these were large single motors driving a whole floor of machinery. A 1907 installation at Droylesden split the output of one motor between two floors with two new rope drives. Rope drives were rarely used in the internal-combustion era, although some were used with gas engines running on producer gas. A Yorkshire mill converted to use a 1,000 hp Allen diesel engine in 1938, and retained the rope drives. Shaft drives had often used gearing from the engines to increase their speed, and thus their power transmission. This was avoided for rope drives, as the rope's maximum useful speed could be achieved directly from the engine's flywheel, and the flexibility of the ropes would have led to backlash in any gearing. Power Power transmitted was typically 50 bhp per rope, for ropes working at 5,000 feet per minute. Groups of ropes could drive different floors and they also allowed individual ropes to be replaced separately, and without losing all power to a mill floor after a rope breakage. US practice sometimes used a single rope, looped between floors and tensioned by an idler pulley, but this system was not used in the UK, where each loop was tensioned between its two pulleys by making one of them movable. Rope drives were also cheaper than belts - around a quarter of the price. Factory power distribution The rope drives were placed in a large diagonal shaft at the side of the building, usually windowless and distinctively visible from outside the building. Rope drives required a larger such shaft than comparable belt or shaft drives. As the open shaft represented a channel for transmitting fires, unlike the narrow holes of a shaft drive, it needed careful fireproofing from the loom floors. It was sometimes arranged for large drives that the engine drove a set of horizontal ropes to a pulley on a layshaft or 'second motion shaft' alongside the engine house, then diagonally up through the shaft. References Flather, John Joseph. Rope-driving: a treatise on the transmission of power by means of fibrous ropes. 1895. Mechanical power transmission
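To make the figures in the Power section concrete, here is a worked check (an added illustration, using the standard conversion 1 hp = 33,000 ft·lbf/min): a rope transmitting 50 bhp at 5,000 feet per minute must carry an effective pull of

\[
F = \frac{P}{v} = \frac{50 \times 33{,}000\ \text{ft·lbf/min}}{5{,}000\ \text{ft/min}} = 330\ \text{lbf per rope},
\]

that is, a difference of about 330 pounds-force between the tight and slack sides of each rope.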
Rope drive
[ "Physics" ]
683
[ "Mechanical power transmission", "Mechanics" ]
36,668,180
https://en.wikipedia.org/wiki/Ethyl%20chloroacetate
Ethyl chloroacetate is an organic compound with the chemical formula ClCH2CO2C2H5, the ethyl ester of chloroacetic acid. It is used primarily in the chemical industry. It is used as a solvent for organic synthesis and as an intermediate in the production of pesticides (such as sodium fluoroacetate). Use An example of the use of this agent was in the synthesis of cinepazet. It has also been used in the synthesis of fenmetramide. References Alkylating agents Ethyl esters Acetate esters Organochlorides
Ethyl chloroacetate
[ "Chemistry" ]
100
[ "Alkylating agents", "Reagents for organic chemistry" ]
36,672,108
https://en.wikipedia.org/wiki/Vinciane%20Despret
Vinciane Despret (born November 12, 1959) is a Belgian philosopher of science. She is an associate professor at the University of Liège and also teaches at the . Career Vinciane Despret first graduated in philosophy before studying psychology. She graduated in 1991 and is now best known for having provided a reflexive account of ethologists who observed and interpreted the complex dance moves of babblers in the Negev. She is considered to be a foundational thinker in what has now become the field of animal studies. More generally, at the heart of her work lies the question of the relationship between observers and the observed during the conduct of scientific research. Despret affiliates herself with such critical thinkers in philosophy and anthropology of science as Isabelle Stengers, Donna Haraway and Bruno Latour. She undertakes a critical understanding of how science is fabricated, following scientists doing fieldwork and the way they actively create links and specific relationships to their objects of study. Personal life Despret was born in Brussels. She is married to Jean-Marie Lemaire, a psychiatrist who works partly in Turin. They have one child, Jules-Vincent. Selected works "The Body We Care for: Figures of Anthropo-zoo-genesis", Body & Society, 2004, 10 (2-3): 111–134. Our Emotional Makeup. Ethnopsychology and Selfhood. New York: Other Press, 2004. "Sheep do have opinions", in Bruno Latour & Peter Weibel (eds.), Making Things Public. Atmospheres of Democracy, 2006, Cambridge (Massachusetts, US): MIT Press, pp. 360–370. "Ecology and Ideology: The Case of Ethology", International Problems, vol. XXXIII.63 (3-4): 45–61. "The Becoming of Subjectivity in Animal Worlds", Subjectivity, 2008, 23 (1): 123–129. What Would Animals Say If We Asked the Right Questions?, translated by Brett Buchanan, 2016, Minneapolis (Minnesota, US): University of Minnesota Press. Co-authored books in French With Isabelle Stengers: , Paris, La Découverte (Les empêcheurs de penser en rond), 2011. With Jocelyne Porcher: , Arles, Actes sud, 2007. Notes References 1959 births Living people 21st-century Belgian philosophers Writers from Brussels Belgian women philosophers Ethologists Sociologists of science
Vinciane Despret
[ "Biology" ]
509
[ "Ethology", "Behavior", "Ethologists" ]
50,965,416
https://en.wikipedia.org/wiki/H3K27ac
H3K27ac is an epigenetic modification to the DNA packaging protein histone H3. It is a mark that indicates acetylation of the lysine residue at N-terminal position 27 of the histone H3 protein. H3K27ac is associated with higher activation of transcription and is therefore defined as an active enhancer mark. H3K27ac is found at both proximal and distal regions of transcription start sites (TSS). Lysine acetylation and deacetylation Proteins are typically acetylated on lysine residues, and the acetylation reaction relies on acetyl-coenzyme A as the acetyl group donor. In histone acetylation and deacetylation, histone proteins are acetylated and deacetylated on lysine residues in the N-terminal tail as part of gene regulation. Typically, these reactions are catalyzed by enzymes with histone acetyltransferase (HAT) or histone deacetylase (HDAC) activity, although HATs and HDACs can modify the acetylation status of non-histone proteins as well. The regulation of transcription factors, effector proteins, molecular chaperones, and cytoskeletal proteins by acetylation and deacetylation is a significant post-translational regulatory mechanism. These regulatory mechanisms are analogous to phosphorylation and dephosphorylation by the action of kinases and phosphatases. Not only can the acetylation state of a protein modify its activity, but there has been a recent suggestion that this post-translational modification may also crosstalk with phosphorylation, methylation, ubiquitination, sumoylation, and others for dynamic control of cellular signaling. In the field of epigenetics, histone acetylation (and deacetylation) have been shown to be important mechanisms in the regulation of gene transcription. Histones, however, are not the only proteins regulated by post-translational acetylation. Nomenclature H3K27ac indicates acetylation of lysine 27 on histone H3 protein subunit: Histone modifications The genomic DNA of eukaryotic cells is wrapped around special protein molecules known as histones. The complexes formed by the looping of the DNA are known as chromatin. The basic structural unit of chromatin is the nucleosome: this consists of the core octamer of histones (H2A, H2B, H3 and H4) as well as a linker histone and about 180 base pairs of DNA. These core histones are rich in lysine and arginine residues. The carboxyl (C) terminal end of these histones contribute to histone-histone interactions, as well as histone-DNA interactions. The amino (N) terminal charged tails are the site of the post-translational modifications, such as the one seen in H3K27ac. Epigenetic implications The posttranslational modification of histone tails by either histone-modifying complexes or chromatin remodelling complexes are interpreted by the cell and lead to the complex, combinatorial transcriptional output. It is thought that a histone code dictates the expression of genes by a complex interaction between the histones in a particular region. The current understanding and interpretation of histones comes from two large scale projects: ENCODE and the Epigenomic roadmap. The purpose of the epigenomic study was to investigate epigenetic changes across the entire genome. This led to chromatin states which define genomic regions by grouping the interactions of different proteins or histone modifications together. Chromatin states were investigated in Drosophila cells by looking at the binding location of proteins in the genome. Use of ChIP-sequencing revealed regions in the genome characterised by different banding.
Different developmental stages were profiled in Drosophila as well, with an emphasis placed on the relevance of histone modifications. A look into the data obtained led to the definition of chromatin states based on histone modifications. The human genome was annotated with chromatin states. These annotated states can be used as new ways to annotate a genome independently of the underlying genome sequence. This independence from the DNA sequence enforces the epigenetic nature of histone modifications. Chromatin states are also useful in identifying regulatory elements that have no defined sequence, such as enhancers. This additional level of annotation allows for a deeper understanding of cell-specific gene regulation. Poising with H3K4me1 Since the H3K27ac and H3K27me3 modifications occur at the same location on the histone tail, they antagonize each other. H3K27ac is often used to find active enhancers and poised enhancers by subtracting it from the broader enhancer mark H3K4me1, which covers all enhancers. Upregulation of genes Acetylation is usually linked to the upregulation of genes. This is the case for H3K27ac, which is an active enhancer mark. It is found in distal and proximal regions of genes. It is enriched at transcription start sites (TSS). H3K27ac shares a location with H3K27me3 and they interact in an antagonistic manner. Alzheimer's H3K27ac is enriched in the regulatory regions of genes implicated in Alzheimer's disease, including those in tau and amyloid neuropathology. Methods The histone mark acetylation can be detected in a variety of ways: 1. Chromatin Immunoprecipitation Sequencing (ChIP-sequencing) measures the amount of DNA enrichment once bound to a targeted protein and immunoprecipitated. It results in good optimization and is used in vivo to reveal DNA-protein binding occurring in cells. ChIP-Seq can be used to identify and quantify various DNA fragments for different histone modifications along a genomic region. 2. Micrococcal Nuclease sequencing (MNase-seq) is used to investigate regions that are bound by well-positioned nucleosomes. The micrococcal nuclease enzyme is employed to identify nucleosome positioning. Well-positioned nucleosomes are seen to have enrichment of sequences. 3. Assay for transposase accessible chromatin sequencing (ATAC-seq) is used to look into regions that are nucleosome-free (open chromatin). It uses hyperactive Tn5 transposase to highlight nucleosome localisation. See also Histone acetylation References Epigenetics Post-translational modification
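The nomenclature convention described above is regular enough to be parsed mechanically. The following is a minimal illustrative sketch (added here, not part of the article; the function name and the regular expression are assumptions that cover only common mark types):

# Minimal sketch: split histone-mark names such as "H3K27ac" into their
# components following the nomenclature described above. The regular
# expression is an assumption covering only common modification suffixes.
import re

MARK = re.compile(r"^(H[A-Z0-9]+?)([KRST])(\d+)(me[1-3]|ac|ub|ph)$")

def parse_mark(name: str) -> dict:
    m = MARK.match(name)
    if m is None:
        raise ValueError(f"unrecognized mark: {name}")
    histone, residue, position, mod = m.groups()
    return {"histone": histone, "residue": residue,
            "position": int(position), "modification": mod}

for mark in ("H3K27ac", "H3K4me1", "H3K27me3"):
    print(mark, "->", parse_mark(mark))
# H3K27ac -> {'histone': 'H3', 'residue': 'K', 'position': 27, 'modification': 'ac'}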
H3K27ac
[ "Chemistry" ]
1,414
[ "Post-translational modification", "Gene expression", "Biochemical reactions" ]
50,971,961
https://en.wikipedia.org/wiki/Alternative%20flatworm%20mitochondrial%20code
The alternative flatworm mitochondrial code (translation table 14) is a genetic code found in the mitochondria of Platyhelminthes and Nematodes. Code
   AAs    = FFLLSSSSYYY*CCWWLLLLPPPPHHQQRRRRIIIMTTTTNNNKSSSSVVVVAAAADDEEGGGG
   Starts = -----------------------------------M----------------------------
   Base1  = TTTTTTTTTTTTTTTTCCCCCCCCCCCCCCCCAAAAAAAAAAAAAAAAGGGGGGGGGGGGGGGG
   Base2  = TTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGG
   Base3  = TCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAG
Bases: adenine (A), cytosine (C), guanine (G) and thymine (T) or uracil (U). Amino acids: Alanine (Ala, A), Arginine (Arg, R), Asparagine (Asn, N), Aspartic acid (Asp, D), Cysteine (Cys, C), Glutamic acid (Glu, E), Glutamine (Gln, Q), Glycine (Gly, G), Histidine (His, H), Isoleucine (Ile, I), Leucine (Leu, L), Lysine (Lys, K), Methionine (Met, M), Phenylalanine (Phe, F), Proline (Pro, P), Serine (Ser, S), Threonine (Thr, T), Tryptophan (Trp, W), Tyrosine (Tyr, Y), Valine (Val, V) Differences from the standard code Systematic range and comments Platyhelminthes (flatworms) and Nematoda (roundworms). Code 14 differs from code 9 (the echinoderm and flatworm mitochondrial code) only by translating UAA to Tyr rather than STOP. A study in 2000 found no evidence that the codon UAA codes for Tyr in the flatworms, but other opinions exist. There are very few GenBank records that are translated with code 14, but a test translation shows that re-translating these records with code 9 can cause premature terminations. More recently, UAA has been found to code for tyrosine in the nematodes Radopholus similis and Radopholus arabocoffeae. See also List of genetic codes References Molecular genetics Gene expression Protein biosynthesis
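Because the table above is simply a 64-entry lookup keyed by codons enumerated in TCAG order, it can be implemented directly. A minimal sketch (an added illustration; the demo sequence is invented):

# Minimal sketch: build translation table 14 from the AAs string above
# (codons enumerated in TCAG order, base 1 slowest) and translate a
# short demo sequence.
BASES = "TCAG"
AAS = "FFLLSSSSYYY*CCWWLLLLPPPPHHQQRRRRIIIMTTTTNNNKSSSSVVVVAAAADDEEGGGG"

CODON_TABLE = {
    b1 + b2 + b3: aa
    for aa, (b1, b2, b3) in zip(
        AAS, ((x, y, z) for x in BASES for y in BASES for z in BASES)
    )
}

def translate(dna: str) -> str:
    codons = (dna[i:i + 3] for i in range(0, len(dna) - 2, 3))
    return "".join(CODON_TABLE[c] for c in codons)

# TAA is read as Tyr (Y) rather than STOP under this code:
print(translate("ATGTAAAGAAGG"))  # -> "MYSS"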
Alternative flatworm mitochondrial code
[ "Chemistry", "Biology" ]
658
[ "Protein biosynthesis", "Gene expression", "Molecular genetics", "Biosynthesis", "Cellular processes", "Molecular biology", "Biochemistry" ]
58,625,602
https://en.wikipedia.org/wiki/Angela%20Olinto
Angela Villela Olinto (born July 19, 1961) is an American astroparticle physicist who is the provost of Columbia University. Previously, she served as the Albert A. Michelson Distinguished Service Professor at the University of Chicago as well as the dean of the Physical Sciences Division. Her current work is focused on understanding the origin of high-energy cosmic rays, gamma rays, and neutrinos. Early life and education Olinto was born in Boston, Massachusetts, during her father's graduate studies at the Massachusetts Institute of Technology. The family moved back to Rio de Janeiro, Brazil, when she was a toddler. She lived in Rio and Brasilia and received her bachelor's degree in physics from Pontificia Universidade Catolica in 1981. As she was finishing her undergraduate studies, she became ill with what was later diagnosed as polymyositis. She pursued graduate studies at the Massachusetts Institute of Technology and received a Doctor of Philosophy in astrophysics in 1987. Career After her Ph.D., Olinto joined the Fermilab Theoretical Astrophysics Group as a postdoc. From Fermilab, Olinto moved to the University of Chicago where she became the first tenured woman in the Department of Astronomy and Astrophysics. She also has an appointment at the Enrico Fermi Institute and the Kavli Institute for Cosmological Physics at the University of Chicago. She served as chair of the Department of Astronomy and Astrophysics from 2003-2006 and again from 2012-2017. In 2006, she received the Chaire d’Excellence Award from the Agence Nationale de la Recherche and served as visiting professor in the Laboratoire d’AstroParticule et Cosmologie (APC). In 2018, she became the first female dean of the Physical Sciences Division at the University of Chicago. Olinto has given over 500 lectures worldwide and published over 250 papers. On April 1, 2024, she joined Columbia University as provost. Research Throughout Olinto's career, she has made theoretical and experimental contributions to astroparticle physics, including contributions to the study of the structure of neutron stars, inflationary theory, the origin and evolution of cosmic magnetic fields, the nature of dark matter, and the origin of the highest energy cosmic particles: cosmic rays, gamma-rays, and neutrinos. Olinto emerged as a leader of the science behind the 3,000 km2 Pierre Auger Observatory in Malargue, Argentina, built and operated by a 19-country collaboration. Her group pioneered in-depth studies of the physics and astrophysics of ultra-high energy cosmic rays (UHECR), including the propagation and neutrino production of UHE nuclei and acceleration models based on newborn pulsars. Starting in 2012, Olinto served as the United States principal investigator of the JEM-EUSO mission (Extreme Universe Space Observatory on board the Japanese Experiment Module of the International Space Station), an international collaboration involving 16 countries to discover the origin of the highest energy cosmic rays. Olinto is the principal investigator of EUSO-SPB (Extreme Universe Space Observatory on a Super Pressure Balloon), a series of NASA super-pressure balloon missions. EUSO-SPB1 flew in April 2017 with a Fluorescence Telescope developed for JEM-EUSO. EUSO-SPB2 combines a more sensitive Fluorescence Telescope and a novel Cherenkov Telescope designed to search for up-going tau showers produced by astrophysical tau neutrinos. EUSO-SPB2 is scheduled to fly from Wānaka, New Zealand, during the spring 2023 campaign.
Starting in 2017, Olinto has served as principal investigator for POEMMA (Probe of Extreme Multi-Messenger Astrophysics), providing the conceptual design for the NASA space mission. The study was presented to the Astronomy and Astrophysics 2020 Decadal Survey. EUSO-SPB2 is a pathfinder for the POEMMA mission. Awards and honors Elected to the National Academy of Sciences in 2021. Elected to the American Academy of Arts and Sciences in 2021. Elected to the Academia Brasileira de Ciencias in 2021. Albert A. Michelson Distinguished Service Professor in the Department of Astronomy and Astrophysics and the College, The University of Chicago. (2017) Faculty Award for Excellence in Graduate Teaching and Mentoring, The University of Chicago. (2014-2015) Homer J. Livingston Professor in the Department of Astronomy and Astrophysics and the College, The University of Chicago. (2013–2016) Hess Lecturer of the 33rd International Cosmic Ray Conference. (2013) Elected Fellow of the American Association for the Advancement of Science. (2012) Awarded the Llewellyn John and Harriet Manchester Quantrell Award for Excellence in Undergraduate Teaching, The University of Chicago (2011) Awarded Chaire d’Excellence of the French Agence Nationale de la Recherche. (2006) Speaker Award of the Particles and Nuclei International Conference (PANIC 05). (2005) Convocation Speaker for the 478th Convocation at the University of Chicago. (2004) Elected Fellow of the American Physical Society. (2001) Awarded the Arthur H. Compton Lecturer, Enrico Fermi Institute, The University of Chicago. (1991) Personal life Olinto is married to classical guitarist Sérgio Assad. References Astroparticle physics University of Chicago faculty Fermilab 1961 births Pontifical Catholic University of Rio de Janeiro alumni Scientists from Boston Academics from Boston Fellows of the American Physical Society 21st-century American physicists Brazilian physicists American women physicists Living people MIT Center for Theoretical Physics people American women academics Members of the United States National Academy of Sciences 21st-century American women scientists
Angela Olinto
[ "Physics" ]
1,155
[ "Astroparticle physics", "Particle physics", "Astrophysics" ]
58,628,281
https://en.wikipedia.org/wiki/Jean%20Lecomte
Jean Lecomte (August 5, 1898 - March 28, 1979) was a French physicist, researcher and professor of physics at CNRS. Career In 1919, Lecomte started working in the laboratory of physical research at the Sorbonne in Paris. Lecomte presented his doctoral thesis in 1924 on localized vibrations in molecules. He was one of the founding members of the European Congress on Molecular Spectroscopy (EUCMOS), together with the French Nobel Prize-winning physicist Alfred Kastler (Paris) and the German physicist Reinhard Mecke (Konstanz). Lecomte was elected as a member of the French Academy of Sciences (Physics Section) in 1959 and as president of the French Association for the Advancement of Science (L’Association française pour l’avancement des sciences) in 1968. He authored several books on infrared spectroscopy. References 1898 births 1979 deaths French physicists Spectroscopists
Jean Lecomte
[ "Physics", "Chemistry" ]
185
[ "Physical chemists", "Spectrum (physical sciences)", "Analytical chemists", "Spectroscopists", "Spectroscopy" ]
58,629,308
https://en.wikipedia.org/wiki/Cooling%20and%20heating%20%28combinatorial%20game%20theory%29
In combinatorial game theory, cooling, heating, and overheating are operations on hot games to make them more amenable to the traditional methods of the theory, which was originally devised for cold games in which the winner is the last player to have a legal move. Overheating was generalised by Elwyn Berlekamp for the analysis of Blockbusting. Chilling (or unheating) and warming are variants used in the analysis of the endgame of Go. Cooling and chilling may be thought of as a tax on the player who moves, making them pay for the privilege of doing so, while heating, warming and overheating are operations that more or less reverse cooling and chilling. Basic operations: cooling, heating The cooled game $G_t$ ("$G$ cooled by $t$") for a game $G$ and a (surreal) number $t$ is defined by
\[
G_t = \left\{ G^L_t - t \,\middle|\, G^R_t + t \right\},
\]
unless $G_{t'}$ is infinitesimally close to some number $m$ for some $t' < t$, in which case $G_t = m$. The amount $t$ by which $G$ is cooled is known as the temperature; the minimum $t$ for which $G_t$ is infinitesimally close to $m$ is known as the temperature of $G$; $G$ is said to freeze to $m$; $m$ is the mean value (or simply mean) of $G$. Heating is the inverse of cooling and is defined as the "integral"
\[
\int^t G =
\begin{cases}
G & \text{if } G \text{ is a number,} \\
\left\{ \int^t G^L + t \,\middle|\, \int^t G^R - t \right\} & \text{otherwise.}
\end{cases}
\]
Multiplication and overheating Norton multiplication is an extension of multiplication to a game $G$ and a positive game $U$ (the "unit") defined by
\[
G.U =
\begin{cases}
\underbrace{U + U + \cdots + U}_{G\ \text{copies}} & \text{if } G \text{ is a non-negative integer,} \\
-\left((-G).U\right) & \text{if } G \text{ is a negative integer,} \\
\left\{ G^L.U + (U + \Delta) \,\middle|\, G^R.U - (U + \Delta) \right\} & \text{otherwise,}
\end{cases}
\]
where $\Delta$ ranges over the incentives of $U$. The incentives of a game $G$ are defined as its left incentives $G^L - G$ and its right incentives $G - G^R$. Overheating is an extension of heating used in Berlekamp's solution of Blockbusting, where $G$ overheated from $s$ to $t$, written $\int_s^t G$, is defined for arbitrary games $G$ with $s > 0$ as
\[
\int_s^t G =
\begin{cases}
G.s & \text{if } G \text{ is an integer,} \\
\left\{ \int_s^t G^L + t \,\middle|\, \int_s^t G^R - t \right\} & \text{otherwise.}
\end{cases}
\]
Winning Ways also defines overheating of a game $G$ by a positive game $t$, as
\[
\int_0^t G = \left\{ \int_0^t G^L + t \,\middle|\, \int_0^t G^R - t \right\}.
\]
Note that in this definition numbers are not treated differently from arbitrary games, and that the "lower bound" 0 distinguishes this from the previous definition by Berlekamp. Operations for Go: chilling and warming Chilling is a variant of cooling by $1$ used to analyse the endgame of Go and is defined by
\[
f(G) =
\begin{cases}
n & \text{if } G \text{ is equal to the number } n \text{ or to } n{*}, \\
\left\{ f(G^L) - 1 \,\middle|\, f(G^R) + 1 \right\} & \text{otherwise.}
\end{cases}
\]
This is equivalent to cooling by $1$ when $G$ is an "even elementary Go position in canonical form". Warming is a special case of overheating, namely $\int_{1*}^{1}$, normally written simply as $\int$, which inverts chilling when $G$ is an "even elementary Go position in canonical form". In this case the previous definition simplifies to the form
\[
\int G =
\begin{cases}
G & \text{if } G \text{ is an even integer,} \\
G + {*} & \text{if } G \text{ is an odd integer,} \\
\left\{ \int G^L + 1 \,\middle|\, \int G^R - 1 \right\} & \text{otherwise.}
\end{cases}
\]
References Combinatorial game theory
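As a worked example of cooling (added here for illustration): consider the switch $G = \{2 \mid 0\}$. Cooling gives

\[
G_t = \left\{\, 2 - t \,\middle|\, 0 + t \,\right\},
\]

which at $t = 1$ becomes $\{1 \mid 1\} = 1{*}$, infinitesimally close to the number $1$; hence $G_t = 1$ for all $t > 1$. The temperature of $G$ is therefore $1$ and its mean value is $1$.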
Cooling and heating (combinatorial game theory)
[ "Mathematics" ]
454
[ "Recreational mathematics", "Combinatorics", "Game theory", "Combinatorics stubs", "Combinatorial game theory" ]
42,297,996
https://en.wikipedia.org/wiki/Protein%20chemical%20shift%20prediction
Protein chemical shift prediction is a branch of biomolecular nuclear magnetic resonance spectroscopy that aims to accurately calculate protein chemical shifts from protein coordinates. Protein chemical shift prediction was first attempted in the late 1960s using semi-empirical methods applied to protein structures solved by X-ray crystallography. Since that time protein chemical shift prediction has evolved to employ much more sophisticated approaches including quantum mechanics, machine learning and empirically derived chemical shift hypersurfaces. The most recently developed methods exhibit remarkable precision and accuracy. Protein chemical shifts NMR chemical shifts are often called the mileposts of nuclear magnetic resonance spectroscopy. Chemists have used chemical shifts for more than 50 years as highly reproducible, easily measured parameters to map out the covalent structure of small organic molecules. Indeed, the sensitivity of NMR chemical shifts to the type and character of neighbouring atoms, combined with their reasonably predictable tendencies, has made them invaluable for both deciphering and describing the structure of thousands of newly synthesized or newly isolated compounds. The same sensitivity to a variety of important protein structural features has made protein chemical shifts equally valuable to protein chemists and biomolecular NMR spectroscopists. In particular, protein chemical shifts are sensitive not only to substituent or covalent atom effects (such as electronegativity, redox states or ring currents) but they are also sensitive to backbone torsion angles (i.e. secondary structure), hydrogen bonding, local atomic motions and solvent accessibility. Importance of protein chemical shift prediction Predicted or estimated protein chemical shifts can be used to assist with the chemical shift assignment process. This is especially true if a similar (or identical) protein structure has been solved by X-ray crystallography. In this case, the three-dimensional structure can be used to estimate what the NMR chemical shifts should be and thereby simplify the process of assigning the experimentally observed chemical shifts. Predicted/estimated protein chemical shifts can also be used to identify incorrect or mis-assignments, to correct mis-referenced or incorrectly referenced chemical shifts, to optimize protein structures via chemical shift refinement and to identify the relative contributions of different electronic or geometric effects to nucleus-specific shifts. Protein chemical shifts can also be used to identify secondary structures, to estimate backbone torsion angles, to determine the location of aromatic rings, to assess cysteine oxidation states, to estimate solvent exposure and to measure backbone flexibility. Progress in chemical shift prediction programs Significant progress in chemical shift prediction has been made through continuous improvements in our understanding of the key physico-chemical factors contributing to chemical shift changes. These improvements have also been helped along through significant computational advancements and the rapid expansion of biomolecular chemical shift databases. Over the past four decades, at least three different methods for calculating or predicting protein chemical shifts have emerged.
The first is based on using sequence/structure alignment against protein chemical shift databases, the second is based on directly calculating shifts from atomic coordinates, and the third is based on using a combination of the two approaches. Predicting shifts via sequence homology: these methods are based on the simple observation that similar protein sequences share similar structures and similar chemical shifts. Predicting shifts from coordinate data / structure: Semi-classical methods: employ empirical equations derived from classical physics and experimental data. Quantum mechanical (QM) methods: employ density functional theory (DFT). Empirical methods: rely on using chemical shift "hypersurfaces" or related "structure/shift" tables. Hybrid methods: combining the above two methods. The emergence of hybrid prediction methods By early 2000, several research groups realized that protein chemical shifts could be more efficiently and accurately calculated by combining different methods together, as shown in Figure 1. This led to the development of several programs and web servers that rapidly calculate protein chemical shifts when provided with protein coordinate data. These "hybrid" programs, along with some of their features and URLs, are listed below in Table 1. Summary of protein chemical shift prediction programs Performance comparison of modern protein chemical shift prediction programs This table (Figure 2) lists the correlation coefficients between the experimentally observed backbone chemical shifts and the calculated/predicted backbone shifts for different chemical shift predictors, using an identical test set of 61 test proteins. Coverage and speed Different methods have different levels of coverage and rates of calculation. Some methods only calculate or predict chemical shifts for backbone atoms (6 atom types). Some calculate chemical shifts for backbone and certain side chain atoms (C and N only) and still others are able to calculate shifts for all atoms (40 atom types). For chemical shift refinement there is a need for rapid calculation, as thousands of structures are generated during a molecular dynamics or simulated annealing run and their chemical shifts must be calculated equally rapidly. All the computational speed tests for SPARTA, SPARTA+, SHIFTS, CamShift, SHIFTX and SHIFTX2 were performed on the same computer using the same set of proteins. The calculation speed reported for PROSHIFT is based on the response rate of its web server. See also RefDB (chemistry) SHIFTCOR References Nuclear magnetic resonance Nuclear magnetic resonance software Protein methods Protein structure Biophysics Scientific techniques Chemistry software
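The benchmark figures above are plain correlation coefficients between observed and predicted shifts. A minimal sketch of how such a comparison can be computed (added illustration only; the per-atom shift values below are invented):

# Minimal sketch: Pearson correlation between observed and predicted
# chemical shifts for one nucleus type, as used in predictor benchmarks.
# The shift values below are invented for illustration.
from statistics import mean

def pearson(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    varx = sum((x - mx) ** 2 for x in xs)
    vary = sum((y - my) ** 2 for y in ys)
    return cov / (varx * vary) ** 0.5

observed  = [8.21, 8.05, 7.92, 8.44, 8.10]   # experimental 1H shifts (ppm)
predicted = [8.18, 8.11, 7.85, 8.39, 8.02]   # predictor output (ppm)

print(f"r = {pearson(observed, predicted):.3f}")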
Protein chemical shift prediction
[ "Physics", "Chemistry", "Biology" ]
1,037
[ "Biochemistry methods", "Applied and interdisciplinary physics", "Nuclear magnetic resonance", "Chemistry software", "Nuclear magnetic resonance software", "Protein methods", "Protein biochemistry", "Biophysics", "Structural biology", "Nuclear physics", "nan", "Protein structure" ]
42,299,134
https://en.wikipedia.org/wiki/Epigenome%20editing
Epigenome editing or epigenome engineering is a type of genetic engineering in which the epigenome is modified at specific sites using engineered molecules targeted to those sites (as opposed to whole-genome modifications). Whereas gene editing involves changing the actual DNA sequence itself, epigenetic editing involves modifying and presenting DNA sequences to proteins and other DNA binding factors that influence DNA function. By "editing" epigenomic features in this manner, researchers can determine the exact biological role of an epigenetic modification at the site in question. The engineered proteins used for epigenome editing are composed of a DNA binding domain that targets specific sequences and an effector domain that modifies epigenomic features. Currently, three major groups of DNA binding proteins have been predominantly used for epigenome editing: zinc finger proteins, Transcription Activator-Like Effectors (TALEs) and nuclease-deficient Cas9 fusions (CRISPR). General concept Comparing genome-wide epigenetic maps with gene expression has allowed researchers to assign either activating or repressing roles to specific modifications. The importance of DNA sequence in regulating the epigenome has been demonstrated by using DNA motifs to predict epigenomic modification. Further insights into mechanisms behind epigenetics have come from in vitro biochemical and structural analyses. Using model organisms, researchers have been able to describe the role of many chromatin factors through knockout studies. However, knocking out an entire chromatin modifier has massive effects on the entire genome, which may not be an accurate representation of its function in a specific context. As one example of this, DNA methylation occurs at repeat regions, promoters, enhancers, and gene bodies. Although DNA methylation at gene promoters typically correlates with gene repression, methylation at gene bodies is correlated with gene activation, and DNA methylation may also play a role in gene splicing. The ability to directly target and edit individual methylation sites is critical to determining the exact function of DNA methylation at a specific site. Epigenome editing is a powerful tool that allows this type of analysis. For site-specific DNA methylation editing as well as for histone editing, genome editing systems have been adapted into epigenome editing systems. In short, genome-homing proteins with engineered or naturally occurring nuclease functions for gene editing can be mutated and adapted into pure delivery systems. An epigenetic modifying enzyme or domain can be fused to the homing protein, and local epigenetic modifications can be altered upon protein recruitment. Exceptionally, for DNA methylation the homing domain itself can be enough to interfere with normal epigenetic processes and lead to targeted epigenetic editing. Targeting proteins TALE The Transcription Activator-Like Effector (TALE) protein recognizes specific DNA sequences based on the composition of its DNA binding domain. This allows the researcher to construct different TALE proteins to recognize a target DNA sequence by editing the TALE's primary protein structure. The binding specificity of this protein is then typically confirmed using Chromatin Immunoprecipitation (ChIP) and Sanger sequencing of the resulting DNA fragment. This confirmation is still required in all TALE sequence recognition research. When used for epigenome editing, these DNA binding proteins are attached to an effector protein.
Effector proteins that have been used for this purpose include Ten-eleven translocation methylcytosine dioxygenase 1 (TET1), Lysine (K)-specific demethylase 1A (LSD1) and Calcium and integrin binding protein 1 (CIB1). Zinc finger proteins The use of zinc finger-fusion proteins to recognize sites for epigenome editing has been explored as well. Maeder et al. have constructed a ZF-TET1 protein for use in DNA demethylation. These zinc finger proteins work similarly to TALE proteins in that they are able to bind to sequence-specific sites on the DNA based on their protein structure, which can be modified. Chen et al. have successfully used a zinc finger DNA binding domain coupled with the TET1 protein to induce demethylation of several previously silenced genes. Kungulovski and Jeltsch successfully used ZFP-guided deposition of DNA methylation to cause gene silencing, but the DNA methylation and silencing were lost when the trigger signal stopped. The authors suggest that for stable epigenetic changes, there must be either multiple depositions of DNA methylation or related epigenetic marks, or long-lasting trigger stimuli. ZFP epigenetic editing has shown potential to treat various neurodegenerative diseases. CRISPR-Cas The Clustered Regularly Interspaced Short Palindromic Repeat (CRISPR)-Cas system functions as a DNA site-specific nuclease. In the well-studied type II CRISPR system, the Cas9 nuclease associates with a chimera composed of tracrRNA and crRNA. This chimera is frequently referred to as a guide RNA (gRNA). When the Cas9 protein associates with a DNA region-specific gRNA, the Cas9 cleaves DNA at targeted DNA loci. However, when the D10A and H840A point mutations are introduced, a catalytically dead Cas9 (dCas9) is generated that can bind DNA but will not cleave. The dCas9 system has been utilized for targeted epigenetic reprogramming in order to introduce site-specific DNA methylation. By fusing the DNMT3a catalytic domain with the dCas9 protein, dCas9-DNMT3a is capable of achieving targeted DNA methylation of a region specified by the guide RNA present. Similarly, dCas9 has been fused with the catalytic core of the human acetyltransferase p300. dCas9-p300 successfully catalyzes targeted acetylation of histone H3 lysine 27. Alternatively, the dCas9 protein alone is sufficient to physically interfere with normal processes which maintain DNA methylation at the site to which it is targeted in dividing cells; this results in targeted DNA demethylation. The primary benefit of this approach is that it is free of epigenetic-modifying enzymes, which may affect epigenetic marks over large distances and act independently throughout the genome despite being tethered to a targeted dCas9 protein, often leading to widespread off-target effects. A variant in CRISPR epigenome editing (called FIRE-Cas9) allows the changes made to be reversed, in case something went wrong. CRISPRoff is a dead Cas9 fusion protein that can be used to heritably silence the gene expression of "most genes" and allows for reversible modifications. Commonly used effector proteins TET1 induces demethylation of cytosine at CpG sites. This protein has been used to activate genes that are repressed by CpG methylation and to determine the role of individual CpG methylation sites.
It is widely believed that targeted demethylation is typically better achieved by dCas9 alone (by targeted interference with the normal DNA methylation machinery), as introduction of dCas9-TET into cells leads to widespread off-target activity of the over-expressed TET enzyme. LSD1 induces the demethylation of H3K4me1/2, which also causes an indirect effect of deacetylation on H3K27. This effector can be used on histones in enhancer regions, which can change the expression of neighboring genes. CIB1 is the binding partner of the light-sensitive cryptochrome CRY2; in this system it is fused to the TALE protein. A second protein contains the cryptochrome (CRY2) fused with a chromatin/DNA modifier (e.g. SID4X). CRY2 is able to interact with CIB1 when the cryptochrome has been activated by illumination with blue light. The interaction allows the chromatin modifier to act on the desired location. This means that the modification can be performed in an inducible and reversible manner, which reduces long-term secondary effects that would be caused by constitutive epigenetic modification. Applications Studying enhancer function and activity Editing of gene enhancer regions in the genome through targeted epigenetic modification has been demonstrated by Mendenhall et al. (2013). This study utilized a TALE-LSD1 effector fusion protein in order to target enhancers of genes, to induce enhancer silencing in order to deduce enhancer activity and gene control. Targeting specific enhancers followed by locus-specific RT-qPCR allows for the genes affected by the silenced enhancer to be determined. Alternatively, inducing enhancer silencing in regions upstream of genes allows for gene expression to be altered. RT-qPCR can then be utilized to study effects of this on gene expression. This allows for enhancer function and activity to be studied in detail. Determining the function of specific methylation sites It is important to understand the role specific methylation sites play in regulating gene expression. To study this, one research group used a TALE-TET1 fusion protein to demethylate a single CpG methylation site. Although this approach requires many controls to ensure specific binding to target loci, a properly performed study using it can determine the biological function of a specific CpG methylation site. Determining the role of epigenetic modifications directly Epigenetic editing using an inducible mechanism offers a wide array of potential uses to study epigenetic effects in various states. One research group employed an optogenetic two-hybrid system which integrated the sequence-specific TALE DNA-binding domain with the light-sensitive CRY2/CIB1 cryptochrome pair. Once expressed in the cells, the system was able to inducibly edit histone modifications and determine their function in a specific context. Functional engineering Targeted regulation of disease-related genes may enable novel therapies for many diseases, especially in cases where adequate gene therapies are not yet developed or are inappropriate. While transgenerational and population level consequences are not fully understood, it may become a major tool for applied functional genomics and personalized medicine. As with RNA editing, it does not involve genetic changes and their accompanying risks. One example of a potential functional use of epigenome editing was described in 2021: repressing Nav1.7 gene expression via CRISPR-dCas9, which showed therapeutic potential in three mouse models of chronic pain.
In 2022, research assessed its usefulness in reducing tau protein levels, regulating a protein involved in Huntington's disease, targeting an inherited form of obesity, and treating Dravet syndrome. Limitations Sequence specificity is critically important in epigenome editing and must be carefully verified (this can be done using chromatin immunoprecipitation followed by Sanger sequencing to verify the targeted sequence). It is unknown whether the TALE fusion affects the catalytic activity of the epigenome modifier. This could be especially important for effector proteins that require multiple subunits and complexes, such as the Polycomb repressive complex. Proteins used for epigenome editing may also obstruct ligands and substrates at the target site. The TALE protein itself may even compete with transcription factors if they are targeted to the same sequence. In addition, DNA repair systems could reverse the alterations on the chromatin and prevent the desired changes from being made. Finally, enzymes fused to dCas9 typically are able to act independently of the dCas9 protein that they are fused to. When these fusions are over-expressed in cells, these enzymes tend to modify large spans of the genome in what constitutes dramatic off-target activity. It is therefore necessary for fusion constructs and targeting mechanisms to be optimized for reliable and repeatable epigenome editing. See also DNA editing RNA editing References Further reading Tompkins JD. www.epigenomeengineering.com CRISPR Activation of Single Genes Turns Skin Cells to Stem Cells Epigenetics Genetic engineering Genome editing
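Since several of the approaches above rely on gRNA-directed targeting of dCas9 fusions, the following is a minimal illustrative sketch of enumerating candidate 20-nt protospacers next to an SpCas9-style NGG PAM on the forward strand (added here, not from the article; the sequence and function name are invented, and real guide design involves many more criteria such as off-target scoring):

# Minimal sketch: enumerate candidate SpCas9 target sites (20-nt
# protospacer immediately followed by an NGG PAM) on the forward strand.
# The input sequence is invented for illustration.
import re

def find_protospacers(seq: str):
    """Yield (start, protospacer, pam) for every 20-nt site with an NGG PAM."""
    seq = seq.upper()
    # Lookahead so overlapping sites are all reported.
    for m in re.finditer(r"(?=([ACGT]{20})([ACGT]GG))", seq):
        yield m.start(), m.group(1), m.group(2)

example = "TTGACCTGAATGGAAGCTAGCTAGGCTTAAACGGTACGATCGATCGGAGG"
for start, spacer, pam in find_protospacers(example):
    print(f"pos {start:2d}  spacer {spacer}  PAM {pam}")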
Epigenome editing
[ "Chemistry", "Engineering", "Biology" ]
2,491
[ "Genetics techniques", "Biological engineering", "Genome editing", "Genetic engineering", "Molecular biology" ]
42,300,576
https://en.wikipedia.org/wiki/Samuel%20E.%20Horne%20Jr.
Samuel Emmett Horne Jr. (July 26, 1924 – February 4, 2006) was a research scientist at B. F. Goodrich noted for first synthesizing cis-1,4-polyisoprene, the main polymer contained in natural tree rubber, using Ziegler catalysis. Earlier attempts to produce synthetic rubber from isoprene had been unsuccessful, but in 1955, Horne prepared 98 percent cis-1,4-polyisoprene via the stereospecific polymerization of isoprene. The product of this reaction differs from natural rubber only slightly. It contains a small amount of cis-1,2-polyisoprene, but it is indistinguishable from natural rubber in its physical properties. The importance of Horne's development of synthetic polyisoprene and polybutadiene is readily seen in the production of these polymers. In 2008, global production of polybutadiene was 2,042,000 metric tons (exceeded only by SBR in capacity and production). Production of polyisoprene was 611,000 metric tons (Russia, 415,000; Asia, 78,000; US, 90,000; Europe, 25,000). Personal Horne was born July 26, 1924, in Jacksonville, Florida. He grew up in Tampa, Florida. He married Sue Ross in 1949. They had four children. He showed interest in chemistry at a young age. When he was five or six years old, he and a friend played with a chemistry set. Although the experiments sometimes led to unpleasant odors and other problems, his parents nevertheless encouraged the young Horne to pursue his interest in chemistry. Horne died on February 4, 2006, in Columbus, Ohio. Education and career Horne graduated from Tampa's Henry B. Plant High School in 1942. He enrolled at Emory University, but his university studies were interrupted by World War II. He joined the U.S. Navy in July 1943 and served until 1946. He was released to inactive status with the rank of Lieutenant (JG). He returned to Emory University, where he obtained his A.B. degree in 1947, his M.A. degree in 1948, and his Ph.D. degree in 1950. While at Emory University, he taught organic chemistry from 1947 to 1950. He also had a research fellowship from 1946 to 1950. Horne's intention was to enter the teaching profession after completing his Ph.D. degree. He received advice to obtain industrial research experience before entering academia. Taking that advice, he obtained a position at the B. F. Goodrich Company's Research and Development Center in Brecksville, Ohio in 1950. He was promoted in 1953, promoted again to Research Associate in 1960, and to Senior Research Associate in 1968. In 1982, B. F. Goodrich changed its strategic direction and sold its synthetic rubber operations to Polysar, Ltd. Goodrich was deemphasizing rubber research. As Horne's main interest was in rubber research, he made the decision to join Polysar. He retired from Polysar in 1987. Research Synthetic polyisoprene In 1954, immediately following the formation of the joint venture Goodrich-Gulf Chemicals, Inc., an option agreement was obtained from Professor Karl Ziegler to examine his new catalyst system for the polymerization of ethylene. Horne was called back from vacation to the Research Center to begin work immediately. He was given the assignment of translating into practice the information that B. F. Goodrich would receive from Karl Ziegler. After verifying the claims for the polymerization of ethylene, and also for other alpha-olefins, Horne copolymerized ethylene with other olefins as a means of controlling the polyethylene density.
With the success of the copolymerizations, he decided to try to copolymerize ethylene with isoprene, with the thought of obtaining a copolymer that could be vulcanized with sulfur in a typical rubber recipe. Professor Ziegler had not reported the polymerization of dienes with his catalyst – indeed, Ziegler had said that his group had been unsuccessful in polymerizing dienes – but Horne saw no reason why a copolymer or a homopolymer could not be made from a pure hydrocarbon diene. Driven by his inquisitive nature and determination, Horne tried the copolymerization of ethylene and isoprene. Horne submitted the ethylene/isoprene copolymer for infrared examination. When Jim Shipman, who was responsible for the analysis, examined the infrared spectrum, he immediately called Horne and said, "Are you trying to fool us? We know natural rubber when we see it!" Fractionation of the sample then showed a mixture of cis-1,4-polyisoprene and polyethylene. The fact that isoprene had polymerized was not unexpected, but the high degree of stereo control was. The team immediately recognized the importance of this discovery and began an intense program to elucidate the chemistry and variables associated with diene polymerizations. The successful duplication of natural rubber was a goal that had been sought by many scientists for nearly one hundred years. An extensive synthetic rubber program was carried out during World War II, one of its objectives being to accomplish this synthesis; the supply of natural rubber to the military was limited during the war because the Japanese occupation cut the Allied forces off from many of the world's rubber plantations. With the process well defined, scale-up from 50-gram laboratory batches to ton-size production was started. In less than six months, the team scaled the process to production-size quantities, made bus and truck tires, and ran them under service conditions on the highways. The test results showed conclusively that the synthetic cis-1,4-polyisoprene was essentially equivalent to natural rubber. During the factory trials, the research team was delighted to hear factory personnel comment during the rubber mixing trials that the experimental rubber was nothing new – it mixed just like natural rubber. Polybutadiene After he discovered the stereo control of polyisoprene, he discovered the polymerization and stereo control of butadiene, as well as of many other alkylbutadienes. The polymerization of butadiene can lead to three basic structures: the cis-1,4- and the trans-1,4-polybutadienes, and the 1,2-polybutadiene with a vinyl side group. From these three basic structures, there are five structurally different polymers: the cis- and trans-1,4-polybutadienes, and the isotactic, syndiotactic, and atactic 1,2-polybutadienes. All of these polymers have been isolated in pure form. Horne studied the Ziegler catalyst variables as well as other process variables and their effects on polybutadiene structures. He demonstrated that a wide variety of mixed cis and trans structures could be obtained by the proper choice of the ratio of titanium tetrachloride to organo-aluminum. By replacing the titanium tetrachloride with titanium tetraiodide, he obtained polybutadiene with 90–95% cis-1,4 structures. Catalysts based on cobalt salts were very useful for the preparation of cis-1,4-polybutadienes. Although many cobalt salts were suitable, Horne used cobalt octoate. 
He showed that cobalt can function under heterogeneous or homogeneous conditions. At −78 °C he prepared a polybutadiene with a 99.8% cis-1,4 structure, the highest percentage of cis-1,4 structure he had seen. He defined the effects of temperature, solvent, and other additives to the catalyst in producing the highest percentage of cis-1,4 polymer. He studied many other alkylbutadienes, polymerizing 2-ethyl-, 2-propyl-, 2-amyl-, and 2-tert-butylbutadiene, as well as 2,3-dimethylbutadiene and others. Awards In 1969, he was chairman of the Gordon Conference on Hydrocarbon Chemistry. In 1974, he received the Pioneer Award from the American Institute of Chemists. In 1978, he received the Midgley Medal from the Detroit Section of the American Chemical Society. In 1980, he received the Charles Goodyear Medal. In 1982, he received an Honorary Doctor of Science degree from Emory University. References Polymer scientists and engineers 20th-century American chemists 1924 births 2006 deaths Emory University alumni United States Navy personnel of World War II
Samuel E. Horne Jr.
[ "Chemistry", "Materials_science" ]
1,796
[ "Polymer scientists and engineers", "Physical chemists", "Polymer chemistry" ]
42,301,816
https://en.wikipedia.org/wiki/Northwestern%20blot
The northwestern blot, also known as the northwestern assay, is a hybrid analytical technique of the western blot and the northern blot, and is used in molecular biology to detect interactions between RNA and proteins. A related technique, the western blot, is used to detect a protein of interest; it involves transferring proteins separated by gel electrophoresis onto a nitrocellulose membrane. A colored precipitate clusters along the band on the membrane containing a particular target protein. A northern blot is a similar analytical technique that, instead of detecting a protein of interest, is used to study gene expression by detection of RNA (or isolated mRNA) on a similar membrane. The northwestern blot combines the two techniques, and specifically involves the identification of labeled RNA that interacts with proteins immobilized on a similar nitrocellulose membrane. History Edwin Southern first created the Southern blot, an analytical technique used to detect DNA. The technique involves gel electrophoresis, an important analytical method in which charged DNA, RNA, or proteins migrate through an electric field at rates determined by their size and charge. With a Southern blot, the separated DNA fragments are then transferred to a filter membrane for detection. Detection occurs as bands become visible on the membrane and correlate with a particular molecule of interest. Subsequently, other similar blotting techniques were created with similar nomenclature to detect different molecules or interactions between molecules. These techniques include the western blot (protein detection), the northern blot (RNA detection), the southwestern blot (DNA–protein interaction detection), the eastern blot (post-translational modification detection) and the northwestern blot (RNA–protein interaction detection). Technique specifics Running a northwestern blot involves separating the RNA binding proteins by gel electrophoresis, which will separate the RNA binding proteins based upon their size and charge. Individual samples can be loaded into the agarose or polyacrylamide gel (usually an SDS-PAGE gel) in order to analyze multiple samples at the same time. Once the gel electrophoresis is complete, the gel and associated RNA binding proteins are transferred to a nitrocellulose transfer membrane. The newly transferred blots are then soaked in a blocking solution; non-fat milk and bovine serum albumin are common blocking buffers. This blocking solution helps prevent non-specific binding of the primary and/or secondary antibodies to the nitrocellulose membrane. Once the blocking solution has had adequate contact time with the blot, a specific competitor RNA is applied and given time to incubate at room temperature. During this time, the competitor RNA binds to the RNA binding proteins in the samples that are on the blot. The incubation time can vary depending on the concentration of the competitor RNA applied, though it is typically one hour. After the incubation is complete, the blot is usually washed at least three times for five minutes per wash, in order to remove unbound RNA from the solution. Common wash buffers include phosphate-buffered saline (PBS) or a 10% Tween 20 solution. Improper or inadequate washing will affect the clarity of the developed blot. Once washing is complete, the blot is then typically developed by X-ray film or similar autoradiography methods. 
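Band migration on such gels is commonly interpreted against a molecular-weight ladder using a semi-log standard curve. The short Python sketch below illustrates that standard interpolation; the ladder weights and migration distances are hypothetical values chosen for illustration, not data from any particular blot.

```python
import numpy as np

# Hypothetical ladder: migration distance (cm) versus known molecular weight (kDa).
ladder_distance_cm = np.array([1.2, 2.1, 3.0, 4.2, 5.5])
ladder_mw_kda = np.array([250.0, 150.0, 100.0, 50.0, 25.0])

# Migration is approximately linear in log10(MW), so fit a straight line
# to log10(MW) as a function of migration distance.
slope, intercept = np.polyfit(ladder_distance_cm, np.log10(ladder_mw_kda), 1)

def estimate_mw(distance_cm: float) -> float:
    """Estimate the apparent molecular weight (kDa) of a band from its migration."""
    return 10 ** (slope * distance_cm + intercept)

# A band on the developed blot that migrated 3.6 cm:
print(f"Apparent molecular weight: {estimate_mw(3.6):.1f} kDa")
```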
Applications After developing the blot using X-ray film or autoradiography, the results can be analyzed and interpreted to determine the approximate size and concentration of the RNA binding protein(s) of interest for further study. Bands appear on the developed blot wherever labeled RNA has bound; their position and intensity can help researchers determine the approximate size and concentration of the RNA binding protein of interest. When the approximate size of the protein is known, the original sample can be separated by size using chromatography. In addition, once the protein is isolated, it can be digested with trypsin, and mass spectrometry can be used to sequence the peptides and determine the identity of the specific protein. Advantages and disadvantages Advantages of northwestern blotting include the expedited detection of specific proteins that bind RNA, as well as the assessment of the approximate molecular weights of those proteins. The northwestern blot detects identified proteins inexpensively. The blot is typically a first step in research, as it allows identification of approximate molecular weights; once the molecular weight is known, it allows further research or purification through other methods like chromatography. Another advantage of the northwestern blot is that it aids in the building of expression libraries of cognate ligands. A noted disadvantage is that some RNA–protein interactions with poor RNA binding properties may not be detectable with this technique. Also, the blotting procedure can take from 3 to 5 hours. If the procedure is not done correctly, it can produce significant background, resulting in an unclear blot of the proteins identified. In addition, proteins need to renature after being separated and transferred to the nitrocellulose membrane. One last disadvantage is that proteins must consist of a single polypeptide or two subunits that comigrate in the gel matrix. See also Southern blot Western blot Northern blot Southwestern blot Eastern blot Gel electrophoresis SDS-PAGE Chromatography Protocols Northwestern Blot of Protein-RNA Interaction from Young Rice Panicles RNA Isolation and Northern Blot Analysis Protein Blotting References Molecular biology techniques Protein methods
Northwestern blot
[ "Chemistry", "Biology" ]
1,167
[ "Biochemistry methods", "Protein methods", "Protein biochemistry", "Molecular biology techniques", "Molecular biology" ]
42,304,634
https://en.wikipedia.org/wiki/Multibook
A Multibook or a TACLANE Multibook is a single laptop that combines access to two or three different classified networks in a single device. Most secure computing standards currently require federal government and military personnel to maintain multiple PCs on different networks in order to have simultaneous access to unclassified and classified information. Through a complex configuration, a multibook presents separate enclaves and virtual machines on one display. A multibook has no hard drive; it uses a cryptographic ignition key to create virtual hard drive space with a Type 1 COMSEC element found inside the multibook's integrated Suite B security module. The security module, known as a HAIPE, protects information stored on the computer, as well as data being sent to and from networks classified Secret and below. Because no collateral data is stored, multibooks do not have any burdensome COMSEC handling requirements: there is no data at rest (DAR) when the equipment is turned off. Some multibooks are NSA certified to protect information classified Secret and below. They are approved for Suite B information and processing, with data in transit (DIT) encryption protecting information sent to and from classified networks. For the user, the security benefit is that a multibook is a CHVP device and is not considered CCI, unlike other devices used in collateral processing. References Computer network security Laptops
Multibook
[ "Engineering" ]
275
[ "Cybersecurity engineering", "Computer networks engineering", "Computer network security" ]
42,304,736
https://en.wikipedia.org/wiki/Cryptographic%20High%20Value%20Product
Cryptographic High Value Product (CHVP) is a designation used within the information security community to identify high-value assets that may be used to encrypt and decrypt secure communications but do not retain or store any classified information. When disconnected from the secure communication network, CHVP equipment may be handled with a lower level of controls than is required for COMSEC equipment. See also COMSEC CCI Multibook References CHVP SUITE B NIST Computer Security Resource Center Cryptography
Cryptographic High Value Product
[ "Mathematics", "Engineering" ]
110
[ "Applied mathematics", "Cryptography", "Cybersecurity engineering" ]
42,306,557
https://en.wikipedia.org/wiki/Saiful%20Islam%20%28chemist%29
Saiful Islam (born 14 August 1963) is a British chemist and professor of materials modelling at the Department of Materials, University of Oxford. Saiful is a Fellow of the Royal Society of Chemistry (FRSC), and received the Royal Society's Wolfson Research Merit Award and Hughes Medal, and the American Chemical Society Award for Energy Chemistry for his major contributions to the fundamental atomistic understanding of new materials for lithium batteries and perovskite solar cells. Saiful is an atheist who refused the Order of the British Empire citing discomfort with the phrase "British Empire" and its link to colonialism. Biography Early life and education Saiful was born in 1963 in Karachi, Pakistan to ethnically Bengali parents. The family moved to London in 1964 and he grew up in Crouch End, north London. There he went to Stationers' Company's School, a state comprehensive. He received both a BSc degree in chemistry and a PhD (1988) from University College London, where he studied under Professor Richard Catlow. Subsequently, he held a postdoctoral fellowship at the Eastman Kodak laboratories in Rochester, New York, working on oxide superconductors. Career and research Saiful returned to the UK in 1990 to become a lecturer, then reader, at the University of Surrey. In January 2006 he was appointed professor of Materials Chemistry at the University of Bath. His group applies computational methods combined with structural techniques to study fundamental atomistic properties such as ion conduction, defect chemistry and surface structures. In January 2022, he joined the Department of Materials, University of Oxford as a professor of materials modelling. Saiful has been a member of the editorial board of the Journal of Materials Chemistry, and sits on the advisory board of the RSC journal Energy and Environmental Science. He is Principal Investigator of the Faraday Institution's 'CATMAT' project on Next-generation Lithium-Ion Cathode Materials. Outreach and public engagement Saiful presented the 2016 Royal Institution Christmas Lectures, entitled "Supercharged: Fuelling the Future" on the theme of energy, a commemorative lecture series for the BBC which celebrated 80 years since the Christmas Lectures were first broadcast on television in 1936. The lectures were broadcast on BBC Four, and achieved over 3.5 million interactions through the BBC broadcasts and social media. Saiful was interviewed before these lectures for articles in The Guardian. A demonstration in these lectures led to a Guinness World Record for the highest voltage (1,275 Volts) produced by a fruit battery using more than 1,000 lemons. Saiful later broke that record in 2021 after using 2,923 lemons to produce 2,307.8 Volts. Saiful has served on the Diversity Committee of the Royal Society, and was selected for the Royal Society's 'Inspiring Scientists' project that recorded the life stories of British scientists with minority ethnic heritage in partnership with National Life Stories at the British Library. His outreach activities include talks on energy materials to student audiences using 3D glasses organised by the TTP Education in Action at the UCL Institute of Education, London. He was interviewed for The Life Scientific programme on BBC Radio 4 in October 2019. On 23 November 2022, Saiful was an invited speaker at the Brian Cox & Robin Ince's Compendium of Reason charity event, which was at the Royal Albert Hall. 
Personal life As of 2021, Saiful lives in Bath with his wife, Gita Sunthankar (a local GP), and their two children, Yasmin and Zak. Saiful is an atheist and a Patron of Humanists UK. Awards and honours Saiful has been a Fellow of the Royal Society of Chemistry (FRSC) since 2008 and is a Fellow of the Institute of Materials, Minerals and Mining (FIMMM), as well as an Honorary Fellow of the British Science Association. Saiful has received several research awards, including the 2008 RSC Francis Bacon Medal for Fuel Cell Science, the 2011 RSC Materials Chemistry Division Lecturer Award, the 2013 RSC Sustainable Energy Award, the 2013 Wolfson Research Merit Award from the Royal Society, the 2017 RSC Peter Day Award for Materials Chemistry, the 2020 Storch Award in Energy Chemistry from the American Chemical Society, the 2022 Hughes Medal from the Royal Society, and the Robert Perrin Award from the Institute of Materials, Minerals and Mining. In 2019, he declined a New Year Honours Award of an Order of the British Empire because he has "never been comfortable with the words 'British Empire' in this award and the links to empire, colonialism, and slavery". References Living people 1963 births British chemists Academics of the University of Bath Academics of the University of Surrey Alumni of University College London British atheists People from Crouch End Fellows of the Royal Society of Chemistry British humanists Computational chemists 20th-century British chemists 21st-century British chemists British people of Bangladeshi descent Solid state chemists Fellows of St Anne's College, Oxford Fellows of the Institute of Materials, Minerals and Mining Statutory Professors of the University of Oxford
Saiful Islam (chemist)
[ "Chemistry" ]
996
[ "Solid state chemists" ]
42,306,621
https://en.wikipedia.org/wiki/Catherine%20Clarke%20Fenselau
Catherine Clarke Fenselau (born 15 April 1939) is an American scientist who was the first trained mass spectrometrist on the faculty of an American medical school; she joined Johns Hopkins School of Medicine in 1968. She specializes in biomedical applications of mass spectrometry. She has been recognized as an outstanding scientist in the field of bioanalytical chemistry because of her work using mass spectrometry to study biomolecules. Early life and education Catherine Lee Clarke was born on 15 April 1939, in York, Nebraska. She graduated from Bryn Mawr College in 1961 with an Artium baccalaureus (A.B.) in chemistry. She received a Ph.D. in organic chemistry in 1965 from Stanford University, working with Carl Djerassi. As a field, organic mass spectrometry was new and had great potential impact for the pharmaceutical industry. The mass spectrometer was a new tool for examining the structures of small botanical molecules. Djerassi's lab examined electron ionization of molecules, studying basic mechanisms such as fragmentation and hydrogen transfer. For her thesis research, Fenselau made a series of deuterium-labeled analogues of amines, alcohols, esters and amides. Career She spent the next two years in postdoctoral positions, studying on a 1965–1966 fellowship from the American Association of University Women at the University of California, Berkeley with Melvin Calvin. In 1967, she worked at the Space Sciences Laboratory with Melvin Calvin and A. L. Burlingame. Calvin's lab was developing methods to be used in the analysis of lunar rock samples. Fenselau described an analysis technique for preparing lipid samples from Moon rocks, before actual lunar samples were available for testing. Johns Hopkins School of Medicine Fenselau was the first trained mass spectrometrist to join a medical faculty when she joined the mass spectrometry laboratory in the Pharmacology Department at Johns Hopkins University in 1968. When she arrived, Johns Hopkins did not have a mass spectrometer. Fenselau did her initial research by driving to the National Institutes of Health (NIH) laboratories to use their instruments. Paul Talalay, chairman of Pharmacology, and Albert L. Lehninger, the chairman of Biological Chemistry, submitted proposals for funding for a state-of-the-art mass spectrometer. They were successful in obtaining funding from the National Science Foundation for a CEC 21-110 double-focusing mass spectrometer for Fenselau to use. She has done considerable work in the area of cancer and anti-cancer treatments, studying drugs such as cyclophosphamide. With oncologist O. M. Colvin, she identified the active metabolite of cyclophosphamide, and published the first quantification of the drug and its metabolites in urine and blood from patients. She led the development of synthetic and analytical methods for glucuronides, and studied the reactions of acyl-linked glucuronides with Martin Stogniew, work that has been important in understanding drug-derived liver disease. University of Maryland Although Fenselau and her second husband Robert Cotter both worked in mass spectrometry at Johns Hopkins, they chose to develop independent careers rather than a joint lab. "We felt that we could make twice as many contributions to science if we had two separate labs and evolved in our own ways that reflected our own skills and our own institutions." In 1987, Catherine Fenselau moved to the University of Maryland, Baltimore County (UMBC) to become chairperson of the Department of Chemistry and Biochemistry. 
She chose the university in part because she wanted greater opportunities for teaching. At UMBC she was one of the first faculty members involved in the Meyerhoff Scholarship Program, an initiative of UMBC president Freeman Hrabowski to attract minority undergraduate researchers. There, funding from the National Institutes of Health, the National Science Foundation, and others enabled Fenselau to establish a state-of-the-art mass spectrometry lab, the Structural Biochemistry Center (SBC). Equipment included a JEOL HX110/110 four-sector tandem mass spectrometer, a Hewlett-Packard quadrupole mass spectrometer with particle beam and Vestec electrospray ion sources, and 500 and 600 MHz NMR spectrometers. Research areas studied in the lab included biopolymer structure, ion thermochemistry, proton-binding entropies, glucuronide and glutathione conjugation, and possible mechanisms for acquired drug resistance. In June 1997, Fenselau oversaw the installation of a HighResMALDI Fourier transform mass spectrometer in her lab. The Fourier transform mass spectrometer used a strong magnetic field to trap and excite ions and measure the resulting electrical signals. Appointed chairperson of the Department of Chemistry at the University of Maryland, College Park in 1998, Fenselau supervised the disassembly, transport, and reassembly of the complex instrument, moving it safely to her new lab. With it, she has studied the chemistry of gaseous ions, chemical reactions of drugs with proteins, and posttranslational modification in protein biosynthesis. In 2005, she acted as the interim Dean for the College of Graduate Studies and Associate Vice President for Research in the Department of Chemistry and Biochemistry, and was named Distinguished University Professor at the University of Maryland in 2017. She was president of the American Society for Mass Spectrometry (ASMS) from 1982 to 1984, founding president of the US Human Proteome Organization (US HUPO), and senior vice president of the international Human Proteome Organization. She serves as a member of the Western Region of the Awards Committee of the Human Proteome Organization. She was the founding editor of Biomedical Mass Spectrometry (now the Journal of Mass Spectrometry) and associate editor of Analytical Chemistry. She has published more than 350 peer-reviewed articles. A 2020 issue of the Journal of Mass Spectrometry was dedicated to Fenselau for her distinguished career. Catherine Fenselau continues to teach at the University of Maryland, College Park. More than 150 post-doctoral fellows, graduate students and undergraduate students have received training in her laboratories at Johns Hopkins University, the University of Maryland, Baltimore County, and the University of Maryland, College Park. 
Eastern Analytical Symposium Award for Outstanding Achievements in Mass Spectrometry, 2014 Distinguished Contribution Award, the Association for Mass Spectrometry and Advances in Clinical Lab (MSACL), 2017 US Human Proteome Organization Catherine E. Costello Lifetime Achievement in Proteomics Award, 2022. Personal life Fenselau was married twice, first to Allan H. Fenselau, with whom she had two sons, and later to Robert J. Cotter. Further reading References 1939 births Living people People from York, Nebraska Bryn Mawr College alumni Stanford University alumni University of Maryland, College Park faculty Johns Hopkins University faculty American women scientists Thomson Medal recipients American women academics 21st-century American women Mass spectrometrists
Catherine Clarke Fenselau
[ "Physics", "Chemistry" ]
1,608
[ "Biochemists", "Mass spectrometry", "Spectrum (physical sciences)", "Mass spectrometrists" ]
42,309,332
https://en.wikipedia.org/wiki/Infinite-order%20pentagonal%20tiling
In 2-dimensional hyperbolic geometry, the infinite-order pentagonal tiling is a regular tiling. It has Schläfli symbol {5,∞}. All vertices are ideal, located at "infinity", seen on the boundary of the Poincaré hyperbolic disk projection. Symmetry There is a half-symmetry form, [(∞,5,5)], seen with alternating colors. Related polyhedra and tilings This tiling is topologically related as part of a sequence of regular polyhedra and tilings with vertex figure (5^n). See also Pentagonal tiling Uniform tilings in hyperbolic plane List of regular polytopes References External links Hyperbolic and Spherical Tiling Gallery Hyperbolic tilings Infinite-order tilings Isogonal tilings Isohedral tilings Pentagonal tilings Regular tilings
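A short calculation shows why this tiling belongs to the hyperbolic plane; the following sketch applies the standard angle criterion for regular tilings {p,q} to {5,∞}:

```latex
% q regular p-gons meet at each vertex, so each corner angle must equal 2\pi/q.
% A Euclidean regular p-gon has corner angle \pi(p-2)/p, so the tiling is
% hyperbolic when the required angle is smaller than the Euclidean one:
\frac{2\pi}{q} < \frac{\pi(p-2)}{p}
\quad\Longleftrightarrow\quad
(p-2)(q-2) > 4 .
% For \{5,\infty\}: (5-2)(q-2) \to \infty > 4, and the corner angle
% 2\pi/q \to 0, which is attained only at an ideal vertex on the
% boundary of the hyperbolic plane.
```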
Infinite-order pentagonal tiling
[ "Physics" ]
165
[ "Isogonal tilings", "Tessellation", "Hyperbolic tilings", "Isohedral tilings", "Symmetry" ]
40,874,485
https://en.wikipedia.org/wiki/Molecules%20in%20stars
Stellar molecules are molecules that exist or form in stars. Such formation can take place when the temperature is low enough for molecules to form – typically around 6,000 K or cooler. Otherwise the stellar matter is restricted to atoms and ions in the form of gas or – at very high temperatures – plasma. Background Matter is made up of atoms (formed from protons and other subatomic particles). When the environment is right, atoms can join together and form molecules, which give rise to most materials studied in materials science. But certain environments, such as high temperatures, do not allow atoms to form molecules, as the thermal energy exceeds the dissociation energy of the bonds within the molecule. Stars have very high temperatures, primarily in their interiors, and therefore few molecules form in stars. By the mid-18th century, scientists surmised that the source of the Sun's light was incandescence, rather than combustion. Evidence and research Although the Sun is a star, its photosphere has a low enough temperature (about 5,800 K) that molecules can form. Water has been found on the Sun, and there is evidence of H2 in white dwarf stellar atmospheres. The spectra of cooler stars include absorption bands that are characteristic of molecules. Similar absorption bands can be found through observation of sunspots, which are cool enough to allow stellar molecules to persist. Molecules found in the Sun include MgH, CaH, FeH, CrH, NaH, OH, SiH, VO, and TiO. Others include CN, CH, MgF, NH, C2, SrF, ZrO, YO, ScO, and BH. Stars of most types can contain molecules, even the Ap category of A-type stars. Only the hottest O-, B-, and A-type stars have no detectable molecules. Carbon-rich white dwarfs, even though very hot, have spectral lines of C2 and CH. Laboratory measurements Measurements of simple molecules that may be found in stars are performed in laboratories to determine the wavelengths of their spectral lines. It is also important to measure the dissociation energy and oscillator strengths (how strongly the molecule interacts with electromagnetic radiation). These measurements are inserted into formulas that can calculate the spectrum under different conditions of pressure and temperature. However, laboratory conditions often differ from those in stars, because stellar temperatures are hard to achieve, and the local thermal equilibrium found in stars is unlikely to hold. The accuracy of measured oscillator strengths and dissociation energies is usually only approximate. Model atmosphere A numerical model of a star's atmosphere will calculate pressures and temperatures at different depths, and can predict the spectrum for different elemental concentrations. Application The molecules in stars can be used to determine some characteristics of the star. The isotopic composition can be determined if the lines in the molecular spectrum are observed. The different masses of different isotopes cause vibration and rotation frequencies to vary significantly. Second, the temperature can be determined, as the temperature changes the numbers of molecules in the different vibrational and rotational states. Some molecules are sensitive to the ratio of elements, and so indicate the elemental composition of the star. Different molecules are characteristic of different kinds of stars, and are used to classify them. Because there can be numerous spectral lines of different strength, conditions at different depths in the star can be determined. 
These conditions include temperature and speed towards or away from the observer. Molecular spectra have advantages over atomic spectral lines: atomic lines are often very strong, and therefore come only from high in the atmosphere, and an atomic line profile can be distorted by isotopes or by the overlapping of other spectral lines. The molecular spectrum is also much more sensitive to temperature than atomic lines. Detection A variety of molecules have been detected in the atmospheres of stars. See also Stellar chemistry References Astrochemistry Molecules
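As an illustration of the isotope sensitivity described in the Application section, the harmonic-oscillator relation for a diatomic molecule (a standard textbook formula, not one taken from this article) shows how substituting an isotope shifts the vibrational frequency:

```latex
% Vibrational frequency for force constant k and reduced mass \mu:
\omega = \sqrt{\frac{k}{\mu}}, \qquad \mu = \frac{m_1 m_2}{m_1 + m_2}.
% Example: replacing H by D in a C--H bond,
% \mu_{\mathrm{CH}} = \tfrac{12 \cdot 1}{13} \approx 0.923\,\mathrm{u}, \quad
% \mu_{\mathrm{CD}} = \tfrac{12 \cdot 2}{14} \approx 1.714\,\mathrm{u},
\frac{\omega_{\mathrm{CD}}}{\omega_{\mathrm{CH}}}
  = \sqrt{\frac{\mu_{\mathrm{CH}}}{\mu_{\mathrm{CD}}}} \approx 0.73,
% a large shift that moves the molecular bands to clearly separated wavelengths.
```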
Molecules in stars
[ "Physics", "Chemistry", "Astronomy" ]
788
[ "Astronomical sub-disciplines", "Molecular physics", "Molecules", "Astrochemistry", "Physical objects", "nan", "Atoms", "Matter" ]
40,877,887
https://en.wikipedia.org/wiki/Glask%C3%B6nigin
The Zwiesel Glass Queen, or simply Glass Queen (Glaskönigin), is the representative of the internationally known glass industry and glass manufacturing tradition of the Bavarian Forest in Germany and of Zwiesel, the town where the glass handwork industry is concentrated. She is elected every two years and is introduced to the public during the Border Festival. She is supported by the Glass Princess, who is elected at the same time. Tasks The Glass Queen's role is to promote Zwiesel, which is internationally recognised for its glassmaking and has been described as "famous for its crystal glass production". The Glass Queen leads presentation days for the industry and represents it at many events; in 2015 the Glass Queen spoke at the European Parliament in Brussels and also travelled around Germany, including a visit north to Zwiesel's twin town of Brake. During her term the Glass Queen represents the industry to key politicians, such as Minister-President of Bavaria Horst Seehofer, at events such as the annual New Year's reception. The Glass Queen is assisted in her tasks by the Glass Princess (Glasprinzessin). Former Glass Queens and Glass Princesses The following women have represented the Zwiesel glass industry as Glass Queens and Glass Princesses (in brackets) since elections began in 2003: 2023–2025: Susanne Glanzner (Jennifer Lo Conte) 2019–2023: Veronika Schwarz (Michaela Maier) 2017–2019: Julia Sattler (Kristina Bernereiter) 2015–2017: Andrea Herzog (Riccarda Kroner) 2013–2015: Julia Wagenbauer (Verena Probst) 2011–2013: Anja Weiß (Miriam Schneck) 2009–2011: Kathrin Czysch (Elena Brem) 2007–2009: Kristina Harant 2005–2007: Ramona Wenzl 2003–2005: Simone Molz References External links Culture of Altbayern Glass
Glaskönigin
[ "Physics", "Chemistry" ]
397
[ "Homogeneous chemical mixtures", "Amorphous solids", "Unsolved problems in physics", "Glass" ]
56,733,354
https://en.wikipedia.org/wiki/Experiment%20to%20Detect%20the%20Global%20EoR%20Signature
The Experiment to Detect the Global EoR Signature (EDGES) is an experiment and radio telescope located in a radio quiet zone at the Murchison Radio-astronomy Observatory in Western Australia. It is a collaboration between Arizona State University and Haystack Observatory, with infrastructure provided by CSIRO. EoR stands for epoch of reionization, a time in cosmic history when neutral atomic hydrogen gas became ionised due to ultraviolet light from the first stars. Low-band instruments The experiment has two low-band instruments, each of which has a dipole antenna pointed at the zenith and observing a single polarisation. Each antenna sits on a ground shield and is coupled with a radio receiver, with a 100 m cable run to a digital spectrometer. The instruments operate at 50–100 MHz and are separated by 150 m. Observations started in August 2015. In 2023, a new version of the low-band antenna, in which the electronics are built into the antenna, was installed on a larger ground plane of 50 × 50 metres (164 ft × 164 ft) to further reduce the effects of scattering from nearby objects, and observations started in June 2023. 78 MHz absorption profile In March 2018, the collaboration published a paper in Nature announcing the discovery of a broad absorption profile centered at a frequency of 78 MHz in the sky-averaged signal after subtracting Galactic synchrotron emission. The absorption profile has a width of 19 MHz and an amplitude of 0.5 K, against a background RMS of 0.025 K, giving it a signal-to-noise ratio of 37. The equivalent redshift is centered at z ≈ 17.2, spanning z = 20–15. The signal is possibly due to ultraviolet light from the first stars in the Universe altering the emission of the 21 cm line by lowering the temperature of the hydrogen relative to the cosmic microwave background (the mechanism is Wouthuysen–Field coupling). A "more exotic scenario," encouraged by the unexpected strength of the absorption, is that the signal is due to interactions between dark matter and baryons. In 2021, Melia reported that the deeper absorption is compatible with the alternative Friedmann–Lemaître–Robertson–Walker (FLRW) cosmology known as the Rh = ct universe. In 2022, an experiment called Shaped Antenna Measurement of the Background Radio Spectrum (SARAS), led by the Raman Research Institute, reported that its measurements did not replicate the EDGES result, rejecting it at the 95.3% confidence level. High-band instruments The high-band instrument is of similar design and operates at higher frequencies, roughly 90–190 MHz. See also Large Aperture Experiment to Detect the Dark Ages (LEDA) Absolute Radiometer for Cosmology, Astrophysics, and Diffuse Emission (ARCADE) List of astronomical observatories List of astronomical societies List of radio telescopes References External links Signal from age of the first stars could shake up search for dark matter - www.sciencemag.org Physical cosmology Radio telescopes Astronomical observatories in Western Australia
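The quoted redshifts follow directly from the observed frequency of the redshifted 21 cm line, whose rest frequency is 1420.4 MHz. A worked check of the numbers in the text:

```latex
1 + z \;=\; \frac{\nu_{\mathrm{rest}}}{\nu_{\mathrm{obs}}}
       \;=\; \frac{1420.4\ \mathrm{MHz}}{78\ \mathrm{MHz}} \;\approx\; 18.2
\quad\Longrightarrow\quad z \approx 17.2 .
% The span quoted in the text corresponds to
% \nu(z{=}20) = 1420.4/21 \approx 67.6\ \mathrm{MHz} \quad\text{and}\quad
% \nu(z{=}15) = 1420.4/16 \approx 88.8\ \mathrm{MHz}.
```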
Experiment to Detect the Global EoR Signature
[ "Physics", "Astronomy" ]
607
[ "Astrophysics", "Theoretical physics", "Physical cosmology", "Astronomical sub-disciplines" ]
56,741,259
https://en.wikipedia.org/wiki/Combustion%20Science%20and%20Technology
Combustion Science and Technology is a monthly peer-reviewed scientific journal covering research on combustion. The editor-in-chief is Richard A. Yetter (Pennsylvania State University). It is published by Taylor & Francis and was established in 1964 as Pyrodynamics, obtaining its current name in 1969. Abstracting and indexing The journal is abstracted and indexed in several bibliographic databases. According to the Journal Citation Reports, the journal has a 2023 impact factor of 1.7. See also References External links Taylor & Francis academic journals Chemistry journals Physics journals English-language journals Engineering journals Combustion Academic journals established in 1964 Journals published between 13 and 25 times per year
Combustion Science and Technology
[ "Chemistry" ]
132
[ "Combustion" ]
32,478,863
https://en.wikipedia.org/wiki/Multipole%20radiation
Multipole radiation is a theoretical framework for the description of electromagnetic or gravitational radiation from time-dependent distributions of distant sources. These tools are applied to physical phenomena which occur at a variety of length scales – from gravitational waves due to galaxy collisions to gamma radiation resulting from nuclear decay. Multipole radiation is analyzed using multipole expansion techniques similar to those that describe fields from static sources; however, there are important differences in the details of the analysis because multipole radiation fields behave quite differently from static fields. This article is primarily concerned with electromagnetic multipole radiation, although the treatment of gravitational waves is similar.

Electromagnetic radiation depends on structural details of the source system of electric charge and electric current. Direct analysis can be intractable if the structure is unknown or complicated. Multipole analysis offers a way to separate the radiation into moments of increasing complexity. Since the electromagnetic field depends more heavily on lower-order moments than on higher-order moments, the electromagnetic field can be approximated without knowing the structure in detail.

Properties of multipole radiation

Linearity of moments Since Maxwell's equations are linear, the electric field and magnetic field depend linearly on source distributions. Linearity allows the fields from various multipole moments to be calculated independently and added together to give the total field of the system. This is the well-known principle of superposition.

Origin dependence of multipole moments Multipole moments are calculated with respect to a fixed expansion point which is taken to be the origin of a given coordinate system. Translating the origin changes the multipole moments of the system with the exception of the first non-vanishing moment. For example, the monopole moment of charge is simply the total charge in the system. Changing the origin will never change this moment. If the monopole moment is zero then the dipole moment of the system will be translation invariant. If both the monopole and dipole moments are zero then the quadrupole moment is translation invariant, and so forth. Because higher-order moments depend on the position of the origin, they cannot be regarded as invariant properties of the system.

Field dependence on distance The field from a multipole moment depends on both the distance from the origin and the angular orientation of the evaluation point with respect to the coordinate system. In particular, the radial dependence of the electromagnetic field from a stationary $2^l$-pole scales as $1/r^{l+2}$. That is, the electric field from the electric monopole moment scales as inverse distance squared. Likewise, the electric dipole moment creates a field that scales as inverse distance cubed, and so on. As distance increases, the contribution of high-order moments becomes much smaller than the contribution from low-order moments, so high-order moments can be ignored to simplify calculations.

The radial dependence of radiation waves is different from static fields because these waves carry energy away from the system. Since energy must be conserved, simple geometric analysis shows that the energy density of spherical radiation at radius $r$ must scale as $1/r^2$. As a spherical wave expands, the fixed energy of the wave must spread out over an expanding sphere of surface area $4\pi r^2$. 
Accordingly, every time-dependent multipole moment must contribute radiant energy density that scales as $1/r^2$, regardless of the order of the moment. Hence, high-order moments cannot be discarded as easily as in the static case. Even so, the multipole coefficients of a system generally diminish with increasing order, usually as powers of $kd$ (where $d$ is the characteristic size of the source), so radiation fields can still be approximated by truncating high-order moments.

Time-dependent electromagnetic fields

Sources Time-dependent source distributions can be expressed using Fourier analysis. This allows separate frequencies to be analyzed independently. Charge density is given by

$$\rho(\mathbf{x},t)=\int_{-\infty}^{\infty}\hat{\rho}(\mathbf{x},\omega)\,e^{-i\omega t}\,d\omega$$

and current density by

$$\mathbf{J}(\mathbf{x},t)=\int_{-\infty}^{\infty}\hat{\mathbf{J}}(\mathbf{x},\omega)\,e^{-i\omega t}\,d\omega .$$

For convenience, only a single angular frequency ω is considered from this point forward; thus

$$\rho(\mathbf{x},t)=\rho(\mathbf{x})\,e^{-i\omega t},\qquad \mathbf{J}(\mathbf{x},t)=\mathbf{J}(\mathbf{x})\,e^{-i\omega t}.$$

The superposition principle may be applied to generalize results for multiple frequencies. Vector quantities appear in bold. The standard convention of taking the real part of complex quantities to represent physical quantities is used.

The intrinsic angular momentum of elementary particles (see Spin (physics)) may also affect electromagnetic radiation from some source materials. To account for these effects, the intrinsic magnetization of the system would have to be taken into account. For simplicity however, these effects will be deferred to the discussion of generalized multipole radiation.

Potentials The source distributions can be integrated to yield the time-dependent electric potential and magnetic potential φ and A respectively. Formulas are expressed in the Lorenz gauge in SI units:

$$\varphi(\mathbf{x},t)=\frac{1}{4\pi\epsilon_0}\int d^3x'\,dt'\;\frac{\rho(\mathbf{x}',t')}{|\mathbf{x}-\mathbf{x}'|}\,\delta\!\left(t'-\left(t-\frac{|\mathbf{x}-\mathbf{x}'|}{c}\right)\right)$$

$$\mathbf{A}(\mathbf{x},t)=\frac{\mu_0}{4\pi}\int d^3x'\,dt'\;\frac{\mathbf{J}(\mathbf{x}',t')}{|\mathbf{x}-\mathbf{x}'|}\,\delta\!\left(t'-\left(t-\frac{|\mathbf{x}-\mathbf{x}'|}{c}\right)\right)$$

In these formulas, $c$ is the speed of light in vacuum, $\delta$ is the Dirac delta function, and $|\mathbf{x}-\mathbf{x}'|$ is the Euclidean distance from the source point x′ to the evaluation point x. Integrating the time-dependent source distributions above yields

$$\varphi(\mathbf{x},t)=\frac{e^{-i\omega t}}{4\pi\epsilon_0}\int \rho(\mathbf{x}')\,\frac{e^{ik|\mathbf{x}-\mathbf{x}'|}}{|\mathbf{x}-\mathbf{x}'|}\,d^3x'$$

$$\mathbf{A}(\mathbf{x},t)=\frac{\mu_0\,e^{-i\omega t}}{4\pi}\int \mathbf{J}(\mathbf{x}')\,\frac{e^{ik|\mathbf{x}-\mathbf{x}'|}}{|\mathbf{x}-\mathbf{x}'|}\,d^3x'$$

where $k=\omega/c$. These formulas provide the basis for analyzing multipole radiation.

Multipole expansion in near field The near field is the region around a source where the electromagnetic field can be evaluated quasi-statically. If the target distance from the multipole origin is much smaller than the radiation wavelength $\lambda=2\pi/k$, then $k|\mathbf{x}-\mathbf{x}'|\ll 1$. As a result, the exponential can be approximated in this region as

$$e^{ik|\mathbf{x}-\mathbf{x}'|}\approx 1.$$

See Taylor expansion. By using this approximation, the remaining dependence on $\mathbf{x}$ is the same as it is for a static system, and the same analysis applies. Essentially, the potentials can be evaluated in the near field at a given instant by simply taking a snapshot of the system and treating it as though it were static – hence it is called quasi-static. See near and far field and multipole expansion. In particular, the inverse distance $1/|\mathbf{x}-\mathbf{x}'|$ is expanded using spherical harmonics, which are integrated separately to obtain spherical multipole coefficients.

Multipole expansion in far field: Multipole radiation At large distances $r$ from a high-frequency source, $r\gg\lambda$, the following approximations hold:

$$\frac{1}{|\mathbf{x}-\mathbf{x}'|}\approx\frac{1}{r},\qquad e^{ik|\mathbf{x}-\mathbf{x}'|}\approx e^{ikr}\,e^{-ik\,\mathbf{n}\cdot\mathbf{x}'},$$

where $\mathbf{n}=\mathbf{x}/r$. Since only the first-order term in $1/r$ is significant at large distances, the expansions combine to give

$$\mathbf{A}(\mathbf{x},t)=\frac{\mu_0}{4\pi}\,\frac{e^{i(kr-\omega t)}}{r}\int \mathbf{J}(\mathbf{x}')\sum_{n=0}^{\infty}\frac{(-ik)^n}{n!}\,(\mathbf{n}\cdot\mathbf{x}')^n\,d^3x'.$$

Each power of $\mathbf{n}\cdot\mathbf{x}'$ corresponds to a different multipole moment. The first few moments are evaluated directly below.

Electric monopole radiation, nonexistence The zeroth-order term, $e^{-ik\,\mathbf{n}\cdot\mathbf{x}'}\approx 1$, applied to the scalar potential gives

$$\varphi_{\text{monopole}}(\mathbf{x},t)=\frac{e^{i(kr-\omega t)}}{4\pi\epsilon_0 r}\int\rho(\mathbf{x}')\,d^3x'=\frac{q\,e^{i(kr-\omega t)}}{4\pi\epsilon_0 r},$$

where the total charge $q$ is the electric monopole moment oscillating at frequency ω. Conservation of charge requires

$$\frac{dq}{dt}=0$$

since

$$q(t)=\int\rho(\mathbf{x}',t)\,d^3x'.$$

If the system is closed then the total charge cannot fluctuate, which means the oscillation amplitude $q$ must be zero. Hence, $\varphi_{\text{monopole}}(\mathbf{x},t)=0$. The corresponding fields and radiant power must also be zero. 
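The step from charge conservation to the vanishing of the monopole term can be spelled out explicitly; this is the standard argument, included here for completeness:

```latex
% Integrate the continuity equation over a volume V enclosing the source
% and apply the divergence theorem:
\frac{dq}{dt}
  = \int_V \frac{\partial \rho}{\partial t}\, d^3x
  = -\int_V \nabla\cdot\mathbf{J}\, d^3x
  = -\oint_{\partial V} \mathbf{J}\cdot d\mathbf{a}
  = 0,
% since no current crosses the boundary of a closed system. With the
% harmonic convention q(t) = q e^{-i\omega t} and \omega \neq 0, a constant
% total charge forces the oscillation amplitude q to vanish.
```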
Electric dipole radiation Electric dipole potential Electric dipole radiation can be derived by applying the zeroth-order term to the vector potential:

$$\mathbf{A}(\mathbf{x},t)=\frac{\mu_0}{4\pi}\,\frac{e^{i(kr-\omega t)}}{r}\int \mathbf{J}(\mathbf{x}')\,d^3x'.$$

Integration by parts yields

$$\int \mathbf{J}(\mathbf{x}')\,d^3x'=-\int \mathbf{x}'\left(\nabla'\cdot\mathbf{J}(\mathbf{x}')\right)d^3x'$$

and the charge continuity equation shows

$$\frac{\partial\rho}{\partial t}+\nabla\cdot\mathbf{J}=0\quad\Rightarrow\quad\nabla'\cdot\mathbf{J}(\mathbf{x}')=i\omega\,\rho(\mathbf{x}').$$

It follows that

$$\int \mathbf{J}(\mathbf{x}')\,d^3x'=-i\omega\int \mathbf{x}'\,\rho(\mathbf{x}')\,d^3x'.$$

Similar results can be obtained by applying the first-order term, $-ik\,\mathbf{n}\cdot\mathbf{x}'$, to the scalar potential. The amplitude of the electric dipole moment of the system is

$$\mathbf{p}=\int \mathbf{x}'\,\rho(\mathbf{x}')\,d^3x',$$

which allows the potentials to be expressed as

$$\varphi(\mathbf{x},t)=\frac{\mathbf{n}\cdot\mathbf{p}}{4\pi\epsilon_0}\left(\frac{1}{r^2}-\frac{ik}{r}\right)e^{i(kr-\omega t)}$$

$$\mathbf{A}(\mathbf{x},t)=-\frac{i\mu_0\omega}{4\pi}\,\mathbf{p}\,\frac{e^{i(kr-\omega t)}}{r}.$$

Electric dipole fields Once the time-dependent potentials are understood, the time-dependent electric field and magnetic field can be calculated in the usual way. Namely,

$$\mathbf{B}=\nabla\times\mathbf{A},\qquad \mathbf{E}=-\nabla\varphi-\frac{\partial\mathbf{A}}{\partial t},$$

or, in a source-free region of space, the relationship between the magnetic field and the electric field can be used to obtain

$$\mathbf{E}=\frac{iZ_0}{k}\,\nabla\times\mathbf{H}$$

where $Z_0=\sqrt{\mu_0/\epsilon_0}$ is the impedance of free space. The electric and magnetic fields that correspond to the potentials above are

$$\mathbf{H}=\frac{ck^2}{4\pi}\,(\mathbf{n}\times\mathbf{p})\,\frac{e^{i(kr-\omega t)}}{r}\left(1-\frac{1}{ikr}\right)$$

$$\mathbf{E}=\frac{1}{4\pi\epsilon_0}\left\{k^2\,(\mathbf{n}\times\mathbf{p})\times\mathbf{n}\,\frac{1}{r}+\left[3\mathbf{n}(\mathbf{n}\cdot\mathbf{p})-\mathbf{p}\right]\left(\frac{1}{r^3}-\frac{ik}{r^2}\right)\right\}e^{i(kr-\omega t)}$$

which is consistent with spherical radiation waves.

Pure electric dipole power The power density, energy per unit area per unit time, is expressed by the Poynting vector $\mathbf{S}=\mathbf{E}\times\mathbf{H}$. It follows that the time-averaged power density per unit solid angle is given by

$$\frac{dP}{d\Omega}=\frac{r^2}{2}\,\operatorname{Re}\left(\mathbf{E}\times\mathbf{H}^*\right)\cdot\mathbf{n}.$$

The dot product with $\mathbf{n}$ extracts the emission magnitude and the factor of 1/2 comes from averaging over time. As explained above, the $r^2$ cancels the radial dependence of radiation energy density. Application to a pure electric dipole gives

$$\frac{dP}{d\Omega}=\frac{c^2 Z_0}{32\pi^2}\,k^4\,|\mathbf{p}|^2\sin^2\theta$$

where θ is measured with respect to $\mathbf{p}$. Integration over a sphere yields the total power radiated:

$$P=\frac{c^2 Z_0 k^4}{12\pi}\,|\mathbf{p}|^2.$$

Magnetic dipole radiation Magnetic dipole potential The first-order term, $-ik\,\mathbf{n}\cdot\mathbf{x}'$, applied to the vector potential gives magnetic dipole radiation and electric quadrupole radiation:

$$\mathbf{A}(\mathbf{x},t)=\frac{\mu_0}{4\pi}\,\frac{e^{i(kr-\omega t)}}{r}\int \mathbf{J}(\mathbf{x}')\,(-ik\,\mathbf{n}\cdot\mathbf{x}')\,d^3x'.$$

The integrand can be separated into symmetric and anti-symmetric parts in J and x′:

$$(\mathbf{n}\cdot\mathbf{x}')\,\mathbf{J}=\frac{1}{2}\left[(\mathbf{n}\cdot\mathbf{x}')\,\mathbf{J}+(\mathbf{n}\cdot\mathbf{J})\,\mathbf{x}'\right]+\frac{1}{2}\,(\mathbf{x}'\times\mathbf{J})\times\mathbf{n}.$$

The second term contains the effective magnetization due to the current, and integration gives the magnetic dipole moment

$$\mathbf{m}=\frac{1}{2}\int \mathbf{x}'\times\mathbf{J}(\mathbf{x}')\,d^3x',$$

so that the corresponding vector potential is

$$\mathbf{A}(\mathbf{x},t)=\frac{i\mu_0 k}{4\pi}\,(\mathbf{n}\times\mathbf{m})\,\frac{e^{i(kr-\omega t)}}{r}\left(1-\frac{1}{ikr}\right).$$

Notice that this $\mathbf{A}$ has a similar form to the electric dipole $\mathbf{H}$. That means the magnetic field from a magnetic dipole behaves similarly to the electric field from an electric dipole. Likewise, the electric field from a magnetic dipole behaves like the magnetic field from an electric dipole. Taking the transformations $\mathbf{p}\to\mathbf{m}/c$, $\mathbf{E}\to Z_0\mathbf{H}$, $Z_0\mathbf{H}\to-\mathbf{E}$ on previous results yields the magnetic dipole results.

Magnetic dipole fields

$$\mathbf{E}=-\frac{Z_0 k^2}{4\pi}\,(\mathbf{n}\times\mathbf{m})\,\frac{e^{i(kr-\omega t)}}{r}\left(1-\frac{1}{ikr}\right)$$

$$\mathbf{H}=\frac{1}{4\pi}\left\{k^2\,(\mathbf{n}\times\mathbf{m})\times\mathbf{n}\,\frac{1}{r}+\left[3\mathbf{n}(\mathbf{n}\cdot\mathbf{m})-\mathbf{m}\right]\left(\frac{1}{r^3}-\frac{ik}{r^2}\right)\right\}e^{i(kr-\omega t)}$$

Pure magnetic dipole power The average power radiated per unit solid angle by a magnetic dipole is

$$\frac{dP}{d\Omega}=\frac{Z_0}{32\pi^2}\,k^4\,|\mathbf{m}|^2\sin^2\theta$$

where θ is measured with respect to the magnetic dipole $\mathbf{m}$. The total power radiated is:

$$P=\frac{Z_0 k^4}{12\pi}\,|\mathbf{m}|^2.$$

Electric quadrupole radiation Electric quadrupole potential The symmetric portion of the integrand from the previous section can be resolved by applying integration by parts and the charge continuity equation, as was done for electric dipole radiation:

$$\frac{1}{2}\int\left[(\mathbf{n}\cdot\mathbf{x}')\,\mathbf{J}+(\mathbf{n}\cdot\mathbf{J})\,\mathbf{x}'\right]d^3x'=-\frac{i\omega}{2}\int \mathbf{x}'\,(\mathbf{n}\cdot\mathbf{x}')\,\rho(\mathbf{x}')\,d^3x'.$$

This corresponds to the traceless electric quadrupole moment tensor

$$Q_{ij}=\int\left(3x_i'x_j'-|\mathbf{x}'|^2\,\delta_{ij}\right)\rho(\mathbf{x}')\,d^3x'.$$

Contracting the second index with the normal vector, $[\mathbf{Q}(\mathbf{n})]_i=\sum_j Q_{ij}\,n_j$, allows the vector potential to be expressed as

$$\mathbf{A}(\mathbf{x},t)=-\frac{\mu_0 c k^2}{24\pi}\,\mathbf{Q}(\mathbf{n})\,\frac{e^{i(kr-\omega t)}}{r}.$$

Electric quadrupole fields The resulting magnetic and electric fields are:

$$\mathbf{H}=-\frac{ick^3}{24\pi}\,\left(\mathbf{n}\times\mathbf{Q}(\mathbf{n})\right)\frac{e^{i(kr-\omega t)}}{r}$$

$$\mathbf{E}=-\frac{icZ_0 k^3}{24\pi}\,\left[\left(\mathbf{n}\times\mathbf{Q}(\mathbf{n})\right)\times\mathbf{n}\right]\frac{e^{i(kr-\omega t)}}{r}$$

Pure electric quadrupole power The average power radiated per unit solid angle by an electric quadrupole is

$$\frac{dP}{d\Omega}=\frac{c^2 Z_0 k^6}{1152\pi^2}\,\left|\left(\mathbf{n}\times\mathbf{Q}(\mathbf{n})\right)\times\mathbf{n}\right|^2.$$

The total power radiated is:

$$P=\frac{c^2 Z_0 k^6}{1440\pi}\sum_{ij}\left|Q_{ij}\right|^2.$$
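As a quick numerical illustration of the closed-form total powers above, the following Python sketch evaluates the electric and magnetic dipole formulas; the frequency and dipole magnitudes are arbitrary illustrative values, not taken from any physical system:

```python
import numpy as np

# Physical constants (SI units)
c = 2.998e8             # speed of light, m/s
Z0 = 376.73             # impedance of free space, ohms

# Illustrative source: arbitrary dipole amplitudes oscillating at 100 MHz
f = 100e6               # frequency, Hz
k = 2 * np.pi * f / c   # wavenumber, 1/m
p = 1e-12               # electric dipole amplitude |p|, C*m (arbitrary)
m = 1e-4                # magnetic dipole amplitude |m|, A*m^2 (arbitrary)

# Total radiated powers from the closed-form expressions above
P_E1 = c**2 * Z0 * k**4 * p**2 / (12 * np.pi)   # electric dipole
P_M1 = Z0 * k**4 * m**2 / (12 * np.pi)          # magnetic dipole

print(f"Electric dipole power: {P_E1:.3e} W")
print(f"Magnetic dipole power: {P_M1:.3e} W")
# The ratio reduces to (m / (c p))^2, independent of frequency:
print(f"P_M1 / P_E1 = {P_M1 / P_E1:.3e}  vs  (m/(c*p))^2 = {(m / (c * p))**2:.3e}")
```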
Generalized multipole radiation As the multipole moment of a source distribution increases, the direct calculations employed so far become too cumbersome to continue. Analysis of higher moments requires more general theoretical machinery. Just as before, a single source frequency is considered. Hence the charge, current, and intrinsic magnetization densities are given by

$$\rho(\mathbf{x},t)=\rho(\mathbf{x})\,e^{-i\omega t},\qquad \mathbf{J}(\mathbf{x},t)=\mathbf{J}(\mathbf{x})\,e^{-i\omega t},\qquad \mathbf{M}(\mathbf{x},t)=\mathbf{M}(\mathbf{x})\,e^{-i\omega t}$$

respectively. The resulting electric and magnetic fields share the same time-dependence as the sources.

Using these definitions and the continuity equation allows Maxwell's equations to be written as

$$\nabla\cdot\mathbf{E}=\frac{\rho}{\epsilon_0},\qquad \nabla\times\mathbf{E}=i\omega\mathbf{B},$$

$$\nabla\cdot\mathbf{B}=0,\qquad \nabla\times\mathbf{B}=\mu_0\left(\mathbf{J}+\nabla\times\mathbf{M}\right)-\frac{i\omega}{c^2}\,\mathbf{E}.$$

These equations can be combined by taking the curl of the last equations and applying the identity $\nabla\times(\nabla\times\mathbf{V})=\nabla(\nabla\cdot\mathbf{V})-\nabla^2\mathbf{V}$. This gives the vector forms of the non-homogeneous Helmholtz equation:

$$\left(\nabla^2+k^2\right)\mathbf{E}=\frac{1}{\epsilon_0}\nabla\rho-i\omega\mu_0\left(\mathbf{J}+\nabla\times\mathbf{M}\right)$$

$$\left(\nabla^2+k^2\right)\mathbf{B}=-\mu_0\,\nabla\times\left(\mathbf{J}+\nabla\times\mathbf{M}\right).$$

Solutions of the wave equation The homogeneous wave equations that describe electromagnetic radiation with frequency ω in a source-free region have the form

$$\left(\nabla^2+k^2\right)\boldsymbol{\Psi}=0.$$

The wave function can be expressed as a sum of vector spherical harmonics:

$$\boldsymbol{\Psi}=\sum_{l,m}\left[a_{lm}^{(1)}\,h_l^{(1)}(kr)+a_{lm}^{(2)}\,h_l^{(2)}(kr)\right]\mathbf{X}_{lm}(\theta,\phi),$$

where

$$\mathbf{X}_{lm}=\frac{1}{\sqrt{l(l+1)}}\,\mathbf{L}\,Y_{lm}(\theta,\phi)$$

are the normalized vector spherical harmonics and $h_l^{(1)}$ and $h_l^{(2)}$ are spherical Hankel functions. See spherical Bessel functions. The differential operator

$$\mathbf{L}=-i\,(\mathbf{x}\times\nabla)$$

is the angular momentum operator, with the property

$$L^2\,Y_{lm}=l(l+1)\,Y_{lm}.$$

The coefficients $a_{lm}^{(1)}$ and $a_{lm}^{(2)}$ correspond to expanding and contracting waves respectively. So $a_{lm}^{(2)}=0$ for radiation. To determine the other coefficients, the Green's function for the wave equation is applied. If the source equation is

$$\left(\nabla^2+k^2\right)\psi(\mathbf{x})=-V(\mathbf{x})$$

then the solution is:

$$\psi(\mathbf{x})=\int \frac{e^{ik|\mathbf{x}-\mathbf{x}'|}}{4\pi|\mathbf{x}-\mathbf{x}'|}\,V(\mathbf{x}')\,d^3x'.$$

The Green function can be expanded in spherical harmonics:

$$\frac{e^{ik|\mathbf{x}-\mathbf{x}'|}}{4\pi|\mathbf{x}-\mathbf{x}'|}=ik\sum_{l=0}^{\infty}\sum_{m=-l}^{l}j_l(kr_<)\,h_l^{(1)}(kr_>)\,Y_{lm}^*(\theta',\phi')\,Y_{lm}(\theta,\phi).$$

Note that in general $V$ may involve a differential operator that acts on the source functions. Thus, outside the source region ($r>r'$), the solution to the wave equation is:

$$\psi(\mathbf{x})=ik\sum_{l,m}h_l^{(1)}(kr)\,Y_{lm}(\theta,\phi)\int j_l(kr')\,Y_{lm}^*(\theta',\phi')\,V(\mathbf{x}')\,d^3x'.$$

Electric multipole fields Applying the above solution to the electric multipole wave equation gives the solution for the magnetic field:

$$\mathbf{H}=\sum_{l,m}a_E(l,m)\,h_l^{(1)}(kr)\,\mathbf{X}_{lm}\,e^{-i\omega t}.$$

The electric field is:

$$\mathbf{E}=\frac{iZ_0}{k}\,\nabla\times\mathbf{H}.$$

The formula for the coefficients can be simplified by applying standard operator identities to the integrand; Green's theorem and integration by parts then manipulate the formula into a form involving only the source densities. The spherical Bessel function can also be simplified by assuming that the radiation length scale is much larger than the source length scale, which is true for most antennas:

$$j_l(kr')\approx\frac{(kr')^l}{(2l+1)!!}.$$

Retaining only the lowest-order terms results in the simplified form for the electric multipole coefficients:

$$a_E(l,m)\approx\frac{c\,k^{l+2}}{i\,(2l+1)!!}\left(\frac{l+1}{l}\right)^{1/2}\left(Q_{lm}+Q_{lm}'\right)$$

where

$$Q_{lm}=\int r^l\,Y_{lm}^*\,\rho(\mathbf{x})\,d^3x,\qquad Q_{lm}'=\frac{-ik}{c(l+1)}\int r^l\,Y_{lm}^*\,\nabla\cdot\left(\mathbf{x}\times\mathbf{M}\right)d^3x.$$

$Q_{lm}$ is the same as the electric multipole moment in the static case, if it were applied to the static charge distribution, whereas $Q_{lm}'$ corresponds to an induced electric multipole moment from the intrinsic magnetization of the source material.

Magnetic multipole fields Applying the above solution to the magnetic multipole wave equation gives the solution for the electric field:

$$\mathbf{E}=Z_0\sum_{l,m}a_M(l,m)\,h_l^{(1)}(kr)\,\mathbf{X}_{lm}\,e^{-i\omega t}.$$

The magnetic field is:

$$\mathbf{H}=-\frac{i}{kZ_0}\,\nabla\times\mathbf{E}.$$

As before, the formula simplifies in the same way. Retaining only the lowest-order terms results in the simplified form for the magnetic multipole coefficients:

$$a_M(l,m)\approx\frac{i\,k^{l+2}}{(2l+1)!!}\left(\frac{l+1}{l}\right)^{1/2}\left(M_{lm}+M_{lm}'\right)$$

where

$$M_{lm}=\frac{-1}{l+1}\int r^l\,Y_{lm}^*\,\nabla\cdot\left(\mathbf{x}\times\mathbf{J}\right)d^3x,\qquad M_{lm}'=-\int r^l\,Y_{lm}^*\,\nabla\cdot\mathbf{M}\,d^3x.$$

$M_{lm}$ is the magnetic multipole moment from the effective magnetization, while $M_{lm}'$ corresponds to the intrinsic magnetization $\mathbf{M}$.

General solution The electric and magnetic multipole fields combine to give the total fields:

$$\mathbf{H}=\sum_{l,m}\left[a_E(l,m)\,h_l^{(1)}(kr)\,\mathbf{X}_{lm}-\frac{i}{k}\,a_M(l,m)\,\nabla\times\left(h_l^{(1)}(kr)\,\mathbf{X}_{lm}\right)\right]e^{-i\omega t}$$

$$\mathbf{E}=Z_0\sum_{l,m}\left[\frac{i}{k}\,a_E(l,m)\,\nabla\times\left(h_l^{(1)}(kr)\,\mathbf{X}_{lm}\right)+a_M(l,m)\,h_l^{(1)}(kr)\,\mathbf{X}_{lm}\right]e^{-i\omega t}.$$

Note that the radial function can be simplified in the far-field limit $kr\gg 1$:

$$h_l^{(1)}(kr)\approx(-i)^{l+1}\,\frac{e^{ikr}}{kr}.$$

Thus the $1/r$ radial dependence of radiation is recovered. See also Multipole expansion Spherical harmonics Vector spherical harmonics Near and far field Quadrupole formula References Electromagnetic radiation
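The far-field simplification of the radial functions is easy to verify numerically with SciPy's spherical Bessel functions; the following sketch (with an arbitrary value of kr) compares the exact Hankel function against its asymptotic form:

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn

def h1(l, x):
    """Spherical Hankel function of the first kind: h_l^(1)(x) = j_l(x) + i*y_l(x)."""
    return spherical_jn(l, x) + 1j * spherical_yn(l, x)

kr = 200.0  # far-field regime, kr >> 1
for l in (0, 1, 2, 3):
    exact = h1(l, kr)
    asymptotic = (-1j) ** (l + 1) * np.exp(1j * kr) / kr
    rel_err = abs(exact - asymptotic) / abs(exact)
    print(f"l = {l}: relative error of far-field form = {rel_err:.2e}")
```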
Multipole radiation
[ "Physics" ]
2,403
[ "Electromagnetic radiation", "Physical phenomena", "Radiation" ]
32,479,098
https://en.wikipedia.org/wiki/Revival%20of%20the%20woolly%20mammoth
The revival of the woolly mammoth is a hypothetical proposal that frozen soft-tissue remains and DNA from extinct woolly mammoths could be used to regenerate the species. Several methods have been proposed to achieve this goal, including cloning, artificial insemination, and genome editing. Whether or not it is ethical to create a live mammoth is debated. In 2003, the Pyrenean ibex was briefly revived, giving credence to the idea that the mammoth could be successfully revived. Overview In theory, preserved genetic material found in remains of woolly mammoths could be used to recreate living mammoths, due to advances in molecular biology techniques and the cloning of mammals, which began with Dolly the sheep in 1996. Cloning of mammals has improved in the last two decades. To date, no viable mammoth tissue or intact genome has been found from which to attempt cloning. Beth Shapiro, a scientist who has taken a central role in the sequencing of the mammoth genome, states in her 2015 book How to Clone a Mammoth: The Science of De-Extinction that a mammoth will never be cloned, at least not one that is pure mammoth. Nevertheless, the book concludes that we are likely, at some point, to see something that resembles a mammoth (Gregory E. Kaebnick and Bruce Jennings, "De-extinction and Conservation", The Hastings Center Report, 26 July 2017). Comparative genomics shows that the mammoth genome matches 99% of the elephant genome, so researchers working in the field aim to engineer an elephant with mammoth genes that code for the external appearance and traits of a mammoth. The outcome would be an elephant–mammoth hybrid with no more than 1% mammoth genes. Separate projects are working on gradually adding mammoth genes to elephant cells in vitro. Colossal Biosciences, founded in 2021, is one biotechnology company that has publicly stated that its goal is to genetically resurrect the woolly mammoth by combining its genes with Asian elephant DNA, and that it intends to complete the project by 2027. Cloning Cloning involves removal of the DNA-containing nucleus of the egg cell of a female elephant, and replacement with a nucleus from woolly mammoth tissue, a process called somatic cell nuclear transfer. For example, Akira Iritani, at Kyoto University in Japan, reportedly planned to do this. The cell would then be stimulated into dividing, and implanted in a female elephant. The resulting calf would have the genes of the woolly mammoth. However, to date nobody has found a viable mammoth cell to begin the cloning process, and most scientists doubt that any living cell could have survived freezing in the tundra of the Arctic. Because of their conditions of preservation, the DNA of frozen mammoths has deteriorated significantly over the millennia. Artificial insemination A second method involves artificially inseminating an elephant egg cell with sperm cells from a frozen woolly mammoth carcass. The resulting offspring would be an elephant–mammoth hybrid, and the process would have to be repeated, so more hybrids could be used in breeding. After several generations of cross-breeding these hybrids, an almost pure woolly mammoth would be produced. Whether the hybrid embryo would be carried through the two-year gestation is unknown; in one case, an Asian elephant and an African elephant produced a live calf named Motty, but it died of defects at less than two weeks old. 
Another consideration is that the sperm cells of modern mammals remain viable for at most 15 years after deep-freezing, which makes this method unfeasible. Gene editing In April 2015, Swedish scientists published the complete genome (nuclear DNA sequence) of the woolly mammoth. Several projects are working on gradually replacing the genes in elephant cells with mammoth genes. One such project is that of Harvard University geneticist George M. Church, who, funded by the Long Now Foundation, is attempting to create a mammoth–elephant hybrid using DNA from frozen mammoth carcasses. According to the researchers, a mammoth cannot be recreated, but they will try to eventually grow a hybrid elephant with some woolly mammoth traits in an "artificial womb". In 2017, George Church said "Actually it would be more like an elephant with a number of mammoth traits. We're not there yet, but it could happen in a couple of years." The creature, sometimes referred to as a "mammophant", would be partly elephant, but with features such as small ears, subcutaneous fat, long shaggy hair and cold-adapted blood. The Harvard University team is attempting to study the animals' characteristics in vitro by editing specific mammoth genes into Asian elephant skin cells, called fibroblasts, that have the potential to become embryonic stem cells. By March 2015, using the new CRISPR DNA editing technique, Church's team had edited some woolly mammoth genes into the genome of an Asian elephant; focusing initially on cold resistance, the target genes govern external ear size, subcutaneous fat, hemoglobin, and hair attributes. By February 2017, Church's team had made 45 substitutions to the elephant genome. So far his work focuses solely on single cells. In 2021, Church received $15 million in funding and spun off a new company called Colossal. The Mammoth Genome Project at Pennsylvania State University is also researching the modification of African elephant DNA to create a mammoth–elephant hybrid. If a viable hybrid embryo is obtained by gene editing procedures, implanting it into a female Asian elephant housed in a zoo may be possible, but with current knowledge and technology, whether the hybrid embryo would be carried through the two-year gestation is unknown. Ethics If any method is ever successful, a suggestion has been made to introduce the hybrids into a Siberian wildlife reserve called Pleistocene Park, but some biologists question the ethics of such recreation attempts. In addition to the technical problems, not much habitat is left that would be suitable for mammoth–elephant hybrids. Because both species are (or were) social and gregarious, creating only a few specimens would not be ideal. The time and resources required would be enormous, and the scientific benefits would be unclear, suggesting these resources should instead be used to preserve the extant elephant species, which are endangered. The ethics of using elephants as surrogate mothers in hybridisation attempts has also been questioned, as most embryos would not survive, and knowing the exact needs of a hybrid mammoth–elephant calf would be impossible. Woolly mammoths and sustainability Researchers from the company Colossal confirmed that their primary goal in trying to revive the woolly mammoth is to benefit the environment and help counter climate change. See also List of animals that have been cloned De-extinction George Church Colossal Biosciences References Mammoths Cloning Cloned animals Animal welfare
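CRISPR–Cas9 editing of the kind described above targets genomic sites that lie immediately upstream of a protospacer-adjacent motif (PAM), 5'-NGG-3' for the commonly used Cas9. The following Python sketch shows the basic site-scanning idea on a made-up sequence; it illustrates the concept only and is not any project's actual pipeline:

```python
import re

def find_cas9_sites(seq: str, guide_len: int = 20):
    """Return (guide, position) pairs for candidate Cas9 target sites.

    A candidate site is a guide-length stretch immediately followed by
    an NGG PAM on the given strand.
    """
    seq = seq.upper()
    sites = []
    # Zero-width lookahead finds overlapping NGG motifs; match.start() is
    # the position of the N in the PAM.
    for match in re.finditer(r"(?=([ACGT]GG))", seq):
        pam_start = match.start()
        if pam_start >= guide_len:
            guide = seq[pam_start - guide_len:pam_start]
            sites.append((guide, pam_start - guide_len))
    return sites

# Made-up example sequence (not mammoth or elephant genomic data)
example = "ATGCGTACCGGATTACGGTTACCGTAGGCTAGCTAGGCTTACGATCGGAGG"
for guide, pos in find_cas9_sites(example):
    print(f"guide {guide} at position {pos}")
```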
Revival of the woolly mammoth
[ "Chemistry", "Engineering", "Biology" ]
1,408
[ "De-extinction", "Evolution of the biosphere", "Cell biology", "Cloned animals", "Cloning", "Genetic engineering", "Molecular biology", "Biochemistry" ]
32,484,563
https://en.wikipedia.org/wiki/International%20Institute%20of%20Refrigeration
The International Institute of Refrigeration (IIR) (also known, in French, as the Institut International du Froid (IIF)) is an independent intergovernmental science and technology-based organization that promotes knowledge of refrigeration and associated technologies and applications on a global scale that improve quality of life in a cost-effective and environmentally sustainable manner, including: Food quality and safety from farm to consumer Comfort in homes and commercial buildings Health products and services Low temperature technology and liquefied gas technology Energy efficiency Use of non-ozone-depleting and low global warming refrigerants in a safe manner. Its scientific and technical activities are coordinated by ten commissions which are divided into five distinct sections. History The early 19th century witnessed a sharp increase in the demand for natural ice during the summer months, particularly among breweries producing lager. Thanks to the advent of railways and steam ships, natural ice came onto the market. In order to meet demand, entrepreneurs began research on means of producing ice artificially. Although Oliver Evans was the first to document the cycle, it was Jacob Perkins, an American working in England, who first patented a machine based on the vapour-compression cycle in 1835. In 1855, the first compression machines that proved to be successful on an industrial scale were developed by James Harrison. Ferdinand Carré invented the absorption device in 1859. The absorption machine was later superseded by a much simpler vapour-compression refrigerator, invented by French engineer Charles Tellier in 1885, a design that is still used today. In order to support the development of refrigeration technologies, and in view of the economic development potential they represented, the IIR was created in several stages: October 5–10, 1908 – The rapidly growing global industry and the scientific quest for absolute zero led to the 1st International Congress of Refrigeration, held in Paris, France, at the Paris-Sorbonne University, which welcomed over 5,000 participants. January 25, 1909 – From this first Congress, the International Association of Refrigeration was born, formed by delegates from 35 countries. June 21, 1920 – The association was reorganised and officially titled the International Institute of Refrigeration – IIR (Institut International du Froid – IIF, in French). The IIR's status as an international organisation was defined by an International Agreement signed on December 1, 1954, and General Regulations for the Application of the International Agreements signed on November 20, 1956. Since then, the IIR has been operating at its headquarters in Paris and is now an international organisation for expertise on refrigeration. The institute has continued to run the International Congress of Refrigeration every four years since its inauguration and has now expanded its event portfolio to ten conference series covering a wide variety of refrigeration topics. Working alongside governments, today the IIR remains committed to promoting knowledge on refrigeration for sustainable development, and continues to provide key services to disseminate information on associated technologies to all stakeholders (companies, universities, professionals...). 
Organization The IIR is a bilingual organization that works in both English and French and operates thanks to: the activities of its international network, comprising over 300 Commission members; its benefactor, corporate and private members; and the annual contributions from its 59 member countries. Statutory bodies General Conference The General Conference of the IIR defines the general policy of the IIR and convenes once every four years during its international congress. It includes representatives appointed by member countries. The General Conference elects the president and vice-presidents of the executive committee. Executive committee The Executive Committee of the IIR handles the administrative and financial aspects of the daily running of the IIR, and meets once per year. It includes one delegate per member country, a president and three to six vice-presidents. Management Committee The Management Committee is responsible for the general management of the IIR in between Executive Committee meetings. It includes: the President of the Executive Committee; three members elected every four years by the Executive Committee; and three members elected every four years by the Science and Technology Council. Science and Technology Council The Science and Technology Council (STC) coordinates the scientific and technical activities of the IIR. The Science and Technology Council includes five distinct Sections that are in turn divided into ten Commissions. The Science and Technology Council includes: one President; six Vice-Presidents; ten Commission Presidents; and one congress liaison contact person. Commissions The scientific activities of the IIR are organised into five Sections, each of which is divided into two Commissions; there are thus 10 Commissions: Section A: Cryogenics and Liquefied Gases Section A on Cryogenics and Liquefied Gases focuses on refrigeration science and technology at low temperatures: the cryogenic domain spans the lower part of the temperature scale, from absolute zero to 120 K, thus encompassing the normal boiling points of air gases as well as of liquefied natural gas (LNG). Section A comprises two Commissions, A1 Cryophysics and Cryoengineering, and A2 Liquefaction and Separation of Gases. Commission A1 deals with research, development and industrial activities at the lowest temperatures, including low-temperature physics, applications of superconductivity and helium cryogenics. Commission A2 essentially covers the liquefied gas industry, including air separation and LNG technology, two mature domains with high economic stakes and ongoing developments addressing important societal issues such as energy efficiency and carbon sequestration. Section A also maintains and develops relations with other Sections of the IIR, mainly Commission B1 Thermodynamics and Transfer Processes in the field of thermodynamics and transfer processes, essential tools of the cryogenic engineer, and Commission C1 Cryobiology, Cryomedicine and Health Products for the cooling of biological specimens and living tissues for preservation or treatment, which requires implementing cryogenic processes. Section A consists of a panel of multidisciplinary professionals and experts in sciences and technologies such as thermodynamics, condensed matter physics, materials science, heat transfer, fluid dynamics, vacuum and leak-tightness, and instrumentation and process control, applied to the low-temperature domain. 
Commission A1: Cryophysics and Cryoengineering Commission A1 on Cryophysics and Cryoengineering deals with research, development and industrial activities at the lowest temperatures, including low-temperature physics, applications of superconductivity and helium cryogenics. Commission A2: Liquefaction and Separation of Gases The work of Commission A2 Liquefaction and Separation of Gases reflects worldwide activities in the domain of separation of gases and liquefaction. Apart from the personal involvement of Commission members in various projects, the commission is present at conferences, workshops and seminars: the LNG International Exhibition and Conference, GASTECH, Cryogenics, Cryogen Expos, the European Cryogenic Course and others. The commission is close to academia, industry and end users of separated and liquefied gases. Commission members work closely with Commission A1 Cryophysics and Cryoengineering and Commission C1 Cryobiology, Cryomedicine and Health Products. Section B: Thermodynamics, Equipment and Systems Section B on Thermodynamics, Equipment and Systems of the IIR focuses on the technological and scientific fundamentals of classical refrigeration, excluding cryogenic temperatures. The fundamentals are represented by its Commission B1 Thermodynamics and Transfer Processes, whereas Commission B2 Refrigerating Equipment covers all kinds of refrigeration technology. Section B is a key player in most IIR international conferences; even at the International Congress of Refrigeration (ICR), organised every four years for all 10 IIR Commissions, approximately 50% of all presentations are related to Section B topics. Independently, and together with other Sections, Section B hosts a multitude of conferences such as the Gustav Lorentzen Conference on Natural Working Fluids and the Ohrid Conference on Ammonia and Refrigeration Technologies, as well as conferences on Thermodynamic Properties and Transfer Processes of Refrigerants, on Magnetic Refrigeration at Room Temperature, on Compressors and Coolants, and on Phase Change Materials and Slurries for Refrigeration and Air Conditioning. A number of Working Groups, where emerging topics in refrigeration are discussed by IIR experts with the aim of publishing results in handbooks or other forms of publication, are organised within the scope of Section B. Main topics include mitigation of direct emissions of greenhouse gases in refrigeration, refrigerant charge reduction in refrigerating systems, magnetic cooling, life cycle climate performance evaluation, and refrigerant system safety. Commission B1: Thermodynamics and Transfer Processes The objectives of Commission B1 on Thermodynamics and Transfer Processes are to provide academic and industrial information and data, and to propose solutions on thermodynamics and transfer processes. Commission B1 has been extremely active in IIR Working Groups, sub-commissions, IIR conferences, co-sponsored conferences and commission business meetings. As well as being involved in the IIR Working Group on the mitigation of direct emissions of greenhouse gases in refrigeration, the commission is equally involved in the Working Group on Life Cycle Climate Performance (LCCP) Evaluation. Active in IIR conferences and congresses, Commission B1 similarly organises workshops in various fields such as refrigerant charge reduction in refrigerating systems. 
Initiatives and opportunities, such as the phase-down of high-GWP refrigerants, energy-efficient buildings and cars, transport refrigeration, food preservation, the economic importance of the refrigeration sector, the involvement of the younger generation and identifying industrial needs, are all at the heart of Commission B1. Commission B2: Refrigerating Equipment Commission B2 Refrigerating Equipment participates in many IIR activities aimed at promoting knowledge of refrigeration technologies and their applications worldwide. It is a key Commission for most IIR activities, working in synergy with other Commissions. The Commission is very active in various IIR Working Groups on Magnetic Cooling and Refrigeration Safety. Section C: Biology and Food Technology The activities of Section C deal with the application of refrigeration technologies to life sciences and food sciences. Commission C1 Cryobiology, Cryomedicine and Health Products is particularly focused on the application of refrigeration technologies in various branches of medicine: cryosurgery and oncology, cryotherapy, blood, organ and tissue preservation, and health products (especially vaccines and thermosensitive preparations). The work focuses, on the one hand, on the biological and biochemical aspects of the effects of refrigeration on organs, tissues and treated products, and on the other hand on the applied refrigeration techniques and technologies. Commission C2 Food Science and Engineering is focused more particularly on the application of refrigeration technologies in the area of food sciences: preservation (refrigeration, freezing); hygiene and safety in its microbiological aspect; and processing (lyophilisation, cryoconcentration, cryoprecipitation, partial or total crystallisation). The work focuses on establishing models for the transfer of heat and matter during refrigeration treatments, on the effects of refrigeration on food products, and on the evolution kinetics of products kept in cold storage. The work deals with the impact of the integrity of the cold chain on the quality of food, including in warm-climate countries. Commission C1: Cryobiology, Cryomedicine and Health Products Commission C1 Cryobiology, Cryomedicine and Health Products has clearly defined objectives in cryobiology, cryomedicine and health products research; knowledge dissemination; technology transfer; and education. This commission is very active and participates in the various workshop series on cryoprocessing of biopharmaceuticals and biomaterials, as well as establishing innovative e-training actions concerning the commission's multidisciplinary needs and the interdisciplinary needs of Commissions A1 Cryophysics and Cryoengineering, A2 Liquefaction and Separation of Gases, and C1 Cryobiology, Cryomedicine and Health Products. Commission C2: Food Science and Engineering Commission C2 on Food Science and Engineering focuses on research and breakthrough technologies related to food science and engineering. The commission is key in hosting the IIR Sustainability and the Cold Chain Conference (ICCC), held internationally since 2010. In addition to the Cold Chain conferences and the IIR Congress, Commission C2 has also co-sponsored four other conferences, in Macedonia, Spain, Croatia and Germany, and continues to reinforce its leading role at the heart of developments in food science and engineering. 
The commission is involved in various IIR Working Groups and innovative projects linked to the development of the food chain across the globe. Section D: Storage and Transport Section D on Storage and Transport of the IIR is involved in the controlled-temperature logistics and distribution of temperature-sensitive products, from foodstuffs to health products (medicines, vaccines, blood products, organs ...), and from artwork to chemicals. It addresses all issues of equipment and solutions for a durable cold chain, from the production or manufacture to the consumption or use of these products. Section D thus covers the issues of storage; transportation by land, air or water; packaging; distribution and delivery of these products to the consumer; and the traceability of the cold chain. The Section is involved in warehouse and platform equipment, devices for temperature-controlled transport, coolants or cool packs, small coolers and refrigerated containers, chillers, refrigerated furnishings, refrigerated cabinets, climate chambers, refrigerators and freezers, as well as thermometers and temperature recorders. The cold chain involves many temperature ranges, both positive and negative, from −80 °C to +63 °C. Commission D1: Refrigerated Storage Commission D1 on Refrigerated Storage deals with the storage of all products which require temperature control, such as food and pharmaceuticals. Industrial, commercial and residential storage are also taken into account so that, in cooperation with Commission D2 Refrigerated Transport, the entire cold chain is treated, from raw materials to the final product in the home. Refrigeration plays an essential role for perishable products. While the estimated capacity of refrigerated warehouses worldwide was over 500 million cubic metres in 2014, in some countries food losses due to the lack of a cold chain are still substantial and can reach as much as 20% of the food supply. At the same time, in heavily industrialised countries, the use of commercial and domestic refrigerators accounts for up to 6% of global electricity consumption. As a result, the Commission faces important challenges in promoting widespread, energy-efficient and environmentally friendly storage systems. New refrigerants, synergies to save or exchange energy with other systems, and new technologies are the main focus of its activity. One of the most important current themes for this commission is energy efficiency. Commission D2: Refrigerated Transport The IIR's Commission D2 on Refrigerated Transport is extremely active. In addition to the IIR's four-yearly congress, Commission D2 participates in the IIR Conference on Sustainability and the Cold Chain, held out of step with the congress. Every year, Commission D2 CERTE test engineers meet in a European country to discuss refrigerated transport technology and testing issues. This group subsequently advises the United Nations working party on the transport of perishable foodstuffs, held each year in Geneva. Commission D2 is currently addressing the “Cold Chain for Pharmaceutical Products” and will add this to its regular transport discussion and advisory topics. Commission D2 also helps to produce Informatory Notes to assist in areas of technical or regulatory difficulty. The role of the IIR is well recognised, and in particular the expertise of the members of Commission D2 makes an important contribution to refrigerated transport issues: reducing food wastage and minimising emissions. 
Section E: Air Conditioning, Heat Pumps and Energy Recovery IIR Section E coordinates the work of both Commission E1 Air Conditioning and Commission E2 Heat Pumps and Heat Recovery. The core activities and interests of both Commissions are strongly connected, resulting in close collaboration and jointly organised conferences. Air-conditioning is a subject that is now more frequently addressed, due both to the demand for better comfort in an increasing number of countries and to the effects of global warming. Now, even countries where demand for air-conditioning during the summer months was limited due to a cooler climate require the operation of air-conditioning plant for longer periods. The demand for heating is nevertheless significant, and the most efficient system to provide heating is undoubtedly the heat pump. No other technology can provide net primary energy savings, economic benefits to users and reduced climate impact at the same time (a back-of-the-envelope comparison is sketched below). As it can also provide a cooling effect, the heat pump is expected to be the most common solution in the future for all-year-round operation. By combining these technologies with heat-recovery-capable buildings or industrial plants, cooling and heating requirements can be met in the most efficient, reliable, cost-effective and environmentally friendly way. Commission E1: Air Conditioning Commission E1 on Air Conditioning often collaborates with Commission E2 on Heat Pumps and Energy Recovery, as they have at least one common aspect, the compressor. Both Commissions frequently work with the same equipment, which is adapted according to the seasons, alternating between air conditioners and heat pumps. The commission is involved in various aspects of air conditioning, from equipment to systems. In recent years it has developed a particular focus on energy saving and sustainability while maintaining good conditions of thermal comfort, covering topics such as free cooling, solar cooling and long-term energy storage. The general importance of the themes addressed by the Commission is reflected in the relevant international conferences. The expertise of the Commission members on the use of new refrigerants in air-conditioning systems, on annual comparative studies of innovative and renewable energy systems, on the opportunities of part-load operation of air-conditioning systems to limit penalties or even to gain efficiency, and on other up-to-date research fields, is valuable not only to the scientific community but also to the multitude of air-conditioning users. Commission E2: Heat Pumps and Energy Recovery Commission members are proposed by member countries and then appointed by the STC following proposals from the Presidents of Commissions. These commission members comprise industry, university and research-centre specialists or refrigeration practitioners. The aim of Commission E2 on Heat Pumps and Energy Recovery is to promote and enhance scientific and technological knowledge in the heat pump and energy recovery fields through various activities such as the organisation or co-sponsoring of international conferences and the publication of books and Informatory Notes. Activities and Services FRIDOC Database FRIDOC is the most comprehensive database in the world dedicated to refrigeration. It contains over 110,000 references to documents in all domains of refrigeration. A large number of the documents referenced in FRIDOC are scientific and technical. FRIDOC also contains many review articles, documents on economic data and statistics, articles dealing with regulations and standardization, etc. 
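The primary-energy claim for heat pumps can be made concrete with a back-of-the-envelope comparison. A minimal sketch in Python, where every figure (seasonal COP of 3.5, 45% grid generation efficiency, a 90%-efficient gas boiler, 10,000 kWh heat demand) is an illustrative assumption and not an IIR figure:

```python
# Back-of-the-envelope comparison of primary energy needed to deliver
# the same amount of heat. All numbers are illustrative assumptions.
HEAT_DEMAND_KWH = 10_000        # assumed annual space-heating demand

COP = 3.5                       # assumed seasonal COP of the heat pump
GRID_EFFICIENCY = 0.45          # assumed primary energy -> electricity
BOILER_EFFICIENCY = 0.90        # assumed primary energy -> heat (gas boiler)

heat_pump_primary = HEAT_DEMAND_KWH / COP / GRID_EFFICIENCY
boiler_primary = HEAT_DEMAND_KWH / BOILER_EFFICIENCY

print(f"Heat pump:  {heat_pump_primary:.0f} kWh primary energy")
print(f"Gas boiler: {boiler_primary:.0f} kWh primary energy")
# With these assumptions the heat pump needs ~6,350 kWh of primary energy
# versus ~11,100 kWh for the boiler, i.e. roughly a 40% saving.
```

With a higher COP or a cleaner electricity mix the saving grows further, which is why the net-primary-energy argument holds across a wide range of assumptions.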
Publications The IIR has over 200 publications available on refrigeration technologies and applications: reference documents, guides, technical books, conference and congress papers and proceedings, and tables and diagrams comprising the thermophysical properties of refrigerants. Books in the refrigeration field published by other publishers are also available for purchase. International Journal of Refrigeration The Institute produces the monthly International Journal of Refrigeration, which is published by Elsevier. The International Journal of Refrigeration is the reference journal in the refrigeration field. It is useful for all those wanting to keep abreast of research and industrial news in all fields of refrigeration, including air-conditioning, heat pumps, and refrigerated storage and transport. Newsletter The IIR produces a monthly electronic newsletter that features news and updates on the refrigeration sector: regulation, events, economic data, monitoring, technological progress, etc. It provides a detailed overview of general developments within the sector worldwide and acts as a regular information tool for readers. Conferences and Congresses The IIR holds international conferences and congresses on key themes which include: natural refrigerants; the cold chain; magnetic refrigeration; cryogenics; compressors; phase-change materials and slurries; thermophysical properties and transfer processes of refrigerants; and new technologies. International Congress of Refrigeration First held in 1908, the International Congress of Refrigeration of the IIR is a flagship event that brings together industry and research. Covering all fields of refrigeration, the Congress, which takes place every four years, gathers key international stakeholders and provides perspectives on the future of the industry in line with sustainable development. Professional Directories The IIR publishes two professional Directories: a Laboratory Directory, which lists more than 300 laboratories in 55 countries; and an Expertise Directory, which lists over 300 international experts in the refrigeration sector. Working Groups IIR Working Groups operate on a temporary basis, bringing together specialists to work on projects arising from current issues. Their aim is to promote development, provide knowledge and give recommendations in these spheres. In order to achieve these objectives, they hold conferences and workshops, write publications and provide recommendations. Members of WGs are IIR members from industry, academia, national administrations and research. Research Projects ENOUGH Funded by the European Commission – Horizon 2020 and the European Green Deal. Duration: 4 years (October 2021 – September 2025). Objective: The main scope of the project is to support the EU farm-to-fork sustainable strategy by providing technical, financial, and political tools and solutions to reduce GHG emissions (by 2030) and achieve carbon neutrality (by 2050) in the food industry. SophiA Funded by the European Commission – Horizon 2020 and the European Green Deal. Duration: 4 years (October 2021 – September 2025). Objective: SophiA enables African countries to pursue sustainable pathways of development through a low-carbon, climate-resilient and green growth trajectory, leapfrogging fossil fuels and high global warming potential refrigerant technologies. IIR Network Today, the IIR has 59 member countries representing over two-thirds of the global population. 
Based on their annual financial contributions to the IIR, these member countries are divided into six category levels, which determine the services they receive and their level of voting power within the IIR. Member countries take part in IIR activities via their delegates and their nominated commission members. The delegates and commission members determine IIR priorities, take part in the IIR scientific activities and Working Groups, and develop recommendations. Member countries are entitled to host several IIR conferences and meetings per year. Member Countries The following countries are members of the IIR: Benefactor and corporate members Benefactor and corporate members can be companies, universities, national, regional or international organizations, laboratories, associations or any other structure active in or connected to the refrigeration industry or IIR activities. Private members Private members include individuals such as researchers, scientists, industrial practitioners, journalists or professors with extensive expertise in, passion for, or activity in fields related to the refrigeration sector. References Cooling technology Food preservation Heating, ventilation, and air conditioning International organizations based in France Organizations based in Paris Scientific organizations based in France Thermodynamics
International Institute of Refrigeration
[ "Physics", "Chemistry", "Mathematics" ]
4,815
[ "Thermodynamics", "Dynamical systems" ]
32,485,111
https://en.wikipedia.org/wiki/Safety%20syringe
A safety syringe is a syringe with a built-in safety mechanism to reduce the risk of needlestick injuries to healthcare workers and others. The needle on a safety syringe can be detachable or permanently attached. On some models, a sheath is placed over the needle, whereas in others the needle retracts into the barrel. Safety needles serve the same functions as safety syringes, but the protective mechanism is part of the needle rather than the syringe. Legislation requiring safety syringes or equivalents has been introduced in many nations since needlestick injuries and re-use prevention became the focus of governments and safety bodies. Types There are many types of safety syringes available on the market. Auto-disable (AD) syringes are designed for single use, with an internal mechanism that blocks the barrel once the plunger is depressed so that it cannot be depressed again. The other type of syringe with a re-use prevention feature is the breaking-plunger syringe, in which an internal mechanism cracks the syringe when the plunger is fully depressed to prevent further use. These syringes are only effectively disabled by a full depression of the plunger; users can otherwise avoid activating the re-use prevention feature and re-use the syringe. The more effective safety syringes have both re-use and needlestick prevention features. With a needlestick-prevention syringe, a sheath or hood slides over the needle after the injection is completed; such a syringe also has a re-use prevention feature (either an auto-disable mechanism or a breaking plunger). Retractable syringes use either manual or spring-loaded retraction to withdraw the needle into the barrel of the syringe. Some brands of spring-loaded syringes can have a splatter effect, where blood and fluids are sprayed off the cannula by the force of the retraction. Manual retraction syringes are generally easier to depress because there is no resistance from a spring. Alternatives Traditional glass syringes can be re-used once disinfected. Plastic-body syringes have become more popular in recent years because they are disposable. Unfortunately, improper disposal methods and re-use are responsible for transferring blood-borne diseases. Importance Of the 55 cases documented by the CDC of (non-sex work) occupational transmission of HIV, 90% were from contaminated needles that pierced the skin. The direct cost of needlestick injuries was calculated in a recent study to be between US$539 million and US$672 million. That includes only lab tests, treatment, service and "other"; it does not take into account lost time and wages for employers and individuals. Legislation United States Needlestick Safety and Prevention Act, effective date 2001. Two lawyers, Mike Weiss and Paul Danzinger, were approached in 1998 by an inventor, Thomas Shaw, who was having trouble selling a safety syringe developed to protect health care workers from accidentally being infected by dirty needles. The problems were due to monopolistic actions of a major industry needle maker and hospital group purchasing organizations. The case was settled before trial for $150 million. This was portrayed in the 2011 movie Puncture. Shaw's attempts to get his retractable needle accepted by health care facilities were covered in a 2010 Washington Monthly article. 
Canada Health Canada Laboratory Biosafety Guidelines Provincial Legislation: British Columbia Alberta Manitoba Saskatchewan Ontario Nova Scotia Australia No nationwide legislation is in place, but suggested practices or policies have been implemented in New South Wales, Victoria, and Queensland. Europe The European Union has some regulations on this subject. See also Infection control Peggy Ferro References External links W.H.O. Injection Safety Toolbox W.H.O. Injection Safety Centers for Disease Control – Injection Safety Washington Monthly, Jul/Aug 2010, "Dirty Medicine" Medical equipment Drug delivery devices
Safety syringe
[ "Chemistry", "Biology" ]
792
[ "Pharmacology", "Drug delivery devices", "Medical equipment", "Medical technology" ]
48,583,431
https://en.wikipedia.org/wiki/Conradson%20carbon%20residue
Conradson carbon residue, commonly known as "Concarbon" or "CCR", is a laboratory test used to provide an indication of the coke-forming tendencies of an oil. Quantitatively, the test measures the amount of carbonaceous residue remaining after the oil's evaporation and pyrolysis. In general, the test is applicable to petroleum products which are relatively non-volatile and which decompose on distillation at atmospheric pressure. The phrase "Conradson carbon residue" and its common names can refer to either the test or the numerical value obtained from it. Test method A quantity of sample is weighed, placed in a crucible, and subjected to destructive distillation. During a fixed period of severe heating, the residue undergoes cracking and coking reactions. At the termination of the heating period, the crucible containing the carbonaceous residue is cooled in a desiccator and weighed. The residue remaining is calculated as a percentage of the original sample and reported as Conradson carbon residue. Applications For burner fuel, Concarbon provides an approximation of the tendency of the fuel to form deposits in vaporizing pot-type and sleeve-type burners. For diesel fuel, Concarbon correlates approximately with combustion chamber deposits, provided that alkyl nitrates are absent or, if present, that the test is performed on the base fuel without additive. For motor oil, Concarbon was once regarded as indicative of the amount of carbonaceous deposits the oil would form in the combustion chamber of an engine. This is now considered to be of doubtful significance due to the presence of additives in many oils. For gas oil, Concarbon provides a useful correlation in the manufacture of gas therefrom. For delayed cokers, the Concarbon of the feed correlates positively with the amount of coke that will be produced. For fluid catalytic cracking units, the Concarbon of the feed can be used to estimate the feed's coke-forming tendency. See also Ramsbottom Carbon Residue References Petroleum technology Geochemical processes Petroleum industry
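The reported value is simply the residue mass expressed as a percentage of the starting sample mass. A minimal sketch of that calculation step in Python, using hypothetical masses (the complete test procedure itself is standardised, for example as ASTM D189):

```python
def conradson_carbon_residue(sample_mass_g: float, residue_mass_g: float) -> float:
    """Carbonaceous residue as a weight percentage of the original sample."""
    if sample_mass_g <= 0:
        raise ValueError("sample mass must be positive")
    return 100.0 * residue_mass_g / sample_mass_g

# Hypothetical example: a 10.0 g oil sample leaves 0.85 g of residue
# after the heating period, giving a Concarbon value of 8.50 wt%.
print(f"CCR = {conradson_carbon_residue(10.0, 0.85):.2f} wt%")
```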
Conradson carbon residue
[ "Chemistry", "Engineering" ]
418
[ "Petroleum technology", "Petroleum engineering", "Petroleum industry", "Petroleum", "Geochemical processes", "Chemical process engineering" ]
48,590,147
https://en.wikipedia.org/wiki/Judd%E2%80%93Ofelt%20theory
Judd–Ofelt theory is a theory in physical chemistry describing the intensity of electron transitions within the 4f shell of rare-earth ions in solids and solutions. The theory was introduced independently in 1962 by Brian R. Judd of the University of California, Berkeley, and PhD candidate George S. Ofelt at Johns Hopkins University. Their work was published in Physical Review and the Journal of Chemical Physics, respectively. Judd and Ofelt did not meet until 2003, at a workshop in Lądek-Zdrój, Poland. Judd and Ofelt's work was cited approximately 2000 times between 1962 and 2004. Brian M. Walsh of NASA Langley places Judd and Ofelt's theory at the "forefront" of a 1960s revolution in spectroscopic research on rare-earth ions. Theory The theory is a powerful theoretical framework used to predict and analyze the intensities of electronic transitions within the 4f electron shell of rare-earth ions in solid-state materials. The transitions, which are parity-forbidden in free ions, are made partially allowed in a solid matrix due to the effects of the crystal field. This field induces a mixing of electronic states, allowing transitions that would not occur in an isolated ion. The theory quantitatively describes this mixing using three phenomenological parameters, denoted Ω_λ (where λ = 2, 4, 6). These parameters account for the asymmetric nature of the crystal field and enable the calculation of transition probabilities, oscillator strengths, and radiative lifetimes of excited states, which are crucial for the development of various photonic devices such as lasers and optical amplifiers. The theory is named after Brian R. Judd and George S. Ofelt, who independently developed it in 1962. It has become a standard tool in the field of lanthanide spectroscopy, providing insights into the optical properties of rare-earth-doped materials and aiding in the design of materials for color display systems, fluorescent lamps, and lasers. Application software Judd–Ofelt intensity parameters from the absorption spectrum of any lanthanide can be calculated with the RELIC application software. Judd–Ofelt intensity parameters and derived quantities (oscillator strengths, radiative transition probabilities, luminescence branching ratios, excited-state radiative lifetimes, and estimates of quantum efficiencies) from the emission spectrum of Eu3+-doped compounds can be obtained with the JOES application software. Theoretical Judd–Ofelt intensity parameters for Eu3+ can be obtained using the LUMPAC software. Additionally, the JOYSpectra web platform provides these parameters for all Ln3+ ions. Bibliography of the research by Judd and Ofelt supporting the theory See also Parity (physics) Bert Broer Otto Laporte Giulio Racah John Hasbrouck Van Vleck Eugene Wigner Brian Garner Wybourne References Atomic physics Electron states Physical chemistry 1962 introductions
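Because the electric-dipole line strength is linear in the three parameters, S_ED = Σ_{λ=2,4,6} Ω_λ |⟨ψJ‖U^(λ)‖ψ′J′⟩|², the Ω_λ are in practice extracted from a set of measured absorption bands by an ordinary linear least-squares fit. A minimal sketch in Python; the matrix elements and line strengths below are hypothetical placeholders (real analyses use tabulated squared reduced matrix elements, such as Carnall's, and experimentally derived line strengths):

```python
import numpy as np

# Squared reduced matrix elements |U^(2)|^2, |U^(4)|^2, |U^(6)|^2 for each
# absorption band (one row per band). Placeholder values for illustration.
U2 = np.array([
    [0.0188, 0.0002, 0.0520],
    [0.0010, 0.0045, 0.0120],
    [0.2085, 0.1081, 0.0410],
    [0.0000, 0.1256, 0.0281],
])

# Measured electric-dipole line strengths for the same bands,
# in units of 1e-20 cm^2 (placeholder numbers).
S_meas = np.array([0.95, 0.21, 4.10, 1.05])

# Least-squares fit of S_ED = U2 @ Omega for the three Judd-Ofelt parameters.
omega, *_ = np.linalg.lstsq(U2, S_meas, rcond=None)

for lam, val in zip((2, 4, 6), omega):
    print(f"Omega_{lam} = {val:.3f} x 1e-20 cm^2")
```

The fitted Ω_λ can then be reinserted into the same expression to predict line strengths, branching ratios and radiative lifetimes for transitions that were not measured, which is the workflow automated by packages such as RELIC and JOES.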
Judd–Ofelt theory
[ "Physics", "Chemistry" ]
582
[ "Electron", "Electron states", "Applied and interdisciplinary physics", "Quantum mechanics", "Atomic physics", " molecular", "nan", "Atomic", "Physical chemistry", " and optical physics" ]
48,592,395
https://en.wikipedia.org/wiki/Stirrup%20pump
A stirrup pump is a portable reciprocating water pump used to extinguish or control small fires. It is operated by hand. The operator places a foot on a stirrup-like bracket at the bottom of the pump to hold the pump steady, while the bottom of the suction cylinder is placed inside a bucket of water. References External links Fire watchers Pumps Appropriate technology Human power Firefighting equipment Fire suppression Active fire protection
Stirrup pump
[ "Physics", "Chemistry" ]
90
[ "Pumps", "Physical quantities", "Turbomachinery", "Physical systems", "Power (physics)", "Hydraulics", "Human power" ]
48,594,435
https://en.wikipedia.org/wiki/Dark%20diversity
Dark diversity is the set of species that are absent from a study site but present in the surrounding region and potentially able to inhabit its particular ecological conditions. It can be determined based on species distributions, dispersal potential and ecological needs. The term was introduced in 2011 by three researchers from the University of Tartu and was inspired by the idea of dark matter in physics, since dark diversity too cannot be directly observed. Overview Dark diversity is part of the species pool concept. A species pool is defined as the set of all species that are able to inhabit a particular site and that are present in the surrounding region or landscape. Dark diversity comprises species that belong to a particular species pool but that are not currently present at a site. Dark diversity is related to the "habitat-specific" or "filtered" species pool, which only includes species that can both disperse to and potentially inhabit the study site. For example, if fish diversity at a coral reef site has been sampled, dark diversity includes all fish species from the surrounding region that are currently absent but can potentially disperse to and colonize the study site. Because all sampling will also miss some species actually present at a site, there is the related idea of 'phantom species' – those species present at a site but not detected within the sampling units used to sample the community at that site. The existence of these phantom species means that routine measures of colonization and extinction at a site will always overestimate true rates because of "pseudo-turnover". The name dark diversity is borrowed from dark matter: matter which cannot be seen and directly measured, but whose existence and properties are inferred from its gravitational effects on visible matter. Similarly, dark diversity cannot be seen directly when only the sample is observed, but it is present if a broader scale is considered, and its existence and properties can be estimated when proper data are available. With dark matter we can better understand the distribution and dynamics of galaxies; with dark diversity we can understand the composition and dynamics of ecological communities. Habitat specificity and scale Dark diversity is the counterpart of the observed diversity (alpha diversity) present in a sample. Dark diversity is habitat-specific in the respect that the study site must contain favorable ecological conditions for species belonging to dark diversity. The habitat concept can be narrower (e.g. a microhabitat in an old-growth forest) or broader (e.g. terrestrial habitat). Thus, habitat specificity does not mean that all species in dark diversity can inhabit all localities within the study sample, but there must be ecologically suitable parts. Habitat specificity makes the distinction between dark diversity and beta diversity. If beta diversity is the association between alpha and gamma diversity, dark diversity connects alpha diversity and the habitat-specific (filtered) species pool. The habitat-specific species pool includes only those species which can potentially inhabit the focal study site. Observed diversity can be studied at any scale and at sites with varying heterogeneity. This is also true for dark diversity. Consequently, as local observed diversity can be linked to very different sample sizes, dark diversity can be applied at any study scale (a 1x1 m sample in vegetation, a bird count transect in a landscape, a 50x50 km UTM grid cell). 
Methods to estimate dark diversity Region size determines the likelihood of dispersal to the study site, and selecting the appropriate scale depends on the research question. For a more general study, a scale comparable to a biogeographic region can be used (e.g. a small country, a state, or a radius of a few hundred km). If we want to know which species can potentially inhabit the study site in the near future (for example 10 years), the landscape scale is appropriate. To separate ecologically suitable species, different methods can be used. Environmental niche modelling can be applied to a large number of species. Expert opinion can be used. Data on species' habitat preferences are available in books, e.g. bird nesting habitats. This can also be quantitative, for example plant species indicator values according to Ellenberg. A recently developed method estimates dark diversity from species co-occurrence matrices, as sketched below. An online tool is available for the co-occurrence method. Usage Dark diversity allows meaningful comparisons of biodiversity. The community completeness index can be used: Completeness = ln(observed diversity / dark diversity). This expresses the local diversity on a relative scale, filtering out the effect of the regional species pool. For example, when completeness of plant diversity was studied at the European scale, it did not exhibit the latitudinal pattern seen with observed richness and species pool values. Instead, high completeness was characteristic of regions with lower human impact, indicating that anthropogenic factors are among the most important local-scale biodiversity determinants in Europe. Dark diversity studies can be combined with functional ecology to understand why the species pool is poorly realized in a locality. For example, when functional traits were compared between grassland species in observed diversity and dark diversity, it became evident that dark diversity species have in general poorer dispersal abilities. Dark diversity can be useful in prioritizing nature conservation, to identify the most complete sites in different regions. Dark diversity of alien species, weeds and pathogens can be useful in preparing for future invasions in time. Recently, the dark diversity concept was used to explain mechanisms behind the plant diversity–productivity relationship. See also Measurement of biodiversity Species diversity Species pool References External links DarkDivNet - a global network to explore the dark diversity of plant communities Shiny Dark Diversity Calculator - an online tool for calculating dark diversity based on species' co-occurrences Habitat Ecology Biodiversity Conservation biology
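The co-occurrence method referenced above is often implemented with Beals smoothing: the probability that a species could occur at a site is estimated from how frequently it co-occurs elsewhere with the species actually observed there, and absent species with a high estimated probability are placed in dark diversity. A minimal sketch in Python under those assumptions, with a hypothetical presence–absence matrix and a purely illustrative threshold (published implementations, such as the online co-occurrence tool mentioned above, choose thresholds more carefully):

```python
import numpy as np

def beals_probabilities(X):
    """Beals smoothing on a sites x species presence-absence matrix X.

    P[i, j] estimates the probability of species j occurring at site i:
        P[i, j] = (1 / S_ij) * sum over species k present at i, k != j,
                  of N_jk / N_k,
    where N_jk counts joint occurrences of j and k, N_k counts occurrences
    of k, and S_ij is the richness of site i excluding species j itself.
    """
    X = np.asarray(X, dtype=float)
    n_k = X.sum(axis=0)                       # occurrences per species
    M = X.T @ X                               # joint occurrence counts N_jk
    C = M / np.where(n_k > 0, n_k, 1.0)       # C[j, k] = N_jk / N_k
    np.fill_diagonal(C, 0.0)                  # exclude the target species
    numer = X @ C.T                           # sum over species present at i
    denom = X.sum(axis=1, keepdims=True) - X  # richness S_i minus x_ij
    return np.divide(numer, denom, out=np.zeros_like(numer), where=denom > 0)

# Toy data: 5 sites x 4 species (hypothetical presence-absence records).
X = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 1, 1, 0],
    [1, 0, 1, 1],
    [1, 1, 1, 1],
])

P = beals_probabilities(X)
threshold = 0.5                               # illustrative cut-off only
dark = (X == 0) & (P >= threshold)            # absent but deemed suitable
print("Dark diversity (site, species) pairs:", np.argwhere(dark))
```

Completeness then follows directly as ln(observed richness / dark diversity size) per site, linking the estimation step to the index discussed above.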
Dark diversity
[ "Biology" ]
1,112
[ "Conservation biology", "Ecology", "Biodiversity" ]
48,595,119
https://en.wikipedia.org/wiki/Insecticide%20Resistance%20Action%20Committee
The Insecticide Resistance Action Committee (IRAC) was formed in 1984 and works as a specialist technical group of the industry association CropLife to provide a coordinated industry response to prevent or delay the development of insecticide resistance in insect, mite and nematode pests. IRAC strives to facilitate communication and education on insecticide and trait resistance, as well as to promote the development and facilitate the implementation of insecticide resistance management strategies. IRAC is recognised by the Food and Agriculture Organization (FAO) and the World Health Organization (WHO) of the United Nations as an advisory body on matters pertaining to insecticide resistance. Pesticideresistance.org is a database financed by IRAC, the US Department of Agriculture, and others. Sponsors IRAC's sponsors are: ADAMA, BASF, Bayer CropScience, Corteva, FMC, Mitsui Chemicals, Nihon Nohyaku, Sumitomo Chemical, Syngenta and UPL. Mode of action classification IRAC publishes an insecticide mode of action (MoA) classification that lists the most common insecticides and acaricides and recommends that "successive generations of a pest should not be treated with compounds from the same MoA Group" (a simplified illustration of this rotation rule is sketched below). IRAC assigns a mode of action (MoA) to an insecticide based on sufficient scientific data, and then updates the MoA classification accordingly. Several insecticides and classes of insecticide may act through the same mode of action. Classes of Insecticide If an insecticide is successful, follow-on insecticides, based on the chemical structure of the first-in-class (prototype) insecticide, may be developed either by the original company or by competitors. Insecticides which have improved properties, or which kill different orders or species of insect, are particularly sought after. The resulting classes of insecticides are named by IRAC after common usage has been established, although alternative names may be found in the scientific literature. Table of modes of action and classes of insecticide In the table, the number of insecticides listed in each class is given, together with an example of each class. The number of insecticides in the IRAC class listing is given in column Nr (A). The number in the Compendium of Pesticide Common Names (insecticide + acaricide) is given in column Nr (B), although the name given there to the class is often historically different from the IRAC class name. See also HRAC classification List of insecticides Further reading References External links IRAC website home page Biotechnology advocacy Insecticides Organizations established in 1984 Pesticide organizations
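The rotation recommendation can be expressed as a simple check over a planned spray programme. A minimal sketch in Python, using a small subset of real IRAC group assignments; the consecutive-treatment check is a simplification of the actual per-generation guidance:

```python
# Illustrative subset of IRAC MoA group assignments (the published
# classification covers many more compounds and groups).
MOA_GROUP = {
    "carbaryl": "1A",             # carbamates
    "chlorpyrifos": "1B",         # organophosphates
    "deltamethrin": "3A",         # pyrethroids
    "imidacloprid": "4A",         # neonicotinoids
    "chlorantraniliprole": "28",  # diamides
}

def rotation_ok(spray_sequence):
    """True if no two consecutive treatments share an IRAC MoA group
    (a simplified stand-in for the per-pest-generation recommendation)."""
    groups = [MOA_GROUP[s] for s in spray_sequence]
    return all(a != b for a, b in zip(groups, groups[1:]))

print(rotation_ok(["deltamethrin", "imidacloprid", "chlorantraniliprole"]))  # True
print(rotation_ok(["carbaryl", "chlorpyrifos"]))  # False: both are Group 1
```

Note that carbaryl (1A) and chlorpyrifos (1B) fail the check here because both act on the same target (Group 1, acetylcholinesterase inhibitors), which is why rotation is judged at the group level rather than by compound name.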
Insecticide Resistance Action Committee
[ "Engineering", "Biology" ]
530
[ "Biotechnology organizations", "Biotechnology advocacy" ]
45,590,315
https://en.wikipedia.org/wiki/Tripartite%20synapse
Tripartite synapse refers to the functional integration and physical proximity of the presynaptic membrane, the postsynaptic membrane, and their intimate association with surrounding glia, as well as to the combined contributions of these three synaptic components to the production of activity at the chemical synapse. Tripartite synapses occur at a number of locations in the central nervous system with astrocytes, a type of glial cell, and may also exist with Müller glia of retinal ganglion cells and Schwann cells at the neuromuscular junction. The term was first introduced in the late 1990s to account for a growing body of evidence that glia are not merely passive neuronal support cells but, instead, play an active role in the integration of synaptic information through bidirectional communication with the neuronal components of the synapse, as mediated by neurotransmitters and gliotransmitters. Evidence of the Tripartite Synapse Evidence for the role of astrocytes in the integration and processing of synaptic information presents itself in a number of ways: Astrocytes are excitable cells: In response to stimuli from any of the three components of the tripartite synapse, astrocytes are capable of producing transient changes in their intracellular calcium concentrations through release of calcium stores from the endoplasmic reticulum. Astrocytes communicate bidirectionally with neurons: Through the excitability provided by changes in their calcium concentration, astrocytes are able to detect neurotransmitters and other signals released from neurons at the synapse and can release their own neurotransmitters or gliotransmitters that are, in turn, capable of modifying the electrophysiological excitability of neurons. Astrocytes are capable of responding selectively to stimuli: Astrocytes of the hippocampal stratum oriens form tripartite synapses with axonal projections from the alveus. The alveus projections can form either glutamatergic or cholinergic synapses with the stratum oriens, but the astrocytes of this region respond with changes in calcium concentration only to cholinergic activation of alveus projections. This is not merely due to a sensitivity of these astrocytes exclusively to acetylcholine, as they will also respond to glutamatergic synaptic activity originating from a different brain region, the Schaffer collateral. Astrocytes integrate and modulate information from their synaptic inputs: Astrocytic calcium concentration changes in response to simultaneous stimulation by two neurotransmitter types are not always a linear summation (a linear summation being an increase in intracellular calcium concentration in the astrocyte in response to two simultaneous stimuli that would be the equivalent of adding the calcium concentration changes that would occur in response to each stimulation individually) of the effects of each individual input but vary with the transmitter combination as well as the frequency of stimulation. The hippocampal stratum oriens astrocytes, which respond to synaptic activity from glutamatergic neurons originating in the Schaffer collateral and cholinergic neurons originating in the alveus, produce changes in their intracellular calcium concentrations that are non-linear with the strength of synaptic input. Additionally, these same stimuli are capable of producing either a potentiated calcium concentration response at low frequencies of stimulation or a depressed calcium concentration response at high frequencies of stimulation. 
Differences between young and adult brain In a research study published in 2013, titled Glutamate-Dependent Neuroglial Calcium Signaling Differs Between Young and Adult Brain, it was reported that the tripartite synapse is not found in the adult brain. Earlier published research had discussed how astrocytes exhibit metabotropic glutamate receptor 5 (mGluR5)–dependent increases in cytosolic calcium ions (Ca2+). However, astrocytic expression of mGluR5 was lost by the third postnatal week in mice and was not present in human cortical astrocytes. The results of the study indicate that neuroglial signaling in the adult brain may be fundamentally different from that in the young brain. Maiken Nedergaard, M.D., D.M.Sc., lead author of the study and co-director of the University of Rochester Medical Center (URMC) Center for Translational Neuromedicine, stated: "If this concept was correct, it should have given rise to a clinical trial by now. It has not, which tells us that with so many labs work on this for 20 years that there must be something wrong." She also stated: "Our findings demonstrate that the tripartite synaptic model is incorrect. This concept does not represent the process for transmitting signals between neurons in the brain beyond the developmental stage." In collaboration with the University of Rochester's Institute of Optics, Nedergaard and her team developed a new 2-photon microscope that allowed researchers to observe glial activity in the living brain, providing the observational data for the study. References External links Study shows that current model for brain signaling is flawed (youtube.com) Neural synapse Neuroanatomy Neurochemistry Neurology
Tripartite synapse
[ "Chemistry", "Biology" ]
1,088
[ "Biochemistry", "Neurochemistry" ]
45,590,513
https://en.wikipedia.org/wiki/Data%20Protection%20Act%2C%202012
The Data Protection Act, 2012 (The Act) is legislation enacted by the Parliament of the Republic of Ghana to protect the privacy and personal data of individuals. It regulates the process by which personal information is acquired, kept, used or disclosed by data controllers and data processors, by requiring compliance with certain data protection principles. Non-compliance with provisions of the Act may attract either civil liability, or criminal sanctions, or both, depending on the nature of the infraction. The Act also establishes a Data Protection Commission, which is mandated to ensure compliance with its provisions, as well as maintain the Data Protection Register. History The Act was first introduced in the Ghana Parliament in 2010, but was subsequently withdrawn by the then Minister of Communications, Haruna Iddrisu, to be revised. Parliament passed the bill in 2012, which then received Presidential assent on 10 May 2012. The notice of the Act was gazetted on 18 May 2012, and in accordance with Section 99, the Act came into effect on 16 October 2012. Structure The Act is made up of 99 sections that are arranged under various headings, as follows: Key terms Key terms in the Act are defined in the interpretation section, section 96. Unless the context otherwise requires, section 96 provides the following definitions for the notable terms: “data controller” means a person who either alone, jointly with other persons or in common with other persons or as a statutory duty determines the purposes for and the manner in which personal data is processed or is to be processed “data processor” in relation to personal data means any person other than an employee of the data controller who processes the data on behalf of the data controller “data subject” means an individual who is the subject of personal data “foreign data subject” means data subject information regulated by laws of a foreign jurisdiction sent into Ghana from a foreign jurisdiction wholly for processing “personal data” means data about an individual who can be identified, (a) from the data, or (b) from the data or other information in the possession of, or likely to come into the possession of the data controller “processing” means an operation or activity or set of operations by automatic or other means that concerns data or personal data and the (a) collection, organisation, adaptation or alteration of the information or data, (b) retrieval, consultation or use of the information or data, (c) disclosure of the information or data by transmission, dissemination or other means available, or (d) alignment, combination, blocking, erasure or destruction of the information or data “recipient” means a person to whom data is disclosed, including an employee or agent of the data controller or the data processor to whom data is disclosed in the course of processing the data for the data controller, but does not include a person to whom disclosure is made with respect to a particular inquiry pursuant to an enactment “special purposes” means any one or more of the following: (a) the purpose of journalism, (b) where the purpose is in the public interest, (c) artistic purposes, and (d) literary purposes Application of the Act The Act is applicable where: the data controller is established in Ghana and the data is processed in Ghana; the data processor is not established in Ghana but uses equipment, or the services of a data processor carrying on business in Ghana, to process data; or the information being processed originates either partly or 
wholly from Ghana (Section 45(1)). Data which originates externally and merely transits through Ghana is, however, not protected by the Act (Section 45(4)). The Act applies to the Ghanaian Government, and for that purpose, each government department is treated as a data controller (Section 91). Data protection principles The Act provides for 8 principles that data processors have to take into account in processing data, in order to protect the privacy of individuals. These principles are similar to the OECD Guidelines and the Data Protection Directive of the European Union. The data protection principles are enumerated in Section 17 as follows: accountability; lawfulness of processing; specification of purpose; compatibility of further processing with purpose of collection; quality of information; openness; data security safeguards; and data subject participation. Accountability The accountability principle of data protection is seen generally as a fundamental principle of compliance. It requires that a data controller should be accountable for compliance with measures which give effect to data protection principles. The Act requires a person who processes personal data to ensure that the data is processed without infringing the rights of the data subject, and is processed in a lawful and reasonable manner (Section 18(1)). Where the data to be processed involves a foreign data subject, the data controller or processor must ensure that the personal data is processed according to the data protection laws of the originating jurisdiction (Section 18(2)). Lawfulness of processing Data processing is lawful where the conditions that justify the processing are present. The Act has a minimality provision, which requires that personal data can only be processed if the purpose for which it is to be processed is necessary, relevant, and not excessive (Section 19). The prior consent of a data subject is also required before personal data is processed (Section 20). This requirement is, however, subject to exceptions: for instance, where the purpose for which the personal data is processed is necessary for the purpose of a contract to which the data subject is a party; authorised or required by law; necessary to protect a legitimate interest of the data subject; necessary for the proper performance of a statutory duty; or necessary to pursue the legitimate interest of the data controller or a third party to whom the data is supplied (Section 20(1)). Consent is also required for the processing of special personal data (Section 37(2)(b)). A data subject may also object to the processing of personal data (Section 20(2)), and the data processor is required to stop processing the data upon such objection (Section 20(3)). In terms of retention of records, the Act prohibits the retention of personal data for a period longer than is necessary to achieve the purpose of the collection, unless the retention is required by law, is reasonably necessary for a lawful purpose related to a function or activity, is required for contractual purposes, or the data subject has consented to the retention (Section 24(1)). The retention requirement is, however, not applicable to personal data that is kept for historical, statistical, or research purposes (Section 24(2)), except that such records must be adequately protected against access or use for unauthorized purposes (Section 24(3)). 
Where a person uses a record of personal data to make a decision about the data subject, the data must only be retained for a period required by law or a code of conduct, and where no such law or code of conduct exists, for a period which will afford the data subject an opportunity to request access to the record. Upon the expiration of the retention period, the personal data must, however, be deleted or destroyed, in a manner that prevents its reconstruction in an intelligible form, or the record of the personal data must be de-identified. (Sections 24(4), (5), (6)) A data subject may also request that a record of personal data about that data subject held by a data controller be destroyed or deleted where the data controller no longer has the authorisation to retain that data. (Section 33(1)(b)) Specification of purpose The Act requires that a data controller who collects personal data do so for a specific purpose that is explicitly defined and lawful, and is related to the functions or activity of the person. (Section 22) The data controller who collects data is also required to take necessary steps to ensure that the data subject is aware of the purpose for which the data is collected. (Section 23) Compatibility of further processing The Act requires that where a data controller holds personal data collected in connection with a specific purpose, any further processing of that data must be compatible with the purpose for which the personal data was initially obtained. (Section 25(1)) The circumstances under which processing meets the compatibility requirement include where the data subject consents to the further processing of the information; where the data is in the public domain; or where further processing is necessary for purposes of fighting crime, for legislation that concerns protection of tax revenue collection, for the conduct of court proceedings, or for the protection of national security, public health, or the life or health of the data subject or another person. (Section 25(3)) Quality of information Under section 26 of the Act, a data controller who processes personal data must ensure that the data is complete, accurate, up to date and not misleading, having regard to the purpose for which that data is collected or processed. Openness The openness principle ensures that individuals know about, and can participate in enforcing, their rights under a data protection regime. Section 27(1) makes it mandatory for a data controller who intends to process personal data to register with the Data Protection Commission. The data controller who intends to collect data must also ensure that the data subject is aware of the nature of data being collected, the persons responsible for the collection, the purpose of the collection, and whether the supply of data is mandatory or discretionary, among other things. (Section 27(2)) Where the data is collected from a third party, the Act requires the data subject to be informed before the data is collected, or as soon as practicable afterwards. (Section 27(3)) The Act provides circumstances under which the notification requirement is exempt, and they include where it is necessary to avoid compromising law enforcement, to protect national security, or where it relates to the preparation or conduct of legal proceedings.
(Section 27(4)) Also, although it is not mandatory, a data controller can appoint a data protection supervisor, who would be responsible for monitoring compliance with the Act. (Section 58(1), (2)) The data protection supervisor may be an employee (Section 58(1)) and must meet the qualification criteria set out by the Data Protection Commission. (Section 58(7)) Data security safeguards Under the Act, a data controller has a duty to prevent the loss of, damage to, or unauthorized destruction of personal data, as well as the unlawful access to or unauthorized processing of personal data. The data controller must therefore adopt appropriate, reasonable technical and organizational measures to ensure the security of personal data in its possession or control. (Section 28(1)) The data controller is also required to take reasonable measures to identify and forestall any reasonably foreseeable risks, and to ensure that any safeguards put in place are effectively implemented and continually updated. (Section 28(2)) The data controller must also observe both generally accepted and industry-specific best practices in securing data (Section 28(3)), as well as ensure that data processors comply with security measures. (Section 30) Where the data processor is not domiciled in Ghana, the data controller must ensure that the data processor complies with the relevant laws of its country. (Section 30(4)) The Act also requires the data controller to, as soon as reasonably practicable, notify the Data Protection Commission and the data subject of any security breaches to its system, and to take steps to ensure that the integrity of the system is restored. (Section 31) Data subject participation A data subject can, subject to proving the data subject's identity, request a data controller to confirm if the data controller holds that data subject's personal data, describe the nature of the personal data held, and identify any third party who has or has previously had access to that data (Section 32(1)). The request must, however, be made in a reasonable manner, within a reasonable time, after paying any prescribed fees, and in a form that is generally understandable (Section 32(2)). A data subject can also request a data controller to correct or delete personal data about the data subject that is held by the data controller and which is inaccurate, irrelevant, excessive, out of date, incomplete, or misleading (Section 33(1)). Upon receipt of the request, the data controller must either comply with the request or provide the data subject with credible evidence in support of the data. (Section 33(2))
Special personal data Under section 96, "special personal data" means personal data which consists of information that relates to (a) the race, colour, ethnic or tribal origin of the data subject; (b) the political opinion of the data subject; (c) the religious beliefs or other beliefs of a similar nature, of the data subject; (d) the physical, medical, mental health or mental condition or DNA of the data subject; (e) the sexual orientation of the data subject; (f) the commission or alleged commission of an offence by the individual; or (g) proceedings for an offence committed or alleged to have been committed by the individual, the disposal of such proceedings or the sentence of any court in the proceedings. The Act prohibits the processing of data which relates to children under parental control, or to the religious or philosophical beliefs, ethnic origin, race, trade union membership, political opinions, health, sexual life or criminal behaviour of an individual (Section 37(1)). Special personal data may, however, be processed where it is necessary or the data subject has given consent to the processing (Section 37(2)). Processing of personal data is necessary where it is required to exercise a right, or fulfil an obligation, conferred or imposed by law on an employer (Section 37(3)). Special personal data relating to data subjects may also be processed where it is necessary for the protection of the vital interest of the data subject, where it is impossible for the data subject to give consent, where the data controller cannot reasonably be expected to obtain consent, or where consent by the data subject has been unreasonably withheld. (Section 37(4)) Processing special personal data is presumed to be necessary where it is required for the purpose of legal proceedings or legal advice, and for medical purposes where it is undertaken by a health professional and is subject to a duty of confidentiality between the patient and health professional. (Section 37(6)) The prohibition on processing special personal data relating to religious or philosophical beliefs does not apply where the processing is carried out by a religious organisation of which the data subject is a member, or by an institution founded upon religious or philosophical principles with respect to persons associated with that institution, and is necessary to achieve the aims of the institution (Section 38(1)). Rights of data subjects Under the Act, a data subject has the right to have his personal data corrected (section 33); to access his personal data (section 35); to prevent the processing of personal data that causes or is likely to cause unwarranted damage or distress to him (section 39); to prevent processing of personal data for purposes of direct marketing (section 40); to require a data controller not to take a decision that would significantly affect him solely on the basis of processing by automatic means (section 41); to exempt manual data (Section 42); to be compensated for the data controller's failure to comply with the provisions of the Act, upon proof of damages (Section 43); and to have inaccurate data rectified (Section 44). The Data Protection Commission The Act establishes a Data Protection Commission with two main objects: to protect the privacy of the individual and personal data by regulating the processing of personal information, and to provide the process to obtain, hold, use or disclose personal information.
(section 2) The functions of the DPC are to: implement and monitor compliance with the provisions of the Act; make administrative arrangements it considers appropriate for the discharge of its duties; investigate and fairly determine any complaints made under the Act; and keep and maintain the Data Protection Register (section 3). The DPC is governed by an 11-member board that is appointed by the President of Ghana, and the Act provides for certain specific institutional representation. (Section 4) Board members are allowed to hold office for a period not exceeding three years and cannot be appointed to more than two terms. (Section 5(1)) Allowances for Board members are approved by the Minister responsible for Communications in consultation with the Minister responsible for Finance. (Section 9) The board was officially sworn in on 1 November 2012, and is currently chaired by Prof. Justice Samuel Kofi Date-Bah, a retired justice of the Supreme Court of Ghana. The DPC was officially launched on 18 November 2014. The Act also mandates the President to appoint an Executive Director (section 11) who shall be responsible for the day-to-day administration of the DPC, as well as the implementation of the decisions of the Board. (Section 12) Mrs. Teki Akuetteh Falconer is the current Executive Director. Under the Act, the sources of the DPC's funds include money approved by Parliament, donations and grants, money that accrues to the DPC in the performance of its functions, and any money that the Minister responsible for Finance approves. (Section 14) The DPC is also granted power to serve enforcement notices on data controllers requiring them to refrain from contravening the data protection principles. (Section 75) The enforcement notice may be cancelled or varied either by the DPC on its own motion, or upon application by a recipient of the notice. (Section 76) The Data Protection Register The Act provides for the establishment of a Data Protection Register, which is to be maintained by the DPC and with which data controllers must compulsorily register. (Section 46) Applications for registration as a data controller are to be made in writing, and the Act provides for certain particulars, such as the business name and address of the applicant, a description of personal data to be collected, and a description of the purpose for the processing of personal data. (Section 47(1)) Knowingly supplying false information amounts to an offence punishable by a fine or imprisonment. (Section 47(2)) Also, a separate entry in the register must be made for each separate purpose for which the data controller wishes to process the data. (Section 47(3)) The DPC has the right to refuse to grant an application where the particulars provided for inclusion in an entry in the register are insufficient, where the data controller has not been able to provide the appropriate safeguards for the protection of the privacy of the data subject, or where, in the opinion of the DPC, the applicant does not merit the grant of the registration. (Section 47(1)) Upon refusing a registration application, the DPC is required to inform the applicant of the reasons for the refusal, and in such an event the applicant may apply to the High Court for judicial review of the decision. (Section 47(2)) Registration as a data controller is subject to renewal every two years (section 50). The DPC also has the power to cancel a registration for good cause. (Section 52) It is an offence to process personal data without registering.
(Section 56) The Act also provides for access by the public to the register, upon the payment of the prescribed fee. (Section 54) General exemptions The Act provides several exemptions for different purposes, as follows. The processing of personal data is exempt from the provisions of the Act where it relates to national security (section 60) and in relation to crime and taxation (section 61). The disclosure of personal data relating to health, education and social work is prohibited unless it is required by law (section 61). The provisions of the Act are also not applicable to processing for the protection of members of the public against specified loss or malpractice (section 63). The processing of personal data is also exempt where the processing is undertaken for the purpose of literary or artistic material, the data controller reasonably believes the publication would be in the public interest, and compliance with the relevant provision is incompatible with the special purposes. (Section 64) The provisions on non-disclosure do not apply where the disclosure is required by any law or by a court. (Section 66) The Act does not apply where data is processed only for the purpose of managing an individual's domestic affairs. (Section 67) The data protection principles do not apply to personal data if it consists of references given in confidence, for the purposes of education, appointment to an office or the provision of a service by the data subject. (Section 68) The subject information provisions of the Act do not apply to personal data where it is likely to prejudice the combat effectiveness of the Armed Forces (Section 69); where it is processed to assess the suitability of a person for judicial appointment or to confer a national honour (Section 70); or if it consists of information in respect of a claim to professional privilege or confidentiality. (Section 74) Personal data is exempt from the provisions of the Act where it relates to examination marks processed by the data controller in relation to the individual's results (Section 72), or consists of information recorded by a candidate for academic purposes (section 73). Miscellaneous provisions The Act prohibits the purchase of personal data and the knowing or reckless disclosure of personal data; contravention of this provision amounts to an offence. (Section 88) The Act also makes the sale, the offering to sell, and the advertising of the sale of personal data an offence. (Section 89) The Minister responsible for communications may, in consultation with the DPC, make regulations for the effective implementation of the Act. References External links Data Protection Commission Law of Ghana Parliament of Ghana Presidency of John Atta Mills Information privacy Data protection Data laws of Africa
Data Protection Act, 2012
[ "Engineering" ]
4,434
[ "Cybersecurity engineering", "Information privacy" ]
45,595,539
https://en.wikipedia.org/wiki/Quantum%20heat%20engines%20and%20refrigerators
A quantum heat engine is a device that generates power from the heat flow between hot and cold reservoirs. The operation mechanism of the engine can be described by the laws of quantum mechanics. The first realization of a quantum heat engine was pointed out by Scovil and Schulz-DuBois in 1959, showing the connection between the efficiency of the Carnot engine and the 3-level maser. Quantum refrigerators share the structure of quantum heat engines, with the purpose of pumping heat from a cold to a hot bath by consuming power; they were first suggested by Geusic, Schulz-DuBois, De Grasse and Scovil. When the power is supplied by a laser the process is termed optical pumping or laser cooling, suggested by Wineland and Hänsch. Surprisingly, heat engines and refrigerators can operate down to the scale of a single particle, thus justifying the need for a quantum theory termed quantum thermodynamics. The 3-level amplifier as a quantum heat engine The three-level amplifier is the template of a quantum device. It operates by employing a hot and a cold bath to maintain population inversion between two energy levels, which is used to amplify light by stimulated emission. The ground state level (1-g) and the excited level (3-h) are coupled to a hot bath of temperature $T_h$. The energy gap is $\hbar \omega_h$. When the populations on the levels equilibrate, $\frac{p_3}{p_1} = \exp\left(-\frac{\hbar \omega_h}{k_B T_h}\right)$, where $\hbar$ is the reduced Planck constant and $k_B$ is the Boltzmann constant. The cold bath of temperature $T_c$ couples the ground (1-g) to an intermediate level (2-c) with energy gap $\hbar \omega_c$. When levels 2-c and 1-g equilibrate then $\frac{p_2}{p_1} = \exp\left(-\frac{\hbar \omega_c}{k_B T_c}\right)$. The device operates as an amplifier when levels (3-h) and (2-c) are coupled to an external field of frequency $\nu$. For optimal resonance conditions $\nu = \omega_h - \omega_c$. The efficiency of the amplifier in converting heat to power is the ratio of work output to heat input: $\eta = \frac{\hbar \nu}{\hbar \omega_h} = 1 - \frac{\omega_c}{\omega_h}$. Amplification of the field is possible only for positive gain (population inversion), $G = p_3 - p_2 \ge 0$. This is equivalent to $\frac{\omega_c}{\omega_h} \ge \frac{T_c}{T_h}$. Inserting this expression into the efficiency formula leads to: $\eta \le 1 - \frac{T_c}{T_h} = \eta_C$, where $\eta_C$ is the Carnot cycle efficiency. Equality is obtained under the zero gain condition $G = 0$. The relation between the quantum amplifier and the Carnot efficiency was first pointed out by Scovil and Schultz-DuBois. Reversing the operation, driving heat from the cold bath to the hot bath by consuming power, constitutes a refrigerator. The efficiency of the refrigerator, defined as the coefficient of performance (COP) of the reversed device, is: $\epsilon = \frac{\omega_c}{\omega_h - \omega_c} \le \frac{T_c}{T_h - T_c}$. Types Quantum devices can operate either continuously or by a reciprocating cycle. Continuous devices include solar cells converting solar radiation to electrical power, thermoelectric devices where the output is current, and lasers where the output power is coherent light. The primary example of a continuous refrigerator is optical pumping and laser cooling. Similarly to classical reciprocating engines, quantum heat engines also have a cycle that is divided into different strokes. A stroke is a time segment in which a certain operation takes place (e.g. thermalization, or work extraction). Two adjacent strokes do not commute with each other. The most common reciprocating heat machines are the four-stroke machine and the two-stroke machine. Reciprocating devices have been suggested operating either by the Carnot cycle or the Otto cycle. In both types the quantum description allows one to obtain equations of motion for the working medium and the heat flow from the reservoirs. Quantum reciprocating heat engine and refrigerator Quantum versions of most of the common thermodynamic cycles have been studied, for example the Carnot cycle, Stirling cycle and Otto cycle.
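As a numerical illustration of the three-level amplifier bounds derived above, the following sketch checks that whenever the gain is positive the amplifier efficiency stays below the Carnot value. It is a minimal sketch, not part of the article; the unit convention ($\hbar = k_B = 1$) and all temperatures and frequencies are illustrative assumptions.

```python
import numpy as np

# Minimal check of the Scovil--Schulz-DuBois bound for the 3-level amplifier.
# Units with hbar = k_B = 1; temperatures and frequencies are illustrative.
T_h, T_c = 4.0, 1.0                      # hot and cold bath temperatures
w_h = 5.0                                # hot transition frequency (1-g <-> 3-h)
eta_carnot = 1.0 - T_c / T_h

for w_c in np.linspace(0.5, 4.5, 9):     # cold transition frequency (1-g <-> 2-c)
    p3 = np.exp(-w_h / T_h)              # population of 3-h relative to 1-g
    p2 = np.exp(-w_c / T_c)              # population of 2-c relative to 1-g
    gain = p3 - p2                       # population inversion on 3-h <-> 2-c
    eta = 1.0 - w_c / w_h                # amplifier efficiency
    if gain > 0:                         # amplification => Carnot bound must hold
        assert eta <= eta_carnot + 1e-12
    print(f"w_c={w_c:.2f}  gain={gain:+.4f}  eta={eta:.3f}  eta_C={eta_carnot:.3f}")
```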
The Otto cycle can serve as a template for other reciprocating cycles. It is composed of the following four segments. Segment 1: isomagnetic or isochoric process, partial equilibration with the cold bath under constant Hamiltonian; the dynamics of the working medium is characterized by the propagator $\mathcal{U}_c$. Segment 2: magnetization or adiabatic compression, in which the external field changes, expanding the gap between energy levels of the Hamiltonian; the dynamics is characterized by the propagator $\mathcal{U}_{ch}$. Segment 3: isomagnetic or isochoric process, partial equilibration with the hot bath, described by the propagator $\mathcal{U}_h$. Segment 4: demagnetization or adiabatic expansion, reducing the energy gaps in the Hamiltonian, characterized by the propagator $\mathcal{U}_{hc}$. The propagator of the four-stroke cycle becomes $\mathcal{U}_{cyc}$, which is the ordered product of the segment propagators: $\mathcal{U}_{cyc} = \mathcal{U}_{hc}\,\mathcal{U}_h\,\mathcal{U}_{ch}\,\mathcal{U}_c$. The propagators are linear operators defined on a vector space which completely determines the state of the working medium. Common to all thermodynamic cycles, the consecutive segment propagators do not commute, $[\mathcal{U}_i, \mathcal{U}_j] \neq 0$. Commuting propagators would lead to zero power. In a reciprocating quantum heat engine the working medium is a quantum system such as a spin system or a harmonic oscillator. For maximum power the cycle time should be optimized. There are two basic timescales in the reciprocating refrigerator: the cycle time $\tau_{cyc}$ and the internal timescale $\tau_{int}$. In general, when $\tau_{cyc} \gg \tau_{int}$ the engine operates in quasi-adiabatic conditions. The only quantum effect can be found at low temperatures, where the unit of energy of the device becomes $\hbar\omega$ instead of $k_B T$. The efficiency at this limit is $\eta = 1 - \frac{\omega_c}{\omega_h}$, always smaller than the Carnot efficiency $\eta_C = 1 - \frac{T_c}{T_h}$. At high temperature and for the harmonic working medium the efficiency at maximum power becomes $\eta = 1 - \sqrt{\frac{T_c}{T_h}}$, which is the endoreversible thermodynamics result. For shorter cycle times the working medium cannot follow adiabatically the change in the external parameter. This leads to friction-like phenomena. Extra power is required to drive the system faster. The signature of such dynamics is the development of coherence, causing extra dissipation. Surprisingly, the dynamics leading to friction is quantized, meaning that frictionless solutions to the adiabatic expansion/compression can be found in finite time. As a result, optimization has to be carried out only with respect to the time allocated to heat transport. In this regime the quantum feature of coherence degrades the performance. Optimal frictionless performance is obtained when the coherence can be cancelled. The shortest cycle times, sometimes termed sudden cycles, have universal features. In this case coherence contributes to the cycle's power. A two-stroke engine quantum cycle equivalent to the Otto cycle based on two qubits has been proposed. The first qubit has frequency $\omega_h$ and the second $\omega_c$. The cycle is composed of a first stroke of partial equilibration of the two qubits with the hot and cold bath in parallel. The second, power stroke is composed of a partial or full swap between the qubits. The swap operation is generated by a unitary transformation which preserves the entropy; as a result it is a pure power stroke. The quantum Otto cycle refrigerator shares the same cycle with magnetic refrigeration.
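The quasi-static limit of the Otto cycle described above can be illustrated with a two-level working medium in which populations are frozen during the adiabatic strokes and fully thermalized during the isochoric strokes. This is a minimal sketch under those idealizations (units $\hbar = k_B = 1$; parameter values invented), not the article's model; it recovers the Otto efficiency $1 - \omega_c/\omega_h$.

```python
import numpy as np

def otto_cycle(w_c, w_h, T_c, T_h):
    """Quasi-static quantum Otto cycle for a two-level working medium.

    Populations are frozen during the adiabatic strokes and fully
    thermalized during the isochoric strokes (an idealization; a real
    cycle only partially equilibrates)."""
    p_c = 1.0 / (1.0 + np.exp(w_c / T_c))   # excited-state population, cold isochore
    p_h = 1.0 / (1.0 + np.exp(w_h / T_h))   # excited-state population, hot isochore
    Q_h = w_h * (p_h - p_c)                 # heat absorbed from the hot bath
    Q_c = w_c * (p_h - p_c)                 # heat rejected to the cold bath
    W = Q_h - Q_c                           # net work output per cycle
    return W, Q_h

W, Q_h = otto_cycle(w_c=1.0, w_h=3.0, T_c=1.0, T_h=5.0)
print(f"work per cycle  W     = {W:.4f}")
print(f"efficiency      eta   = {W / Q_h:.4f}  (Otto: 1 - w_c/w_h = {1 - 1/3:.4f})")
print(f"Carnot bound    eta_C = {1 - 1/5:.4f}")
```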
Externally driven two-level, three-level, four-level and coupled harmonic oscillator systems have been studied. The periodic driving splits the energy level structure of the working medium. This splitting allows the two-level engine to couple selectively to the hot and cold baths and produce power. On the other hand, ignoring this splitting in the derivation of the equation of motion will violate the second law of thermodynamics. Non-thermal fuels have been considered for quantum heat engines. The idea is to increase the energy content of the hot bath without increasing its entropy. This can be achieved by employing coherence or a squeezed thermal bath. These devices do not violate the second law of thermodynamics. Equivalence of reciprocating and continuous heat machines in the quantum regime Two-stroke, four-stroke, and continuous machines are very different from each other. However, it was shown that there is a quantum regime where all these machines become thermodynamically equivalent to each other. While the intra-cycle dynamics in the equivalence regime is very different in different engine types, when the cycle is completed they all turn out to provide the same amount of work and consume the same amount of heat (hence they share the same efficiency as well). This equivalence is associated with a coherent work extraction mechanism and has no classical analogue. These quantum features have been demonstrated experimentally. Heat engines and open quantum systems The elementary example operates under quasi-equilibrium conditions. Its main quantum feature is the discrete energy level structure. More realistic devices operate out of equilibrium, possessing friction, heat leaks and finite heat flow. Quantum thermodynamics supplies a dynamical theory required for systems out of equilibrium such as heat engines, thus inserting dynamics into thermodynamics. The theory of open quantum systems constitutes the basic theory. For heat engines a reduced description of the dynamics of the working substance is sought, tracing out the hot and cold baths. The starting point is the general Hamiltonian of the combined systems: $H = H_s + H_h + H_c + H_{sh} + H_{sc}$, where the system Hamiltonian $H_s(t)$ is time dependent. A reduced description leads to the equation of motion of the system: $\frac{d\rho_s}{dt} = -\frac{i}{\hbar}[H_s, \rho_s] + \mathcal{L}_h(\rho_s) + \mathcal{L}_c(\rho_s)$, where $\rho_s$ is the density operator describing the state of the working medium and $\mathcal{L}_{h/c}$ is the generator of dissipative dynamics which includes the heat transport terms from the baths. Using this construction, the total change in energy of the sub-system becomes: $\frac{dE_s}{dt} = \left\langle \frac{\partial H_s}{\partial t} \right\rangle + \langle \mathcal{L}_h(H_s) \rangle + \langle \mathcal{L}_c(H_s) \rangle$, leading to the dynamical version of the first law of thermodynamics: $\frac{dE_s}{dt} = P + J_h + J_c$, with the power $P = \left\langle \frac{\partial H_s}{\partial t} \right\rangle$ and the heat currents $J_h = \langle \mathcal{L}_h(H_s) \rangle$ and $J_c = \langle \mathcal{L}_c(H_s) \rangle$. The rate of entropy production becomes: $\frac{d\Sigma}{dt} = -\frac{J_h}{T_h} - \frac{J_c}{T_c} \ge 0$. The global structure of quantum mechanics is reflected in the derivation of the reduced description. A derivation which is consistent with the laws of thermodynamics is based on the weak coupling limit. A thermodynamical idealization assumes that the system and the baths are uncorrelated, meaning that the total state of the combined system becomes a tensor product at all times: $\rho = \rho_s \otimes \rho_h \otimes \rho_c$. Under these conditions the dynamical equations of motion become: $\frac{d\rho_s}{dt} = \mathcal{L}(\rho_s)$, where $\mathcal{L}$ is the Liouville superoperator described in terms of the system's Hilbert space, with the reservoirs described implicitly. Within the formalism of quantum open systems, $\mathcal{L}$ can take the form of the Gorini-Kossakowski-Sudarshan-Lindblad (GKS-L) Markovian generator, also known simply as the Lindblad equation. Theories beyond the weak coupling regime have been proposed.
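The first-law bookkeeping above can be made concrete for the simplest case of a qubit coupled to a single thermal bath through a GKS-L generator. The sketch below is an illustrative toy, not the article's model; the unit convention ($\hbar = k_B = 1$) and all parameter values are assumptions. With a single bath and no driving, the power vanishes and the heat current $J = \mathrm{Tr}\{\mathcal{L}(\rho) H\}$ decays to zero as the qubit thermalizes.

```python
import numpy as np

# First-law bookkeeping for a qubit weakly coupled to one thermal bath via
# the standard GKS-L thermalization generator. hbar = k_B = 1; illustrative values.
w0, T, gamma = 1.0, 0.5, 0.1
n_th = 1.0 / (np.exp(w0 / T) - 1.0)          # thermal occupation of the bath mode

H = 0.5 * w0 * np.array([[1, 0], [0, -1]], complex)  # basis: index 0 = excited
sm = np.array([[0, 0], [1, 0]], complex)             # sigma_minus = |g><e|
sp = sm.conj().T                                     # sigma_plus  = |e><g|

def D(L, rho):                                # Lindblad dissipator D[L](rho)
    return L @ rho @ L.conj().T - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L)

def Lgen(rho):                                # generator of dissipative dynamics
    return gamma * (n_th + 1) * D(sm, rho) + gamma * n_th * D(sp, rho)

rho = np.array([[1, 0], [0, 0]], complex)     # start in the excited state
print(f"initial heat current J = {np.trace(Lgen(rho) @ H).real:.4f}  (heat flows out)")

dt = 0.01
for _ in range(5000):                         # Euler step of d(rho)/dt = -i[H,rho]+L(rho)
    rho = rho + dt * (-1j * (H @ rho - rho @ H) + Lgen(rho))

J = np.trace(Lgen(rho) @ H).real              # heat current J = Tr{L(rho) H}
print(f"steady-state heat current J = {J:.2e}  (vanishes at equilibrium)")
```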
The quantum absorption refrigerator The absorption refrigerator is of unique importance in setting up an autonomous quantum device. Such a device requires no external power and operates without external intervention in scheduling the operations. The basic construct includes three baths: a power bath, a hot bath and a cold bath. The tricycle model is the template for the absorption refrigerator. The tricycle engine has a generic structure. The basic model consists of three thermal baths: a hot bath with temperature $T_h$, a cold bath with temperature $T_c$ and a work bath with temperature $T_w$. Each bath is connected to the engine via a frequency filter, which can be modeled by three oscillators: $H_0 = \hbar\omega_h a^\dagger a + \hbar\omega_c b^\dagger b + \hbar\omega_w c^\dagger c$, where $\omega_h$, $\omega_c$ and $\omega_w$ are the filter frequencies, on resonance $\omega_w = \omega_h - \omega_c$. The device operates as a refrigerator by removing an excitation from the cold bath as well as from the work bath and generating an excitation in the hot bath. The interaction term in the Hamiltonian is nonlinear and crucial for an engine or a refrigerator: $H_I = \hbar\epsilon \left( a^\dagger b c + a b^\dagger c^\dagger \right)$, where $\epsilon$ is the coupling strength. The first law of thermodynamics represents the energy balance of the heat currents originating from the three baths and collimating on the system: $\frac{dE_s}{dt} = J_h + J_c + J_w$. At steady state no heat is accumulated in the tricycle, thus $J_h + J_c + J_w = 0$. In addition, in steady state the entropy is only generated in the baths, leading to the second law of thermodynamics: $\frac{d\Sigma}{dt} = -\frac{J_h}{T_h} - \frac{J_c}{T_c} - \frac{J_w}{T_w} \ge 0$. This version of the second law is a generalisation of the statement of the Clausius theorem: heat does not flow spontaneously from cold to hot bodies. When the temperature $T_w \to \infty$, no entropy is generated in the power bath. An energy current with no accompanying entropy production is equivalent to generating pure power: $P = J_w$, where $P$ is the power output. Quantum refrigerators and the third law of thermodynamics There are seemingly two independent formulations of the third law of thermodynamics, both of which were originally stated by Walther Nernst. The first formulation is known as the Nernst heat theorem, and can be phrased as: the entropy of any pure substance in thermodynamic equilibrium approaches zero as the temperature approaches zero. The second formulation is dynamical, known as the unattainability principle: it is impossible by any procedure, no matter how idealized, to reduce any assembly to absolute zero temperature in a finite number of operations. At steady state the second law of thermodynamics implies that the total entropy production is non-negative. When the cold bath approaches the absolute zero temperature, it is necessary to eliminate the entropy production divergence at the cold side when $T_c \to 0$, therefore $J_c \to 0$ as $T_c \to 0$. For $T_c \to 0$ the fulfillment of the second law depends on the entropy production of the other baths, which should compensate for the negative entropy production of the cold bath. The first formulation of the third law modifies this restriction. Instead, the third law imposes $\frac{J_c}{T_c} \to 0$, guaranteeing that at absolute zero the entropy production at the cold bath is zero. This requirement leads to the scaling condition of the heat current $J_c \propto T_c^{\alpha+1}$, $\alpha > 0$. The second formulation, known as the unattainability principle, can be rephrased as: no refrigerator can cool a system to absolute zero temperature in finite time. The dynamics of the cooling process is governed by the equation $c_V(T_c)\frac{dT_c}{dt} = -J_c$, where $c_V$ is the heat capacity of the bath. Taking $J_c \propto T_c^{\alpha+1}$ and $c_V \propto T_c^{\eta}$ with $\eta \ge 0$, we can quantify this formulation by evaluating the characteristic exponent $\zeta$ of the cooling process, $\frac{dT_c}{dt} \propto -T_c^{\zeta}$ as $T_c \to 0$, with $\zeta = \alpha + 1 - \eta$. This equation introduces the relation between the characteristic exponents $\zeta$ and $\alpha$.
When $\zeta < 1$ the bath is cooled to zero temperature in a finite time, which implies a violation of the third law. It is apparent from the last equation that the unattainability principle is more restrictive than the Nernst heat theorem. References Further reading Deffner, Sebastian, and Campbell, Steve. Quantum Thermodynamics: An Introduction to the Thermodynamics of Quantum Information. Morgan & Claypool Publishers, 2019. Binder, F., Correa, L. A., Gogolin, C., Anders, J., and Adesso, G. (eds.). Thermodynamics in the Quantum Regime: Fundamental Aspects and New Directions. Springer, 2018. Gemmer, Jochen, Michel, M., and Mahler, Günter. Quantum Thermodynamics: Emergence of Thermodynamic Behavior within Composite Quantum Systems. 2nd ed., 2009. Petruccione, Francesco, and Breuer, Heinz-Peter. The Theory of Open Quantum Systems. Oxford University Press, 2002. External links Quantum mechanics Heat pumps Thermodynamics
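Returning to the tricycle absorption refrigerator described above, the following sketch checks the steady-state first law and the second-law cooling window numerically. The quanta-flux picture (one cooling event extracts $\hbar\omega_c$ from the cold bath and $\hbar\omega_w$ from the work bath, and deposits $\hbar\omega_h = \hbar(\omega_c + \omega_w)$ into the hot bath) follows the text; the unit convention ($\hbar = k_B = 1$) and all numerical values are illustrative assumptions.

```python
import numpy as np

# Entropy-production check for the tricycle absorption refrigerator.
# Currents are taken positive when flowing from a bath into the device.
T_h, T_c, T_w = 3.0, 1.0, 20.0   # illustrative bath temperatures
w_h = 4.0                        # hot filter frequency
j = 1.0                          # quanta flux (arbitrary scale)

for w_c in np.linspace(0.25, 2.0, 8):
    w_w = w_h - w_c                               # resonance condition
    J_c, J_w, J_h = w_c * j, w_w * j, -w_h * j    # currents into the device
    first_law = J_h + J_c + J_w                   # = 0 at steady state
    sigma = -J_h / T_h - J_c / T_c - J_w / T_w    # entropy production rate
    ok = "cooling allowed" if sigma >= 0 else "forbidden by 2nd law"
    print(f"w_c={w_c:.2f}  first law={first_law:+.1e}  sigma={sigma:+.3f}  {ok}")
```

With these numbers the cooling window closes near $\omega_c \approx 1.19$: above that frequency the entropy production would turn negative, so refrigeration is forbidden.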
Quantum heat engines and refrigerators
[ "Physics", "Chemistry", "Mathematics" ]
3,016
[ "Quantum mechanics", "Theoretical physics", "Thermodynamics", "Dynamical systems" ]
52,414,699
https://en.wikipedia.org/wiki/Hypercube%20internetwork%20topology
In computer networking, hypercube networks are a type of network topology used to connect and route data between multiple processing units or computers. Hypercube networks consist of nodes that form the vertices of a hypercube, creating an internetwork connection. A hypercube is basically a multidimensional mesh network with two nodes in each dimension. Due to this similarity, such topologies are usually grouped into the k-ary n-dimensional mesh topology family, where n represents the number of dimensions and k represents the number of nodes in each dimension. Topology A hypercube interconnection network is formed by connecting N nodes, where N can be expressed as a power of 2: $N = 2^m$, where m is the number of bits that are required to label the nodes in the network. So, if there are 4 nodes in the network, 2 bits are needed to represent all the nodes in the network. The network is constructed by connecting the nodes that differ by just one bit in their binary representation. This is commonly referred to as binary labelling. A 3D hypercube internetwork would be a cube with 8 nodes and 12 edges. A 4D hypercube network can be created by duplicating two 3D networks and adding a most significant bit. The newly added bit should be ‘0’ for one 3D hypercube and ‘1’ for the other 3D hypercube. The corners of the respective one-bit-changed MSBs are connected to create the higher hypercube network. This method can be used to construct any hypercube represented with m bits from a hypercube represented with (m-1) bits. E-Cube routing The routing method for a hypercube network is referred to as E-Cube routing. The distance between two nodes in the network is given by the Hamming weight of (number of ones in) the XOR of their respective binary labels. For example, the distance between Node 1 (represented as ‘01’) and Node 2 (represented as ‘10’) is given by: $01 \oplus 10 = 11$, which has a Hamming weight of 2. E-Cube routing is a static routing method that employs the XY-routing algorithm. This is commonly referred to as a deterministic, dimension-ordered routing model. E-Cube routing works by traversing the network in the kth dimension, where k is the position of the least significant non-zero bit in the XOR result of the distance calculation. For example, let the sender's label be ‘00’ and the receiver's label be ‘11’. The XOR of their labels is ‘11’, so the least significant non-zero bit is the LSB, and the message is first routed across the lowest dimension. Figuring out which way to go for a ‘0’ or ‘1’ is determined by the XY routing algorithm. Metrics Different measures of performance are used to evaluate the efficiency of a hypercube network connection against various other network topologies. Degree This defines the number of immediately adjacent nodes to a particular node. These nodes should be immediate neighbors. In the case of a hypercube the degree is m. Diameter This defines the maximum number of nodes that a message must pass through on its way from the source to the destination, which gives the delay in transmitting a message across the network. In the case of a hypercube the diameter is m. Average distance The distance between two nodes is defined by the number of hops in the shortest path between those two nodes. In the case of hypercubes the average distance over all pairs of nodes is m/2. Bisection width This is the lowest number of wires that must be cut in order to divide the network into two equal halves. It is given as $2^{m-1}$ for hypercubes. References Network topology
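As an illustration of the E-Cube routing scheme described in the article above, the following sketch routes a message by correcting the least significant differing bit first. The function name and interface are hypothetical helpers written for this example, not an established API.

```python
def ecube_route(src: int, dst: int, m: int):
    """Deterministic E-Cube route between two nodes of an m-dimensional
    hypercube, correcting the least significant differing bit first."""
    path = [src]
    node = src
    diff = src ^ dst                    # Hamming weight of diff = hop count
    for k in range(m):                  # dimension 0 is the LSB
        if diff & (1 << k):
            node ^= (1 << k)            # traverse the k-th dimension link
            path.append(node)
    return path

# Route from node 00 to node 11 in a 2-D hypercube (distance 2):
print([format(n, "02b") for n in ecube_route(0b00, 0b11, m=2)])
# -> ['00', '01', '11']
```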
Hypercube internetwork topology
[ "Mathematics" ]
750
[ "Network topology", "Topology" ]
52,417,716
https://en.wikipedia.org/wiki/Subramania%20Ranganathan
Subramania Ranganathan (1934–2016), popularly known as Ranga, was an Indian bioorganic chemist and professor and head of the department of chemistry at the Indian Institute of Technology, Kanpur. He was known for his studies on synthetic and mechanistic organic chemistry and was an elected fellow of the Indian National Science Academy, the National Academy of Sciences, India and the Indian Academy of Sciences. The Council of Scientific and Industrial Research, the apex agency of the Government of India for scientific research, awarded him the Shanti Swarup Bhatnagar Prize for Science and Technology, one of the highest Indian science awards, in 1977, for his contributions to chemical sciences. Biography Ranganathan, born on 2 February 1934 in the south Indian state of Tamil Nadu, graduated in chemistry from Madras University and continued there to complete his master's degree in 1957. Before moving to the US to pursue his doctoral studies on a Sloan Kettering Foundation fellowship, he worked at the biochemistry department of the Central Leather Research Institute for a short while. In the US, he enrolled at Ohio State University in Harold Shechter's laboratory and secured a PhD in 1962. He moved to the laboratory of Robert Burns Woodward, the 1965 Nobel laureate, at Harvard University for his post-doctoral studies, and in 1964 he shifted to the Woodward Research Institute, Basel, where he completed those studies. On his return to India in 1966, he joined IIT Kanpur, where he spent his entire official academic career, holding the positions of professor, head of the department and dean, before superannuating in 1994. Post-retirement, he served as an INSA senior scientist, first at the National Institute for Interdisciplinary Science and Technology and later at the Indian Institute of Chemical Technology (IICT); both facilities were earlier known as Regional Research Laboratories. Ranganathan was holding an honorary position at IICT when he died on 8 January 2016, at the age of 81, survived by his son, Anand. He was married to Darshan Ranganathan, an academic, research associate and his co-author; his wife predeceased him. Anand Ranganathan is a scientist working on drugs for TB and malaria at the International Centre for Genetic Engineering and Biotechnology. Legacy During his post-doctoral days, Ranganathan worked closely with Woodward and was known to have assisted the latter in his work on the Woodward–Hoffmann rules. It was during this time that he accomplished the total synthesis of cephalosporin C, and Woodward's Nobel lecture was based on this synthesis. Later, basing his researches on synthetic and mechanistic organic chemistry, he identified new methodologies for the synthesis of prostaglandins, a group of biologically active compounds. His researches have been documented by way of a number of books and over 200 peer-reviewed articles; the online repository of the Indian Academy of Sciences has listed 97 of them, and many authors have cited his researches in their publications. Awards and honors Ranganathan received the Basudev Banerjee Medal in 1975, and the Council of Scientific and Industrial Research awarded him the Shanti Swarup Bhatnagar Prize, one of the highest Indian science awards, in 1977. He received the R. C. Mehrotra Endowment Gold Medal in 2000 and the Silver Medal of the Chemical Research Society of India in 2001; CRSI would honor him again in 2006 with the Lifetime Achievement Award. In 2014, he was awarded the Best Teacher Award by the Indian National Science Academy.
He held lectureships of the University Grants Commission of India (1979–80), the Science and Engineering Research Board (1991) and the Department of Atomic Energy (2001) and delivered several award orations, including the Professor K. Venkatraman Lecture (1979), the Professor A. B. Kulkarni Lecture (1982), the Professor N. V. Subba Rao Memorial Lecture (1985), the Professor T. R. Seshadri Memorial Lecture (1993) and the Maitreyi Memorial Lecture (1994). The Indian Academy of Sciences elected him as a fellow in 1975, and he became an elected fellow of the Indian National Science Academy and the National Academy of Sciences, India in 1981 and 1991 respectively. Books See also References Recipients of the Shanti Swarup Bhatnagar Award in Chemical Science 1934 births Fellows of the Indian Academy of Sciences Tamil scientists University of Madras alumni 2016 deaths Scientists from Tamil Nadu Fellows of the Indian National Science Academy Fellows of the National Academy of Sciences, India Ohio State University alumni Harvard University alumni Academic staff of IIT Kanpur Council of Scientific and Industrial Research Indian Tamil academics Indian organic chemists 20th-century Indian chemists
Subramania Ranganathan
[ "Chemistry" ]
941
[ "Organic chemists", "Indian organic chemists" ]
52,421,585
https://en.wikipedia.org/wiki/Pradeep%20Mathur%20%28scientist%29
Pradeep Mathur (born 1955) is an Indian organometallic and cluster chemist and the founder director of the Indian Institute of Technology, Indore. He is a former professor of the Indian Institute of Technology, Mumbai and is known for his studies on mixed metal cluster compounds. He is an elected fellow of the Indian Academy of Sciences. The Council of Scientific and Industrial Research, the apex agency of the Government of India for scientific research, awarded him the Shanti Swarup Bhatnagar Prize for Science and Technology, one of the highest Indian science awards, in 2000, for his contributions to chemical sciences. He has also been honoured with the award of an honorary Doctor of Science degree by the University of Keele in the U.K. Biography Pradeep Mathur was born on 17 August 1955 in Teheran to Damyanti and Amrit Dayal. Mathur and his older brother, Deepak Mathur, a renowned physicist at TIFR (married to Helen Mathur), were both brought up and educated in London, whilst their father Amrit Dayal worked as a senior diplomatic official at the Indian High Commission in London and Accra. Mathur continued to live in England till he moved to Yale. He gained an honours degree at the University of North London in 1976 and secured a PhD from Keele University in 1981 before moving on to Yale University as a post-doctoral researcher. Mathur chose to move to India and joined the Indian Institute of Technology, Mumbai in 1984 as a member of the faculty of chemistry, where he held several positions before reaching the position of professor and head of the National Single Crystal X-ray Diffraction Facility. When the Indian Institute of Technology, Indore was established in 2009, Mathur was appointed as its founder director. At the end of his first five-year term, his contract was extended for a second term, and he continues to hold the position, simultaneously serving as a professor in the department of chemistry. He has been a visiting professor at the University of Cambridge, the University of Freiburg and the University of Karlsruhe and has been associated with a number of scientific journals, viz. Organometallics, the Journal of Organometallic Chemistry and the Journal of Cluster Science, as a member of their editorial boards. Mathur is married to Vinita, and the couple have two daughters, Nehika and Saloni. Legacy and honors Mathur's researches have focused on the organometallic chemistry of mixed metal cluster compounds, and he has developed synthetic strategies for introducing chalcogen bridges. At IIT Mumbai, he handled projects related to the investigation of unusual metal-mediated transformations and the interactions between metal atoms and unsaturated organic species. He has published his researches by way of chapters contributed to books authored by others and over 180 peer-reviewed articles; ResearchGate and Google Scholar, two online repositories, have listed several of them. He has also guided 22 doctoral scholars in their studies. Mathur was a Fulbright scholar in 1995, and the Indian Academy of Sciences elected him as a fellow in 1996. The Council of Scientific and Industrial Research awarded him the Shanti Swarup Bhatnagar Prize, one of the highest Indian science awards, in 2000. He has also been honoured with an honorary D.Sc. degree by the University of Keele in the U.K.
See also Cluster chemistry References External links Recipients of the Shanti Swarup Bhatnagar Award in Chemical Science 1955 births Indian Institute of Technology directors Fellows of the Indian Academy of Sciences University of Madras alumni 20th-century Indian chemists Organometallic chemistry Alumni of Keele University Yale University alumni Academic staff of IIT Bombay Academics of the University of Cambridge Academic staff of the University of Freiburg Living people
Pradeep Mathur (scientist)
[ "Chemistry" ]
752
[ "Organometallic chemistry" ]
52,423,232
https://en.wikipedia.org/wiki/Flow%20in%20partially%20full%20conduits
In fluid mechanics, flows in closed conduits are usually encountered in places such as drains and sewers, where the liquid flows continuously in the closed channel and the channel is filled only up to a certain depth. Typical examples of such flows are flows in circular and Δ-shaped channels. Closed conduit flow differs from open channel flow only in the fact that in closed channel flow there is a closing top width, while open channels have one side exposed to the immediate surroundings. Closed channel flows are generally governed by the principles of open channel flow, as the flowing liquid possesses a free surface inside the conduit. However, the convergence of the boundary towards the top imparts some special characteristics to the flow; for example, closed channel flows have a finite depth at which maximum discharge occurs. For computational purposes, the flow is taken as uniform flow. Manning's equation, the continuity equation (Q=AV) and the channel's cross-section geometrical relations are used for the mathematical calculation of such closed channel flows. Mathematical analysis for flow in circular channel Consider a closed circular conduit of diameter D, partly full with liquid flowing inside it. Let 2θ be the angle, in radians, subtended by the free surface at the centre of the conduit as shown in figure (a). The area of the cross-section (A) of the liquid flowing through the conduit is calculated as: $A = \frac{D^2}{4}\left(\theta - \sin\theta\cos\theta\right)$ (Equation 1). Now, the wetted perimeter (P) is given by: $P = D\theta$. Therefore, the hydraulic radius (Rh) is calculated from the cross-sectional area (A) and wetted perimeter (P) using the relation: $R_h = \frac{A}{P} = \frac{D}{4}\left(1 - \frac{\sin 2\theta}{2\theta}\right)$ (Equation 2). The rate of discharge may be calculated from Manning's equation: $Q = \frac{1}{n} A R_h^{2/3} S^{1/2} = K A R_h^{2/3}$ (Equation 3), where the constant $K = \frac{\sqrt{S}}{n}$. Now putting $\theta = \pi$ in the above equation yields the rate of discharge for the conduit flowing full: $Q_{full} = K \frac{\pi D^2}{4}\left(\frac{D}{4}\right)^{2/3}$ (Equation 4). Final dimensionless quantities In dimensionless form, the rate of discharge Q is usually expressed as: $\frac{Q}{Q_{full}} = \frac{1}{\pi}\left(\theta - \sin\theta\cos\theta\right)\left(1 - \frac{\sin 2\theta}{2\theta}\right)^{2/3}$ (Equation 5). Similarly, for the velocity (V) we can write: $\frac{V}{V_{full}} = \left(1 - \frac{\sin 2\theta}{2\theta}\right)^{2/3}$ (Equation 6). The depth of flow (H) is expressed in dimensionless form as: $\frac{H}{D} = \frac{1}{2}\left(1 - \cos\theta\right)$ (Equation 7). Flow characteristics The variations of Q/Q(full) and V/V(full) with the H/D ratio are shown in figure (b). From Equation 5, the maximum value of Q/Q(full) is found to be equal to 1.08 at H/D = 0.94, which implies that the maximum rate of discharge through a conduit is observed for a conduit partly full. Similarly, the maximum value of V/V(full) (which is equal to 1.14) is also observed at a conduit partly full, with H/D = 0.81. The physical explanation for these results is generally attributed to the typical variation of Chézy's coefficient with hydraulic radius Rh in Manning's formula. However, an important assumption made in calculating these values is that Manning's roughness coefficient 'n' is independent of the depth of flow. Also, the dimensionless curve of Q/Q(full) shows that when the depth is greater than about 0.82D, there are two possible different depths for the same discharge, one above and one below the value of 0.938D. In practice, it is common to restrict the flow below the value of 0.82D to avoid the region of two normal depths, because if the depth exceeds 0.82D then any small disturbance in the water surface may lead the water surface to seek alternate normal depths, thus leading to surface instability. References Fluid mechanics Hydraulics
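The dimensionless relations above (Equations 5-7) can be checked numerically. The following sketch assumes the geometry reconstructed above and a depth-independent Manning's n; it reproduces the stated maxima of roughly 1.08 at H/D ≈ 0.94 for discharge and 1.14 at H/D ≈ 0.81 for velocity.

```python
import numpy as np

# Dimensionless partly-full circular conduit curves; theta is the half-angle
# subtended at the centre by the free surface (full pipe at theta = pi).
theta = np.linspace(1e-3, np.pi, 200000)

H_D = (1 - np.cos(theta)) / 2                      # depth ratio H/D (Eq. 7)
Rh_ratio = 1 - np.sin(2 * theta) / (2 * theta)     # Rh / (D/4)
V_ratio = Rh_ratio ** (2.0 / 3.0)                  # V / V_full (Eq. 6)
Q_ratio = (theta - np.sin(theta) * np.cos(theta)) / np.pi * V_ratio  # Eq. 5

iq, iv = np.argmax(Q_ratio), np.argmax(V_ratio)
print(f"max Q/Qfull = {Q_ratio[iq]:.3f} at H/D = {H_D[iq]:.3f}")  # ~1.08 at ~0.94
print(f"max V/Vfull = {V_ratio[iv]:.3f} at H/D = {H_D[iv]:.3f}")  # ~1.14 at ~0.81
```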
Flow in partially full conduits
[ "Physics", "Chemistry", "Engineering" ]
744
[ "Physical systems", "Hydraulics", "Civil engineering", "Fluid mechanics", "Fluid dynamics" ]
52,424,203
https://en.wikipedia.org/wiki/Travel%20time%20reliability
According to the FHWA, travel time reliability measures the extent of unexpected delay. A formal definition of travel time reliability is: the consistency or dependability in travel times, as measured from day to day and/or across different times of the day. In addition, there are many different ways to define travel time reliability. For example, according to the NZTA, trip time reliability is measured by the unpredictable variations in journey times, which are experienced for a journey undertaken at broadly the same time every day. The impact is related to the day-to-day variations in traffic congestion, typically as a result of day-to-day variations in flow. This is distinct from variations in individual journey times, which occur within a particular period. As reviewed by Taylor (2013), there are many different concerns to consider when defining travel time reliability. Travel time reliability has been increasingly recognized as a key performance indicator for transportation roadways and transport systems, and it exerts a strong influence on the stakeholders in transportation networks, including users (travelers), service providers, planners, and managers. This has stimulated research into the development of measures to quantify the level of reliability or the extent of variability in travel times. As a result, several travel time reliability measures have been introduced over the last two decades. The reliability measures can be divided into three classes: (1) point-based measures, including probability-based, moment-based, percentile-based, tail-based, and utility-based measures; (2) bound-based measures; and (3) PDF-based measures. References Transportation planning
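As an illustration, the sketch below computes two common percentile-based point measures used in FHWA guidance, the planning time index and the buffer index, on a synthetic travel-time sample. The free-flow time and the distribution parameters are invented for the example, not taken from the article.

```python
import numpy as np

# Two percentile-based travel time reliability measures on synthetic data.
rng = np.random.default_rng(0)
travel_times = 20 + rng.lognormal(mean=1.0, sigma=0.6, size=1000)  # minutes

free_flow = 20.0                          # assumed free-flow travel time
mean_tt = travel_times.mean()
p95 = np.percentile(travel_times, 95)     # near-worst-case travel time

planning_time_index = p95 / free_flow     # total planned time vs free flow
buffer_index = (p95 - mean_tt) / mean_tt  # extra buffer a traveler must add

print(f"mean = {mean_tt:.1f} min, 95th percentile = {p95:.1f} min")
print(f"planning time index = {planning_time_index:.2f}")
print(f"buffer index        = {buffer_index:.2%}")
```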
Travel time reliability
[ "Physics" ]
328
[ "Physical systems", "Transport", "Transport stubs" ]
39,541,633
https://en.wikipedia.org/wiki/Large%20low-shear-velocity%20provinces
Large low-shear-velocity provinces (LLSVPs), also called large low-velocity provinces (LLVPs) or superplumes, are characteristic structures of parts of the lowermost mantle, the region surrounding the outer core deep inside the Earth. These provinces are characterized by slow shear wave velocities and were discovered by seismic tomography of the deep Earth. There are two main provinces: the African LLSVP and the Pacific LLSVP, both extending laterally for thousands of kilometers and possibly up to 1,000 kilometres vertically from the core–mantle boundary. These have been named Tuzo and Jason respectively, after Tuzo Wilson and W. Jason Morgan, two geologists acclaimed in the field of plate tectonics. The Pacific LLSVP underlies four hotspots on Earth's crust that suggest multiple mantle plumes underneath. These zones represent around 8% of the volume of the mantle, or 6% of the entire Earth. Other names for LLSVPs and their superstructures include superswells, superplumes, thermo-chemical piles, or hidden reservoirs, mostly describing their proposed geodynamical or geochemical effects. For example, the name "thermo-chemical pile" interprets LLSVPs as lower-mantle piles of thermally hot and/or chemically distinct material. LLSVPs are still relatively mysterious, and many questions remain about their nature, origin, and geodynamic effects. Seismological modeling Directly above the core–mantle boundary is a thick layer of the lower mantle. This layer is known as the D″ ("D double-prime" or "D prime prime") or degree two structure. LLSVPs were discovered in full mantle seismic tomographic models of shear velocity as slow features at the D″ layer beneath Africa and the Pacific. The global spherical harmonics of the D″ layer are stable throughout most of the mantle, but anomalies appear along the two LLSVPs. By using shear wave velocities, the locations of the LLSVPs can be verified, and a stable pattern for mantle convection emerges. This stable configuration is responsible for the geometry of plate motions at the surface. The LLSVPs lie around the equator, but mostly in the Southern Hemisphere. Global tomography models inherently result in smooth features; local waveform modeling of body waves, however, has shown that the LLSVPs have sharp boundaries. The sharpness of the boundaries makes it difficult to explain the features by temperature alone; the LLSVPs need to be compositionally distinct to explain the velocity jump. Ultra-low velocity zones at smaller scales have been discovered, mainly at the edges of these LLSVPs. By using the solid Earth tide, the density of these regions has been determined. The bottom two thirds are 0.5% denser than the bulk of the mantle. However, tidal tomography cannot determine how the excess mass is distributed; the higher density may be caused by primordial material or subducted ocean slabs. The African LLSVP may be a potential cause of the South Atlantic Anomaly. Origins Several hypotheses have been proposed for the origin and persistence of LLSVPs, depending on whether the provinces represent purely thermal unconformities (i.e. are isochemical in nature, of the same chemical composition as the surrounding mantle) or represent chemical unconformities as well (i.e. are thermochemical in nature, of different chemical composition from the surrounding mantle). If LLSVPs represent purely thermal unconformities, then they may have formed as large mantle plumes of hot, upwelling mantle.
However, geodynamical studies predict that isochemical upwelling of a hotter, lower-viscosity material should produce long, narrow plumes, unlike the large, wide plumes seen in LLSVPs. It is important to remember, however, that the resolution of geodynamical models and of seismic images of Earth's mantle is very different. The current leading hypothesis for the LLSVPs is the accumulation of subducted oceanic slabs. This corresponds to the locations of known slab graveyards surrounding the Pacific LLSVP. These graveyards are thought to be the reason for the high velocity zone anomalies surrounding the Pacific LLSVP and are thought to have formed by subduction zones that were around long before the dispersion—some 750 million years ago—of the supercontinent Rodinia. Aided by the phase transformation, the temperature would partially melt the slabs to form a dense melt that pools and forms the ultra-low velocity zone structures at the bottom of the core-mantle boundary, closer to the LLSVP than the slab graveyards. The rest of the material is then carried upwards via chemically induced buoyancy and contributes to the high levels of basalt found at the mid-ocean ridge. The resulting motion forms small clusters of plumes right above the core-mantle boundary that combine to form larger plumes and then contribute to superplumes. The Pacific and African LLSVP, in this scenario, are originally created by a discharge of heat from the core (4000 K) to the much colder mantle (2000 K); the recycled lithosphere is fuel that helps drive the superplume convection. Since it would be difficult for the Earth's core to maintain this high heat by itself, this gives support for the existence of radiogenic nuclides in the core, as well as the indication that if fertile subducted lithosphere stops subducting in locations preferable for superplume consumption, it will mark the demise of that superplume. Another proposed origin for the LLSVPs is that their formation is related to the giant-impact hypothesis, which states that the Moon formed after the Earth collided with a planet-sized body called Theia. The hypothesis suggests that the LLSVPs may represent fragments of Theia's mantle which sank through to Earth's core-mantle boundary. The higher density of the mantle fragments is due to their enrichment in iron(II) oxide with respect to the rest of Earth's mantle. This higher iron(II) oxide composition would also be consistent with the isotope geochemistry of lunar samples, as well as that of the ocean island basalts overlying the LLSVPs. Dynamics Geodynamic mantle convection models have included compositionally distinctive material. The material tends to get swept up in ridges or piles. When realistic past plate motions are included in the modeling, the material gets swept up in locations that are remarkably similar to the present day location of the LLSVPs. These locations also correspond with known slab graveyard locations. These types of models, as well as the observation that the D″ structure of the LLSVPs is orthogonal to the path of true polar wander, suggest these mantle structures have been stable over large amounts of time. This geometrical relationship is consistent with the position of Pangaea and the formation of the current geoid pattern due to continental break-up from the superswell below. However, the heat from the core is not enough to sustain the energy needed to fuel the superplumes located at the LLSVPs.
There is a phase transition from perovskite to post-perovskite in the downwelling slabs that causes an exothermic reaction. This exothermic reaction helps to heat the LLSVP, but it is not sufficient to account for the total energy needed to sustain it. So it is hypothesized that the material from the slab graveyard can become extremely dense and form large pools of melt concentrate enriched in uranium, thorium, and potassium. These concentrated radiogenic elements are thought to provide the high temperatures needed. So the appearance and disappearance of slab graveyards predict the birth and death of an LLSVP, potentially changing the dynamics of all plate tectonics. Structure and composition A study by researchers from Utrecht University revealed that LLSVPs are not only hotter than their surroundings but also ancient, potentially over a billion years old. The findings suggested that their seismic properties are influenced by factors beyond temperature, such as composition or mineral grain size. Seismic waves passing through LLSVPs decelerate but lose less energy than expected, indicating compositional differences and shedding light on their complex structure. See also Low-velocity zone Cataclysmic pole shift hypothesis Inner core super-rotation Intermediate axis theorem References External links Geophysics Structure of the Earth
Large low-shear-velocity provinces
[ "Physics" ]
1,763
[ "Applied and interdisciplinary physics", "Geophysics" ]
39,548,916
https://en.wikipedia.org/wiki/Term%20graph
A term graph is a representation of an expression in a formal language as a generalized graph whose vertices are terms. Term graphs are a more powerful form of representation than expression trees because they can represent not only common subexpressions (i.e. they can take the structure of a directed acyclic graph) but also cyclic/recursive subexpressions (cyclic digraphs). Abstract syntax trees cannot represent shared subexpressions, since each tree node can have only one parent; this simplicity comes at the cost of efficiency due to redundant duplicate computations of identical terms. For this reason term graphs are often used as an intermediate language at a compilation stage subsequent to abstract syntax tree construction via parsing. The phrase "term graph rewriting" is often used when discussing graph rewriting methods for transforming expressions in formal languages. Considered from the point of view of graph grammars, term graphs are not regular graphs but hypergraphs, where an n-ary word will have a particular subgraph in first place, another in second place, and so on, a distinction that does not exist in the usual undirected graphs studied in graph theory.

Term graphs are a prominent topic in programming language research, since term graph rewriting rules can formally express a compiler's operational semantics. Term graphs are also used as abstract machines capable of modelling chemical and biological computations, as well as graphical calculi such as concurrency models. Term graphs are well suited to representing quantified statements in first-order logic, which makes them useful for automated verification and logic programming. Symbolic programming software is another application for term graphs, which are capable of representing and performing computation with abstract algebraic structures such as groups, fields and rings. The TERMGRAPH conference focuses entirely on research into term graph rewriting and its applications. Term graphs are also used in type inference, where the graph structure aids in implementing type unification.

See also
Term (logic)
Graph rewriting

References

Graph rewriting
Formal systems
Logical expressions
Graph data structures
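To illustrate the subexpression sharing described above, a minimal sketch in Python (the node representation and evaluator are hypothetical, not taken from any particular term-graph library): the expression (x + y) * (x + y) is built as a directed acyclic graph in which both operands point to a single shared node, so the shared subterm is evaluated only once.

```python
# Minimal sketch: representing (x + y) * (x + y) as a term graph.
# A tree would duplicate the subterm x + y; the DAG shares one node.

class Node:
    def __init__(self, label, *children):
        self.label, self.children = label, children

def evaluate(node, env, cache=None):
    """Evaluate a term graph; the cache ensures each shared node
    (identified by object identity) is computed only once."""
    if cache is None:
        cache = {}
    if id(node) in cache:
        return cache[id(node)]
    if node.label in env:                      # variable leaf
        value = env[node.label]
    else:                                      # binary operator node
        a, b = (evaluate(c, env, cache) for c in node.children)
        value = a + b if node.label == "+" else a * b
    cache[id(node)] = value
    return value

x, y = Node("x"), Node("y")
shared = Node("+", x, y)          # one node for the common subexpression
expr = Node("*", shared, shared)  # a DAG: both edges point to the same node
print(evaluate(expr, {"x": 2, "y": 3}))  # 25; "+" is evaluated once
```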
Term graph
[ "Mathematics" ]
402
[ "Mathematical logic", "Graph theory", "Mathematical relations", "Formal systems", "Logical expressions", "Graph rewriting" ]
34,086,984
https://en.wikipedia.org/wiki/Plasmonic%20nanoparticles
Plasmonic nanoparticles are particles whose electron density can couple with electromagnetic radiation of wavelengths that are far larger than the particle, due to the nature of the dielectric–metal interface between the medium and the particles; unlike in a pure metal, where there is a maximum limit on what size wavelength can be effectively coupled based on the material size.

What differentiates these particles from normal surface plasmons is that plasmonic nanoparticles also exhibit interesting scattering, absorbance, and coupling properties based on their geometries and relative positions. These unique properties have made them a focus of research in many applications, including solar cells, spectroscopy, signal enhancement for imaging, and cancer treatment. Their high sensitivity also identifies them as good candidates for designing mechano-optical instrumentation.

Plasmons are the oscillations of free electrons that are the consequence of the formation of a dipole in the material due to electromagnetic waves. The electrons migrate in the material to restore its initial state; however, the light waves oscillate, leading to a constant shift in the dipole that forces the electrons to oscillate at the same frequency as the light. This coupling only occurs when the frequency of the light is equal to or less than the plasma frequency, and is greatest at the plasma frequency, which is therefore called the resonant frequency. The scattering and absorbance cross-sections describe the intensity of a given frequency to be scattered or absorbed. Many fabrication processes or chemical synthesis methods exist for preparation of such nanoparticles, depending on the desired size and geometry.

The nanoparticles can form clusters (the so-called "plasmonic molecules") and interact with each other to form cluster states. The symmetry of the nanoparticles and the distribution of the electrons within them can affect a type of bonding or antibonding character between the nanoparticles, similarly to molecular orbitals. Since light couples with the electrons, polarized light can be used to control the distribution of the electrons and alter the Mulliken term symbol for the irreducible representation. Changing the geometry of the nanoparticles can be used to manipulate the optical activity and properties of the system, but so can the polarized light, by lowering the symmetry of the conductive electrons inside the particles and changing the dipole moment of the cluster. These clusters can be used to manipulate light on the nanoscale.

Theory
The quasistatic equations that describe the scattering and absorbance cross-sections for very small spherical nanoparticles are

$\sigma_{scat} = \frac{8\pi}{3} k^4 a^6 \left| \frac{\varepsilon_p - \varepsilon_m}{\varepsilon_p + 2\varepsilon_m} \right|^2, \qquad \sigma_{abs} = 4\pi k a^3\, \operatorname{Im}\!\left( \frac{\varepsilon_p - \varepsilon_m}{\varepsilon_p + 2\varepsilon_m} \right),$

where $k$ is the wavenumber of the electric field, $a$ is the radius of the particle, $\varepsilon_m$ is the relative permittivity of the dielectric medium and $\varepsilon_p$ is the relative permittivity of the nanoparticle, defined by

$\varepsilon_p(\omega) = 1 - \frac{\omega_p^2}{\omega^2 + i\gamma\omega},$

also known as the Drude model for free electrons, where $\omega_p$ is the plasma frequency, $\gamma$ is the relaxation frequency of the charge carriers, and $\omega$ is the frequency of the electromagnetic radiation. This equation is the result of solving the differential equation for a harmonic oscillator with a driving force proportional to the electric field that the particle is subjected to. For a more thorough derivation, see surface plasmon.

It logically follows that the resonance condition for these equations is reached when the denominator is around zero, such that

$\varepsilon_p + 2\varepsilon_m \approx 0.$

When this condition is fulfilled the cross-sections are at their maximum.
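To make the quasistatic expressions above concrete, a minimal numerical sketch (not from the article; the Drude parameters, particle size, and water medium are illustrative assumptions):

```python
# Minimal sketch: evaluating the quasistatic cross-sections above for a
# small metal sphere in water, using the Drude model. The material
# parameters are illustrative only (a Drude-like metal with ~6 eV
# plasma energy), not fitted to any real metal.
import numpy as np

hbar_eVs = 6.582e-16          # reduced Planck constant, eV*s
wp  = 6.0 / hbar_eVs          # plasma frequency (rad/s), ~6 eV
gam = 0.1 / hbar_eVs          # relaxation frequency (rad/s), ~0.1 eV
eps_m = 1.77                  # relative permittivity of water
a = 10e-9                     # particle radius, m (quasistatic: a << wavelength)
c = 3.0e8

wavelengths = np.linspace(300e-9, 800e-9, 500)
omega = 2 * np.pi * c / wavelengths
eps_p = 1 - wp**2 / (omega**2 + 1j * gam * omega)   # Drude permittivity
k = 2 * np.pi * np.sqrt(eps_m) / wavelengths        # wavenumber in the medium

pol = (eps_p - eps_m) / (eps_p + 2 * eps_m)         # polarizability factor
sigma_sca = (8 * np.pi / 3) * k**4 * a**6 * np.abs(pol)**2
sigma_abs = 4 * np.pi * k * a**3 * pol.imag

# The resonance sits where Re(eps_p) + 2*eps_m ~ 0 (~440 nm here):
print("resonance near %.0f nm" % (wavelengths[np.argmax(sigma_abs)] * 1e9))
```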
These cross-sections are for single, spherical particles. The equations change when particles are non-spherical, or are coupled to one or more other nanoparticles, such as when their geometry changes. This principle is important for several applications. A rigorous electrodynamic analysis of plasma oscillations in a spherical metal nanoparticle of finite size has also been performed.

Applications

Plasmonic solar cells
Due to their ability to scatter light back into the photovoltaic structure and their low absorption, plasmonic nanoparticles are under investigation as a method for increasing solar cell efficiency. Forcing more light to be absorbed by the dielectric increases efficiency. Plasmons can be excited by optical radiation and induce an electric current from hot electrons in materials fabricated from gold particles and light-sensitive molecules of porphin, of precise sizes and specific patterns. The wavelength to which the plasmon responds is a function of the size and spacing of the particles. The material is fabricated using ferroelectric nanolithography. Compared to conventional photoexcitation, the material produced three to ten times the current.

Spectroscopy
In the past 5 years plasmonic nanoparticles have been explored as a method for high-resolution spectroscopy. One group utilized 40 nm gold nanoparticles that had been functionalized such that they would bind specifically to epidermal growth factor receptors to determine the density of those receptors on a cell. This technique relies on the fact that the effective geometry of the particles changes when they appear within one particle diameter (40 nm) of each other. Within that range, quantitative information on the EGFR density in the cell membrane can be retrieved based on the shift in resonant frequency of the plasmonic particles.

Cancer treatment
Plasmonic nanoparticles have demonstrated wide potential for the establishment of innovative cancer treatments. Despite this, no plasmonic nanomaterials are yet employed in clinical practice, because of the associated metal persistence. Preliminary research indicates that some nanomaterials, among which gold nanorods and ultrasmall-in-nano architectures, can convert IR laser light into localized heat, also in combination with other established cancer treatments.

See also
Localized surface plasmon
Plasmonic metamaterials

References

Photovoltaics
Spectroscopy
Cancer treatments
Nanoparticles by physical property
Plasmonics
Plasmonic nanoparticles
[ "Physics", "Chemistry", "Materials_science" ]
1,168
[ "Plasmonics", "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Surface science", "Condensed matter physics", "Nanotechnology", "Spectroscopy", "Solid state engineering" ]
34,088,391
https://en.wikipedia.org/wiki/OpenBSD%20Journal
The OpenBSD Journal is an online newspaper dedicated to coverage of OpenBSD software and related events. The OpenBSD Journal is widely recognized as a reliable source of OpenBSD-related information. It is a primary reporter for such events as hackathons. The site also hosts the OpenBSD developers' blogs.

History
The OpenBSD Journal was founded in 2000 and operated until 1 April 2004 at deadly.org. On 1 April 2004 the editors James Phillips and Jose Nazario announced that the site had ceased its operation. Daniel Hartmeier backed up the contents of the journal in order to preserve them. Further investigation into the articles' structure led to the creation of a CGI-based engine that enabled access to deadly.org's content on a backup server. Consequently, the functionality of adding new articles was implemented, and the previous editors gave permission to re-publish the articles. The OpenBSD Journal was therefore reintroduced at undeadly.org on 9 April 2004.

References

Journal
Computing websites
Technology websites
Internet properties established in 2000
OpenBSD Journal
[ "Technology" ]
203
[ "Computing stubs", "Computing websites", "World Wide Web stubs" ]
38,088,194
https://en.wikipedia.org/wiki/Newbery%E2%80%93Vautin%20chlorination%20process
The Newbery–Vautin chlorination process is a method for extracting gold from its ore through the use of chlorination. This process was jointly developed by James Cosmo Newbery and Claude Vautin.

Background
The process of extracting and reducing gold from pyrite in gold ores using chlorine gas was initially introduced by Karl Friedrich Plattner around 1848.

Precursor preparation
The Newbery–Vautin process and other processes based on chlorination were replaced by processes based on cyanidation, which used fewer reagents. Processes that are free of cyanide and emit less toxic byproducts have also been developed.

References

Chemical processes
Metallurgical processes
Gold mining
Newbery–Vautin chlorination process
[ "Chemistry", "Materials_science" ]
148
[ "Metallurgical processes", "Metallurgy", "Chemical processes", "nan", "Chemical process engineering" ]
38,088,608
https://en.wikipedia.org/wiki/Rule%20of%20mixtures
In materials science, a general rule of mixtures is a weighted mean used to predict various properties of a composite material. It provides a theoretical upper- and lower-bound on properties such as the elastic modulus, ultimate tensile strength, thermal conductivity, and electrical conductivity. In general there are two models, one for axial loading (Voigt model), and one for transverse loading (Reuss model).

In general, for some material property $E$ (often the elastic modulus), the rule of mixtures states that the overall property in the direction parallel to the fibers may be as high as

$E_c = f E_f + (1 - f) E_m$

where
$f$ is the volume fraction of the fibers,
$E_f$ is the material property of the fibers,
$E_m$ is the material property of the matrix.

In the case of the elastic modulus, this is known as the upper-bound modulus, and corresponds to loading parallel to the fibers. The inverse rule of mixtures states that in the direction perpendicular to the fibers, the elastic modulus of a composite can be as low as

$E_c = \left( \frac{f}{E_f} + \frac{1 - f}{E_m} \right)^{-1}.$

If the property under study is the elastic modulus, this quantity is called the lower-bound modulus, and corresponds to a transverse loading.

Derivation for elastic modulus

Voigt modulus
Consider a composite material under uniaxial tension $\sigma_\infty$. If the material is to stay intact, the strain of the fibers, $\epsilon_f$, must equal the strain of the matrix, $\epsilon_m$. Hooke's law for uniaxial tension hence gives

$\sigma_f = E_f \epsilon_f, \qquad \sigma_m = E_m \epsilon_m, \qquad (1)$

where $\sigma_f$, $E_f$, $\sigma_m$, $E_m$ are the stress and elastic modulus of the fibers and the matrix, respectively. Noting stress to be a force per unit area, a force balance gives that

$\sigma_\infty = f \sigma_f + (1 - f) \sigma_m, \qquad (2)$

where $f$ is the volume fraction of the fibers in the composite (and $1 - f$ is the volume fraction of the matrix). If it is assumed that the composite material behaves as a linear-elastic material, i.e., abiding Hooke's law $\sigma_\infty = E_c \epsilon_c$ for some elastic modulus of the composite $E_c$ and some strain of the composite $\epsilon_c$, then equations (1) and (2) can be combined to give

$E_c \epsilon_c = f E_f \epsilon_f + (1 - f) E_m \epsilon_m.$

Finally, since $\epsilon_c = \epsilon_f = \epsilon_m$, the overall elastic modulus of the composite can be expressed as

$E_c = f E_f + (1 - f) E_m.$

Reuss modulus
Now let the composite material be loaded perpendicular to the fibers, assuming that $\sigma_\infty = \sigma_f = \sigma_m$. The overall strain in the composite is distributed between the materials such that

$\epsilon_c = f \epsilon_f + (1 - f) \epsilon_m.$

The overall modulus in the material is then given by

$E_c = \frac{\sigma_\infty}{\epsilon_c} = \frac{\sigma_f}{f \epsilon_f + (1 - f) \epsilon_m} = \left( \frac{f}{E_f} + \frac{1 - f}{E_m} \right)^{-1},$

since $\sigma_f = E_f \epsilon_f$ and $\sigma_m = E_m \epsilon_m$.

Other properties
Similar derivations give the rules of mixtures for

mass density:
$\rho_c = f \rho_f + (1 - f) \rho_m$
where f is the atomic percent of fiber in the mixture.

ultimate tensile strength:
$\sigma_c = f \sigma_f + (1 - f) \sigma_m$

thermal conductivity:
$k_c = f k_f + (1 - f) k_m$

electrical conductivity:
$\sigma_c = f \sigma_f + (1 - f) \sigma_m$

See also
When considering the empirical correlation of some physical properties and the chemical composition of compounds, other relationships, rules, or laws also closely resemble the rule of mixtures:
Amagat's law – Law of partial volumes of gases
Gladstone–Dale equation – Optical analysis of liquids, glasses and crystals
Kopp's law – Uses mass fraction
Kopp–Neumann law – Specific heat for alloys
Richmann's law – Law for the mixing temperature
Vegard's law – Crystal lattice parameters

References

External links
Rule of mixtures calculator

Materials science
Laws of thermodynamics
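A minimal numerical sketch of the two bounds derived above (not from the article; the fiber/matrix moduli and volume fraction are illustrative assumptions, roughly glass fibers in epoxy):

```python
# Minimal sketch: Voigt (upper) and Reuss (lower) bounds on the elastic
# modulus of a fiber composite, as derived above.
def voigt(f, Ef, Em):
    """Upper bound: loading parallel to the fibers (equal strain)."""
    return f * Ef + (1 - f) * Em

def reuss(f, Ef, Em):
    """Lower bound: loading transverse to the fibers (equal stress)."""
    return 1.0 / (f / Ef + (1 - f) / Em)

# Illustrative numbers only: glass fibers (~70 GPa) in epoxy (~3 GPa).
f, Ef, Em = 0.6, 70.0, 3.0
print(f"Voigt upper bound: {voigt(f, Ef, Em):.1f} GPa")   # 43.2 GPa
print(f"Reuss lower bound: {reuss(f, Ef, Em):.1f} GPa")   # ~7.0 GPa
```

The wide gap between the two bounds shows why the true transverse stiffness of a real composite must be measured or modeled in more detail; the rule of mixtures only brackets it.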
Rule of mixtures
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
620
[ "Applied and interdisciplinary physics", "Materials science", "Thermodynamics", "nan", "Laws of thermodynamics" ]
38,093,020
https://en.wikipedia.org/wiki/Hamilton%E2%80%93Jacobi%E2%80%93Einstein%20equation
In general relativity, the Hamilton–Jacobi–Einstein equation (HJEE) or Einstein–Hamilton–Jacobi equation (EHJE) is an equation in the Hamiltonian formulation of geometrodynamics in superspace, cast in the "geometrodynamics era" around the 1960s, by Asher Peres in 1962 and others. It is an attempt to reformulate general relativity in such a way that it resembles quantum theory within a semiclassical approximation, much like the correspondence between quantum mechanics and classical mechanics. It is named for Albert Einstein, Carl Gustav Jacob Jacobi, and William Rowan Hamilton. The EHJE contains as much information as all ten Einstein field equations (EFEs). It is a modification of the Hamilton–Jacobi equation (HJE) from classical mechanics, and can be derived from the Einstein–Hilbert action using the principle of least action in the ADM formalism.

Background and motivation

Correspondence between classical and quantum physics
In classical analytical mechanics, the dynamics of the system is summarized by the action $S$. In quantum theory, namely non-relativistic quantum mechanics (QM), relativistic quantum mechanics (RQM), as well as quantum field theory (QFT), with varying interpretations and mathematical formalisms in these theories, the behavior of a system is completely contained in a complex-valued probability amplitude $\Psi$ (more formally as a quantum state ket $|\Psi\rangle$ – an element of a Hilbert space). Using the polar form of the wave function, so making a Madelung transformation

$\Psi(\mathbf{r}, t) = \rho(\mathbf{r}, t)\, e^{iS(\mathbf{r}, t)/\hbar},$

the phase $S$ of $\Psi$ is interpreted as the action, and the modulus $\rho = \sqrt{\Psi \Psi^*}$ is interpreted according to the Copenhagen interpretation as the probability density function. The reduced Planck constant $\hbar$ is the quantum of angular momentum. Substitution of this into the quantum general Schrödinger equation (SE)

$i\hbar \frac{\partial \Psi}{\partial t} = \hat{H} \Psi,$

and taking the limit $\hbar \rightarrow 0$, yields the classical HJE

$-\frac{\partial S}{\partial t} = H,$

which is one aspect of the correspondence principle.

Shortcomings of four-dimensional spacetime
On the other hand, the transition between quantum theory and general relativity (GR) is difficult to make; one reason is the treatment of space and time in these theories. In non-relativistic QM, space and time are not on equal footing; time is a parameter while position is an operator. In RQM and QFT, position returns to the usual spatial coordinates alongside the time coordinate, although these theories are consistent only with SR in four-dimensional flat Minkowski space, and not curved space nor GR. It is possible to formulate quantum field theory in curved spacetime, yet even this still cannot incorporate GR, because gravity is not renormalizable in QFT. Additionally, in GR particles move through curved spacetime with a deterministically known position and momentum at every instant, while in quantum theory, the position and momentum of a particle cannot be exactly known simultaneously; position $\mathbf{x}$ and momentum $\mathbf{p}$, and energy $E$ and time $t$, are pairwise subject to the uncertainty principles

$\Delta x\, \Delta p \gtrsim \hbar, \qquad \Delta E\, \Delta t \gtrsim \hbar,$

which imply that small intervals in space and time mean large fluctuations in energy and momentum are possible. Since in GR mass–energy and momentum–energy is the source of spacetime curvature, large fluctuations in energy and momentum mean the spacetime "fabric" could potentially become so distorted that it breaks up at sufficiently small scales. There is theoretical and experimental evidence from QFT that vacuum does have energy, since the motion of electrons in atoms fluctuates; this is related to the Lamb shift.
For these reasons and others, at increasingly small scales, space and time are thought to be dynamical up to the Planck length and Planck time scales. In any case, a four-dimensional curved spacetime continuum is a well-defined and central feature of general relativity, but not of quantum mechanics.

Equation
One attempt to find an equation governing the dynamics of a system, in as close a way as possible to QM and GR, is to reformulate the HJE in three-dimensional curved space understood to be "dynamic" (changing with time), and not four-dimensional spacetime dynamic in all four dimensions, as the EFEs are. The space has a metric (see metric space for details).

The metric tensor in general relativity is an essential object, since proper time, arc length, geodesic motion in curved spacetime, and other things, all depend on the metric. The HJE above is modified to include the metric, although it is only a function of the 3d spatial coordinates $\mathbf{r}$ (for example $\mathbf{r} = (x, y, z)$ in Cartesian coordinates), without the coordinate time $t$:

$S = S[g_{ij}(\mathbf{r})].$

In this context $g_{ij}$ is referred to as the "metric field" or simply "field".

General equation (free curved space)
For a free particle in curved "empty space" or "free space", i.e. in the absence of matter other than the particle itself, the equation can be written (in geometrized units):

$\frac{1}{2\sqrt{g}} \left( g_{ik} g_{jl} + g_{il} g_{jk} - g_{ij} g_{kl} \right) \frac{\delta S}{\delta g_{ij}} \frac{\delta S}{\delta g_{kl}} - \sqrt{g}\, R = 0,$

where $g = \det(g_{ij})$ is the determinant of the metric tensor and $R$ the Ricci scalar curvature of the 3d geometry (not including time), and the "$\delta$" instead of "$d$" denotes the variational derivative rather than the ordinary derivative. These derivatives correspond to the field momenta "conjugate to the metric field",

$\pi^{ij} = \frac{\delta S}{\delta g_{ij}},$

the rate of change of action with respect to the field coordinates $g_{ij}(\mathbf{r})$. The $g_{ij}$ and $\pi^{ij}$ here are analogous to $q$ and $p = \partial S / \partial q$, respectively, in classical Hamiltonian mechanics. See canonical coordinates for more background.

The equation describes how wavefronts of constant action propagate in superspace – as the dynamics of matter waves of a free particle unfolds in curved space. Additional source terms are needed to account for the presence of extra influences on the particle, which include the presence of other particles or distributions of matter (which contribute to space curvature), and sources of electromagnetic fields affecting particles with electric charge or spin. Like the Einstein field equations, it is non-linear in the metric because of the products of the metric components, and like the HJE it is non-linear in the action due to the product of variational derivatives in the action.

The quantum mechanical concept, that action is the phase of the wavefunction, can be interpreted from this equation as follows. The phase has to satisfy the principle of least action; it must be stationary for a small change in the configuration of the system, in other words for a slight change in the position of the particle, which corresponds to a slight change in the metric components; the slight change in phase is zero:

$\delta S = \int \frac{\delta S}{\delta g_{ij}}\, \delta g_{ij}\, d^3 \mathbf{r} = 0$

(where $d^3 \mathbf{r}$ is the volume element of the volume integral). So the constructive interference of the matter waves is a maximum. This can be expressed by the superposition principle, applied to many non-localized wavefunctions spread throughout the curved space to form a localized wavefunction:

$\Psi = \sum_n c_n \psi_n$

for some coefficients $c_n$, and additionally the action (phase) $S_n$ for each $\psi_n$ must satisfy

$\delta S_n = 0$

for all $n$, or equivalently,

$\delta S_1 = \delta S_2 = \cdots = 0.$

Regions where $\Psi$ is maximal or minimal occur at points where there is a probability of finding the particle there, and where the action (phase) change is zero. So in the EHJE above, each wavefront of constant action is where the particle could be found.
This equation still does not "unify" quantum mechanics and general relativity, because the semiclassical eikonal approximation in the context of quantum theory and general relativity has been applied to provide a transition between these theories.

Applications
The equation takes various complicated forms in:
Quantum gravity
Quantum cosmology

See also
Foliation
Quantum geometry
Quantum spacetime
Calculus of variations
The equation is also related to the Wheeler–DeWitt equation.
Peres metric

References

Notes

Further reading

Books

Selected papers
(Equation A.3 in the appendix).

General relativity
Hamiltonian mechanics
Quantum gravity
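As a worked sketch of the classical limit invoked above (standard textbook material; the single-particle Hamiltonian with kinetic and potential terms is an assumption here, not taken from this article), the Madelung substitution makes explicit which term drops out as $\hbar \rightarrow 0$:

```latex
% Substituting \Psi = \rho e^{iS/\hbar} into the single-particle SE
%   i\hbar\,\partial_t\Psi = -(\hbar^2/2m)\nabla^2\Psi + V\Psi
% and collecting the real part gives
\[
  -\frac{\partial S}{\partial t}
  = \frac{(\nabla S)^2}{2m} + V
  - \frac{\hbar^2}{2m}\,\frac{\nabla^2 \rho}{\rho}.
\]
% The last term (the quantum potential) carries the only explicit
% \hbar-dependence; as \hbar \to 0 it drops out, leaving the classical
% Hamilton--Jacobi equation \partial_t S + (\nabla S)^2/2m + V = 0.
```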
Hamilton–Jacobi–Einstein equation
[ "Physics", "Mathematics" ]
1,531
[ "Theoretical physics", "Unsolved problems in physics", "Classical mechanics", "General relativity", "Hamiltonian mechanics", "Quantum gravity", "Theory of relativity", "Physics beyond the Standard Model", "Dynamical systems" ]
38,094,287
https://en.wikipedia.org/wiki/Simcenter%20Amesim
Simcenter Amesim is a commercial simulation software for the modeling and analysis of multi-domain systems. It is part of the systems engineering domain and falls into the mechatronic engineering field.

The software package is a suite of tools used to model, analyze and predict the performance of mechatronics systems. Models are described using nonlinear time-dependent analytical equations that represent the system's hydraulic, pneumatic, thermal, electric or mechanical behavior. Compared to 3D CAE modeling, this approach gives the capability to simulate the behavior of systems before detailed CAD geometry is available; hence it is used earlier in the system design cycle, or V-model.

To create a simulation model for a system, a set of libraries is used. These contain pre-defined components for different physical domains. The icons in the system have to be connected, and for this purpose each icon has ports, which have several inputs and outputs. Causality is enforced by linking the inputs of one icon to the outputs of another icon (and vice versa).

Simcenter Amesim libraries are written in C language and Python, and also support Modelica, which is a non-proprietary, object-oriented, equation-based language to model complex physical systems containing, e.g., mechanical, electrical, electronic, hydraulic, thermal, control, electric power or process-oriented subcomponents.

The software runs on Linux and on Windows platforms. Simcenter Amesim is a part of the Siemens Digital Industries Software Simcenter portfolio. This combines 1D simulation, 3D CAE and physical testing with intelligent reporting and data analytics. This portfolio is intended for development of complex products that include smart systems, through implementing a Predictive Engineering Analytics approach.

History
The Simcenter Amesim software was developed by Imagine S.A., a company which was acquired in June 2007 by LMS International, which itself was acquired in November 2012 by Siemens AG. The Imagine S.A. company was created in 1987 by Dr Michel Lebrun from the University Claude Bernard in France, to control complex dynamic systems coupling hydraulic servo-actuators with finite-element mechanical structures. The initial engineering project involved the deck elevation of the sinking Ekofisk North Sea petroleum platforms. In the early 1990s the collaboration with Prof. C. W. Richards, of the University of Bath in England, led to the first commercial release of Simcenter Amesim in 1995, which was then dedicated to fluid control systems. Simcenter Amesim is used by companies in the automotive, aerospace and other advanced manufacturing industries.

Usage
Simcenter Amesim is a multi-domain software that supports modeling a variety of physics domains (hydraulic, pneumatic, mechanic, electrical, thermal, electromechanical). It is based on the Bond graph theory. Under the Windows platform, Simcenter Amesim works with the free GCC compiler, which is provided with the software. It also works with the Microsoft Visual C++ compiler and its free Express edition. Since version 4.3.0 Simcenter Amesim uses the Intel compiler on all platforms.
Platform facilities
Simcenter Amesim features:
Platform Facilities: graphical user interface, interactive help, supercomponents, post-processed variables, experiments management, meta-data, statechart designer
Analysis Tools: table editor, plots, dashboard, 3D animation, replay of results, linear analysis (eigenvalues, modal shapes, transfer functions, root locus), activity index, power and energy computation
Optimization, Robustness, DOE: Design Of Experiments, optimization, Monte-Carlo
Solvers and Numerics: LSODA, DASSL, DASKR, fixed-step solvers, discrete partitioning, parallel processing, Simcenter Amesim/Simcenter Amesim cosimulations
Software Interfaces: generic co-simulation (to be used to co-simulate with any software coupled to Simcenter Amesim), functional mock-up interface (export)
MIL/SIL/HIL and Real-Time: plant/control, various real-time targets
Simulator Scripting: scripting functions to pilot the simulations from Microsoft Excel, MATLAB, Scilab, Python, and support for C and Python development and reverse-engineering script generation from a model
Customization: own customized pre- and post-processing tools with Python, script caller assistant, editor of parameters group, app designer
Modelica Platform: support of the Modelica modeling language
1D/3D CAE: CAD import, CFD software co-simulation, FEA import of reduced modal basis with pre-defined frontier nodes, MBS software cosimulation and import/export

Development
Users can develop submodels from different standard submodels (supercomponent) using the Component Customization functionality, or by programming them in C or in Fortran with the Submodel Editor.

Physical libraries
Physical libraries from which models can be built include control, electrical networks, mechanical, fluid, thermodynamic, IC engine, and aerospace and defense libraries.

Education and research
Simcenter Amesim is used by engineering schools and universities. It is also the reference framework for various research projects in Europe.

Release history

See also
Model-based design
Lumped-element model
Distributed-element model
Bond graphs
GT-SUITE
Mechatronics
Control theory
Real-time computing
Hardware-in-the-loop simulation
Systems engineering
Simulink
20-sim
Wolfram SystemModeler

References

Simulation software
Numerical software
Computer-aided engineering
Simulation programming languages
Fortran
Simcenter Amesim
[ "Mathematics", "Engineering" ]
1,113
[ "Industrial engineering", "Computer-aided engineering", "Construction", "Numerical software", "Mathematical software" ]
38,094,543
https://en.wikipedia.org/wiki/SolveSpace
SolveSpace is a free and open-source 2D/3D constraint-based parametric computer-aided design (CAD) software that supports basic 2D and 3D constructive solid geometry modeling. It is a constraint-based parametric modeler with simple mechanical simulation capabilities. Version 2.1 and onward runs on Windows, Linux and macOS. The Linux version is shipped as a snap and as native packages. It supports STEP and DXF for import and export. By default, SolveSpace utilizes its own CAD file format, .slvs, for model storage. It is possible to export models as a whole or in part to various formats such as PDF, SVG, or Encapsulated PostScript (EPS). It was initially created by Jonathan Westhues and as of 2022 is maintained by a community of volunteers.

History
Development of SolveSpace started in 2008 as commercial proprietary software for Microsoft Windows. A previous software package called SketchFlat, also developed by Westhues, was replaced by SolveSpace. In 2012 version 1.9 was released as unrestricted freeware proprietary software. In 2013 version 2.0 was released as free and open-source software. In 2016 version 2.1 brought support for Linux and macOS. According to an interview given in 2020 by a major maintainer, SolveSpace aims to be backwards compatible as much as possible. The codebase at the time was about 30,000 lines of code, and it took Whitequark almost 2 years to become familiar with it. On September 22, 2020, Whitequark stepped down as a maintainer.

Overview
SolveSpace is free and open source software distributed under the GPL-3.0-or-later license.

Features
SolveSpace is shipped with the following basic features:

2D sketch modeling
SolveSpace supports parametric 2D drawing of lines, circles, arcs, cubic Bézier curves, etc.; datum points and lines are also supported for general, reference-based modeling.

3D solid modeling
Drawing, extrusion, rotation and revolution along a helix are supported in both modes. In 3D it is possible to use basic Boolean operations (union, difference, intersection), though as of version 3.0, SolveSpace had limitations on the order of application of these operations.

Mechanical design and analysis
By using the built-in constraint solver it is possible to visualize planar or spatial linkages with pin, ball, or slide joints, trace their movements, and export their data in CSV format.

Assembly
SolveSpace allows solids to be imported in a special mode that does not allow modeling. These imported solids can then be constrained to ensure that the designed model's dimensions meet necessary requirements.

Plane and solid geometry
Replace hand-solved trigonometry and spreadsheets with a live dimensioned drawing.

Supported file formats

Importing
SolveSpace can open and import its own textual file formats for both editing and assembly. The DXF/DWG file format of AutoCAD (version 2007) is supported for opening and editing.

Exporting
SolveSpace v3.0 is able to export 2D sketches and surfaces into DXF/DWG (AutoCAD version 2007), PDF, SVG, EPS, and HPGL file formats. Wireframes can be exported as DXF and STEP files. Polygon meshes can be exported as STL and Wavefront OBJ; NURBS as STEP. SolveSpace is able to export models in STEP, STL, and G-code for reuse in third-party CAM software.

Linking
SolveSpace can link its own .slvs, STL and IDF files as external parts into a complex assembly.

Workflow
SolveSpace workflow starts either with opening an existing file or creating a new one, and usually involves sketching.
The basic shapes of a new physical part are sketched out and constrained to specific dimensions and locations. When the model is complete, it is either exported to one of the supported CAD formats or into a document for further processing.

Sketching
Modeling in SolveSpace is done by way of sketching in a workplane. A workplane is a plane with an origin for the new sketch, where SolveSpace draws entities. Users can make it active and draw basic primitives such as lines, circles, arcs, dots, and other points of reference on the workplane, and constrain them to specific dimensions and relations. SolveSpace can split intersecting entities via a separate tool. Users can snap points to a grid. There are no software limitations on the number of workplanes a user can create.

Constraints
Constraints include dimension limitation, angle, parallelism with another line, tangency, point, symmetry and alignment of a line with the origin axes (to make them "vertical" or "horizontal"). The radius of a circle, for instance, can be constrained to a specific value, or can be influenced by some other entity dimension.

3D modeling
When sketching is complete, a 3D part can be extruded into a volumetric model for further modeling. An extruded model creates a group along a specified normal. Every group in SolveSpace encapsulates an action applied to the specified sketch, created for every 3D operation, such as an extrusion, rotation, or translation. Created 3D models can also be further constrained with the basic tools mentioned above or combined with another one by Boolean operations. It is also possible to draw a workplane on a specific "surface" of another 3D model; the surface is usually indicated by two line segments joined by a point.

Assembly
In order to verify a newly modeled concept in SolveSpace, users can "link" all the components and constrain them at specific positions to check whether the virtual end-product meets the original concept's design and constraints.

Libraries
SolveSpace depends upon ANGLE, the OpenGL Utility Library, zlib, libpng, libdxfrw, cairo, mimalloc, libsigc++ and some other C++ libraries, as well as freetype2, harfbuzz, and Pango for text rendering. On Linux, SolveSpace uses GTK 3.

Limitations
As of v2.1, the SolveSpace reference lists a disclaimer on limited support for NURBS-surface Boolean operations, which may occasionally fail. As of v3.0 SolveSpace did not provide functionality for chamfers/fillets on top of a 3D solid body; however, these can be made manually. In a 2D sketch, fillets can be created as a tangent arc at a corner point. SolveSpace may fit well for simpler CAM models, but not for sophisticated ones. There is no extrusion along an arbitrary path.

Criticism
A 2013 article and interview with the main developer published in Libre Graphics World praised SolveSpace for its small executable file size, advanced constraint solver, and output formats. However, it was also criticized for some drawbacks it had at the time, such as limited support for NURBS (i.e. Boolean operations) and a lack of native Linux support, the latter of which has since been rectified. On the other hand, NURBS operations are parallel rather than single-threaded.

See also
CAD exchange formats
Computer-aided technologies
Comparison of computer-aided design software
FreeCAD

Notes

References

Publications
Angelo, L. D.; Leali, F.; Stefano, P. D. (May 2016). "Can Open-Source 3D Mechanical CAD Systems Effectively Support University Courses?". International Journal of Engineering Education.
32 (3 (A)): 1313–1324.
Konapala, A.; Koona, R. (October 2016). "Development of Web based Tool Path Generator (W-TPG)". International Journal of Current Engineering and Technology. 6 (5): 1784–1791.
Axelsson, M. T. (May 2017). "Open source CAD – discover the best CAD packages for your next maker project". Linux Format. 223: 26–27.
Rosendahl, M. (2017). "Constraint representation of 2-dimensional models with respect to avoiding cycles". Computer-Aided Design and Applications. 14 (1): 117–126.
Beuchat, B.; Scalisi, A. (January 11, 2019). Cellulo Learning Activity [Semester project report]. CHILI Lab, EPFL, Lausanne, Switzerland.
Frazelle, J. (June 2021). "A New Era for Mechanical CAD". ACM Queue. 19 (2): 5–17.
Fundamentals of 3D Food Printing and Applications. (2018). Great Britain: Elsevier Science.
Biron, M. (2018). Thermoplastics and Thermoplastic Composites. Great Britain: Elsevier Science.
Advances in Human Factors of Transportation: Proceedings of the AHFE 2019 International Conference on Human Factors in Transportation, July 24–28, 2019, Washington D.C., USA. (2019). Germany: Springer International Publishing.
Advances in Mechanism and Machine Science: Proceedings of the 15th IFToMM World Congress on Mechanism and Machine Science. (2019). Germany: Springer International Publishing.
Staple, D. (2023). Robotics at Home with Raspberry Pi Pico: Build Autonomous Robots with the Versatile Low-cost Raspberry Pi Pico Controller and Python. (n.p.): Packt Publishing.

External links

Computer-aided design
Computer-aided design software
Free computer-aided design software
Computer-aided design software for Linux
Computer-aided design software for Windows
Free software programmed in C++
SolveSpace
[ "Engineering" ]
1,943
[ "Computer-aided design", "Design engineering" ]
38,095,130
https://en.wikipedia.org/wiki/Deutoplasm
The deutoplasm comprises the food particles, or yolk substance, stored in the cytoplasm of an ovum or a cell, as distinguished from the protoplasm. Generally, the deutoplasm accumulates about the nucleus and is heavier than the surrounding cytoplasm. In chicken eggs, the cytoplasm and deutoplasm are separate. The primary function of the deutoplasm is to provide the developing embryo with additional nutrients, such as vitamins, minerals, proteins and lipids.

References

Cell biology
Deutoplasm
[ "Biology" ]
118
[ "Cell biology" ]
43,727,603
https://en.wikipedia.org/wiki/Quantum%20excitation%20%28accelerator%20physics%29
Quantum excitation is the effect in circular accelerators or storage rings whereby the discreteness of photon emission causes the charged particles (typically electrons) to undergo a random walk or diffusion process.

Mechanism
An electron moving through a magnetic field emits radiation called synchrotron radiation. The expected amount of radiation can be calculated using the classical power. Considering quantum mechanics, however, this radiation is emitted in discrete packets of photons. For this description, the distribution of the number of emitted photons and also the energy spectrum for the electron should be determined instead. In particular, the normalized power spectrum emitted by a charged particle moving in a bending magnet is given by

$S(\xi) = \frac{9\sqrt{3}}{8\pi}\, \xi \int_\xi^\infty K_{5/3}(\bar{\xi})\, d\bar{\xi}.$

This result was originally derived by Dmitri Ivanenko and Arseny Sokolov and independently by Julian Schwinger in 1949. Dividing each power of this power spectrum by the energy yields the photon flux:

$F(\xi) = \frac{S(\xi)}{\xi} = \frac{9\sqrt{3}}{8\pi} \int_\xi^\infty K_{5/3}(\bar{\xi})\, d\bar{\xi}.$

The photon flux from this normalized power spectrum (of all energies) is then

$\int_0^\infty F(\xi)\, d\xi = \frac{15\sqrt{3}}{8}.$

The fact that the above photon flux integral is finite implies discrete photon emission. It is a Poisson process. The emission rate is

$\frac{dN}{dt} = \frac{15\sqrt{3}}{8}\, \frac{P_\gamma}{E_c}.$

For a travelled distance $L$ at a speed close to $c$, the average number of photons emitted by the particle can be expressed as

$\langle N \rangle = \frac{5}{2\sqrt{3}}\, \alpha \gamma\, \frac{L}{\rho},$

where $\alpha$ is the fine-structure constant and $\gamma$ the Lorentz factor. The probability that $n$ photons are emitted over $L$ is

$P(n) = \frac{\langle N \rangle^n\, e^{-\langle N \rangle}}{n!}.$

The photon number curve and the power spectrum curve intersect at the critical energy

$E_c = \frac{3}{2}\, \frac{\hbar c\, \gamma^3}{\rho},$

where $E = \gamma m c^2$ is the total energy of the charged particle, $\rho$ is the radius of curvature, $r_e$ the classical electron radius, $mc^2$ the particle rest mass energy, $\hbar$ the reduced Planck constant, and $c$ the speed of light.

The mean of the quantum energy is given by

$\langle u \rangle = \frac{8}{15\sqrt{3}}\, E_c$

and impacts mainly the radiation damping. However, the particle motion perturbation (diffusion) is mainly related to the variance of the quantum energy,

$\langle u^2 \rangle = \frac{11}{27}\, E_c^2,$

and leads to an equilibrium emittance. The diffusion coefficient at a given position $s$ is given by

$d(s) = \frac{55}{48\sqrt{3}}\, \frac{r_e\, \hbar c^2\, \gamma^7\, mc^2}{|\rho(s)|^3}.$

Further reading
For an early analysis of the effect of quantum excitation on electron beam dynamics in storage rings, see the article by Matt Sands.

References

Accelerator physics
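To illustrate the random walk described above, a minimal Monte Carlo sketch (not from the article; the machine parameters are hypothetical, radiation damping is ignored, and every photon is crudely assigned the mean energy, so only the Poisson number fluctuations drive the diffusion):

```python
# Minimal sketch: Poisson photon emission in a bend, showing the
# resulting random walk (diffusion) in particle energy.
import numpy as np

alpha = 1 / 137.036
gamma = 6000.0                 # Lorentz factor (~3 GeV electrons), assumed
rho = 10.0                     # bending radius, m (hypothetical)
L = 1.0                        # slice of trajectory per step, m
hbar_c = 3.16e-26              # hbar * c, J*m
E_c = 1.5 * hbar_c * gamma**3 / rho                      # critical energy, J
N_mean = 5 * alpha * gamma * L / (2 * np.sqrt(3) * rho)  # photons per step

rng = np.random.default_rng(0)
n_particles, n_steps = 10_000, 1000
dE = np.zeros(n_particles)     # energy deviation of each particle
for _ in range(n_steps):
    n_photons = rng.poisson(N_mean, n_particles)
    # crude model: each photon carries <u> = 8 E_c / (15 sqrt(3));
    # sampling the full K_{5/3} spectrum would sharpen this.
    dE -= n_photons * 8 * E_c / (15 * np.sqrt(3))
dE -= dE.mean()                # look at the spread around the mean loss
print(f"rms energy spread after {n_steps} steps: {dE.std():.3e} J")
```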
Quantum excitation (accelerator physics)
[ "Physics" ]
404
[ "Accelerator physics", "Applied and interdisciplinary physics", "Experimental physics" ]
43,730,809
https://en.wikipedia.org/wiki/Byers%E2%80%93Yang%20theorem
In quantum mechanics, the Byers–Yang theorem states that all physical properties of a doubly connected system (an annulus) enclosing a magnetic flux $\Phi$ through the opening are periodic in the flux with period $\Phi_0 = hc/e$ (the magnetic flux quantum). The theorem was first stated and proven by Nina Byers and Chen-Ning Yang (1961), and further developed by Felix Bloch (1970).

Proof
An enclosed flux $\Phi$ corresponds to a vector potential $\mathbf{A}(\mathbf{r})$ inside the annulus with a line integral $\oint \mathbf{A} \cdot d\mathbf{l} = \Phi$ along any path that circulates around once. One can try to eliminate this vector potential by the gauge transformation

$\psi' = \exp\!\left( \frac{ie}{\hbar c} \sum_j \chi(\mathbf{r}_j) \right) \psi$

of the wave function $\psi(\mathbf{r}_1, \mathbf{r}_2, \ldots)$ of electrons at positions $\mathbf{r}_1, \mathbf{r}_2, \ldots$. The gauge-transformed wave function satisfies the same Schrödinger equation as the original wave function, but with a different magnetic vector potential $\mathbf{A}' = \mathbf{A} - \nabla\chi$. It is assumed that the electrons experience zero magnetic field $\mathbf{B} = \nabla \times \mathbf{A} = 0$ at all points inside the annulus, the field being nonzero only within the opening (where there are no electrons). It is then always possible to find a function $\chi(\mathbf{r})$ such that $\mathbf{A}'(\mathbf{r}) = 0$ inside the annulus, so one would conclude that the system with enclosed flux $\Phi$ is equivalent to a system with zero enclosed flux.

However, for an arbitrary flux the gauge-transformed wave function is no longer single-valued: the phase of $\psi'$ changes by $2\pi \Phi / \Phi_0$ whenever one of the coordinates $\mathbf{r}_j$ is moved along the ring to its starting point. The requirement of a single-valued wave function therefore restricts the gauge transformation to fluxes $\Phi$ that are an integer multiple of $\Phi_0$. Systems that enclose a flux differing by a multiple of $hc/e$ are equivalent.

Applications
An overview of physical effects governed by the Byers–Yang theorem is given by Yoseph Imry. These include the Aharonov–Bohm effect, persistent current in normal metals, and flux quantization in superconductors.

References

Theorems in quantum mechanics
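A minimal sketch of a textbook special case of this periodicity (a single electron on a one-dimensional ring; the energy scale E0 is left arbitrary, and the spectrum E_n = E0 (n − Φ/Φ0)² is a standard result, not specific to this article):

```python
# Minimal sketch: the spectrum of an electron on a flux-threaded ring,
#   E_n = E0 * (n - Phi/Phi0)^2,
# is invariant under Phi -> Phi + Phi0 (relabel n -> n + 1), a special
# case of the Byers-Yang periodicity.
import numpy as np

def levels(phi_over_phi0, n_max=20, E0=1.0):
    n = np.arange(-n_max, n_max + 1)     # angular momentum quantum numbers
    return np.sort(E0 * (n - phi_over_phi0) ** 2)

low = levels(0.3)[:5]
shifted = levels(1.3)[:5]          # one flux quantum more
print(np.allclose(low, shifted))   # True: spectrum is Phi_0-periodic
```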
Byers–Yang theorem
[ "Physics", "Mathematics" ]
368
[ "Theorems in quantum mechanics", "Equations of physics", "Quantum mechanics", "Theorems in mathematical physics", "Physics theorems" ]
43,732,972
https://en.wikipedia.org/wiki/Protospacer%20adjacent%20motif
A protospacer adjacent motif (PAM) is a 2–6-base pair DNA sequence immediately following the DNA sequence targeted by the Cas9 nuclease in the CRISPR bacterial adaptive immune system. The PAM is a component of the invading virus or plasmid, but is not found in the bacterial host genome and hence is not a component of the bacterial CRISPR locus. Cas9 will not successfully bind to or cleave the target DNA sequence if it is not followed by the PAM sequence. PAM is an essential targeting component which distinguishes bacterial self from non-self DNA, thereby preventing the CRISPR locus from being targeted and destroyed by the CRISPR-associated nuclease.

Spacers/protospacers
In a bacterial genome, CRISPR loci contain "spacers" (viral DNA inserted into a CRISPR locus) that in type II adaptive immune systems were created from invading viral or plasmid DNA (called "protospacers"). Upon subsequent invasion, a CRISPR-associated nuclease such as Cas9 attaches to a tracrRNA–crRNA complex, which guides Cas9 to the invading protospacer sequence. But Cas9 will not cleave the protospacer sequence unless there is an adjacent PAM sequence. The spacer in the bacterial CRISPR loci will not contain a PAM sequence, and thus will not be cut by the nuclease, but the protospacer in the invading virus or plasmid will contain the PAM sequence, and thus will be cleaved by the Cas9 nuclease. In genome editing applications, a short oligonucleotide known as a guide RNA (gRNA) is synthesized to perform the function of the tracrRNA–crRNA complex in recognizing gene sequences having a PAM sequence at the 3'-end, thereby "guiding" the nuclease to a specific sequence which the nuclease is capable of cutting.

PAM sequences
The canonical PAM is the sequence 5'-NGG-3', where "N" is any nucleobase followed by two guanine ("G") nucleobases. Guide RNAs can transport Cas9 to any locus in the genome for gene editing, but no editing can occur at any site other than one at which Cas9 recognizes PAM. The canonical PAM is associated with the Cas9 nuclease of Streptococcus pyogenes (designated SpCas9), whereas different PAMs are associated with the Cas9 proteins of the bacteria Neisseria meningitidis, Treponema denticola, and Streptococcus thermophilus. 5'-NGA-3' can be a highly efficient non-canonical PAM for human cells, but efficiency varies with genome location. Attempts have been made to engineer Cas9s to recognize different PAMs in order to improve the ability of CRISPR-Cas9 to edit genes at any desired genome location. The Cas9 of Francisella novicida recognizes the canonical PAM sequence 5'-NGG-3', but has been engineered to recognize 5'-YG-3' (where "Y" is a pyrimidine), thus adding to the range of possible Cas9 targets. The Cpf1 nuclease of Francisella novicida recognizes the PAM 5'-TTTN-3' or 5'-YTN-3'. Aside from CRISPR-Cas9 and CRISPR-Cpf1, there are doubtless many yet-undiscovered nucleases and PAMs. CRISPR/Cas13a (formerly C2c2) from the bacterium Leptotrichia shahii is an RNA-guided CRISPR system that targets sequences in RNA rather than DNA. PAM is not relevant for an RNA-targeting CRISPR, although a guanine flanking the target negatively affects efficacy, and has been designated a "protospacer flanking site" (PFS).

GUIDE-Seq
A technology called GUIDE-Seq has been devised to assay off-target cleavages produced by such gene editing. The PAM requirement can be exploited to specifically target single-nucleotide heterozygous mutations while exerting no aberrant effects on wild-type alleles.
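As an illustration of the targeting rule described above, a minimal sketch (the input sequence and the 20-nt spacer length are hypothetical examples; only the given strand is scanned, whereas real target searches also scan the reverse complement):

```python
# Minimal sketch: finding candidate SpCas9 targets, i.e. a 20-nt
# protospacer immediately followed by a 5'-NGG-3' PAM.
import re

def find_spcas9_targets(seq, spacer_len=20):
    """Return (start, protospacer, pam) for every NGG PAM with room
    for a full-length protospacer upstream of it."""
    targets = []
    # zero-width lookahead so overlapping PAMs (e.g. ...GGG...) all match
    for m in re.finditer(r"(?=[ACGT]GG)", seq):
        pam_start = m.start()
        if pam_start >= spacer_len:
            protospacer = seq[pam_start - spacer_len:pam_start]
            pam = seq[pam_start:pam_start + 3]
            targets.append((pam_start - spacer_len, protospacer, pam))
    return targets

seq = "ATGCGTACCGTTAGCTAGCTAACGGTTACCGGATCGATCGTAGCTAGGGCTA"
for start, proto, pam in find_spcas9_targets(seq):
    print(f"{start:3d}  {proto}  PAM={pam}")
```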
See also
CRISPR
CRISPR/Cpf1

External links
Addgene CRISPR-Cas Guide

References

Genome editing
Protospacer adjacent motif
[ "Engineering", "Biology" ]
908
[ "Genetics techniques", "Genetic engineering", "Genome editing" ]
60,143,839
https://en.wikipedia.org/wiki/List%20of%20South%20American%20countries%20by%20life%20expectancy
This is a list of South American countries by life expectancy.

United Nations (2023)
Estimates by the UN's analytical agency.
UN: Estimate of life expectancy for various ages in 2023
UN: Change of life expectancy from 2019 to 2023

World Bank Group (2022)
Estimates by the World Bank Group for 2022. The data is filtered according to the list of countries in South America. The values in the World Bank Group tables are rounded, while all calculations are based on raw data, so rounding can produce apparent inconsistencies of up to 0.01 year in places. In 2014, some of the world's leading countries had a local peak in life expectancy, so that year is chosen for comparison with 2019 and 2022.

WHO (2019)
Estimates by the World Health Organization for 2019.

Charts

See also

References

Life expectancy
South America
List of South American countries by life expectancy
[ "Biology" ]
192
[ "Senescence", "Life expectancy" ]
60,143,883
https://en.wikipedia.org/wiki/Non-linear%20phononics
Non-linear phononics is the physics in solids created or triggered by large-amplitude oscillations of phonons, the elementary vibrations of the crystal lattice. It is an extension of the field of phononics, which studies the regime of small harmonic vibrations and related phenomena in materials. In contrast to phononics, however, large-amplitude oscillations reveal the anharmonicity of the crystal lattice, whose theoretical treatment requires the incorporation of higher-order terms in the crystal potential.

References

Quasiparticles
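As an illustration of the higher-order terms involved, a minimal model commonly used in the non-linear phononics literature (an assumption here, not taken from this article) couples a resonantly driven infrared-active mode to a Raman-active mode through the lowest-order anharmonic term:

```latex
% Minimal anharmonic lattice potential often used to illustrate
% non-linear phononics (an assumed sketch, not from the article):
% a driven infrared-active mode Q_IR couples to a Raman mode Q_R.
\[
  V(Q_{\mathrm{IR}}, Q_{\mathrm{R}})
  = \frac{\omega_{\mathrm{IR}}^2}{2}\, Q_{\mathrm{IR}}^2
  + \frac{\omega_{\mathrm{R}}^2}{2}\, Q_{\mathrm{R}}^2
  - a\, Q_{\mathrm{IR}}^2\, Q_{\mathrm{R}}.
\]
% At large driving amplitudes the cubic term acts as a quasi-static
% force F = a <Q_IR^2> on Q_R, displacing the lattice -- an effect
% absent in the harmonic (small-amplitude) regime.
```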
Non-linear phononics
[ "Physics", "Materials_science" ]
112
[ "Quasiparticles", "Subatomic particles", "Condensed matter physics", "Matter" ]
60,155,045
https://en.wikipedia.org/wiki/GM%20Crops%20%26%20Food
GM Crops & Food: Biotechnology in Agriculture and the Food Chain is a quarterly peer-reviewed scientific journal covering agricultural and food biotechnology. It was established in 2010 as GM Crops, obtaining its current name in 2012. It is published by Taylor & Francis and the editors-in-chief are Naglaa A. Abdallah (Cairo University) and Channapatna S. Prakash (Tuskegee University). According to the Journal Citation Reports, the journal has a 2017 impact factor of 2.913.

References

External links

Academic journals established in 2010
Taylor & Francis academic journals
Biotechnology journals
Genetically modified organisms in agriculture
Quarterly journals
English-language journals
GM Crops & Food
[ "Biology" ]
133
[ "Biotechnology literature", "Biotechnology journals" ]
53,814,193
https://en.wikipedia.org/wiki/Fluorescence%20polarization%20immunoassay
Fluorescence polarization immunoassay (FPIA) is a class of in vitro biochemical test used for rapid detection of antibody or antigen in a sample. FPIA is a competitive homogeneous assay that consists of a simple prepare-and-read method, without the requirement of separation or washing steps.

The basis of the assay is fluorescence anisotropy, also known as fluorescence polarization. If a fluorescent molecule is stationary and exposed to plane-polarized light, it will become excited and consequently emit radiation back to the polarized plane. However, if the excited fluorescent molecule is in motion (rotational or translational) during the fluorescence lifetime, it will emit light in a different direction than the excitation plane. The fluorescence lifetime is the amount of time between the absorption moment and the fluorescent emission moment. Typically, the rate at which a molecule rotates is indicative of its size. When a fluorescent-labelled molecule (tracer) binds to another molecule, the rotational motion will change, resulting in an altered intensity of plane-polarized light, which results in altered fluorescence polarization. Fluorescence polarization immunoassays employ a fluorophore-bound antigen that, when bound to the antibody of interest, will increase fluorescence polarization. The change in polarization is proportional to the amount of antigen in the sample, and is measured by a fluorescence polarization analyzer.

History
Fluorescence polarization was first observed by F. Weigert in 1920. He experimented with solutions of fluorescein, eosin, and other dyes at various temperatures and viscosities. Observing that polarization increased with the viscosity of the solvent and the size of the dye molecule, but decreased with an increase in temperature, he deduced that polarization increased with a decrease in mobility of the emitting species. From 1925 to 1926 Francis Perrin detailed a quantitative theory for fluorescence polarization in multiple significant publications which remain relevant to this day. Since Perrin's contribution, the technique has grown from determining binding isotherms under heavily controlled parameters to the study of antigen–antibody, small molecule–protein, and hormone–receptor binding interactions.

A fluorescence polarization immunoassay was first described and used in the 1960s. The competitive homogeneous characteristic allowed the fluorescence polarization immunoassay to be automated much more easily than other immunoassay techniques such as radioimmunoassays or enzyme-linked immunoassays. Despite originating as a method for direct interaction studies, the technique has been adopted by high-throughput screening (HTS) since the mid-1990s to help facilitate the drug discovery process by studying complex enzymatic interactions.

Principle
FPIA quantifies the change in fluorescence polarization of reaction mixtures of fluorescent-labelled tracer, sample antigen, and defined antibody. Operating under fixed temperature and viscosity allows the fluorescence polarization to be directly proportional to the size of the fluorophore. Free tracer in solution has a lower fluorescence polarization than antibody-bound tracer with slower Brownian motion. The tracer and the specific antigen compete to bind to the antibody, and if the antigen is low in concentration, more tracer will be bound to the antibody, resulting in a higher fluorescence polarization, and vice versa.

A conventional FPIA follows the procedure below:
A specific quantity of sample is added to reaction buffer.
The solution is allowed to equilibrate at room temperature for approximately two minutes.
The solution is evaluated in a fluorescence polarization analyzer to gather a baseline measurement.
A specific quantity of antigen conjugated with fluorophore is added to the solution.
The solution again equilibrates for approximately two minutes.
The solution is evaluated again by the fluorescence polarization analyzer.
The fluorescence polarization value for the tracer-containing solution is compared to the baseline; the magnitude of the difference is proportional to the quantity of target analyte in the sample.

Applications
FPIA has emerged as a viable technique for quantification of small molecules in mixtures, including pesticides, mycotoxins in food, pharmaceutical compounds in wastewater, metabolites in urine and serum indicative of drug use (cannabinoids, amphetamines, barbiturates, cocaine, benzodiazepines, methadone, opiates, and PCP), and various small-molecule toxins, as well as in the analysis of hormone–receptor interactions.

See also
ELISA
Radioimmunoassay
FRET
Magnetic immunoassay
Fluorescence
Immunoscreening
Lateral flow test
Cloned enzyme donor immunoassay
Surround optical fiber immunoassay
Plate reader

References

Biochemistry methods
Immunologic tests
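A minimal sketch of the polarization readout described above (all intensities and limiting values are hypothetical; the linear interpolation to a bound fraction is a simplification, since strictly it is the anisotropy, not the polarization, that is additive):

```python
# Minimal sketch: fluorescence polarization from intensities measured
# parallel and perpendicular to the excitation plane, reported in
# millipolarization (mP) units, plus a crude bound-fraction estimate.
def polarization_mP(I_par, I_perp):
    return 1000.0 * (I_par - I_perp) / (I_par + I_perp)

def bound_fraction(P, P_free, P_bound):
    """Linear interpolation between free- and bound-tracer limits."""
    return (P - P_free) / (P_bound - P_free)

P = polarization_mP(I_par=1250.0, I_perp=1000.0)   # ~111 mP
print(f"P = {P:.0f} mP, bound fraction = "
      f"{bound_fraction(P, P_free=50.0, P_bound=300.0):.2f}")
```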
Fluorescence polarization immunoassay
[ "Chemistry", "Biology" ]
978
[ "Biochemistry methods", "Biochemistry", "Immunologic tests" ]
46,907,831
https://en.wikipedia.org/wiki/Electric%20dipole%20spin%20resonance
Electric dipole spin resonance (EDSR) is a method to control the magnetic moments inside a material using quantum mechanical effects like the spin–orbit interaction. Mainly, EDSR allows the orientation of the magnetic moments to be flipped through the use of electromagnetic radiation at resonant frequencies. EDSR was first proposed by Emmanuel Rashba.

Computer hardware employs the electron charge in transistors to process information and the electron magnetic moment or spin for magnetic storage devices. The emergent field of spintronics aims at unifying the operations of these subsystems. For achieving this goal, the electron spin should be operated by electric fields. EDSR allows the electric component of AC fields to be used to manipulate both charge and spin.

Introduction
Free electrons possess electric charge and a magnetic moment whose absolute value is about one Bohr magneton $\mu_B$. The standard electron spin resonance, also known as electron paramagnetic resonance (EPR), is due to the coupling of the electron magnetic moment to the external magnetic field $\mathbf{B}$ through the Hamiltonian $H = -\boldsymbol{\mu} \cdot \mathbf{B}$ describing its Larmor precession. The magnetic moment is related to the electron angular momentum $\mathbf{S}$ as $\boldsymbol{\mu} = -g\mu_B \mathbf{S}/\hbar$, where $g$ is the g-factor and $\hbar$ is the reduced Planck constant. For a free electron in vacuum $g \approx 2$. As the electron is a spin-1/2 particle, the spin operator can take only two values: $S_z = \pm\hbar/2$. So, the Larmor interaction has quantized energy levels in a time-independent magnetic field, as the energy is equal to $\pm\tfrac{1}{2} g\mu_B B$. In the same way, a resonant AC magnetic field at the frequency $\omega = g\mu_B B/\hbar$ results in electron paramagnetic resonance; that is, the signal gets absorbed strongly at this frequency, as it produces transitions between spin values.

Coupling electron spin to electric fields in atoms
In atoms, electron orbital and spin dynamics are coupled to the electric field of the protons in the atomic nucleus according to the Dirac equation. An electron moving in a static electric field $\mathbf{E}$ sees, according to the Lorentz transformations of special relativity, a complementary magnetic field in the electron frame of reference. However, for slow electrons with $v \ll c$ this field is weak and the effect is small. This coupling is known as the spin–orbit interaction and gives corrections to the atomic energies of the order of the fine-structure constant squared $\alpha^2$, where $\alpha \approx 1/137$. However, this constant appears in combination with the atomic number $Z$ as $\alpha Z$, and this product is larger for massive atoms, already of the order of unity in the middle of the periodic table. This enhancement of the coupling between the orbital and spin dynamics in massive atoms originates from the strong attraction to the nucleus and the large electron speeds. While this mechanism is also expected to couple electron spin to the electric component of electromagnetic fields, such an effect has probably never been observed in atomic spectroscopy.

Basic mechanisms in crystals
Most important, spin–orbit interaction in atoms translates into spin–orbit coupling in crystals. It becomes an essential part of the band structure of their energy spectrum. The ratio of the spin–orbit splitting of the bands to the forbidden gap becomes a parameter that evaluates the effect of spin–orbit coupling, and it is generically enhanced, of the order of unity, for materials with heavy ions or with specific asymmetries. As a result, even slow electrons in solids experience strong spin–orbit coupling. This means that the Hamiltonian of an electron in a crystal includes a coupling between the electron crystal momentum $\mathbf{k}$ and the electron spin.
The coupling to the external electromagnetic field can be found by substituting the momentum in the kinetic energy as $\hbar\mathbf{k} \rightarrow \hbar\mathbf{k} - \tfrac{e}{c}\mathbf{A}$, where $\mathbf{A}$ is the magnetic vector potential, as required by the gauge invariance of electromagnetism. The substitution is known as the Peierls substitution. Thus, the electric field becomes coupled to the electron spin and its manipulation may produce transitions between spin values. Theory Electric dipole spin resonance is the electron spin resonance driven by a resonant AC electric field $\tilde{\mathbf{E}}(t)$. Because the Compton length $\lambda_C = \hbar/mc$, entering into the Bohr magneton $\mu_B = e\lambda_C/2$ and controlling the coupling of the electron spin to an AC magnetic field, is much shorter than all characteristic lengths of solid state physics, EDSR can be by orders of magnitude stronger than EPR driven by an AC magnetic field. EDSR is usually strongest in materials without an inversion center, where the two-fold degeneracy of the energy spectrum is lifted and time-symmetric Hamiltonians include products of the spin-related Pauli matrices $(\sigma_x, \sigma_y, \sigma_z)$ and odd powers of the crystal momentum $\mathbf{k}$, such as terms of the form $\sigma_x k_y$. In such cases the electron spin is coupled to the vector potential $\mathbf{A}$ of the electromagnetic field. Remarkably, EDSR on free electrons can be observed not only at the spin-resonance frequency $\omega_s$ but also at its linear combinations with the cyclotron resonance frequency $\omega_c$. In narrow-gap semiconductors with an inversion center, EDSR can emerge due to direct coupling of the electric field to the anomalous coordinate $\hat{\mathbf{r}}_{so}$. EDSR is allowed both with free carriers and with electrons bound at defects. However, for transitions between Kramers conjugate bound states, its intensity is suppressed by a small factor controlled by the ratio of the spin splitting to $\Delta$, the separation between adjacent levels of the orbital motion. Simplified theory and physical mechanism As stated above, various mechanisms of EDSR operate in different crystals. The mechanism of its generically high efficiency is illustrated below as applied to electrons in direct-gap semiconductors of the InSb type. If the spin–orbit splitting of the energy levels is comparable to the forbidden gap $E_G$, the effective mass $m$ of an electron and its g-factor can be evaluated in the framework of the Kane scheme, see k·p perturbation theory: $\frac{m_0}{m} \approx |g| \approx \frac{2p_{cv}^2}{m_0 E_G}$, where $p_{cv}$ is a coupling parameter between the electron and valence bands, and $m_0$ is the electron mass in vacuum. Choosing the spin–orbit coupling mechanism based on the anomalous coordinate under the condition $\Delta \sim E_G$, we have $\hat{\mathbf{r}}_{so} \sim \frac{\hbar^2}{m E_G}\,(\hat{\boldsymbol{\sigma}}\times\mathbf{k})$, where $\mathbf{k}$ is the electron crystal momentum. Then the energy of an electron in an AC electric field $\tilde{E}$ is $E_1 \sim e\tilde{E}\,\frac{\hbar^2 k}{m E_G}$. An electron moving in vacuum with a velocity $v$ in an AC electric field sees, according to the Lorentz transformation, an effective magnetic field $B_{\mathrm{eff}} \sim \frac{v}{c}\tilde{E}$. Its energy in this field is $E_2 \sim \mu_B \frac{v}{c}\tilde{E}$. The ratio of these energies, with $v = \hbar k/m$, is $\frac{E_1}{E_2} \sim 2\,\frac{m_0 c^2}{E_G}$. This expression shows explicitly where the dominance of EDSR over the electron paramagnetic resonance comes from. The numerator of the second factor, $m_0 c^2 \approx 0.5\ \mathrm{MeV}$, is a half of the Dirac gap, while $E_G$ is of atomic scale, about 1 eV. The physical mechanism behind the enhancement is based on the fact that inside crystals electrons move in the strong field of nuclei, and in the middle of the periodic table the product $Z\alpha$ of the atomic number and the fine-structure constant is of the order of unity, and it is this product that plays the role of the effective coupling constant, cf. spin–orbit coupling. However, one should bear in mind that the above arguments based on the effective mass approximation are not applicable to electrons localized in deep centers of the atomic scale. For them the EPR is usually the dominant mechanism.
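A minimal numerical sketch of the order-of-magnitude estimate $E_1/E_2 \sim 2m_0c^2/E_G$ derived above; the band-gap values are rounded, illustrative numbers rather than precise literature data.

```python
# Order-of-magnitude EDSR/EPR coupling ratio 2*m0*c^2 / E_G for a few
# direct-gap semiconductors. Gap values are rounded, illustrative numbers.
M0_C2_EV = 0.511e6  # electron rest energy m0*c^2 in eV (half the Dirac gap)

gaps_ev = {"InSb": 0.24, "InAs": 0.35, "GaAs": 1.42}  # approximate room-T gaps

for material, e_g in gaps_ev.items():
    ratio = 2.0 * M0_C2_EV / e_g  # E_1/E_2 from the estimate in the text
    print(f"{material}: E_G = {e_g} eV -> EDSR/EPR ratio ~ {ratio:.1e}")
```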
Inhomogeneous Zeeman coupling mechanism The above mechanisms of spin–orbit coupling in solids originate from the Thomas interaction and couple the spin matrices $\boldsymbol{\sigma}$ to the electronic momentum $\mathbf{k}$. However, the Zeeman interaction in an inhomogeneous magnetic field $\mathbf{B}(\mathbf{r})$ produces a different mechanism of spin–orbit interaction through coupling of the Pauli matrices $\boldsymbol{\sigma}$ to the electron coordinate $\mathbf{r}$. The magnetic field can be either a macroscopic inhomogeneous field or a microscopic fast-oscillating field inside ferro- or antiferromagnets, changing at the scale of a lattice constant. Experiment EDSR was first observed experimentally with free carriers in indium antimonide (InSb), a semiconductor with strong spin–orbit coupling. Observations made under different experimental conditions allowed researchers to demonstrate and investigate various mechanisms of EDSR. In a dirty material, Bell observed a motionally narrowed EDSR line at the spin-resonance frequency $\omega_s$ against a background of a wide cyclotron resonance band. MacCombe et al., working with high quality InSb, observed isotropic EDSR driven by the anomalous-coordinate mechanism at the combinational frequency $\omega_s + \omega_c$, where $\omega_c$ is the cyclotron frequency. A strongly anisotropic EDSR band due to inversion-asymmetry (Dresselhaus) spin–orbit coupling was observed in InSb at the spin-flip frequency $\omega_s$ by Dobrowolska et al. Spin–orbit coupling in n-Ge, which manifests itself through a strongly anisotropic electron g-factor, results in EDSR when translational symmetry is broken by inhomogeneous electric fields which mix the wave functions of different valleys. Infrared EDSR observed in the semimagnetic semiconductor CdMnSe was ascribed to spin–orbit coupling through an inhomogeneous exchange field. EDSR with free and trapped charge carriers was observed and studied in a large variety of three-dimensional (3D) systems, including dislocations in Si, an element with notoriously weak spin–orbit coupling. All of the above experiments were performed in the bulk of three-dimensional (3D) systems. Applications Principal applications of EDSR are expected in quantum computing and semiconductor spintronics, currently focused on low-dimensional systems. One of its main goals is fast manipulation of individual electron spins at a nanometer scale, e.g., in quantum dots of about 50 nm size. Such dots can serve as qubits of quantum computing circuits. Time-dependent magnetic fields practically cannot address individual electron spins at such a scale, but individual spins can be well addressed by time-dependent electric fields produced by nanoscale gates. All the basic mechanisms of EDSR listed above operate in quantum dots, but in III–V compounds the hyperfine coupling of electron spins to nuclear spins also plays an essential role. Achieving fast qubits operated by EDSR requires nanostructures with strong spin–orbit coupling. For the Rashba spin–orbit coupling $H_{so} = \alpha\,(\boldsymbol{\sigma}\times\mathbf{k})\cdot\hat{\mathbf{z}}$, the strength of the interaction is characterized by the coefficient $\alpha$. In InSb quantum wires, a magnitude of $\alpha$ of the atomic scale, about 1 eV·Å, has already been achieved. A different way of achieving fast spin qubits based on quantum dots operated by EDSR is to use nanomagnets producing inhomogeneous magnetic fields. See also Fine electronic structure Stark effect Zeeman effect Electron electric dipole moment References Further reading Quantum mechanics
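As an illustration of the qubit manipulation described in the Applications section, the following sketch propagates a spin-1/2 state through a resonant EDSR pi pulse in the rotating-frame approximation; the 50 MHz Rabi frequency is a hypothetical, order-of-magnitude value for gate-driven quantum dots, not a measured one.

```python
import numpy as np
from scipy.linalg import expm

# Rotating-frame model of an EDSR-driven spin flip: H = (hbar/2)*Omega_R*sigma_x.
# OMEGA_R is an assumed, illustrative Rabi frequency of 2*pi*50 MHz.
OMEGA_R = 2 * np.pi * 50e6                      # rad/s
SX = np.array([[0, 1], [1, 0]], dtype=complex)  # Pauli x matrix

t_pi = np.pi / OMEGA_R                           # duration of a pi pulse
U = expm(-1j * 0.5 * OMEGA_R * SX * t_pi)        # evolution operator (hbar = 1 units)
psi = U @ np.array([1, 0], dtype=complex)        # start from spin up
print(f"pi pulse: {t_pi * 1e9:.1f} ns, flip probability = {abs(psi[1])**2:.4f}")
```

The printed flip probability of 1.0000 after a 10 ns pulse shows why electrically driven spin resonance is attractive for fast single-qubit gates.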
Electric dipole spin resonance
[ "Physics" ]
1,997
[ "Theoretical physics", "Quantum mechanics" ]
46,918,469
https://en.wikipedia.org/wiki/Lai-Sang%20Young
Lai-Sang Lily Young (born 1952) is a Hong Kong-born American mathematician who holds the Henry & Lucy Moses Professorship of Science and is a professor of mathematics and neural science at the Courant Institute of Mathematical Sciences of New York University. Her research interests include dynamical systems, ergodic theory, chaos theory, probability theory, statistical mechanics, and neuroscience. She is particularly known for introducing the method of Markov returns in 1998, which she used to prove exponential decay of correlations in Sinai billiards and other hyperbolic dynamical systems. Education and career Although born and raised in Hong Kong, Young came to the US for her education, earning a bachelor's degree from the University of Wisconsin–Madison in 1973. She moved to the University of California, Berkeley for her graduate studies, earning a master's degree in 1976 and completing her doctorate in 1978, under the supervision of Rufus Bowen. She taught at Northwestern University from 1979 to 1980, Michigan State University from 1980 to 1986, the University of Arizona from 1987 to 1990, and the University of California, Los Angeles from 1991 to 1999. She has been the Moses Professor at NYU since 1999. Awards and honors Young became a Sloan Fellow in 1985, and a Guggenheim Fellow in 1997. In 1993, Young was given the Ruth Lyttle Satter Prize in Mathematics of the American Mathematical Society "for her leading role in the investigation of the statistical (or ergodic) properties of dynamical systems". This is a biennial award for outstanding research contributions by a female mathematician. In 2004, she was elected as a fellow of the American Academy of Arts and Sciences. Young was an invited speaker at the International Congress of Mathematicians in 1994, and an invited plenary speaker at the 2018 International Congress of Mathematicians. In 2005, she presented the Noether Lecture of the Association for Women in Mathematics; her talk was entitled "From Limit Cycles to Strange Attractors". In 2007, she presented the Sonia Kovalevsky lecture, jointly sponsored by the AWM and the Society for Industrial and Applied Mathematics. In 2020 she was elected a member of the National Academy of Sciences. She is the recipient of the 2021 Jürgen Moser Lecture prize "for her sustained and deep contributions to the theory of non-uniformly hyperbolic dynamical systems." In 2023, she was awarded the Heinz Hopf Prize and in 2024 the Rolf Schock Prize. References External links Young's Homepage (Plenary Lecture 8) Kevin Hartnett, A Mathematical Model Unlocks the Secrets of Vision, Quanta Magazine, 21 August 2019 1952 births Living people 20th-century American mathematicians 20th-century American women mathematicians 21st-century American mathematicians 21st-century American women mathematicians Courant Institute of Mathematical Sciences faculty Dynamical systems theorists Fellows of the American Academy of Arts and Sciences Members of the United States National Academy of Sciences Hong Kong emigrants to the United States Hong Kong mathematicians Michigan State University faculty Northwestern University faculty University of Arizona faculty University of California, Berkeley alumni University of California, Los Angeles faculty University of Wisconsin–Madison alumni
Lai-Sang Young
[ "Mathematics" ]
630
[ "Dynamical systems theorists", "Dynamical systems" ]
42,322,190
https://en.wikipedia.org/wiki/Eco-Block
An Eco-Block is an environmentally friendly brick made from recycled materials and construction waste. The brick was invented by the Hong Kong Polytechnic University in 2006. Its major feature is that it catalyzes the conversion of nitrogen oxides and other pollutants in air into non-hazardous substances. Eco-Blocks have mainly been used as paving bricks in pedestrian and vehicular areas in Hong Kong and are now in their third generation. Manufacturing Material The major constituents of the Eco-Block are recycled glass and recycled aggregate from construction and demolition waste. Apart from that, a small quantity of photocatalyst is used on the surface layer of the Eco-Block. Producing process A mechanized molding method is used for producing the Eco-Block. The materials are mixed with water and fly ash in a fixed proportion. Then the mixed materials are molded under a combined vibrating and compacting action. Before being put into use, the Eco-Block needs to be cured under suitable conditions. Coating On the surface layer, there is a special coating made from titanium oxide (TiO2). When activated by sunlight, the titanium oxide can catalyze the conversion of nitrogen oxides into oxygen, water, nitrates, sulphates and other non-toxic compounds which can be washed away by water. Development The Eco-Block has currently undergone three generations, and each generation has a slightly different composition. In general, all generations are made up of local construction waste. In the second generation, recycled glass powder from recycled glass bottles was incorporated into the Eco-Block to make its appearance more appealing and to provide a wider range of uses for it. In the third generation, a photocatalyst was added in the production process, which catalyzes a chemical reaction and helps improve the air quality in Hong Kong. Features The Eco-Block is a construction material whose surface layer is made of sand, glass sand, fly ash and cement, while the base layer is made of coarse aggregate, sand, glass sand, fly ash and cement. De-polluting effect The Eco-Block can remove air pollutants such as nitrogen oxides, which cause acid rain. The Eco-Block converts toxic air pollutants into harmless compounds by photocatalytic decomposition. It also helps to reduce the impact of exhaust gases from roadside vehicles. Alleviate pressure on landfills The Eco-Block is made of activated sludge from wastewater treatment plants, recycled glass and construction waste. Therefore, the Eco-Block utilizes waste in manufacturing and reduces pressure on landfills. Save coal consumption Since the Eco-Block has a relatively high compressive strength, it reduces the coal consumption in brick manufacturing. Applications of Eco-Block Local schools and education institutions have started to adopt the Eco-Block for pavement, such as The Hong Kong Polytechnic University. "Eco-blocks for Eco-schools", a joint programme held by The Hong Kong Polytechnic University and HSBC Insurance, has aimed at improving the environment and air quality of schools with the use of the Eco-Block since 2008. King Lam Catholic Primary School (景林天主教小學) was one of the beneficiaries, and an area of 200 square metres on the school premises has been paved with Eco-Blocks. According to the Hong Kong government, in 2011 the government bought and used Eco-Blocks covering a total area of around 17 hectares in government public works contracts, including 1000 and 1500 square meters of Eco-Block paving in Kwun Tong Garden Estate, which is located in Ngau Tau Kok, and in Sha Tau Kok Chuen.
Awards Awards list Notable Mention, ECO-Products Award 2006, Hong Kong (2006) Merit Award, Green Building Award, Hong Kong (2006) Gold Award – The 6th International Exhibition of Inventions (2008) Best Invention Award from Macao Foundation (2008) References Building materials Masonry
Eco-Block
[ "Physics", "Engineering" ]
756
[ "Masonry", "Building engineering", "Construction", "Materials", "Building materials", "Matter", "Architecture" ]
58,636,241
https://en.wikipedia.org/wiki/Bogoliubov%20quasiparticle
In condensed matter physics, a Bogoliubov quasiparticle or Bogoliubon is a quasiparticle that occurs in superconductors. Whereas superconductivity is characterized by the condensation of Cooper pairs into the same ground quantum state, Bogoliubov quasiparticles are elementary excitations above the ground state, which are superpositions (linear combinations) of the excitations of negatively charged electrons and positively charged electron holes, and are therefore neutral spin-½ fermions. These quasiparticles are named after Nikolay Bogolyubov. Sometimes these quasiparticles are also called Majorana modes, in analogy with the equations for Majorana fermions. References Superconductivity Quasiparticles
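A small numerical sketch of the statement above that Bogoliubov quasiparticles are superpositions of electron and hole excitations: it evaluates the standard BCS quasiparticle dispersion and the electron/hole coherence factors for an assumed, purely illustrative gap of 1 meV.

```python
import numpy as np

# Bogoliubov quasiparticle dispersion E(xi) = sqrt(xi^2 + Delta^2) and the
# coherence factors u^2, v^2 of the electron-hole superposition.
# DELTA = 1 meV is an illustrative BCS gap; xi is the normal-state energy
# measured from the Fermi level.
DELTA = 1.0  # meV

xi = np.linspace(-3, 3, 7)            # meV
E = np.sqrt(xi**2 + DELTA**2)         # quasiparticle energy, gapped at Delta
u2 = 0.5 * (1 + xi / E)               # electron weight of the superposition
v2 = 0.5 * (1 - xi / E)               # hole weight (u2 + v2 = 1)

for x, e, u, v in zip(xi, E, u2, v2):
    print(f"xi={x:+.1f}  E={e:.2f}  u^2={u:.2f}  v^2={v:.2f}")
```

At the Fermi level (xi = 0) the weights are equal, u^2 = v^2 = 1/2, which is why the quasiparticle there is an equal, charge-neutral mixture of electron and hole.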
Bogoliubov quasiparticle
[ "Physics", "Materials_science", "Engineering" ]
161
[ "Matter", "Materials science stubs", "Physical quantities", "Superconductivity", "Materials science", "Subatomic particles", "Condensed matter physics", "Quasiparticles", "Condensed matter stubs", "Electrical resistance and conductance" ]
58,639,717
https://en.wikipedia.org/wiki/4-Vinylpyridine
4-Vinylpyridine (4-VP) is an organic compound with the formula CH2CHC5H4N. It is a derivative of pyridine with a vinyl group in the 4-position. It is a colorless liquid, although impure samples are often brown. It is a monomeric precursor to specialty polymers. 4-Vinylpyridine is prepared by the condensation of 4-methylpyridine and formaldehyde. 4-VP is sometimes used in biochemistry to alkylate protein cysteine residues. When compared to other alkylation agents, such as iodoacetamide, acrylamide, and N-ethylmaleimide, 4-VP is less reactive, meaning the completion rate of cysteine alkylation is lower, but it also yields fewer side reactions. For some uses, such as during mass spectrometry measurements, 4-VP might be better because it is basic and can thus be protonated, adding net charge. See also 2-Vinylpyridine References 4-Pyridyl compounds Monomers Vinyl compounds
4-Vinylpyridine
[ "Chemistry", "Materials_science" ]
231
[ "Monomers", "Polymer chemistry" ]
58,640,005
https://en.wikipedia.org/wiki/ECOSTRESS
ECOSTRESS (Ecosystem Spaceborne Thermal Radiometer Experiment on Space Station) is an ongoing scientific experiment in which a radiometer mounted on the International Space Station (ISS) measures the temperature of plants growing in specific locations on Earth over the course of a solar year. These measurements give scientists insight into the effects of events like heat waves and droughts on crops. ECOSTRESS radiometer The instrument that collects this data is a multispectral thermal infrared radiometer. It measures temperatures at the surface of the Earth, rather than surface air temperature. Dr. Simon Hook is the principal investigator of the ECOSTRESS mission and Dr. Joshua Fisher is the science lead; both are located at NASA's Jet Propulsion Laboratory (JPL). ECOSTRESS data is archived at the Land Processes Distributed Active Archive Center (LP DAAC), which is a data center managed by the United States Geological Survey (USGS). ECOSTRESS data is discoverable through various platforms, including LP DAAC's AppEEARS (Application for Extracting and Exploring Analysis Ready Samples) tool, which allows users to quickly subset and reproject data into a geographic lat/lon format. The data collected is also published via the open-access TERN Data Discovery Portal in Australia. The ECOSTRESS radiometer was built at JPL and consists of 5 spectral bands in the thermal infrared (8-12 micron) and 1 band in the shortwave infrared, which is used for geolocation. ECOSTRESS was delivered to the ISS by the SpaceX Dragon after a launch out of Cape Canaveral, Florida on 29 June 2018. The Dragon arrived at the space station on 3 July 2018. The radiometer was mounted on the station's Kibo module. The radiometer constituted a substantial share of the cargo on board the Dragon. Other cargo included spare parts for the Canadarm2 robotic arm, as well as other equipment and supplies. The high-resolution images have a pixel size of 70 meters by 38 meters (225 feet by 125 feet). Key science questions The key science questions that ECOSTRESS is addressing include: How is the terrestrial biosphere responding to changes in water availability? How do changes in diurnal vegetation water stress impact the global carbon cycle? Can agricultural vulnerability be reduced through advanced monitoring of agricultural water consumptive use and improved drought estimation? Other uses Image data helps capture and quantify the temperature differences between man-made and natural surfaces. JPL released a report highlighting a 10 June 2022 record high air temperature in Las Vegas, NV of 43 C (109 F) and the corresponding ground temperatures. For instance, asphalt surfaces reached 50 C (122 F), while suburban neighborhood surfaces reached 42 C (108 F) and green spaces measured 37 C (99 F). Team Members The original ECOSTRESS Science Team included Dr. Glynn Hulley (JPL) and scientists at the U.S. Department of Agriculture, including Dr. Andrew French and Dr. Martha Anderson. Other science team members include Drs. Eric Wood (Princeton), Rick Allen (University of Idaho), and Chris Hain (NASA Marshall Space Flight Center). ECOSTRESS is the first Earth Venture mission to establish an Early Adopters Program, which provided its members with early access to provisional data and opportunities to collaborate with other ECOSTRESS users in a Slack channel. As of August 2019, the Early Adopters Program has transitioned to the ECOSTRESS Community of Practice, with over 250 members.
Science data products Science data products produced by ECOSTRESS include estimates of land surface temperature and emissivity, together with derived measures of evapotranspiration, vegetation water stress, and drought conditions. See also Effects of climate change on plant biodiversity Effects of global warming Hardiness (plants) Scientific research on the International Space Station Water scarcity External links JPL ECOSTRESS References Biology experiments Electromagnetic radiation meters International Space Station experiments Radiometry
ECOSTRESS
[ "Physics", "Technology", "Engineering" ]
775
[ "Telecommunications engineering", "Spectrum (physical sciences)", "Electromagnetic radiation meters", "Electromagnetic spectrum", "Measuring instruments", "Radiometry" ]
58,641,040
https://en.wikipedia.org/wiki/Atsuko%20Miyaji
Atsuko Miyaji (born 1965) is a Japanese cryptographer and number theorist known for her research on elliptic-curve cryptography and software obfuscation. She is a professor in the Division of Electrical, Electronic and Information Engineering at Osaka University. Education and career Miyaji was born in Osaka Prefecture and became interested in mathematics as an elementary school student after learning of the Epimenides paradox. She studied mathematics as an undergraduate at Osaka University, and chose to go into industry instead of continuing as a graduate student, working from 1990 to 1998 for Matsushita Electric Industrial. During this time she returned to graduate school, and earned a doctorate from Osaka University in 1997. She became an associate professor at the Japan Advanced Institute of Science and Technology in 1998, and returned to Osaka University as a professor in 2015. She has also held short-term teaching or visiting positions at Osaka Prefecture University, the University of Tsukuba, the University of California, Davis, and Kyoto University. Book Miyaji is the author of a 2012 Japanese-language book on cryptography, "代数学から学ぶ暗号理論:整数論の基礎から楕円曲線暗号の実装まで" (roughly, "Cryptography Learned from Algebra: From the Foundations of Number Theory to the Implementation of Elliptic-Curve Cryptography"). References External links ResearchMap profile 1965 births Living people Japanese computer scientists Japanese women computer scientists Public-key cryptographers Japanese mathematicians Japanese women mathematicians Number theorists Academic staff of Osaka University Osaka University alumni
Atsuko Miyaji
[ "Mathematics" ]
294
[ "Number theorists", "Number theory" ]
58,643,344
https://en.wikipedia.org/wiki/Lisa%20Pratt
Lisa Pratt is an American biogeochemist and astrobiologist who served as the 7th Planetary Protection Officer for NASA from 2018 to 2021 under President Donald Trump. Her academic work as a student, professor, and researcher on organisms and their respective environments prepared her for the position, in which she was responsible for protecting Earth and other planets in the solar system from traveling microbes. She is a Provost Professor Emeritus of Earth and Atmospheric Sciences at Indiana University Bloomington. Early life and education Lisa Pratt was born and raised in Rochester, Minnesota. At her high school in Minnesota, Pratt took science courses up until her senior year. When she began college, she was determined not to pursue a degree in science because she felt women were not welcome in the field. Her father had been a surgeon at the Mayo Clinic, and she noted that none of his peers were female-identifying. Pratt first began her undergraduate education at Rollins College studying Spanish. However, she later transferred to the University of North Carolina, where she began studying botany. Pratt received her Bachelor of Arts in botany from the University of North Carolina in 1972. In 1974, she received her Master of Science in botany from the University of Illinois. Pratt later entered the field of geology by earning her Master of Science from the University of North Carolina in 1978 and her doctorate from Princeton University in 1982. Academic career Pratt held a post-doctoral fellowship for two years at the U.S. Geological Survey in Denver and stayed on for an additional five years as a Research Geologist in the U.S.G.S. Branch of Petroleum Geology before leaving Colorado for a junior professorship in biogeochemistry at Indiana University in 1987 to help train young scientists for careers in the petroleum exploration and extraction industry. Pratt is a Provost Professor Emeritus of Earth and Atmospheric Sciences at Indiana University Bloomington, where she has been a faculty member since 1987. Since joining Indiana University's faculty, Pratt has focused her research on how extreme environments affect the microorganisms within them. Projects When Pratt was a doctoral student, her work focused on the periods of time when Earth's oceans were starved of oxygen, producing oceanic anoxic events and the black sediment deposits they left behind. She looked at the geological record to better understand what had taken place millions of years ago. Later, as Pratt was completing her post-doctoral work at the U.S. Geological Survey in Denver, she studied microorganisms in the extreme heat of active African gold mines. This led NASA to look to bring Pratt in to help study microorganisms relevant to its future projects. In 2011, she received a $2.4 million grant from NASA's Astrobiology Science and Technology for Exploring Planets program to study microorganisms on the Greenland Ice Sheet. While Pratt has been a faculty member at Indiana University Bloomington since 1987, she has a history of working with NASA since the early 2000s. She served as a team director at the NASA Astrobiology Institute from 2003 to 2008. Pratt also served as a chair for NASA's Mars Exploration Program Analysis Group from 2013 to 2016, and serves as a chair for the Returned Sample Science Board for the Mars 2020 rover mission. In June 2017, the application for the position of Planetary Protection Officer was posted, but Pratt was hesitant to apply.
She says that encouragement from her daughter led to her submitting her name, and on February 5, 2018, Pratt became the Planetary Protection Officer for NASA, leaving her role as dean of Indiana University's College of Arts and Sciences. She was one of a rumored 1,400 applicants vying for the position. She had two responsibilities at NASA: protecting the Earth in the event of extraterrestrial involvement, and ensuring that Earth's microbes do not travel to and impact other planets in the solar system. Her research at NASA focused on developing the tools and techniques needed to avoid organic-constituent and biological contamination during either human or robotic missions. Additionally, Pratt was responsible for updating planetary protection policies in response to changing federal legislation. In May 2021, President Biden announced the appointment of J. Nick Benardini to replace Pratt as Planetary Protection Officer effective the following month. Awards and honors National Research Council Post-Doctoral Fellow 1982-1984 Matson Award American Association of Petroleum Geologist, 1986 Distinguished Lecturer, American Association Petroleum Geologists, 1990-1991 Association of Women Geoscientists, Outstanding Educator, 1997 American Association of Petroleum Geologists, Eastern Section, Outstanding Educator, 2002 Indiana University College of Arts and Sciences Alumni, Distinguished Faculty Member, 2003 Phi Beta Kappa Visiting Scholar, 2009-2011 Fellow Geological Society of America, 2010 Phi Beta Kappa Triennial Council Meeting, featured lecture, 2012 President's Medal for Excellence at Indiana University (2018) Bicentennial Medal at Indiana University (2020) References Living people Year of birth missing (living people) Place of birth missing (living people) Indiana University Bloomington faculty NASA people Astrobiologists University of North Carolina alumni University of Illinois alumni Princeton University alumni American women biologists Fellows of the Geological Society of America Planetary scientists American women planetary scientists Biogeochemists 21st-century American women
Lisa Pratt
[ "Chemistry" ]
1,035
[ "Geochemists", "Biogeochemistry", "Biogeochemists" ]
58,646,074
https://en.wikipedia.org/wiki/Surge%20in%20compressors
Compressor surge is a form of aerodynamic instability in axial compressors or centrifugal compressors. The term describes violent air flow oscillating in the axial direction of a compressor, which indicates that the axial component of fluid velocity varies periodically and may even become negative. In the early literature, the phenomenon of compressor surge was identified by audible thumping and honking at frequencies as low as 1 hertz, pressure pulsations throughout the machine, and severe mechanical vibration. Description Compressor surge can be classified into deep surge and mild surge. Compressor surge with negative mass flow rates is considered deep surge, while surge without reverse flows is generally termed mild surge. On a performance map, the stable operating range of a compressor is limited by the surge line. Although the line is named after surge, technically, it is an instability boundary which denotes the onset of discernible flow instabilities, such as compressor surge or rotating stall. When the mass flow rate drops to a critical value at which discernible flow instabilities take place, nominally, the critical value should be determined as a surge mass flow rate on a constant speed line; however, in practice, the surge line on a performance map is affected by the specific criteria adopted for determining discernible flow instabilities. Effects Compressor surge is catastrophic for the compressor and the whole machine. When compressor surge happens, the operating point of a compressor, which is usually denoted by the pair of the mass flow rate and pressure ratio, orbits along a surge cycle on the compressor performance map. The unstable performance caused by compressor surge is not acceptable for machines on which a compressor is mounted to ventilate or compress air. In addition to affecting performance, compressor surge is also accompanied by loud noises. Frequencies of compressor surge can range from a few hertz to dozens of hertz depending on the configuration of a compression system. Although the Helmholtz resonance frequency is often employed to characterize the unsteadiness of mild surge, it was found that Helmholtz oscillation did not trigger compressor surge in some cases. Another effect of compressor surge is on the solid structure. Violent flows of compressor surge repeatedly hit blades in the compressor, resulting in blade fatigue or even mechanical failure. While fully developed compressor surge is axisymmetric, its initial phase is not necessarily axisymmetric. Actually, severe damage from compressor surge is often related to very large transverse loads on blades and casing in its initial transient. A chain reaction of compressor surge is the flameout of a jet engine. Due to a lack of air intake in the case of compressor surge, there will be unburnt fuel in the combustion chamber, and that unburnt fuel will burn and cause flameout near the exit of the engine where oxygen is sufficient. Causes In most low-speed and low-pressure cases, rotating stall comes prior to compressor surge; however, a general cause-effect relation between rotating stall and compressor surge has not been determined yet. On a constant speed line of a compressor, the mass flow rate decreases as the pressure delivered by the compressor gets higher. Internal flows of the compressor are subject to a very large adverse pressure gradient, which tends to destabilize the flow and cause flow separation.
A fully developed compressor surge can be modeled as a one-dimensional global instability of a compression system which typically consists of inlet ducts, compressors, exit ducts, a gas reservoir, and a throttle valve. A cycle of compressor surge can be divided into several phases. If the throttle valve is turned to a very small opening, the gas reservoir has a positive net flux. The pressure in the reservoir keeps increasing and then exceeds the pressure at the compressor exit, thus resulting in an adverse pressure gradient in the exit ducts. This adverse pressure gradient naturally decelerates flows in the whole system and reduces the mass flow rate. The slope of a constant speed line near the surge line is usually zero or even positive, which implies that the compressor cannot provide a much higher pressure as the mass flow rate is lowered. Thus, the adverse pressure gradient cannot be suppressed by the compressor, and the system rapidly develops an overshoot of the adverse pressure gradient which dramatically reduces the mass flow rate or even causes flows to reverse. On the other hand, the pressure in the reservoir gradually drops due to less flux delivered by the compressor, thus rebuilding a favorable pressure gradient in the exit ducts. The mass flow rate is then recovered, and the compressor is back to work on a constant speed line again, which eventually triggers the next surge cycle. Therefore, compressor surge is a process which keeps breaking the flow path of a compression system down and rebuilding it. Several rules of thumb can be inferred from the interpretation above. Compressor surge in a system with a small gas reservoir is high-frequency and low-amplitude, whereas a large gas reservoir leads to low-frequency and high-amplitude compressor surge; another rule of thumb is that compressor surge happens in a compressor with a large external volume and compressor stall tends to show up in a system with a short exit duct. It is also worth noting that the surge line of a compressor can have small variations in different systems, such as a test bench or an engine. Preventing surge In the oil and gas industry the operation of gas compressors in surge conditions is prevented by instrumentation around the compressor. The measured flow rate of gas (FT) in the compressor suction line together with the suction pressure (PT), and sometimes the suction temperature (TT) and the pressure (PT) in the discharge line, is fed into the surge controller. Algorithms in the controller use the data to establish the performance of the machine; the data identifies the operating point in terms of the flow and the developed head. When the compressor's operation approaches the surge point the controller modulates either a flow control valve (FCV) in the recycle line or adjusts the speed (SC) of the compressor driver. The FCV allows cooled gas from the discharge to spill back to the suction of the compressor, thereby maintaining the forward flow of gas through the machine. The recycle line is ideally located to take cooled gas from downstream of the compressor after-cooler and to discharge it into the feed to the compressor suction drum. See also Compressor stall References Gas compressors
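The surge-cycle interpretation above is commonly captured by lumped-parameter models of the Greitzer type; the sketch below integrates one such nondimensional model. The compressor characteristic, throttle law, and parameter values here are illustrative assumptions, not data for any real machine.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Nondimensional Greitzer-style lumped-parameter model of a compression system.
# phi: flow coefficient, psi: pressure-rise coefficient stored in the reservoir.
B = 1.8          # Greitzer B parameter; large B favors deep surge (assumed value)
GAMMA = 0.5      # throttle setting; a small opening pushes toward the surge line

def psi_c(phi):
    # Illustrative cubic compressor characteristic, not a measured map.
    return 1.0 + 1.5 * phi - 0.5 * phi**3

def rhs(t, y):
    phi, psi = y
    phi_throttle = GAMMA * np.sign(psi) * np.sqrt(abs(psi))
    return [B * (psi_c(phi) - psi), (phi - phi_throttle) / B]

sol = solve_ivp(rhs, (0.0, 200.0), [0.5, 1.5], max_step=0.05)
phi = sol.y[0]
print(f"min/max flow coefficient over the run: {phi.min():.2f} / {phi.max():.2f}")
# A minimum below zero indicates reversed flow, i.e. a deep-surge limit cycle.
```

Increasing the reservoir volume (larger B) lengthens the oscillation period and deepens the excursions, matching the rules of thumb stated above.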
Surge in compressors
[ "Chemistry" ]
1,263
[ "Gas compressors", "Turbomachinery" ]
58,646,755
https://en.wikipedia.org/wiki/Commuting%20probability
In mathematics and more precisely in group theory, the commuting probability (also called degree of commutativity or commutativity degree) of a finite group is the probability that two randomly chosen elements commute. It can be used to measure how close to abelian a finite group is. It can be generalized to infinite groups equipped with a suitable probability measure, and can also be generalized to other algebraic structures such as rings. Definition Let $G$ be a finite group. We define $p(G)$ as the averaged number of pairs of elements of $G$ which commute: $p(G) = \frac{1}{|G|^2}\,\big|\{(x,y)\in G^2 : xy = yx\}\big|$, where $|X|$ denotes the cardinality of a finite set $X$. If one considers the uniform distribution on $G^2$, $p(G)$ is the probability that two randomly chosen elements of $G$ commute. That is why $p(G)$ is called the commuting probability of $G$. Results The finite group $G$ is abelian if and only if $p(G) = 1$. One has $p(G) = \frac{k(G)}{|G|}$, where $k(G)$ is the number of conjugacy classes of $G$. If $G$ is not abelian then $p(G) \le 5/8$ (this result is sometimes called the 5/8 theorem) and this upper bound is sharp: there are infinitely many finite groups such that $p(G) = 5/8$, the smallest one being the dihedral group of order 8. There is no uniform lower bound on $p(G)$. In fact, for every positive integer $n$ there exists a finite group $G$ such that $p(G) \le 1/n$. If $G$ is not abelian but simple, then $p(G) \le 1/12$ (this upper bound is attained by $A_5$, the alternating group of degree 5). The set of commuting probabilities of finite groups is reverse-well-ordered, and the reverse of its order type is known to be either $\omega^\omega$ or $\omega^\omega\cdot 2$. Generalizations The commuting probability can be defined for other algebraic structures such as finite rings. The commuting probability can be defined for infinite compact groups; the probability measure is then, after a renormalisation, the Haar measure. References Finite groups
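A brute-force sketch verifying the sharpness of the 5/8 bound on the smallest example named above, the dihedral group of order 8, realized here as the symmetries of a square acting on its vertices:

```python
from itertools import product

def compose(p, q):
    # Permutation composition: (p o q)(i) = p[q[i]].
    return tuple(p[q[i]] for i in range(len(q)))

r = (1, 2, 3, 0)   # rotation of the square's vertices by 90 degrees
s = (0, 3, 2, 1)   # reflection across the diagonal through vertex 0

# Generate the dihedral group of order 8 by closing {id, r, s} under composition.
G = {(0, 1, 2, 3), r, s}
changed = True
while changed:
    changed = False
    for a, b in list(product(G, G)):
        c = compose(a, b)
        if c not in G:
            G.add(c)
            changed = True

commuting = sum(1 for a, b in product(G, G) if compose(a, b) == compose(b, a))
print(f"|G| = {len(G)}, p(G) = {commuting}/{len(G)**2} = {commuting / len(G)**2}")
# Expected output: |G| = 8, p(G) = 40/64 = 0.625, i.e. exactly 5/8.
```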
Commuting probability
[ "Mathematics" ]
360
[ "Mathematical structures", "Algebraic structures", "Finite groups" ]
58,646,964
https://en.wikipedia.org/wiki/Extracorporeal%20life%20support
Extracorporeal life support (ECLS) is a set of extracorporeal modalities that can provide oxygenation, removal of carbon dioxide, and/or circulatory support, excluding cardiopulmonary bypass for cardiothoracic or vascular surgery. ECLS modalities include: Extracorporeal membrane oxygenation (ECMO) – for temporary support of patients with respiratory and/or cardiac failure. Extracorporeal carbon dioxide removal (ECCO2R) – for removal of CO2 only, without cardiac support. ECCO2R is used for patients with hypercapnic respiratory failure or patients with less severe forms of acute respiratory distress syndrome. References Intensive care medicine Medical equipment stubs
Extracorporeal life support
[ "Biology" ]
152
[ "Biotechnology stubs", "Medical technology stubs", "Medical technology" ]
58,647,164
https://en.wikipedia.org/wiki/Katharina%20Boll-Dornberger
Katharina Boll-Dornberger (2 November 1909 – 27 July 1981), also known as Käte Dornberger-Schiff, was an Austrian-German physicist and crystallographer. She is known for her work on order-disorder structures. Life Katharina Boll-Dornberger was born in Vienna in 1909 as the daughter of a university professor and of Alice Friederike (Gertrude) Schiff. She studied physics and mathematics in Vienna and Göttingen. She wrote her dissertation, on the crystal structure of water-free zinc sulfate, under the supervision of V. M. Goldschmidt in Göttingen, and submitted it in Vienna in 1934. Afterwards, she conducted research in Philipp Gross's lab in Vienna. In 1937 she emigrated to England. In England, she worked with John D. Bernal, Nevill F. Mott, and Dorothy Hodgkin. She married Paul Dornberger in 1939. Her sons were born in 1943 and 1946. In 1946, she and her family returned to Germany. At first, she worked as a lecturer for physics and mathematics at the Hochschule für Baukunst in Weimar. Then she moved to East Berlin. Starting in 1948, she was the head of a department at the Institut für Biophysik at the German Academy of Sciences at Berlin. In 1952, she married Ludwig Boll (1911–1984), a German mathematician. In 1956, she became a professor at the Humboldt University. In 1958, the Institut für Strukturforschung was created, and she was head of the institute until 1968. She died in 1981 in Berlin. Research Her research focused on the crystallographic investigation of order-disorder structures. She introduced groupoids to crystallography to describe disordered structures. Roughly 2/3 of her 60 publications focused on order-disorder. The other publications dealt with structure determination of organic and inorganic crystals, methods development in single-crystal diffraction, and the development of equipment for this purpose. Awards For her work in crystallography, she was awarded two national awards by the German Democratic Republic: Patriotic Order of Merit in 1959 National Prize of the German Democratic Republic in 1960 A street in Berlin is named after her. Notes References Further reading https://fakultaeten.hu-berlin.de/de/sprachlit/frauenbeauftragte/weitere-informationen/der_lange_weg_z_chancengleichheit_2014.pdf 1981 deaths 1909 births Recipients of the National Prize of East Germany Recipients of the Patriotic Order of Merit Scientists from Vienna Academic staff of the Humboldt University of Berlin German women physicists 20th-century German physicists Crystallographers
Katharina Boll-Dornberger
[ "Chemistry", "Materials_science" ]
560
[ "Crystallographers", "Crystallography" ]
58,649,179
https://en.wikipedia.org/wiki/MiR-324-5p
miR-324-5p is a microRNA that functions in cell growth, apoptosis, cancer, epilepsy, neuronal differentiation, psychiatric conditions, cardiac disease pathology, and more. As a microRNA, it regulates gene expression by targeting mRNAs. Additionally, miR-324-5p is both an intracellular miRNA, meaning it is commonly found within the cell, and one of several circulating miRNAs found throughout the body. Its presence throughout the body both within and external to cells may contribute to miR-324-5p's wide array of functions and its role in numerous disease pathologies – especially cancer – in various organ systems. History miR-324-5p first appeared in the literature in a paper published by John Kim et al. in early 2004 that identified 32 entirely new miRNAs from cultured rat cortical neurons using miRNA cloning and RNA analysis. The miRNA quickly gained traction in the scientific literature, appearing in articles about the evolutionary conservation of microRNAs, HIV, cancer, and other topics within a few years. Today, the functions and roles of miR-324-5p are not yet fully characterized. Structure and targets miR-324-5p is a reverse strand miRNA, meaning it is produced from the 5' end of the associated RNA, and spans from position 7,223,342 to 7,223,364 on chromosome 17. Its sequence is CGCAUCCCCUAGGGCAUUGGUG. The mature miRNA forms following cleavage of the pre-miRNA at the hairpin loop by the enzyme Dicer within the cytosol. Interestingly, both strands of miR-324's pre-miRNA hairpin loop structure, miR-324-5p and miR-324-3p, become active miRNAs with distinct targets and functions. miR-324-5p has between 166 and 469 predicted targets, including regulators of cell growth, proliferation, survival, cytoskeletal structure, ATP transport, and ion channels. Though miR-324-5p is found on chromosome 17, its targets span all chromosomes. Functions Cell growth and survival miR-324-5p likely regulates cell growth and survival through interaction with multiple pathways. Published research demonstrates that this miRNA interacts with the Hedgehog (HH) signaling pathway via interactions with the HH transcription factor Gli1 and the HH protein receptor Smo, often contributing to tumorigenesis. miR-324-5p's activating interaction with the protein NF-κB also regulates numerous components of cell survival, including cell cycle control, enzyme synthesis, and cell adhesion. In addition, miR-324-5p regulates components of the MAPK pathway, influencing cell growth, proliferation, and survival. Specifically, miR-324-5p downregulates RAF and ERK and is necessary for normal levels of cell growth. Reduced expression leads to increased cell growth and proliferation, and overexpression limits growth, leading to its role in oncogenesis. Cancer Both up- and downregulation of miR-324-5p have been shown to contribute to various types of cancer. miR-324-5p plays a role in inflammation and tumorigenesis in colorectal cancer through regulation of CUEDC2, which regulates inflammation via interaction with NF-κB signaling. miR-324-5p can inhibit glioma proliferation, suppress hepatocellular carcinoma and nasopharyngeal carcinoma cell invasion, and regulate growth and pathology in multiple myeloma. Additionally, chromosome 17 deletions, which include deletion of miR-324-5p, are present in 10% of multiple myeloma patients and are associated with poorer prognosis. In contrast, overexpression of miR-324-5p in gastric cancer cells reduces cell death and promotes growth and proliferation.
miR-324-5p has also been shown to reduce the viability of gastric cancer cells via downregulation of TSPAN8, and miR-324-5p expression increased apoptosis in these same gastric cancer cells. Epilepsy Seizures are characterized by high levels of synchronized neuronal activity. One important regulator of neuronal activity is the hyperpolarizing A-type current mediated by the potassium channel KV4.2. miR-324-5p downregulates KV4.2, exacerbating conditions that lead to seizure onset, and downregulation of miR-324-5p in mouse models of epilepsy is seizure-suppressive. Changes in miRNA expression are seen in epileptogenesis and in other disease pathologies. In epilepsy, miR-324-5p expression has been shown to increase and decrease at different timepoints and loci. Importantly, miR-324-5p has increased association with the RISC complex following seizure in mice, indicating more suppressive activity. Overall, this suggests that miR-324-5p plays a role in epileptogenesis via targeting of the potassium channel KV4.2. Cardiac disease miR-324-5p contributes to cardiac disease pathophysiology and cardiomyocyte death through translational inhibition of Mtfr1, thereby reducing mitochondrial fission, apoptosis, and myocardial infarction. Psychiatric conditions miRNA expression profiles are altered in psychiatric conditions, including depression, anxiety, and PTSD. It has been demonstrated that miR-324-5p expression is altered in the brains of suicide victims with depression and in the amygdala, the fear center of the brain, in PTSD. miRNAs are underexplored potential biomarkers and targets for the treatment of psychiatric disease. Future research and potential in medicine miR-324-5p is a relatively new and understudied microRNA. It is an important regulator in several diseases, and its effects span the body from neuronal dysregulation in seizure to hepatocellular carcinoma and cardiac disease. Because microRNAs have numerous targets, they are capable of regulating multiple pathways and circuits, an ability that may be useful in the treatment of complex disorders like epilepsy in which many subsystems are dysregulated. However, the wide-ranging functions of miRNAs may be limiting as well. microRNA expression modulation could lead to unanticipated physiological effects and may not provide adequate specificity. References MicroRNA RNA Gene expression
MiR-324-5p
[ "Chemistry", "Biology" ]
1,336
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
50,972,558
https://en.wikipedia.org/wiki/Fichera%27s%20existence%20principle
In mathematics, and particularly in functional analysis, Fichera's existence principle is an existence and uniqueness theorem for the solution of functional equations, proved by Gaetano Fichera in 1954. More precisely, given a general vector space and two linear maps from it onto two Banach spaces, the principle states necessary and sufficient conditions for a linear transformation between the two dual Banach spaces to be invertible, i.e., for the associated functional equation to be uniquely solvable for every given datum. See also Notes References A survey of Gaetano Fichera's contributions to the theory of partial differential equations, written by two of his pupils. The paper Some recent developments of the theory of boundary value problems for linear partial differential equations describes Fichera's approach to a general theory of boundary value problems for linear partial differential equations through a theorem similar in spirit to the Lax–Milgram theorem. A monograph based on lecture notes, taken by Lucilla Bassotti and Luciano De Vito of a course held by Gaetano Fichera at the INdAM. An expository paper detailing the contributions of Gaetano Fichera and his school on the problem of numerical calculation of eigenvalues for general differential operators. Banach spaces Normed spaces Partial differential equations Theorems in functional analysis
Fichera's existence principle
[ "Mathematics" ]
284
[ "Theorems in mathematical analysis", "Theorems in functional analysis" ]
50,977,055
https://en.wikipedia.org/wiki/Polymer%20scattering
Polymer scattering experiments are one of the main scientific methods used in chemistry, physics and other sciences to study the characteristics of polymeric systems: solutions, gels, compounds and more. As in most scattering experiments, it involves subjecting a polymeric sample to incident particles (with defined wavelengths), and studying the characteristics of the scattered particles: angular distribution, intensity, polarization and so on. This method is quite simple and straightforward, and does not require special manipulations of the samples which may alter their properties, and hence compromise exact results. As opposed to crystallographic scattering experiments, where the scatterer or "target" has very distinct order, which leads to well defined patterns (presenting Bragg peaks for example), the stochastic nature of polymer configurations and deformations (especially in a solution), gives rise to quite different results. Formalism We consider a polymer as a chain of $N$ monomers, each with its position vector $\mathbf{R}_i$ and scattering amplitude $b_i$. For simplicity, it is worthwhile considering identical monomers in the chain, such that all $b_i = b$. An incoming ray (of light/neutrons/X-ray etc.) has a wave vector (or momentum) $\mathbf{k}_{\text{in}}$, and is scattered by the polymer to the vector $\mathbf{k}_{\text{out}}$. This enables us to define the scattering vector $\mathbf{q} = \mathbf{k}_{\text{out}} - \mathbf{k}_{\text{in}}$. By coherently summing the contributions of all monomers, we get the scattering intensity from a single polymer, as a function of $\mathbf{q}$: $I(\mathbf{q}) = \big|\sum_{i=1}^{N} b\, e^{i\mathbf{q}\cdot\mathbf{R}_i}\big|^2 = b^2 \sum_{i,j=1}^{N} e^{i\mathbf{q}\cdot(\mathbf{R}_i - \mathbf{R}_j)}$. Dilute solutions A dilute solution of a certain polymer has a unique feature: all polymers are considered independent from each other, so that interactions between polymers may be neglected. By illuminating such a solution with a ray of considerable width, a macroscopic number of chain conformations are being sampled simultaneously. In this situation the accessible observables are all ensemble averages, i.e. averages over all possible configurations and deformations of the polymer. In such a solution, where the polymer density is low (dilute) enough, homogenous and isotropic (on average), intermolecular contributions to the structure factor are averaged out, and only the single-molecule/polymer structure factor is preserved: $S(\mathbf{q}) = \frac{1}{N^2}\sum_{i,j=1}^{N} \big\langle e^{i\mathbf{q}\cdot(\mathbf{R}_i - \mathbf{R}_j)} \big\rangle$, with $\langle\cdot\rangle$ representing the ensemble average. This reduces to the following for an isotropic system (which is typically the case): $S(q) = \frac{1}{N^2}\sum_{i,j=1}^{N} \big\langle \frac{\sin(qR_{ij})}{qR_{ij}} \big\rangle$, where two more definitions were made: $q = |\mathbf{q}|$ and $R_{ij} = |\mathbf{R}_i - \mathbf{R}_j|$. Ideal chains – Debye function If the polymers of interest are ideal gaussian chains (or freely-jointed chains), in the limit of very long chains (allows performing a sort of "continuum transition"), the calculation of the structure factor can be carried out explicitly and results in a sort of Debye function: $S(q) = \frac{2}{x^2}\big(e^{-x} + x - 1\big)$, with $x \equiv q^2 R_g^2$ and $R_g$ being the polymer's radius of gyration. In many practical scenarios, the above formula is approximated by the (much more convenient) Lorentzian: $S(q) \approx \frac{1}{1 + q^2 R_g^2/2}$, which has a relative error of no more than 15% compared to the exact expression. Small-angle scattering from polymers The calculation of the structure factor for cases differing from ideal polymer chains can be quite cumbersome, and sometimes impossible to complete analytically. However, when the small-angle scattering condition is met, $qR_g \ll 1$, the sinc term can be expanded so one gets: $S(q) \approx 1 - \frac{q^2}{6N^2}\sum_{i,j}\langle R_{ij}^2 \rangle$, and by utilising the definition of the radius of gyration, $R_g^2 = \frac{1}{2N^2}\sum_{i,j}\langle R_{ij}^2 \rangle$: $S(q) \approx 1 - \frac{q^2 R_g^2}{3} \approx e^{-q^2 R_g^2/3}$, where the final transition utilises once again the small-angle approximation. We can thus approximate the scattering intensity in the small-angle regime as: $I(q) \approx I(0)\, e^{-q^2 R_g^2/3}$, and by plotting $\ln I(q)$ vs. $q^2$, a so-called "Guinier plot", we may determine the radius of gyration from the slope of this linear curve, which equals $-R_g^2/3$.
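The following sketch evaluates the Debye function against its Lorentzian approximation at a few values of $qR_g$, illustrating the stated error bound of roughly 15%:

```python
import numpy as np

# Debye structure factor for an ideal chain vs. its Lorentzian approximation,
# evaluated at a few values of q*Rg to compare against the ~15% error bound.
def debye(x):
    # x = (q * Rg)**2
    return 2.0 / x**2 * (np.exp(-x) + x - 1.0)

def lorentzian(x):
    return 1.0 / (1.0 + x / 2.0)

for q_rg in [0.1, 1.0, 2.0, 3.0, 10.0]:
    x = q_rg**2
    d, l = debye(x), lorentzian(x)
    print(f"qRg={q_rg:5.1f}: Debye={d:.4f}  Lorentzian={l:.4f}  "
          f"rel. err={abs(l - d) / d:.1%}")
```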
This measure is one of many examples of how scattering experiments on polymers can reveal basic properties of those polymer chains. Practical considerations In order to reap the benefits of working in this small-angle regime, one must take into consideration: the characteristic length scale of the polymer, e.g. its radius of gyration $R_g$; and the wavelength $\lambda$ of the scattered particles. The ratio $\lambda/R_g$ will determine the available angular spectrum of this regime. To see this one may consider the case of elastic scattering (or even approximately elastic scattering, $|\mathbf{k}_{\text{out}}| \approx |\mathbf{k}_{\text{in}}| = 2\pi/\lambda$). If the scattering angle is $\theta$, we may express $q$ as: $q = \frac{4\pi}{\lambda}\sin(\theta/2)$, so the small-angle condition $qR_g \ll 1$ becomes $\sin(\theta/2) \ll \frac{\lambda}{4\pi R_g}$, determining the relevant angles. Example - For visible light, $\lambda \approx 400\text{–}700$ nm - For (cold) neutrons, $\lambda \approx 0.2\text{–}1$ nm - For "hard" X-rays, $\lambda \approx 0.05\text{–}0.2$ nm, while typical values of $R_g$ for polymers range over roughly 1–100 nm. This makes small-angle measurements in neutrons and X-rays a bit more tedious, as very small angles are needed, and the data in those angles is often "overpowered" by the spot emerging in usual scattering experiments. The problem is mitigated by conducting longer experiments with more exposure time, which allows the required data to "intensify". One must take care, though, not to allow the prolonged exposure to high levels of radiation to damage the polymers (which might be a real problem when considering biological polymer samples – proteins, for example). On the other hand, to resolve smaller polymers and structural subtleties, one cannot always resort to using the long-wavelength rays, as the diffraction limit comes into play. Applications The main purpose of such scattering experiments involving polymers is to study unique properties of the sample of interest: Determine the polymers' "size" - radius of gyration. Evaluating the structural and thermo-statistical behavior of a polymer, i.e. freely-jointed chain / freely-rotating chain etc. Explore the distribution of the polymers in the sample - is it truly isotropic? Or does it favor certain directions on average? Identifying deformations in the polymer samples and quantifying them. Examining complex interactions of polymers in the solution - between themselves, and between them and the solution. Such interactions may arise if the polymers are charged, corresponding to ionic interactions. This would have a significant impact on the particles' behavior, and will result in a significant scattering signature. Studying a myriad of biological substances (e.g. DNA) that are often suspended in an aqueous solution. Further reading Polymers Static light scattering Biological small-angle scattering Neutron scattering Polymer characterization Wide-angle X-ray scattering References Scattering Particle physics
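A small sketch of the angular estimate above: it computes the largest scattering angle compatible with $qR_g < 1$ for a few probe wavelengths. The wavelengths and the 20 nm radius of gyration are typical magnitudes chosen for illustration, not values from a specific experiment.

```python
import numpy as np

# Largest scattering angle satisfying q * Rg < 1 for elastic scattering,
# with q = (4*pi/lambda) * sin(theta/2).
def max_angle_deg(wavelength_nm, rg_nm):
    s = wavelength_nm / (4 * np.pi * rg_nm)   # required value of sin(theta/2)
    if s >= 1:
        return 180.0                          # condition holds at all angles
    return 2 * np.degrees(np.arcsin(s))

R_G = 20.0  # nm, a mid-range polymer radius of gyration (illustrative)
for probe, lam in [("visible light", 500.0), ("cold neutrons", 0.5), ("hard X-rays", 0.1)]:
    print(f"{probe}: theta_max ~ {max_angle_deg(lam, R_G):.2f} deg")
```

The output shows why light scattering tolerates ordinary angles while neutron and X-ray measurements of the same chain must work at fractions of a degree.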
Polymer scattering
[ "Physics", "Chemistry", "Materials_science" ]
1,219
[ "Nuclear physics", "Scattering", "Condensed matter physics", "Particle physics" ]
50,978,083
https://en.wikipedia.org/wiki/Jaszczak%20phantom
A Jaszczak phantom, also known as the Data Spectrum ECT phantom, is an imaging phantom used for validating scanner geometry, 3D contrast, uniformity, resolution, attenuation and scatter correction, or alignment tasks in nuclear medicine. It is commonly used in academic centers and hospitals to characterize SPECT and some gamma camera systems for quality control purposes. It is used by clinical and academic facilities for accreditation by the American College of Radiology. The phantom was developed by Ronald J. Jaszczak of Duke University, who filed a patent application for it in 1982. It is a cylinder containing fillable inserts that is often used with a radionuclide such as technetium-99m or fluorine-18. Although the phantom can be used for acceptance testing, the National Electrical Manufacturers Association recommends that a 30 million count acquisition and section reconstruction of the phantom be performed quarterly. In 1981 Ronald J. Jaszczak founded Data Spectrum Corporation, which manufactures the Jaszczak phantom and several other nuclear imaging tools, such as the Hoffman Brain phantom. Structure and composition Jaszczak phantoms consist of a main cylinder or tank made of acrylic plastic with several inserts. The circular phantom comes in two varieties: flanged and flangeless. The latter is recommended by the American College of Radiology for accreditation of nuclear medicine departments. All Jaszczak phantoms have six solid spheres and six sets of 'cold' rods. In flanged models, the sizes of the spheres vary. The number of rods in each set depends on the size of the rod in that set, as different models of the phantom have rods of different sizes. In flangeless models, the diameters of the spheres are 9.5, 12.7, 15.9, 19.1, 25.4 and 31.8 mm, while the rod diameters are 4.8, 6.4, 7.9, 9.5, 11.1 and 12.7 mm. Both the solid spheres and the rod inserts mimic cold lesions in a hot background. The spheres are used to measure image contrast, while the rods are used to investigate image resolution in SPECT systems. References External links ACR Accreditation of Nuclear Medicine and PET Imaging Departments Nuclear medicine Quality control tools Positron emission tomography
Jaszczak phantom
[ "Physics" ]
471
[ "Antimatter", "Positron emission tomography", "Matter" ]
57,195,675
https://en.wikipedia.org/wiki/Emmons%20problem
In combustion, the Emmons problem describes the flame structure which develops inside the boundary layer created by a flowing oxidizer stream over flat fuel (solid or liquid) surfaces. The problem was first studied by Howard Wilson Emmons in 1956. The flame is of the diffusion flame type because it separates fuel and oxygen by a flame sheet. The corresponding problem in a quiescent oxidizer environment is known as the Clarke–Riley diffusion flame. Burning rate Source: Consider a semi-infinite fuel surface with leading edge located at $x = 0$, and let the free stream oxidizer velocity be $U_\infty$. Through the solution $f(\eta)$ of the Blasius equation $f''' + ff'' = 0$ ($\eta$ is the self-similar Howarth–Dorodnitsyn coordinate), the mass flux $\rho v$ ($\rho$ is density and $v$ is vertical velocity) in the vertical direction can be obtained as $\rho v = \sqrt{\frac{\rho_\infty \mu_\infty U_\infty}{2x}}\,(\eta f' - f)$. In deriving this, it is assumed that the density $\rho \propto 1/T$ and the viscosity $\mu \propto T$, where $T$ is the temperature (so that $\rho\mu$ is constant). The subscript $\infty$ describes the values far away from the fuel surface. The main interest in the combustion process is the fuel burning rate, which is obtained by evaluating the mass flux at the surface $\eta = 0$, as given below: $\dot m'' = (\rho v)_{\eta=0} = -f(0)\,\sqrt{\frac{\rho_\infty \mu_\infty U_\infty}{2x}}$, where $f(0) < 0$ at a burning (blowing) surface, its value being fixed by the boundary conditions there. See also Liñán's diffusion flame theory References Fluid dynamics Combustion
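The burning-rate formula above rests on the Blasius function $f(\eta)$. The sketch below solves the non-burning baseline case $f(0) = 0$ by shooting (the Emmons problem itself replaces this with a blowing boundary condition $f(0) < 0$ fixed by the surface chemistry), recovering the classical value $f''(0) \approx 0.4696$ in the normalization $f''' + ff'' = 0$ used here.

```python
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Shooting solution of the Blasius equation f''' + f*f'' = 0 with
# f(0) = f'(0) = 0 and f'(inf) = 1 (non-burning, impermeable-surface baseline).
def rhs(eta, y):
    f, fp, fpp = y
    return [fp, fpp, -f * fpp]

def shoot(fpp0, eta_max=10.0):
    sol = solve_ivp(rhs, (0.0, eta_max), [0.0, 0.0, fpp0], rtol=1e-8, atol=1e-10)
    return sol.y[1, -1] - 1.0   # mismatch in the far-field condition f'(inf) = 1

fpp0 = brentq(shoot, 0.2, 0.8)  # bracket chosen around the known root
print(f"f''(0) = {fpp0:.4f}")   # ~0.4696 in this normalization
```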
Emmons problem
[ "Chemistry", "Engineering" ]
231
[ "Piping", "Chemical engineering", "Combustion", "Fluid dynamics" ]
57,196,745
https://en.wikipedia.org/wiki/Burke%E2%80%93Schumann%20limit
In combustion, Burke–Schumann limit, or large Damköhler number limit, is the limit of infinitely fast chemistry (or in other words, infinite Damköhler number), named after S.P. Burke and T.E.W. Schumann, due to their pioneering work on the Burke–Schumann flame. One important conclusion of infinitely fast chemistry is the non-co-existence of fuel and oxidizer simultaneously except in a thin reaction sheet. The inner structure of the reaction sheet is described by Liñán's equation. Limit description In typical non-premixed combustion (fuel and oxidizer are separated initially), mixing of fuel and oxidizer takes place on the mechanical time scale dictated by the convection/diffusion terms (the relative importance between convection and diffusion depends on the Reynolds number). Similarly, chemical reaction takes a certain amount of time to consume reactants. For one-step irreversible chemistry with Arrhenius rate, this chemical time is given by \[t_c \sim \frac{e^{E/RT}}{B},\] where \(B\) is the pre-exponential factor, \(E\) is the activation energy, \(R\) is the universal gas constant and \(T\) is the temperature. Similarly, one can define a mechanical time \(t_m\) appropriate for the particular flow configuration. The Damköhler number is then \[\mathrm{Da} = \frac{t_m}{t_c}.\] Due to the large activation energy, the Damköhler number evaluated at the unburnt gas temperature \(T_u\) is exponentially small compared with its value at the flame, because \(t_c\) grows like \(e^{E/RT}\) as \(T\) decreases. On the other hand, the shortest chemical time is found at the flame (with burnt gas temperature \(T_b\)), leading to the largest Damköhler number \(\mathrm{Da}_b = t_m/t_c(T_b)\). Regardless of Reynolds number, the limit \(\mathrm{Da}_b \to \infty\) guarantees that chemical reaction dominates over the other terms. A typical conservation equation for a scalar (species concentration or energy) takes the following form, \[L(Y_i) = \mathrm{Da}\,\omega, \qquad \omega \sim Y_F Y_O\, e^{-E/RT},\] where \(L\) is the convective-diffusive operator and \(Y_F\) and \(Y_O\) are the mass fractions of fuel and oxidizer, respectively. Taking the limit \(\mathrm{Da}\to\infty\) in the above equation, while the left-hand side remains bounded, we find that \[Y_F\, Y_O = 0,\] i.e., fuel and oxidizer cannot coexist, since far away from the reaction sheet, only one of the reactants is available (non premixed). On the fuel side of the reaction sheet, \(Y_O = 0\), and on the oxidizer side, \(Y_F = 0\). Fuel and oxygen can coexist (with very small concentrations) only in a thin reaction sheet, where diffusive transport becomes comparable to reaction. In this thin reaction sheet, both fuel and oxygen are consumed and nothing leaks to the other side of the sheet. Due to the instantaneous consumption of fuel and oxidizer, the normal gradients of scalars exhibit discontinuities at the reaction sheet. See also Activation energy asymptotics Liñán's equation Liñán's diffusion flame theory References Fire Combustion Fluid dynamics
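A compact way to see the non-coexistence conclusion is the classical flame-sheet solution written in terms of a mixture fraction \(Z\). The sketch below is a standard textbook construction rather than a quotation of this article, and the boundary-stream mass fractions and stoichiometric ratio are assumed illustration values.

```python
# Sketch: Burke-Schumann flame-sheet profiles in mixture-fraction space.
# Standard textbook construction: fuel and oxidizer mass fractions are
# piecewise linear in Z and vanish on opposite sides of Z_st, so that
# Y_F * Y_O = 0 everywhere, as the Da -> infinity limit requires.
import numpy as np

YF1 = 1.0   # fuel mass fraction in the fuel stream (Z = 1), assumed
YO2 = 0.23  # oxidizer mass fraction in the oxidizer stream (Z = 0), assumed
s = 4.0     # stoichiometric oxidizer-to-fuel mass ratio, assumed

Zst = 1.0 / (1.0 + s * YF1 / YO2)  # stoichiometric mixture fraction

def burke_schumann(Z):
    YF = np.where(Z >= Zst, YF1 * (Z - Zst) / (1.0 - Zst), 0.0)
    YO = np.where(Z <= Zst, YO2 * (1.0 - Z / Zst), 0.0)
    return YF, YO

Z = np.linspace(0.0, 1.0, 11)
YF, YO = burke_schumann(Z)
print(f"Z_st = {Zst:.4f}")
for z, yf, yo in zip(Z, YF, YO):
    print(f"Z = {z:.1f}: Y_F = {yf:.4f}, Y_O = {yo:.4f}, Y_F*Y_O = {yf*yo:.1e}")
```

The product \(Y_F Y_O\) vanishes at every point, and the flame sheet sits at the single value \(Z = Z_{st}\) where both profiles reach zero together.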
Burke–Schumann limit
[ "Chemistry", "Engineering" ]
537
[ "Chemical engineering", "Combustion", "Piping", "Fire", "Fluid dynamics" ]
32,492,587
https://en.wikipedia.org/wiki/Applied%20Spectroscopy%20%28journal%29
Applied Spectroscopy is a peer-reviewed scientific journal published monthly by the Society for Applied Spectroscopy, and it is also the official journal of the society. The editor-in-chief is Sergei G. Kazarian (Imperial College London). The journal covers applications of spectroscopy in analytical chemistry, materials science, biotechnology, and chemical characterization. The journal is a continuation of the Bulletin of the Society for Applied Spectroscopy, which was first published in February 1946. That title continued until July 1951; the frequency of publication varied between 1951 and 1991, and in 1992 it became a monthly journal. Aims and Scope The journal seeks to be comprehensive in scope, with its primary aim the publication of papers on both the fundamentals and applications of photon-based spectroscopy. These include, but are not limited to, ultraviolet-visible absorption, fluorescence and phosphorescence, mid-infrared, Raman, near-infrared, terahertz, and microwave, and atomic absorption, atomic emission, and laser-induced breakdown spectroscopies (and ICP-MS), as well as cutting-edge hyphenated and interdisciplinary techniques. Fundamental topics include, but are not restricted to, the theory of optical spectra and their interpretation, instrumentation design, and operational principles. Reports of spectral processing methodologies such as 2D correlation spectroscopy (2D-COS), baseline correction, and chemometric methods applied to spectra are also strongly encouraged. Application papers are intended to feature novel, innovative applications of spectroscopic methods and techniques. Papers from all fields of scientific endeavor in which applied spectroscopy can be utilized will be considered for publication. Representative fields include chemistry, physics, biological and health sciences, environmental science, materials science, archeology and art conservation, and forensic science. In addition to full papers, the journal publishes Rapid Communications, Spectroscopic Techniques, Notes, and Correspondence related to previously published papers. A regular feature of the journal, Focal Point Reviews, provides definitive, comprehensive reviews of spectroscopic techniques and applications and is available as open access. Abstracting and indexing This journal is abstracted and indexed in: Academic Search BIOSIS Previews Chemical Abstracts Service/CASSI Current Contents/Physical, Chemical & Earth Sciences Science Citation Index MEDLINE References External links Spectroscopy journals Materials science journals English-language journals Academic journals established in 1946
Applied Spectroscopy (journal)
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
473
[ "Spectrum (physical sciences)", "Materials science journals", "Materials science", "Spectroscopy journals", "Spectroscopy" ]
32,494,400
https://en.wikipedia.org/wiki/ROMeo%20%28process%20optimizer%29
ROMeo (Rigorous Online Modelling and Equation-based Optimization) is an advanced online chemical process optimizer from SimSci, a brand of AVEVA software. It is mainly used by process engineers in the chemical, petroleum and natural gas industries. It includes a chemical component library, thermodynamic property prediction methods, and unit operations such as distillation columns, heat exchangers, compressors, and reactors as found in the chemical processing industries. It can perform steady-state mass and energy balance calculations for modeling, simulating and optimizing continuous processes. ROMeo 6.0 was released with increased access to native refinery process models based on technology from ExxonMobil. With ROMeo 7.0, the software moved from a 32-bit to a 64-bit architecture. The product was renamed AVEVA Process Optimization with the 2020 version. See also List of chemical process simulators References External links Official ROMeo Website Invensys and ExxonMobil Research and Engineering Company Sign Licensing Agreement for Refinery Process Models Chemical engineering software
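Since the article describes ROMeo as an equation-based steady-state optimizer, a generic sketch of that modeling style may help. This is emphatically not ROMeo's API, which is proprietary and not documented here; it is only a toy flowsheet whose mass-balance equations are assembled as residuals and solved simultaneously, the way equation-oriented tools work, with every number invented for the example.

```python
# Toy illustration of the equation-oriented approach: write all steady-state
# balance equations of a tiny flowsheet as residuals and solve simultaneously.
# This is NOT ROMeo code; stream names and numbers are made up for the sketch.
from scipy.optimize import fsolve

FEED = 100.0      # kg/h into a splitter, assumed
SPLIT = 0.6       # fraction sent to unit A, assumed
CONVERSION = 0.8  # fraction of unit A's inlet converted to product, assumed

def residuals(x):
    to_a, to_b, product, unconverted = x
    return [
        to_a - SPLIT * FEED,                      # splitter balance, branch A
        to_b - (1.0 - SPLIT) * FEED,              # splitter balance, branch B
        product - CONVERSION * to_a,              # reactor conversion, unit A
        unconverted - (1.0 - CONVERSION) * to_a,  # reactor mass balance
    ]

solution = fsolve(residuals, x0=[50.0, 50.0, 40.0, 10.0])
for name, value in zip(["to_A", "to_B", "product", "unconverted"], solution):
    print(f"{name:12s} = {value:7.2f} kg/h")
```

The design point of equation-oriented simulation is that all unit equations are exposed to one solver at once, which is what makes simultaneous optimization of the whole flowsheet tractable.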
ROMeo (process optimizer)
[ "Chemistry", "Engineering" ]
202
[ "Chemical engineering software", "Chemical engineering" ]
32,495,538
https://en.wikipedia.org/wiki/Thermodynamic%20efficiency%20limit
Thermodynamic efficiency limit is the absolute maximum theoretically possible conversion efficiency of sunlight to electricity. Its value is about 86%, which is the Chambadal-Novikov efficiency, an approximation related to the Carnot limit, based on the temperature of the photons emitted by the Sun's surface. Effect of band gap energy Solar cells operate as quantum energy conversion devices, and are therefore subject to the thermodynamic efficiency limit. Photons with an energy below the band gap of the absorber material cannot generate an electron-hole pair, and so their energy is not converted to useful output and only generates heat if absorbed. For photons with an energy above the band gap energy, only a fraction of the energy above the band gap can be converted to useful output. When a photon of greater energy is absorbed, the excess energy above the band gap is converted to kinetic energy of the photogenerated carriers. This excess kinetic energy is converted to heat through phonon interactions as the carriers slow to their equilibrium velocity. Hence, solar energy cannot be converted to electricity beyond a certain limit. Solar cells with multiple band gap absorber materials improve efficiency by dividing the solar spectrum into smaller bins where the thermodynamic efficiency limit is higher for each bin. The thermodynamic limits of such cells (also called multi-junction cells, or tandem cells) can be analyzed using an online simulator on nanoHUB. Efficiency limits for different solar cell technologies Thermodynamic efficiency limits for different solar cell technologies are as follows: Single junctions ≈ 33% 3-cell stacks and impure PVs ≈ 50% Hot carrier- or impact ionization-based devices ≈ 54-68% Commercial modules ≈ 12-21% Solar cell with an upconverter for operation in the AM1.5 spectrum and with a 2eV bandgap ≈ 50.7% Thermodynamic efficiency limit for excitonic solar cells Unlike inorganic and crystalline solar cells, excitonic solar cells generate free charge through bound and intermediate exciton states. The efficiency of excitonic solar cells and of inorganic solar cells (with lower exciton-binding energy) cannot go beyond 31%, as explained by Shockley and Queisser. Thermodynamic efficiency limits with carrier multiplication Carrier multiplication facilitates multiple electron-hole pair generation for each photon absorbed. Efficiency limits for photovoltaic cells can be theoretically higher considering thermodynamic effects. For a solar cell powered by the Sun's unconcentrated black-body radiation, the theoretical maximum efficiency is 43%, whereas for a solar cell powered by the Sun's fully concentrated radiation, the efficiency limit is up to 85%. These high efficiency values are possible only when the solar cells use radiative recombination and carrier multiplication. See also Quantum efficiency of a solar cell Energy conversion efficiency Photoelectric effect Solar cell efficiency References Photovoltaics Solar cells Thermodynamic processes
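The band-gap argument can be made quantitative with the classic "ultimate efficiency" integral for a single band gap under blackbody illumination. The sketch below follows that standard calculation (sun modeled as a 6000 K blackbody, each absorbed photon yielding only the band-gap energy), which is a textbook simplification layered on assumptions not spelled out in this article.

```python
# Sketch: Shockley-Queisser-style "ultimate efficiency" of a single band gap
# under blackbody sunlight. Photons below E_g are lost entirely; energy above
# E_g thermalizes as heat, so each absorbed photon contributes exactly E_g.
import numpy as np
from scipy.integrate import quad

T_SUN = 6000.0       # assumed blackbody temperature of the sun, K
K_EV = 8.617333e-5   # Boltzmann constant, eV/K

def ultimate_efficiency(eg_ev: float) -> float:
    xg = eg_ev / (K_EV * T_SUN)  # dimensionless gap, h*nu / (k*T)
    # Photon flux above the gap, in Planck-spectrum units.
    photons_above, _ = quad(lambda x: x**2 / np.expm1(x), xg, 50.0)
    # Total incident energy (lower bound 1e-6 avoids the 0/0 at x = 0).
    total_energy, _ = quad(lambda x: x**3 / np.expm1(x), 1e-6, 50.0)
    return xg * photons_above / total_energy

for eg in (0.5, 1.1, 1.4, 2.0, 3.0):
    print(f"E_g = {eg:.1f} eV -> ultimate efficiency = "
          f"{ultimate_efficiency(eg):.3f}")
```

Running this shows the familiar trade-off the article describes: a small gap absorbs many photons but wastes most of their energy to thermalization, a large gap wastes sub-gap photons, and the curve peaks at roughly 44% near a 1.1 eV gap under these assumptions.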
Thermodynamic efficiency limit
[ "Physics", "Chemistry" ]
619
[ "Thermodynamic processes", "Thermodynamics" ]
49,966,064
https://en.wikipedia.org/wiki/Multichannel%20analyzer
A multichannel analyzer (MCA) is an instrument used in laboratory and field applications to analyze an input signal consisting of voltage pulses. MCAs are used extensively in digitizing various spectroscopy measurements, especially those related to nuclear physics, such as alpha, beta, and gamma spectroscopy. Operation A multichannel analyzer uses a fast ADC to record incoming pulses and stores information about pulses in one of two ways: Pulse-height analysis In pulse-height analysis (PHA) mode, incoming pulses are characterized based on their amplitude (peak voltage). The output spectrum is a histogram of these pulses, where the height of each channel corresponds to the number of pulses counted within a narrow range of amplitudes. The resolution of the output spectrum depends on the number of channels of the MCA, which is on the order of a few thousand for typical instruments. In alpha-, beta-, and gamma spectroscopy, PHA is used to measure the energy distribution of particles emitted in nuclear decay. Incoming particles are absorbed by a detector medium and excite voltage pulses whose amplitudes are proportional to their energy. After many pulses have been counted, the output spectrum shows the energy distribution of the radiation incident on the detector. Multichannel scaling mode In multichannel scaling (MCS) mode, the MCA records a pulse count-rate over time. Unlike PHA, MCS does not differentiate pulses of different amplitudes. Instead, the MCA records all measured counts in one channel for a set time interval (called the "dwell time"), then switches to the next channel to record the subsequent time interval, and so on. The internal control voltage signal used to switch channels when the dwell time elapses is often available to the experimenter and can be used to trigger changes in the experimental setup. In this arrangement, the MCA acts as an X–Y recorder, observing changes in the count rate as a function of the controlled experimental parameter. For example, a Geiger counter connected to an MCA in MCS mode could be used to record the amount of ionizing radiation emitted by a neutron generator at different voltages. Output interface Once a histogram has been recorded, the data is sent to a computer, displayed on a screen on the MCA, or (in older models) sent directly to a printer. Modern MCAs typically interface with a computer via USB or Ethernet, but some older or specialty models use RS-232 or PCI. Sound card MCA A USB sound card can serve as a cheap, consumer off-the-shelf ADC, a technique pioneered by Marek Dolleiser. The data is sent to the computer as normal sound and stored in a WAV file. Specialized software processes the "sound" to perform pulse-height analysis and multichannel scaling, forming a complete MCA. Sound cards have high-resolution but comparatively low-speed ADC chips (sampling at up to 192 kHz), allowing for reasonable gamma spectroscopy performance at low-to-medium count rates. The "sound card spectrometer" has been further refined in amateur and professional circles. See also Geiger counter Oscilloscope References Radioactivity Laboratory equipment
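As a concrete picture of pulse-height analysis, here is a minimal sketch that histograms pulse amplitudes from a digitized waveform, in the spirit of the sound-card MCA described above. The pulse-detection threshold, synthetic signal, and channel count are assumptions of the example, not properties of any particular instrument or software.

```python
# Minimal pulse-height analysis (PHA) sketch: detect pulses in a sampled
# waveform and histogram their peak amplitudes into MCA channels.
# The waveform here is synthetic; a sound-card MCA would read a WAV instead.
import numpy as np

rng = np.random.default_rng(0)
fs = 192_000                        # assumed sampling rate, Hz
trace = rng.normal(0.0, 0.002, fs)  # one second of baseline noise

# Inject synthetic detector-like pulses with two "photopeak" amplitudes.
pulse = np.exp(-np.arange(200) / 40.0)  # fast rise, exponential decay
for start in rng.choice(fs - 300, size=400, replace=False):
    amplitude = rng.choice([0.3, 0.7]) * (1 + rng.normal(0, 0.03))
    trace[start:start + 200] += amplitude * pulse

# PHA: a threshold crossing marks a pulse; record the local maximum.
THRESHOLD = 0.05
above = trace > THRESHOLD
edges = np.flatnonzero(above[1:] & ~above[:-1])  # rising edges
heights = [trace[i:i + 250].max() for i in edges]

spectrum, _ = np.histogram(heights, bins=1024, range=(0.0, 1.0))
print(f"{len(heights)} pulses sorted into {spectrum.size} channels")
print("busiest channels:", np.argsort(spectrum)[-4:][::-1])
```

The resulting histogram shows two clusters of channels around the two injected amplitudes, which is the software analogue of photopeaks in a real energy spectrum.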
Multichannel analyzer
[ "Physics", "Chemistry" ]
640
[ "Nuclear chemistry stubs", "Nuclear and atomic physics stubs", "Radioactivity", "Nuclear physics" ]
49,974,099
https://en.wikipedia.org/wiki/Moving%20bed%20biofilm%20reactor
Moving bed biofilm reactor (MBBR) is a type of wastewater treatment process that was first invented by Professor Hallvard Ødegaard at the Norwegian University of Science and Technology in the late 1980s. The process takes place in an aeration tank with plastic carriers that a biofilm can grow on. Its compact size and low treatment costs offer many advantages for the system. The main objectives of using MBBR are water reuse and nutrient removal or recovery. In theory, wastewater is then no longer considered waste but a resource. Background Overview Due to early issues with biofilm reactors, like hydraulic instability and uneven biofilm distribution, moving bed biofilm technology was developed. The MBBR system consists of an aeration tank (similar to an activated sludge tank) with special plastic carriers that provide a surface where a biofilm can grow. There is a wide variety of plastic carriers used in these systems. These carriers vary in surface area and in shape, each offering different advantages and disadvantages. Surface area plays a very important role in biofilm formation. Free-floating carriers allow biofilms to form on the surface, so a large internal surface area is crucial for contact with water, air, bacteria, and nutrients. The carriers will be mixed in the tank by the aeration system and thus will have good contact between the substrate in the influent wastewater and the biomass on the carriers. The most preferable material is currently high density polyethylene (HDPE) due to its plasticity, density, and durability. To achieve higher concentrations of biomass in the bioreactors, hybrid MBBR systems have been used where suspended and attached biomass co-exist, both contributing to biological processes. Additionally, there are anaerobic MBBRs that have been mainly used for industrial wastewater treatment. A 2019 article described a combination of anaerobic (methanogenic) MBBR with aerobic MBBR that was applied in a municipal wastewater treatment laboratory, with simultaneous production of biogas. History The development of MBBR technology is attributed to Professor Hallvard Ødegaard and his colleagues at the Norwegian University of Science and Technology (NTNU). Development traces back to the late 1970s and early 1980s. The first MBBR pilot plant was installed at NTNU in the early 1980s, and its success led to the construction and start-up of the first full-scale MBBR plant in Norway in 1985. It was commercialized by Kaldnes Miljöteknologi (now called AnoxKaldnes and owned by Veolia Water Technologies). Since then, MBBR technology has been widely adopted throughout the world, mainly in Europe and Asia. Now, there are over 700 wastewater treatment systems (both municipal and industrial) installed in over 50 countries. Current Usage Today, MBBR technology is used for municipal sewage treatment, industrial wastewater treatment, and decentralized wastewater treatment. This technology has been used in many different industries, some of them being: Automotive industry Chemical industry Food and beverage Metal plating and finishing The MBBR system is considered a biofilm or biological process, not a chemical or mechanical process. Other conventional biofilm processes for wastewater treatment are the trickling filter, the rotating biological contactor (RBC) and the biological aerated filter (BAF). 
Important applications: Denitrification Nitrification BOD/COD removal Anaerobic ammonium oxidation (ANAMMOX) process Methods There are many design components of MBBR that come together to make the technology highly efficient; a rough sizing sketch follows below. First, the process occurs in a basin (or aeration tank). The overall size of this tank depends on both the type and volume of wastewater being processed. The influent enters the basin at the beginning of treatment. Second is the media. The media consists of the free-floating biocarriers mentioned earlier and can occupy as much as 70 percent of the tank. Third, an aeration grid is responsible for helping the media move through the basin and ensuring the carriers come into contact with as much waste as possible, in addition to introducing more oxygen into the basin. Lastly, a sieve keeps all the carriers in the tank, preventing the plastic carriers from escaping the aeration basin. Though there are a few different methods, they all use the same design components. The continuous flow method involves continuous flow of wastewater into the basin, with an equal flow of treated water exiting through the sieve. The intermittent aeration method operates in cycles of aeration and non-aeration, allowing for both aerobic and anoxic conditions. The sequencing batch reactor (SBR) method is completed in a single reactor where several treatment steps occur in a sequence, and the treated water is removed before the cycle begins again. Large diameter submersible mixers are commonly used for mixing in these systems. Removal of Micropollutants Moving bed biofilm reactors have shown promising results for removing micropollutants (MPs) from wastewater. MPs fall into several groups of chemicals such as pharmaceuticals, organophosphorus pesticides (OPs), care products and endocrine disruptors. A 2012 article described the use of MBBR technology to remove pharmaceuticals such as beta-blockers, analgesics, anti-depressants, and antibiotics from hospital wastewater. Moreover, application of MBBR as a biological technique combined with chemical treatment has attracted a great deal of attention for removal of organophosphorus pesticides from wastewater. The advantage of MBBRs can be associated with their high solids retention time, which allows the proliferation of slow-growing microbial communities with multiple functions in biofilms. The dynamics of such microbial communities greatly depends on the organic loading in MBBR systems. Moving bed biofilm reactors can efficiently treat hospital wastewater and remove pharmaceutical micropollutants. A 2023 study has shown that a strictly anaerobic MBBR, combined with an aerobic biofilm reactor, can achieve high removal rates of pharmaceuticals such as metronidazole, trimethoprim, sulfamethoxazole, and valsartan. Advantages Biofilm processes in general require less space than activated sludge systems because the biomass is more concentrated, and the efficiency of the system is less dependent on the final sludge separation. MBBR systems do not need sludge recycling, as activated sludge systems do. The MBBR system is often installed as a retrofit of existing activated sludge tanks to increase the capacity of the existing system. The degree of filling of carriers can be adapted to the specific situation and the desired capacity. Thus an existing treatment plant can increase its capacity without constructing new tanks or increasing its footprint. 
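To make the design components above concrete, here is a rough media-sizing sketch of the kind used for first-pass MBBR design. The surface area loading rate, carrier specific surface area, fill fraction, and influent numbers are illustrative assumptions, not values from this article or from any vendor.

```python
# Rough first-pass MBBR media sizing sketch. All numbers are assumptions
# for illustration; real designs use vendor data and regulatory criteria.

flow_m3_per_day = 4000.0      # influent flow, assumed
bod_mg_per_l = 250.0          # influent BOD concentration, assumed
salr_g_per_m2_day = 7.5       # surface area loading rate, assumed design value
carrier_area_m2_per_m3 = 500  # protected specific surface area of the media
fill_fraction_max = 0.67      # common upper bound on carrier fill

bod_load_kg_per_day = flow_m3_per_day * bod_mg_per_l / 1000.0
required_area_m2 = bod_load_kg_per_day * 1000.0 / salr_g_per_m2_day
media_volume_m3 = required_area_m2 / carrier_area_m2_per_m3
reactor_volume_m3 = media_volume_m3 / fill_fraction_max

print(f"BOD load:        {bod_load_kg_per_day:8.1f} kg/day")
print(f"Biofilm area:    {required_area_m2:8.0f} m^2")
print(f"Media volume:    {media_volume_m3:8.1f} m^3")
print(f"Reactor volume:  {reactor_volume_m3:8.1f} m^3 (at {fill_fraction_max:.0%} fill)")
```

The calculation reflects why carrier surface area matters so much in the text above: the whole design scales off the biofilm area the media provides, not the raw tank volume.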
Some other advantages are: Increased performance and volumetric treatment capacity Higher effective sludge retention time (SRT), which is favorable for nitrification Responds to load fluctuations without operator intervention Lower sludge production Less area required Resilient to toxic shock Process performance independent of secondary clarifier (because there is no sludge return line). Disadvantages A disadvantage relative to other biofilm processes is that they can experience bioclogging and build-up of headloss. Depending on the type of waste and the design of the process, several problems can occur during full-scale operation. Some of the disadvantages are: Feed pipe/effluent sieve blocking Nonhomogeneous mixing Carrier voids blocking Destroyed carriers Carriers accumulating at the effluent sieves Carrier overflow Alternative Wastewater Treatment Systems There are many alternative wastewater treatment systems that can be used in place of MBBRs. The selection of the appropriate system depends on the incoming wastewater, treatment objectives, available space, and budget. Some other options are: Sequencing Batch Reactors (SBR) Membrane Bioreactors (MBRs) Fixed Film Systems Integrated Fixed-Film Activated Sludge (IFAS) Submerged Aerated Filters (SAFs) Rotating Biological Contactors (RBCs) See also List of waste-water treatment technologies References Environmental engineering Sewerage Waste treatment technology
Moving bed biofilm reactor
[ "Chemistry", "Engineering", "Environmental_science" ]
1,640
[ "Water treatment", "Chemical engineering", "Water pollution", "Sewerage", "Civil engineering", "Environmental engineering", "Waste treatment technology" ]
49,976,857
https://en.wikipedia.org/wiki/Tardiness%20%28scheduling%29
In scheduling, tardiness is a measure of a delay in executing certain operations and earliness is a measure of finishing operations before due time. The operations may depend on each other and on the availability of equipment to perform them. Typical examples include job scheduling in manufacturing and data delivery scheduling in data processing networks. In a manufacturing environment, inventory management considers both tardiness and earliness undesirable. Tardiness involves backlog issues such as customer compensation for delays and loss of goodwill. Earliness incurs expenses for storage of the manufactured items and ties up capital. Mathematical formulations In an environment with multiple jobs, let the deadline be \(d_j\) and the completion time be \(C_j\) of job \(j\). Then for job \(j\): lateness is \(L_j = C_j - d_j\), earliness is \(E_j = \max(0,\, d_j - C_j)\), tardiness is \(T_j = \max(0,\, C_j - d_j)\). In scheduling, common objective functions are \(\max_j L_j\), \(\sum_j E_j\), \(\sum_j T_j\), or weighted versions of these sums, such as \(\sum_j w_j T_j\), where every job comes with a weight \(w_j\). The weight is a representation of job cost, priority, etc. In a large number of cases the problems of optimizing these functions are NP-hard. References Time management Scheduling (computing) Schedule (project management) Theoretical computer science
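The definitions translate directly into code. The sketch below computes lateness, earliness, tardiness, and a weighted total tardiness for a hypothetical set of jobs; the job data and weights are invented for the example.

```python
# Sketch: lateness, earliness, and tardiness per job, plus weighted totals.
# Job data (completion times, due dates, weights) are hypothetical.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    completion: float  # C_j
    due: float         # d_j
    weight: float = 1.0

jobs = [Job("A", 5, 7, 2.0), Job("B", 9, 8, 1.0), Job("C", 12, 12, 3.0)]

for j in jobs:
    lateness = j.completion - j.due              # L_j = C_j - d_j
    earliness = max(0.0, j.due - j.completion)   # E_j = max(0, d_j - C_j)
    tardiness = max(0.0, j.completion - j.due)   # T_j = max(0, C_j - d_j)
    print(f"{j.name}: L={lateness:+.1f}  E={earliness:.1f}  T={tardiness:.1f}")

total_weighted_tardiness = sum(
    j.weight * max(0.0, j.completion - j.due) for j in jobs
)
print("weighted total tardiness:", total_weighted_tardiness)
```

Note that lateness can be negative (job A above) while earliness and tardiness are clipped at zero, which is why the weighted-tardiness objective penalizes only late jobs.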
Tardiness (scheduling)
[ "Physics", "Mathematics" ]
224
[ "Physical quantities", "Time", "Theoretical computer science", "Applied mathematics", "Time management", "Spacetime", "Schedule (project management)" ]
45,598,981
https://en.wikipedia.org/wiki/Gaseous%20signaling%20molecules
Gaseous signaling molecules are gaseous molecules that are either synthesized internally (endogenously) in the organism, tissue or cell or are received by the organism, tissue or cell from outside (say, from the atmosphere or hydrosphere, as in the case of oxygen) and that are used to transmit chemical signals which induce certain physiological or biochemical changes in the organism, tissue or cell. The term is applied to, for example, oxygen, carbon dioxide, sulfur dioxide, nitrous oxide, hydrogen cyanide, ammonia, methane, hydrogen, ethylene, etc. Select gaseous signaling molecules behave as neurotransmitters and are called gasotransmitters. These include nitric oxide, carbon monoxide, and hydrogen sulfide. Historically, the study of gases and their physiological effects was categorized under factitious airs. The biological roles of each of the gaseous signaling molecules are outlined below. Gasotransmitters Gasotransmitters are a class of neurotransmitters. Only three gases are currently accepted as gasotransmitters: nitric oxide, carbon monoxide, and hydrogen sulfide. Gaseous Signaling Molecules Oxygen Oxygen, O2, is an essential gaseous signaling molecule and biological messenger important in many physiological and pathological processes, acting via cellular gasoreceptor proteins and other signaling pathways. The levels of O2 in cells or organisms must be tightly regulated to maintain normoxia and avoid uncontrolled hypoxic, anoxic, or hyperoxic states. In mammals, specialized tissues such as the carotid body sense O2 levels. Carbon dioxide Carbon dioxide, CO2, is one of the mediators of local autoregulation of blood supply. If its levels are high, the capillaries expand to allow a greater blood flow to that tissue. Mosquitoes are attracted to humans by sensing CO2 via gustatory receptors, a type of gasoreceptor. Although the body requires oxygen for metabolism, low oxygen levels normally do not stimulate breathing. Rather, breathing is stimulated by higher carbon dioxide levels. The respiratory centers try to maintain an arterial CO2 pressure of 40 mm Hg. With intentional hyperventilation, the CO2 content of arterial blood may be lowered to 10–20 mm Hg (the oxygen content of the blood is little affected), and the respiratory drive is diminished. This is why one can hold one's breath longer after hyperventilating than without hyperventilating. This carries the risk that unconsciousness may result before the need to breathe becomes overwhelming, which is why hyperventilation is particularly dangerous before free diving. Nitric oxide Nitric oxide, NO, is a key vertebrate biological messenger important in many physiological and pathological processes, acting, for instance, as a powerful vasodilator in humans (see Biological functions of nitric oxide). Mammalian cells have a specialized gasoreceptor, soluble guanylyl cyclase, that binds NO and triggers NO-dependent cellular signaling. Nitrous oxide Nitrous oxide, N2O, in biological systems can be formed by an enzymatic or non-enzymatic reduction of nitric oxide. In vitro studies have shown that endogenous nitrous oxide can be formed by the reaction between nitric oxide and thiol. Some authors have shown that this process of NO reduction to N2O takes place in hepatocytes, specifically in their cytoplasm and mitochondria, and suggested that N2O can possibly be produced in mammalian cells. It is well known that N2O is produced by some bacteria during a process called denitrification. 
In 1981, it was first suggested from clinical work with nitrous oxide (N2O) that a gas had a direct action at pharmacological receptors and thereby acted as a neurotransmitter. In vitro experiments confirmed these observations, which were later replicated at NIDA. Apart from its direct and indirect actions at opioid receptors, it was also shown that N2O inhibits NMDA receptor-mediated activity and ionic currents and diminishes NMDA receptor-mediated excitotoxicity and neurodegeneration. Nitrous oxide also inhibits methionine synthase and slows the conversion of homocysteine to methionine, increasing homocysteine concentration and decreasing methionine concentration. This effect was shown in lymphocyte cell cultures and in human liver biopsy samples. Nitrous oxide does not bind as a ligand to the heme and does not react with thiol-containing proteins. Nevertheless, studies have shown that nitrous oxide can reversibly and non-covalently "insert" itself into the inner structures of some heme-containing proteins such as hemoglobin, myoglobin, and cytochrome oxidase, and alter their structure and function. The ability of nitrous oxide to alter the structure and function of these proteins was demonstrated by shifts in infrared spectra of cysteine thiols of hemoglobin and by partial and reversible inhibition of cytochrome oxidase. Endogenous nitrous oxide can possibly play a role in modulating the endogenous opioid and NMDA systems, and has also been discussed in connection with conditions such as atherosclerosis, severe sepsis, severe malaria, or autoimmunity. Clinical tests involving humans have been performed, but the results have not yet been released. Carbon suboxide Carbon suboxide, C3O2, can be produced in small amounts in any biochemical process that normally produces carbon monoxide, CO, for example, during heme oxidation by heme oxygenase-1. It can also be formed from malonic acid. It has been shown that carbon suboxide in an organism can quickly polymerize into macrocyclic polycarbon structures with the common formula (C3O2)n (mostly (C3O2)6 and (C3O2)8), and that those macrocyclic compounds are potent inhibitors of Na+/K+-ATP-ase and Ca-dependent ATP-ase, and have digoxin-like physiological properties and natriuretic and antihypertensive actions. Those macrocyclic carbon suboxide polymer compounds are thought to be endogenous digoxin-like regulators of Na+/K+-ATP-ases and Ca-dependent ATP-ases, and endogenous natriuretics and antihypertensives. In addition, some authors think that those macrocyclic compounds of carbon suboxide can possibly diminish free radical formation and oxidative stress and play a role in endogenous anticancer protective mechanisms, for example in the retina. Sulfur dioxide The role of sulfur dioxide, SO2, in mammalian biology is not well understood. Sulfur dioxide blocks nerve signals from the pulmonary stretch receptors and abolishes the Hering–Breuer inflation reflex. Sulfur dioxide plays a role in diminishing experimental lung damage caused by oleic acid. Endogenous sulfur dioxide lowered lipid peroxidation, free radical formation, oxidative stress and inflammation during experimental lung damage. Conversely, successfully induced lung damage caused a significant lowering of endogenous sulfur dioxide production, and an increase in lipid peroxidation, free radical formation, oxidative stress and inflammation. Moreover, blockade of an enzyme that produces endogenous SO2 significantly increased the amount of lung tissue damage in the experiment. 
Conversely, adding acetylcysteine or glutathione to the rat diet increased the amount of endogenous SO2 produced and decreased the lung damage, free radical formation, oxidative stress, inflammation and apoptosis. Endogenous sulfur dioxide may play a role in regulating cardiac and blood vessel function, and aberrant or deficient sulfur dioxide metabolism can contribute to several different cardiovascular diseases, such as arterial hypertension, atherosclerosis, pulmonary arterial hypertension, and stenocardia. In children with pulmonary arterial hypertension due to congenital heart diseases, the level of homocysteine is higher and the level of endogenous sulfur dioxide is lower than in normal control children. Moreover, these biochemical parameters strongly correlated with the severity of pulmonary arterial hypertension. Authors considered homocysteine to be one of the useful biochemical markers of disease severity and sulfur dioxide metabolism to be one of the potential therapeutic targets in those patients. Endogenous sulfur dioxide also lowers the proliferation rate of endothelial smooth muscle cells in blood vessels, by lowering MAPK activity and activating adenylyl cyclase and protein kinase A. Smooth muscle cell proliferation is one of the important mechanisms of hypertensive remodeling of blood vessels and their stenosis, so it is an important pathogenetic mechanism in arterial hypertension and atherosclerosis. Endogenous sulfur dioxide in low concentrations causes endothelium-dependent vasodilation. In higher concentrations it causes endothelium-independent vasodilation and has a negative inotropic effect on cardiac output function, thus effectively lowering blood pressure and myocardial oxygen consumption. The vasodilating effects of sulfur dioxide are mediated via ATP-dependent calcium channels and L-type ("dihydropyridine") calcium channels. Endogenous sulfur dioxide is also a potent anti-inflammatory, antioxidant and cytoprotective agent. It lowers blood pressure and slows hypertensive remodeling of blood vessels, especially thickening of their intima. It also regulates lipid metabolism. Endogenous sulfur dioxide also diminishes myocardial damage caused by isoproterenol adrenergic hyperstimulation and strengthens the myocardial antioxidant defense reserve. Hydrogen cyanide Some authors have shown that neurons can produce hydrogen cyanide, HCN, upon activation of their opioid receptors by endogenous or exogenous opioids. They have also shown that neuronal production of HCN activates NMDA receptors and plays a role in signal transduction between neuronal cells (neurotransmission). Moreover, increased endogenous neuronal HCN production under opioids was seemingly needed for adequate opioid analgesia, as the analgesic action of opioids was attenuated by HCN scavengers. They considered endogenous HCN to be a neuromodulator. It was also shown that, while stimulating muscarinic cholinergic receptors in cultured pheochromocytoma cells increases HCN production, in a living organism (in vivo) muscarinic cholinergic stimulation actually decreases HCN production. Leukocytes generate HCN during phagocytosis. The vasodilatation caused by sodium nitroprusside has been shown to be mediated not only by NO generation, but also by endogenous cyanide generation, which adds not only toxicity, but also some additional antihypertensive efficacy compared to nitroglycerine and other non-cyanogenic nitrates which do not cause blood cyanide levels to rise. 
Ammonia Ammonia, NH3, also plays a role in both normal and abnormal animal physiology. It is biosynthesised through normal amino acid metabolism, but is toxic in high concentrations. The liver converts ammonia to urea through a series of reactions known as the urea cycle. Liver dysfunction, such as that seen in cirrhosis, may lead to elevated amounts of ammonia in the blood (hyperammonemia). Likewise, defects in the enzymes responsible for the urea cycle, such as ornithine transcarbamylase, lead to hyperammonemia. Hyperammonemia contributes to the confusion and coma of hepatic encephalopathy, as well as the neurologic disease common in people with urea cycle defects and organic acidurias. Ammonia is important for normal animal acid/base balance. After formation of ammonium from glutamine, α-ketoglutarate may be degraded to produce two molecules of bicarbonate, which are then available as buffers for dietary acids. Ammonium is excreted in the urine, resulting in net acid loss. Ammonia may itself diffuse across the renal tubules, combine with a hydrogen ion, and thus allow for further acid excretion. Methane Some authors have shown that endogenous methane, CH4, is produced not only by the intestinal flora and then absorbed into the blood, but is also produced, in small amounts, by eukaryotic cells (during lipid peroxidation). They also showed that endogenous methane production rises during experimental mitochondrial hypoxia, for example sodium azide intoxication, and proposed that methane could be one of the intercellular signals of hypoxia and stress. Other authors have shown that cellular methane production also rises during sepsis or bacterial endotoxemia, including an experimental imitation of endotoxemia by lipopolysaccharide (LPS) administration. Some other researchers have shown that methane, produced by the intestinal flora, is not fully "biologically neutral" to the intestine, and that it participates in the normal physiologic regulation of peristalsis. Its excess causes not only belching, flatulence and abdominal pain, but also functional constipation. Ethylene Ethylene, H2C=CH2, serves as a hormone in plants. It acts at trace levels throughout the life of the plant by stimulating or regulating the ripening of fruit, the opening of flowers, and the abscission (or shedding) of leaves. Commercial ripening rooms use "catalytic generators" to make ethylene gas from a liquid supply of ethanol. Typically, a gassing level of 500 to 2,000 ppm is used, for 24 to 48 hours. Care must be taken to control carbon dioxide levels in ripening rooms when gassing, as high-temperature ripening has been seen to produce CO2 levels of 10% in 24 hours. Ethylene has been used in practice since the time of the ancient Egyptians, who would gash figs in order to stimulate ripening (wounding stimulates ethylene production by plant tissues). The ancient Chinese would burn incense in closed rooms to enhance the ripening of pears. In 1864, it was discovered that gas leaks from street lights led to stunting of growth, twisting of plants, and abnormal thickening of stems. In 1901, a Russian scientist named Dimitry Neljubow showed that the active component was ethylene. Sarah Doubt discovered that ethylene stimulated abscission in 1917. It was not until 1934 that Gane reported that plants synthesize ethylene. In 1935, Crocker proposed that ethylene was the plant hormone responsible for fruit ripening as well as senescence of vegetative tissues. 
Ethylene is produced from essentially all parts of higher plants, including leaves, stems, roots, flowers, fruits, tubers, and seeds. Ethylene production is regulated by a variety of developmental and environmental factors. During the life of the plant, ethylene production is induced during certain stages of growth such as germination, ripening of fruits, abscission of leaves, and senescence of flowers. Ethylene production can also be induced by a variety of external aspects such as mechanical wounding, environmental stresses, and certain chemicals including auxin and other regulators. Ethylene is biosynthesized from the amino acid methionine, which is first converted to S-adenosyl-L-methionine (SAM, also called AdoMet) by the enzyme methionine adenosyltransferase. SAM is then converted to 1-aminocyclopropane-1-carboxylic acid (ACC) by the enzyme ACC synthase (ACS). The activity of ACS determines the rate of ethylene production, therefore regulation of this enzyme is key for ethylene biosynthesis. The final step requires oxygen and involves the action of the enzyme ACC-oxidase (ACO), formerly known as the ethylene forming enzyme (EFE). Ethylene biosynthesis can be induced by endogenous or exogenous ethylene. ACC synthesis increases with high levels of auxins, especially indole acetic acid (IAA) and cytokinins. Ethylene is perceived by a family of five transmembrane protein dimers such as the ETR1 gasoreceptor protein in Arabidopsis. The gene encoding an ethylene receptor has been cloned in Arabidopsis thaliana and then in tomato. Ethylene receptors are encoded by multiple genes in the Arabidopsis and tomato genomes. Mutations in any of the gene family, which comprises five receptors in Arabidopsis and at least six in tomato, can lead to insensitivity to ethylene. DNA sequences for ethylene receptors have also been identified in many other plant species, and an ethylene binding protein has even been identified in Cyanobacteria. Environmental cues such as flooding, drought, chilling, wounding, and pathogen attack can induce ethylene formation in plants. In flooding, roots suffer from lack of oxygen, or anoxia, which leads to the synthesis of 1-aminocyclopropane-1-carboxylic acid (ACC). ACC is transported upwards in the plant and then oxidized in leaves. The ethylene produced causes nastic movements (epinasty) of the leaves, perhaps helping the plant to lose water. In plants, ethylene induces responses such as: Seedling triple response, thickening and shortening of hypocotyl with pronounced apical hook. In pollination, when the pollen reaches the stigma, the ethylene precursor ACC is secreted to the petal, where ACC oxidase converts it to ethylene. Stimulates leaf and flower senescence Stimulates senescence of mature xylem cells in preparation for plant use Induces leaf abscission Induces seed germination Induces root hair growth — increasing the efficiency of water and mineral absorption through rhizosheath formation Induces the growth of adventitious roots during flooding Stimulates survival under low-oxygen conditions (hypoxia) in submerged plant tissues Stimulates epinasty — leaf petiole grows out, leaf hangs down and curls into itself Stimulates fruit ripening Induces a climacteric rise in respiration in some fruit which causes a release of additional ethylene. 
Affects gravitropism Inhibits root growth in response to soil compaction, shade and flooding Stimulates nutational bending Inhibits stem growth and stimulates stem and cell broadening and lateral branch growth outside of seedling stage (see Hyponastic response) Interference with auxin transport (with high auxin concentrations) Inhibits shoot growth and stomatal closing except in some water plants or habitually flooded ones such as some rice varieties, where the opposite occurs (conserving CO2 and O2) Induces flowering in pineapples Inhibits short day induced flower initiation in Pharbitis nil and Chrysanthemum morifolium Small amounts of endogenous ethylene are also produced in mammals, including humans, due to lipid peroxidation. Some of the endogenous ethylene is then oxidized to ethylene oxide, which is able to alkylate DNA and proteins, including hemoglobin (forming a specific adduct with its N-terminal valine, N-hydroxyethyl-valine). Endogenous ethylene oxide, just like environmental (exogenous) ethylene oxide, can alkylate guanine in DNA, forming the adduct 7-(2-hydroxyethyl)-guanine, and this poses an intrinsic carcinogenic risk. It is also mutagenic. See also Gas sensor protein (Gasoreceptor) References External links Biochemistry Molecular biology Molecules Signal transduction
Gaseous signaling molecules
[ "Physics", "Chemistry", "Biology" ]
4,219
[ "Molecular physics", "Molecules", "Signal transduction", "Gaseous signaling molecules", "Physical objects", "nan", "Molecular biology", "Biochemistry", "Neurochemistry", "Atoms", "Matter" ]
45,603,435
https://en.wikipedia.org/wiki/Lie%20algebra%20extension
In the theory of Lie groups, Lie algebras and their representation theory, a Lie algebra extension \(\mathfrak e\) is an enlargement of a given Lie algebra \(\mathfrak g\) by another Lie algebra \(\mathfrak h\). Extensions arise in several ways. There is the trivial extension obtained by taking a direct sum of two Lie algebras. Other types are the split extension and the central extension. Extensions may arise naturally, for instance, when forming a Lie algebra from projective group representations. Such a Lie algebra will contain central charges. Starting with a polynomial loop algebra over a finite-dimensional simple Lie algebra and performing two extensions, a central extension and an extension by a derivation, one obtains a Lie algebra which is isomorphic with an untwisted affine Kac–Moody algebra. Using the centrally extended loop algebra one may construct a current algebra in two spacetime dimensions. The Virasoro algebra is the universal central extension of the Witt algebra. Central extensions are needed in physics, because the symmetry group of a quantized system usually is a central extension of the classical symmetry group, and in the same way the corresponding symmetry Lie algebra of the quantum system is, in general, a central extension of the classical symmetry algebra. Kac–Moody algebras have been conjectured to be symmetry groups of a unified superstring theory. The centrally extended Lie algebras play a dominant role in quantum field theory, particularly in conformal field theory, string theory and in M-theory. A large portion towards the end is devoted to background material for applications of Lie algebra extensions, both in mathematics and in physics, in areas where they are actually useful. A parenthetical link, (background material), is provided where it might be beneficial. History Due to the Lie correspondence, the theory, and consequently the history, of Lie algebra extensions is tightly linked to the theory and history of group extensions. A systematic study of group extensions was performed by the Austrian mathematician Otto Schreier in 1923 in his PhD thesis and later published. The problem posed for his thesis by Otto Hölder was "given two groups \(G\) and \(H\), find all groups \(E\) having a normal subgroup \(N\) isomorphic to \(G\) such that the factor group \(E/N\) is isomorphic to \(H\)". Lie algebra extensions are most interesting and useful for infinite-dimensional Lie algebras. In 1967, Victor Kac and Robert Moody independently generalized the notion of classical Lie algebras, resulting in a new theory of infinite-dimensional Lie algebras, now called Kac–Moody algebras. They generalize the finite-dimensional simple Lie algebras and can often concretely be constructed as extensions. Notation and proofs Notational abuse to be found below includes \(e^X\) for the exponential map \(\exp\) given an argument, writing \(g\) for the element \((g, e_H)\) in a direct product \(G \times H\) (\(e_H\) is the identity in \(H\)), and analogously for Lie algebra direct sums (where also \(g + h\) and \((g, h)\) are used interchangeably). Likewise for semidirect products and semidirect sums. Canonical injections (both for groups and Lie algebras) are used for implicit identifications. Furthermore, if \(G, H, \ldots\) are groups, then the default names for elements of \(G, H, \ldots\) are \(g, h, \ldots\), and their Lie algebras are \(\mathfrak g, \mathfrak h, \ldots\). The default names for elements of \(\mathfrak g, \mathfrak h, \ldots\) are \(G, H, \ldots\) (just like for the groups!), partly to save scarce alphabetical resources but mostly to have a uniform notation. Lie algebras that are ingredients in an extension will, without comment, be taken to be over the same field. 
The summation convention applies, including sometimes when the indices involved are both upstairs or both downstairs. Caveat: Not all proofs and proof outlines below have universal validity. The main reason is that the Lie algebras are often infinite-dimensional, and then there may or may not be a Lie group corresponding to the Lie algebra. Moreover, even if such a group exists, it may not have the "usual" properties, e.g. the exponential map might not exist, and if it does, it might not have all the "usual" properties. In such cases, it is questionable whether the group should be endowed with the "Lie" qualifier. The literature is not uniform. For the explicit examples, the relevant structures are supposedly in place. Definition Lie algebra extensions are formalized in terms of short exact sequences. A short exact sequence is an exact sequence of length three, \[\mathfrak h \,\overset{\iota}{\longrightarrow}\, \mathfrak e \,\overset{\pi}{\longrightarrow}\, \mathfrak g,\] such that \(\iota\) is a monomorphism, \(\pi\) is an epimorphism, and \(\ker\pi = \operatorname{im}\iota\). From these properties of exact sequences, it follows that (the image of) \(\mathfrak h\) is an ideal in \(\mathfrak e\). Moreover, \(\mathfrak g \cong \mathfrak e/\operatorname{im}\iota\), but it is not necessarily the case that \(\mathfrak g\) is isomorphic to a subalgebra of \(\mathfrak e\). This construction mirrors the analogous constructions in the closely related concept of group extensions. If this situation prevails, non-trivially and for Lie algebras over the same field, then one says that \(\mathfrak e\) is an extension of \(\mathfrak g\) by \(\mathfrak h\). Properties The defining property may be reformulated. The Lie algebra \(\mathfrak e\) is an extension of \(\mathfrak g\) by \(\mathfrak h\) if \[0 \to \mathfrak h \,\overset{\iota}{\to}\, \mathfrak e \,\overset{\pi}{\to}\, \mathfrak g \to 0\] is exact. Here the zeros on the ends represent the zero Lie algebra (containing only the zero vector) and the maps are the obvious ones; the first map sends \(0\) to \(0 \in \mathfrak h\) and the last sends all elements of \(\mathfrak g\) to \(0\). With this definition, it follows automatically that \(\iota\) is a monomorphism and \(\pi\) is an epimorphism. An extension of \(\mathfrak g\) by \(\mathfrak h\) is not necessarily unique. Let \(\mathfrak e, \mathfrak e'\) denote two extensions and let the primes below have the obvious interpretation. Then, if there exists a Lie algebra isomorphism \(f : \mathfrak e \to \mathfrak e'\) such that \(f \circ \iota = \iota'\) and \(\pi' \circ f = \pi\), then the extensions \(\mathfrak e\) and \(\mathfrak e'\) are said to be equivalent extensions. Equivalence of extensions is an equivalence relation. Extension types Trivial A Lie algebra extension is trivial if there is a subspace \(\mathfrak i \subset \mathfrak e\) such that \(\mathfrak e = \mathfrak i \oplus \ker\pi\) and \(\mathfrak i\) is an ideal in \(\mathfrak e\). Split A Lie algebra extension is split if there is a subspace \(\mathfrak u \subset \mathfrak e\) such that \(\mathfrak e = \mathfrak u \oplus \ker\pi\) as a vector space and \(\mathfrak u\) is a subalgebra in \(\mathfrak e\). An ideal is a subalgebra, but a subalgebra is not necessarily an ideal. A trivial extension is thus a split extension. Central Central extensions of a Lie algebra \(\mathfrak g\) by an abelian Lie algebra \(\mathfrak h\) can be obtained with the help of a so-called (nontrivial) 2-cocycle (background) on \(\mathfrak g\). Non-trivial 2-cocycles occur in the context of projective representations (background) of Lie groups. This is alluded to further down. A Lie algebra extension is a central extension if \(\iota(\mathfrak h)\) is contained in the center \(Z(\mathfrak e)\) of \(\mathfrak e\). Properties Since the center commutes with everything, \(\mathfrak h \cong \operatorname{im}\iota = \ker\pi\) is in this case abelian. Given a central extension \(\mathfrak e\) of \(\mathfrak g\), one may construct a 2-cocycle on \(\mathfrak g\). Suppose \(\mathfrak e\) is a central extension of \(\mathfrak g\) by \(\mathfrak h\). Let \(l\) be a linear map from \(\mathfrak g\) to \(\mathfrak e\) with the property that \(\pi \circ l = \mathrm{Id}_{\mathfrak g}\), i.e. \(l\) is a section of \(\pi\). Use this section to define \(\varepsilon : \mathfrak g \times \mathfrak g \to \mathfrak e\) by \[\varepsilon(G_1, G_2) = l([G_1, G_2]) - [l(G_1), l(G_2)].\] The map \(\varepsilon\) satisfies \[\varepsilon([G_1,G_2],G_3) + \varepsilon([G_2,G_3],G_1) + \varepsilon([G_3,G_1],G_2) = 0.\] To see this, use the definition of \(\varepsilon\) on the left hand side, then use the linearity of \(l\). Use the Jacobi identity on \(\mathfrak g\) to get rid of half of the six terms. Use the definition of \(\varepsilon\) again on the terms \(l([G_i, G_j])\) sitting inside three Lie brackets, bilinearity of Lie brackets, and the Jacobi identity on \(\mathfrak e\), and then finally use on the three remaining terms that \(\operatorname{im}\varepsilon \subset \ker\pi\) and that \(\ker\pi \subset Z(\mathfrak e)\), so that \(\varepsilon(G_i, G_j)\) brackets to zero with everything. 
It then follows that satisfies the corresponding relation, and if in addition is one-dimensional, then is a 2-cocycle on (via a trivial correspondence of with the underlying field). A central extension is universal if for every other central extension there exist unique homomorphisms and such that the diagram commutes, i.e. and . By universality, it is easy to conclude that such universal central extensions are unique up to isomorphism. Construction By direct sum Let , be Lie algebras over the same field . Define and define addition pointwise on . Scalar multiplication is defined by With these definitions, is a vector space over . With the Lie bracket: is a Lie algebra. Define further It is clear that holds as an exact sequence. This extension of by is called a trivial extension. It is, of course, nothing else than the Lie algebra direct sum. By symmetry of definitions, is an extension of by as well, but . It is clear from that the subalgebra is an ideal (Lie algebra). This property of the direct sum of Lie algebras is promoted to the definition of a trivial extension. By semidirect sum Inspired by the construction of a semidirect product (background) of groups using a homomorphism , one can make the corresponding construct for Lie algebras. If is a Lie algebra homomorphism, then define a Lie bracket on by With this Lie bracket, the Lie algebra so obtained is denoted and is called the semidirect sum of and . By inspection of one sees that is a subalgebra of and is an ideal in . Define by and by . It is clear that . Thus is a Lie algebra extension of by . As with the trivial extension, this property generalizes to the definition of a split extension. ExampleLet be the Lorentz group and let denote the translation group in 4 dimensions, isomorphic to , and consider the multiplication rule of the Poincaré group (where and are identified with their images in ). From it follows immediately that, in the Poincaré group, . Thus every Lorentz transformation corresponds to an automorphism of with inverse and is clearly a homomorphism. Now define endowed with multiplication given by . Unwinding the definitions one finds that the multiplication is the same as the multiplication one started with and it follows that . From follows that and then from it follows that . By derivation Let be a derivation (background) of and denote by the one-dimensional Lie algebra spanned by . Define the Lie bracket on by It is obvious from the definition of the bracket that is and ideal in in and that is a subalgebra of . Furthermore, is complementary to in . Let be given by and by . It is clear that . Thus is a split extension of by . Such an extension is called extension by a derivation. If is defined by , then is a Lie algebra homomorphism into . Hence this construction is a special case of a semidirect sum, for when starting from and using the construction in the preceding section, the same Lie brackets result. By 2-cocycle If is a 2-cocycle (background) on a Lie algebra and is any one-dimensional vector space, let (vector space direct sum) and define a Lie bracket on by Here is an arbitrary but fixed element of . Antisymmetry follows from antisymmetry of the Lie bracket on and antisymmetry of the 2-cocycle. The Jacobi identity follows from the corresponding properties of and of . Thus is a Lie algebra. Put and it follows that . Also, it follows with and that . Hence is a central extension of by . It is called extension by a 2-cocycle. Theorems Below follows some results regarding central extensions and 2-cocycles. 
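For reference in the theorems that follow, the 2-cocycle construction just described can be summarized in one display. The conventions below, with a one-dimensional center spanned by an element \(c\) over the field \(F\), are the standard ones and are an assumption of this summary rather than a quotation of the article.

```latex
% Central extension e = g (+) Fc defined by a 2-cocycle \varepsilon:
[G_1 + \mu_1 c,\; G_2 + \mu_2 c]_{\mathfrak e}
    = [G_1, G_2]_{\mathfrak g} + \varepsilon(G_1, G_2)\, c

% \varepsilon must be antisymmetric and satisfy the 2-cocycle (Jacobi) identity:
\varepsilon(G_1, G_2) = -\varepsilon(G_2, G_1)
\varepsilon([G_1,G_2], G_3) + \varepsilon([G_2,G_3], G_1)
    + \varepsilon([G_3,G_1], G_2) = 0

% A 2-coboundary, \varepsilon(G_1,G_2) = \varphi([G_1,G_2]) for a linear
% \varphi : g -> F, yields an extension equivalent to the trivial one.
```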
Theorem Let and be cohomologous 2-cocycles on a Lie algebra and let and be respectively the central extensions constructed with these 2-cocycles. Then the central extensions and are equivalent extensions. Proof By definition, . Define It follows from the definitions that is a Lie algebra isomorphism and holds. Corollary A cohomology class defines a central extension of which is unique up to isomorphism. The trivial 2-cocycle gives the trivial extension, and since a 2-coboundary is cohomologous with the trivial 2-cocycle, one has Corollary A central extension defined by a coboundary is equivalent with a trivial central extension. Theorem A finite-dimensional simple Lie algebra has only trivial central extensions. Proof Since every central extension comes from a 2-cocycle , it suffices to show that every 2-cocycle is a coboundary. Suppose is a 2-cocycle on . The task is to use this 2-cocycle to manufacture a 1-cochain such that . The first step is to, for each , use to define a linear map by . These linear maps are elements of . Let be the vector space isomorphism associated to the nondegenerate Killing form , and define a linear map by . This turns out to be a derivation (for a proof, see below). Since, for semisimple Lie algebras, all derivations are inner, one has for some . Then Let be the 1-cochain defined by Then showing that is a coboundary. To verify that actually is a derivation, first note that it is linear since is, then compute By appeal to the non-degeneracy of , the left arguments of are equal on the far left and far right. The observation that one can define a derivation , given a symmetric non-degenerate associative form and a 2-cocycle , by or using the symmetry of and the antisymmetry of , leads to a corollary. Corollary Let {{mvar|L:g × g: → F}} be a non-degenerate symmetric associative bilinear form and let be a derivation satisfying then defined by is a 2-cocycle. Proof The condition on ensures the antisymmetry of . The Jacobi identity for 2-cocycles follows starting with using symmetry of the form, the antisymmetry of the bracket, and once again the definition of in terms of . If is the Lie algebra of a Lie group and is a central extension of , one may ask whether there is a Lie group with Lie algebra . The answer is, by Lie's third theorem affirmative. But is there a central extension of with Lie algebra ? The answer to this question requires some machinery, and can be found in . Applications The "negative" result of the preceding theorem indicates that one must, at least for semisimple Lie algebras, go to infinite-dimensional Lie algebras to find useful applications of central extensions. There are indeed such. Here will be presented affine Kac–Moody algebras and Virasoro algebras. These are extensions of polynomial loop-algebras and the Witt algebra respectively. Polynomial loop-algebra Let be a polynomial loop algebra (background), where is a complex finite-dimensional simple Lie algebra. The goal is to find a central extension of this algebra. Two of the theorems apply. On the one hand, if there is a 2-cocycle on , then a central extension may be defined. On the other hand, if this 2-cocycle is acting on the part (only), then the resulting extension is trivial. Moreover, derivations acting on (only) cannot be used for definition of a 2-cocycle either because these derivations are all inner and the same problem results. One therefore looks for derivations on . 
One such set of derivations is In order to manufacture a non-degenerate bilinear associative antisymmetric form on , attention is focused first on restrictions on the arguments, with fixed. It is a theorem that every form satisfying the requirements is a multiple of the Killing form on . This requires Symmetry of implies and associativity yields With one sees that . This last condition implies the former. Using this fact, define . The defining equation then becomes For every the definition does define a symmetric associative bilinear form These span a vector space of forms which have the right properties. Returning to the derivations at hand and the condition one sees, using the definitions, that or, with , This (and the antisymmetry condition) holds if , in particular it holds when . Thus choose and . With these choices, the premises in the corollary are satisfied. The 2-cocycle defined by is finally employed to define a central extension of , with Lie bracket For basis elements, suitably normalized and with antisymmetric structure constants, one has This is a universal central extension of the polynomial loop algebra. A note on terminology In physics terminology, the algebra of above might pass for a Kac–Moody algebra, whilst it will probably not in mathematics terminology. An additional dimension, an extension by a derivation is required for this. Nonetheless, if, in a physical application, the eigenvalues of or its representative are interpreted as (ordinary) quantum numbers, the additional superscript on the generators is referred to as the level. It is an additional quantum number. An additional operator whose eigenvalues are precisely the levels is introduced further below. Current algebra As an application of a central extension of polynomial loop algebra, a current algebra of a quantum field theory is considered (background). Suppose one has a current algebra, with the interesting commutator being with a Schwinger term. To construct this algebra mathematically, let be the centrally extended polynomial loop algebra of the previous section with as one of the commutation relations, or, with a switch of notation () with a factor of under the physics convention, Define using elements of , One notes that so that it is defined on a circle. Now compute the commutator, For simplicity, switch coordinates so that and use the commutation relations, Now employ the Poisson summation formula, for in the interval and differentiate it to yield and finally or since the delta functions arguments only ensure that the arguments of the left and right arguments of the commutator are equal (formally ). By comparison with , this is a current algebra in two spacetime dimensions, including a Schwinger term, with the space dimension curled up into a circle. In the classical setting of quantum field theory, this is perhaps of little use, but with the advent of string theory where fields live on world sheets of strings, and spatial dimensions are curled up, there may be relevant applications. Kac–Moody algebra The derivation used in the construction of the 2-cocycle in the previous section can be extended to a derivation on the centrally extended polynomial loop algebra, here denoted by in order to realize a Kac–Moody algebra (background). 
Simply set
$$D(P \otimes G + \mu C) = t\frac{dP}{dt}\otimes G.$$
Next, define as a vector space
$$\hat{\mathfrak g} = \mathbb C D \oplus e.$$
The Lie bracket on $\hat{\mathfrak g}$ is, according to the standard construction with a derivation, given on a basis by
$$[\mu_1 D + P \otimes G_1 + \nu_1 C,\; \mu_2 D + Q \otimes G_2 + \nu_2 C] = PQ \otimes [G_1, G_2] + \mu_1\, t\tfrac{dQ}{dt}\otimes G_2 - \mu_2\, t\tfrac{dP}{dt}\otimes G_1 + \phi(P \otimes G_1, Q \otimes G_2)\,C.$$
For convenience, define
$$G_i^m \equiv t^m \otimes G_i.$$
In addition, assume the basis on the underlying finite-dimensional simple Lie algebra has been chosen so that the structure coefficients are antisymmetric in all indices and that the basis is appropriately normalized. Then one immediately, through the definitions, verifies the following commutation relations:
$$[G_i^m, G_j^n] = {C_{ij}}^k G_k^{m+n} + m\,\delta_{ij}\,\delta_{m+n,0}\,C,\qquad [C, G_i^m] = [C, D] = 0,\qquad [D, G_i^m] = m\,G_i^m.$$
These are precisely the short-hand description of an untwisted affine Kac–Moody algebra. To recapitulate, begin with a finite-dimensional simple Lie algebra. Define a space of formal Laurent polynomials with coefficients in the finite-dimensional simple Lie algebra. With the support of a symmetric non-degenerate associative bilinear form and a derivation, a 2-cocycle is defined, subsequently used in the standard prescription for a central extension by a 2-cocycle. Extend the derivation to this new space, use the standard prescription for a split extension by a derivation, and an untwisted affine Kac–Moody algebra obtains.

Virasoro algebra
The purpose is to construct the Virasoro algebra (named after Miguel Ángel Virasoro) as a central extension by a 2-cocycle $\phi$ of the Witt algebra $W$ (background). Writing $\eta(l, m) \equiv \phi(d_l, d_m)$, the Jacobi identity for 2-cocycles yields
$$(l - m)\eta(l + m, n) + (m - n)\eta(m + n, l) + (n - l)\eta(n + l, m) = 0.$$
Letting $n = 0$ and using antisymmetry of $\eta$ one obtains
$$(l + m)\eta(l, m) = (l - m)\eta(l + m, 0).$$
In the extension, the commutation relations for the element $d_0$ are
$$[d_0, d_m] = -m\,d_m + \eta(0, m)\,C.$$
It is desirable to get rid of the central charge on the right hand side. To do this define the 1-cochain
$$f(d_m) = \tfrac{1}{m}\eta(0, m)\ (m \neq 0),\qquad f(d_0) = 0.$$
Then, using $f$ as a 1-cochain,
$$\eta'(0, m) = \eta(0, m) + \delta f(d_0, d_m) = \eta(0, m) - m f(d_m) = 0,$$
so with this 2-cocycle, equivalent to the previous one, one has
$$[d_0, d_m] = -m\,d_m.$$
With this new 2-cocycle (skip the prime) the condition becomes
$$(l + m)\eta(l, m) = 0,$$
and thus
$$\eta(l, m) = \delta_{l+m,0}\,\eta(l, -l) \equiv \delta_{l+m,0}\,\eta(l),\qquad \eta(-l) = -\eta(l),$$
where the last condition is due to the antisymmetry of the 2-cocycle, inherited from that of the Lie bracket. With this, and with $l + m + n = 0$ (cutting out a "plane" in $\mathbb Z^3$), the Jacobi identity yields
$$(l - m)\eta(l + m) - (l + 2m)\eta(l) + (2l + m)\eta(m) = 0,$$
which with $m = 1$ (cutting out a "line" in $\mathbb Z^2$) becomes
$$(l - 1)\eta(l + 1) - (l + 2)\eta(l) + (2l + 1)\eta(1) = 0.$$
This is a difference equation generally solved by
$$\eta(l) = \alpha l + \beta l^3.$$
The commutator in the extension on elements of $W$ is then
$$[d_l, d_m] = (l - m)\,d_{l+m} + \delta_{l+m,0}(\alpha l + \beta l^3)\,C.$$
With $\beta = 0$ it is possible to change basis (or modify the 2-cocycle by a 2-coboundary), setting $d_0' = d_0 + \tfrac{\alpha}{2}C$, so that
$$[d_l', d_m'] = (l - m)\,d_{l+m}',$$
with the central charge absent altogether, and the extension is hence trivial. (This was not (generally) the case with the previous modification, where only $d_0$ obtained the original relations.) With the following change of basis,
$$d_l' = d_l + \delta_{l,0}\,\tfrac{\alpha + \beta}{2}\,C,$$
the commutation relations take the form
$$[d_l', d_m'] = (l - m)\,d_{l+m}' + \delta_{l+m,0}\,\beta\,(l^3 - l)\,C,$$
showing that the part linear in $l$ is trivial. It also shows that $H^2(W, \mathbb C)$ is one-dimensional (corresponding to the choice of $\beta$). The conventional choice is to take $\alpha = -\beta$, still retaining freedom by absorbing an arbitrary factor in the arbitrary object $C$. The Virasoro algebra $\mathcal V$ is then
$$\mathcal V = W \oplus \mathbb C C,$$
with commutation relations
$$[d_l + \mu C,\; d_m + \nu C] = (l - m)\,d_{l+m} + \tfrac{1}{12}(l^3 - l)\,\delta_{l+m,0}\,C.$$

Bosonic open strings
The relativistic classical open string (background) is subject to quantization. This roughly amounts to taking the position and the momentum of the string and promoting them to operators on the space of states of open strings. Since strings are extended objects, this results in a continuum of operators depending on the parameter $\sigma$. The following commutation relations are postulated in the Heisenberg picture:
$$[X^I(\tau,\sigma),\, \mathcal P^{\tau J}(\tau,\sigma')] = i\,\eta^{IJ}\,\delta(\sigma - \sigma').$$
All other commutators vanish. Because of the continuum of operators, and because of the delta functions, it is desirable to express these relations instead in terms of the quantized versions of the Virasoro modes, the Virasoro operators. The mode operators are calculated to satisfy
$$[\alpha_m^I, \alpha_n^J] = m\,\delta^{IJ}\,\delta_{m+n,0}.$$
They are interpreted as creation and annihilation operators acting on Hilbert space, increasing or decreasing the quantum of their respective modes.
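The cubic cocycle can likewise be brute-force checked. The following is a small self-contained check of ours, with the conventional normalization $\eta(l,m) = \delta_{l+m,0}(l^3 - l)/12$:

```python
# Sketch (our own verification): eta(l, m) = delta_{l+m,0} * (l**3 - l) / 12
# satisfies the Jacobi identity for 2-cocycles of the Witt algebra,
#   (l-m) eta(l+m, n) + (m-n) eta(m+n, l) + (n-l) eta(n+l, m) = 0.
from itertools import product

def eta12(l, m):               # 12 * eta, to stay in exact integers
    return (l ** 3 - l) if l + m == 0 else 0

N = 8
for l, m, n in product(range(-N, N + 1), repeat=3):
    s = ((l - m) * eta12(l + m, n)
         + (m - n) * eta12(m + n, l)
         + (n - l) * eta12(n + l, m))
    assert s == 0
print("Virasoro 2-cocycle identity holds for |l|,|m|,|n| <=", N)
```

The identity is nontrivial only on index triples with $l + m + n = 0$, which is exactly the "plane" used in the derivation above.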
If the index is negative, the operator $\alpha_n^I$ is a creation operator, otherwise it is an annihilation operator. (If it is zero, it is proportional to the total momentum operator.) In view of the fact that the light cone plus and minus modes were expressed in terms of the transverse Virasoro modes, one must consider the commutation relations between the Virasoro operators. These were classically defined (then ordinary modes, not operators) as
$$L_n^{\perp} = \frac{1}{2}\sum_{p \in \mathbb Z}\alpha^I_{n-p}\alpha^I_p.$$
Since, in the quantized theory, the alphas are operators, the ordering of the factors matters. In view of the commutation relation between the mode operators, it will only matter for the operator $L_0^\perp$ (for which $m + n = 0$). $L_0^\perp$ is chosen normal ordered,
$$L_0^{\perp} = \frac{1}{2}\alpha_0^I\alpha_0^I + \sum_{p=1}^{\infty}\alpha^I_{-p}\alpha^I_p + a,$$
where $a$ is a possible ordering constant. One obtains after a somewhat lengthy calculation the relations
$$[L_m^\perp, L_n^\perp] = (m - n)L_{m+n}^\perp + \frac{D - 2}{12}\,m(m^2 - 1)\,\delta_{m+n,0}.$$
If one would allow for a vanishing central term above, then one has precisely the commutation relations of the Witt algebra. Instead one has, upon identification of the generic central term as $\frac{D-2}{12}m(m^2-1)\delta_{m+n,0}$ times the identity operator, the Virasoro algebra, the universal central extension of the Witt algebra.

The operator $L_0^\perp$ enters the theory as the Hamiltonian, modulo an additive constant. Moreover, the Virasoro operators enter into the definition of the Lorentz generators of the theory. It is perhaps the most important algebra in string theory. The consistency of the Lorentz generators, by the way, fixes the spacetime dimensionality to 26. While this theory presented here (for relative simplicity of exposition) is unphysical, or at the very least incomplete (it has, for instance, no fermions), the Virasoro algebra arises in the same way in the more viable superstring theory and M-theory.

Group extension
A projective representation $\Pi$ of a Lie group $G$ (background) can be used to define a so-called group extension $G_{\mathrm{ex}}$. In quantum mechanics, Wigner's theorem asserts that if $G$ is a symmetry group, then it will be represented projectively on Hilbert space by unitary or antiunitary operators. This is often dealt with by passing to the universal covering group of $G$ and taking it as the symmetry group. This works nicely for the rotation group $SO(3)$ and the Lorentz group $O(3,1)$, but it does not work when the symmetry group is the Galilean group. In this case one has to pass to its central extension, the Bargmann group, which is the symmetry group of the Schrödinger equation. Likewise, if $G$ is the group of translations in position and momentum space, one has to pass to its central extension, the Heisenberg group.

Let $\omega$ be the 2-cocycle on $G$ induced by $\Pi$. Define
$$G_{\mathrm{ex}} = \{(g, \lambda) : g \in G,\ \lambda \in U(1)\}$$
as a set and let the multiplication be defined by
$$(g_1, \lambda_1)(g_2, \lambda_2) = (g_1g_2,\ \lambda_1\lambda_2\,\omega(g_1, g_2)).$$
Associativity holds since $\omega$ is a 2-cocycle on $G$. One has for the unit element
$$(e, 1),$$
and for the inverse
$$(g, \lambda)^{-1} = \big(g^{-1},\ \lambda^{-1}\omega(g, g^{-1})^{-1}\big).$$
The set $\{(e, \lambda) : \lambda \in U(1)\}$ is an abelian subgroup of $G_{\mathrm{ex}}$. This means that $G_{\mathrm{ex}}$ is not semisimple. The center of $G_{\mathrm{ex}}$ includes this subgroup. The center may be larger.

At the level of Lie algebras it can be shown that the Lie algebra $\mathfrak g_{\mathrm{ex}}$ of $G_{\mathrm{ex}}$ is given by
$$\mathfrak g_{\mathrm{ex}} = \mathfrak g \oplus \mathbb C C$$
as a vector space and endowed with the Lie bracket
$$[G_1 + \mu C,\; G_2 + \nu C] = [G_1, G_2] + \eta(G_1, G_2)\,C.$$
Here $\eta$ is a 2-cocycle on $\mathfrak g$. This 2-cocycle can be obtained from $\omega$ albeit in a highly nontrivial way.

Now by using the projective representation $\Pi$ one may define a map $\Pi_{\mathrm{ex}}$ by
$$\Pi_{\mathrm{ex}}\big((g, \lambda)\big) = \lambda\,\Pi(g).$$
It has the properties
$$\Pi_{\mathrm{ex}}\big((g_1, \lambda_1)\big)\,\Pi_{\mathrm{ex}}\big((g_2, \lambda_2)\big) = \lambda_1\lambda_2\,\omega(g_1, g_2)\,\Pi(g_1g_2) = \Pi_{\mathrm{ex}}\big((g_1, \lambda_1)(g_2, \lambda_2)\big),$$
so $\Pi_{\mathrm{ex}}$ is a bona fide representation of $G_{\mathrm{ex}}$.

In the context of Wigner's theorem, the situation may be depicted as follows: let $SH$ denote the unit sphere in Hilbert space $H$, and let $(\cdot,\cdot)$ be its inner product. Let $PH$ denote ray space and $[\cdot,\cdot]$ the ray product. Let moreover a wiggly arrow denote a group action. Then the diagram formed by the action of $G_{\mathrm{ex}}$ on $SH$, the action of $G$ on $PH$, and the projection $\pi: SH \to PH$ commutes. Moreover, in the same way that $G$ is a symmetry of $PH$ preserving $[\cdot,\cdot]$, $G_{\mathrm{ex}}$ is a symmetry of $SH$ preserving $(\cdot,\cdot)$. The fibers of $\pi$ are all circles.
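The group-extension multiplication law can be exercised on a toy example. The following hypothetical sketch, not from the article, uses $G = (\mathbb R^2, +)$ with the bilinear phase cocycle familiar from the Heisenberg group; the function names and the particular cocycle are our own choices for illustration.

```python
# Sketch (hypothetical example): the extension G_ex = {(g, lambda)} of
# G = (R^2, +) by U(1), with the 2-cocycle
#   omega(g1, g2) = exp(i/2 * (x1*p2 - p1*x2)),
# gives Heisenberg-type multiplication. We check associativity numerically
# and that the U(1) fiber over the identity commutes with everything.
import cmath
import random

def omega(g1, g2):
    (x1, p1), (x2, p2) = g1, g2
    return cmath.exp(0.5j * (x1 * p2 - p1 * x2))

def mult(a, b):
    (g1, l1), (g2, l2) = a, b
    g12 = (g1[0] + g2[0], g1[1] + g2[1])
    return (g12, l1 * l2 * omega(g1, g2))

random.seed(1)
def rand_elem():
    g = (random.uniform(-1, 1), random.uniform(-1, 1))
    return (g, cmath.exp(1j * random.uniform(0.0, 6.28)))

for _ in range(100):
    a, b, c = rand_elem(), rand_elem(), rand_elem()
    lhs, rhs = mult(mult(a, b), c), mult(a, mult(b, c))
    assert all(abs(u - v) < 1e-12 for u, v in zip(lhs[0], rhs[0]))
    assert abs(lhs[1] - rhs[1]) < 1e-12          # associativity

z = ((0.0, 0.0), cmath.exp(0.7j))                # element of the U(1) fiber
a = rand_elem()
assert abs(mult(z, a)[1] - mult(a, z)[1]) < 1e-12
print("extension multiplication associative; identity fiber central")
```

Associativity holds exactly because the exponent of omega is bilinear, which is one easy way to satisfy the group 2-cocycle condition.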
These circles are left invariant under the action of $U(1)$. The action of $U(1)$ on these fibers is transitive with no fixed point. The conclusion is that $SH$ is a principal fiber bundle over $PH$ with structure group $U(1)$.

Background material
In order to adequately discuss extensions, structure that goes beyond the defining properties of a Lie algebra is needed. Rudimentary facts about these are collected here for quick reference.

Derivations
A derivation $\delta$ on a Lie algebra $\mathfrak g$ is a map $\delta:\mathfrak g \to \mathfrak g$ such that the Leibniz rule
$$\delta([G_1, G_2]) = [\delta(G_1), G_2] + [G_1, \delta(G_2)]$$
holds. The set of derivations on a Lie algebra $\mathfrak g$ is denoted $\mathrm{der}\,\mathfrak g$. It is itself a Lie algebra under the Lie bracket
$$[\delta_1, \delta_2] = \delta_1\delta_2 - \delta_2\delta_1.$$
It is the Lie algebra of the group $\mathrm{Aut}\,\mathfrak g$ of automorphisms of $\mathfrak g$. One has to show
$$e^{t\delta}\big([G_1, G_2]\big) = \big[e^{t\delta}G_1,\, e^{t\delta}G_2\big]\ \forall t \iff \delta([G_1,G_2]) = [\delta(G_1), G_2] + [G_1, \delta(G_2)].$$
If the lhs holds, differentiate and set $t = 0$, implying that the rhs holds. If the rhs holds, write the lhs as
$$e^{-t\delta}\big[e^{t\delta}G_1,\, e^{t\delta}G_2\big] = [G_1, G_2]$$
and differentiate the left-hand side of this expression. It is, using the Leibniz rule, identically zero. Hence the left-hand side of this expression is independent of $t$ and equals its value for $t = 0$, which is the right-hand side of this expression.

If $G \in \mathfrak g$, then $\mathrm{ad}_G$, acting by $\mathrm{ad}_G(G') = [G, G']$, is a derivation. The set $\{\mathrm{ad}_G : G \in \mathfrak g\}$ is the set of inner derivations on $\mathfrak g$. For finite-dimensional simple Lie algebras all derivations are inner derivations.

Semidirect product (groups)
Consider two Lie groups $G$ and $H$ and $\mathrm{Aut}\,G$, the automorphism group of $G$. The latter is the group of isomorphisms of $G$. If there is a Lie group homomorphism $\Phi: H \to \mathrm{Aut}\,G$, then for each $h \in H$ there is a $\Phi_h \equiv \Phi(h)$ with the property $\Phi_{hh'} = \Phi_h\Phi_{h'}$. Denote with $E$ the set $G \times H$ and define multiplication by
$$(g_1, h_1)(g_2, h_2) = (g_1\Phi_{h_1}(g_2),\ h_1h_2).$$
Then $E$ is a group with identity $(e_G, e_H)$ and the inverse is given by $(g, h)^{-1} = (\Phi_{h^{-1}}(g^{-1}),\ h^{-1})$. Using the expression for the inverse and the multiplication rule it is seen that $G$ is normal in $E$. Denote the group with this semidirect product as $E = G \rtimes H$.

Conversely, if $E = G \rtimes H$ is a given semidirect product expression of the group $E$, then by definition $G$ is normal in $E$ and $C_h(g) = hgh^{-1} \in G$ for each $h \in H$, where $C_h$ denotes conjugation, and the map $h \mapsto C_h$ is a homomorphism.

Now make use of the Lie correspondence. The maps $\Phi_h: G \to G$ each induce, at the level of Lie algebras, a map $\Psi_h: \mathfrak g \to \mathfrak g$. This map is computed by
$$\Psi_h(G) = \frac{d}{dt}\Phi_h\big(e^{tG}\big)\Big|_{t=0}.$$
For instance, if $G$ and $H$ are both subgroups of a larger group $E$ and $\Phi_h(g) = hgh^{-1}$, then $\Psi_h(G) = hGh^{-1}$ and one recognizes $\Psi$ as the adjoint action $\mathrm{Ad}$ of $E$ on $\mathfrak g$ restricted to $H$. Now $\Psi: H \to \mathrm{Aut}\,\mathfrak g$ is a homomorphism (if $\mathfrak g$ is finite-dimensional), and appealing once more to the Lie correspondence, there is a unique Lie algebra homomorphism $\psi: \mathfrak h \to \mathrm{der}\,\mathfrak g$. This map is (formally) given by
$$\psi_H = \frac{d}{dt}\Psi_{e^{tH}}\Big|_{t=0};$$
for example, if $\Psi_h(G) = hGh^{-1}$, then (formally)
$$\psi_H(G) = \frac{d}{dt}\,e^{tH}Ge^{-tH}\Big|_{t=0} = [H, G] = \mathrm{ad}_H(G),$$
where a relationship between $\mathrm{Ad}$ and the adjoint action $\mathrm{ad}$, rigorously proved elsewhere, is used.

Lie algebra
The Lie algebra is, as a vector space, $\mathfrak e = \mathfrak g \oplus \mathfrak h$. This is clear since $G \times \{e_H\}$ and $\{e_G\} \times H$ together generate $E$ and intersect only in the identity. The Lie bracket is given by
$$[(G_1, H_1), (G_2, H_2)] = \big([G_1, G_2] + \psi_{H_1}(G_2) - \psi_{H_2}(G_1),\ [H_1, H_2]\big).$$
To compute the Lie bracket, begin with a surface in $E$ parametrized by $s$ and $t$. Elements of $\mathfrak g$ in $\mathfrak e = \mathfrak g \oplus \mathfrak h$ are decorated with a bar, and likewise for $\mathfrak h$. One has
$$(e, e^{sH})(e^{tG}, e)(e, e^{sH})^{-1} = \big(\Phi_{e^{sH}}(e^{tG}),\ e\big),$$
and thus, differentiating with respect to $t$ and evaluating at $t = 0$,
$$\mathrm{Ad}_{(e, e^{sH})}(\bar G) = \overline{\Psi_{e^{sH}}(G)}.$$
Now differentiate this relationship with respect to $s$ and evaluate at $s = 0$:
$$[\bar H, \bar G] = \mathrm{ad}_{\bar H}(\bar G) = \overline{\psi_H(G)},$$
which together with the separate brackets of $\mathfrak g$ and $\mathfrak h$ yields the mixed term in the bracket above.

Cohomology
For the present purposes, consideration of a limited portion of the theory of Lie algebra cohomology suffices. The definitions are not the most general possible, or even the most common ones, but the objects they refer to are authentic instances of the more general definitions.

2-cocycles
The objects of primary interest are the 2-cocycles on $\mathfrak g$, defined as bilinear alternating functions
$$\phi: \mathfrak g \times \mathfrak g \to \mathbb F$$
having a property resembling the Jacobi identity, called the Jacobi identity for 2-cocycles,
$$\phi(G_1, [G_2, G_3]) + \phi(G_2, [G_3, G_1]) + \phi(G_3, [G_1, G_2]) = 0.$$
The set of all 2-cocycles on $\mathfrak g$ is denoted $Z^2(\mathfrak g, \mathbb F)$.

2-cocycles from 1-cochains
Some 2-cocycles can be obtained from 1-cochains. A 1-cochain on $\mathfrak g$ is simply a linear map
$$f: \mathfrak g \to \mathbb F.$$
The set of all such maps is denoted $C^1(\mathfrak g, \mathbb F)$ and, of course (in at least the finite-dimensional case), $C^1(\mathfrak g, \mathbb F) \cong \mathfrak g^*$.
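That inner derivations satisfy the Leibniz rule is equivalent to the Jacobi identity, which a short matrix check makes vivid. A sketch of our own:

```python
# Sketch (our own check): for matrices, ad_X(Y) = XY - YX is a derivation
# of the commutator bracket, i.e. ad_X([Y,Z]) = [ad_X(Y), Z] + [Y, ad_X(Z)].
import numpy as np

rng = np.random.default_rng(42)

def br(a, b):
    return a @ b - b @ a

X, Y, Z = (rng.normal(size=(4, 4)) for _ in range(3))
lhs = br(X, br(Y, Z))                      # ad_X([Y, Z])
rhs = br(br(X, Y), Z) + br(Y, br(X, Z))    # Leibniz rule
assert np.allclose(lhs, rhs)               # equivalent to the Jacobi identity
print("ad_X is a derivation on gl(4)")
```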
Using a 1-cochain $f$, a 2-cocycle $\delta f$ may be defined by
$$\delta f(G_1, G_2) = f([G_1, G_2]).$$
The alternating property is immediate and the Jacobi identity for 2-cocycles is (as usual) shown by writing it out and using the definition and properties of the ingredients (here the Jacobi identity on $\mathfrak g$ and the linearity of $f$). The linear map $\delta: C^1(\mathfrak g, \mathbb F) \to Z^2(\mathfrak g, \mathbb F)$ is called the coboundary operator (here restricted to $C^1$).

The second cohomology group
Denote the image of $C^1(\mathfrak g, \mathbb F)$ under $\delta$ by $B^2(\mathfrak g, \mathbb F)$. The quotient
$$H^2(\mathfrak g, \mathbb F) = Z^2(\mathfrak g, \mathbb F)\,/\,B^2(\mathfrak g, \mathbb F)$$
is called the second cohomology group of $\mathfrak g$. Elements of $H^2(\mathfrak g, \mathbb F)$ are equivalence classes of 2-cocycles and two 2-cocycles $\phi_1$ and $\phi_2$ are called equivalent cocycles if they differ by a 2-coboundary, i.e. if $\phi_2 = \phi_1 + \delta f$ for some $f \in C^1(\mathfrak g, \mathbb F)$. Equivalent 2-cocycles are called cohomologous. The equivalence class of $\phi$ is denoted $[\phi]$. These notions generalize in several directions. For this, see the main articles.

Structure constants
Let $B = \{G_i : i \in I\}$ be a Hamel basis for $\mathfrak g$. Then each $G \in \mathfrak g$ has a unique expression as
$$G = \sum_{i \in I} c_i G_i$$
for some indexing set $I$ of suitable size. In this expansion, only finitely many $c_i$ are nonzero. In the sequel it is (for simplicity) assumed that the basis is countable; Latin letters are used for the indices and the indexing set can be taken to be $\mathbb N^* = \{1, 2, \ldots\}$. One immediately has, for the basis elements,
$$[G_i, G_j] = {C_{ij}}^k G_k,$$
where the summation symbol has been rationalized away; the summation convention applies. The placement of the indices in the structure constants (up or down) is immaterial. The following theorem is useful:

Theorem: There is a basis such that the structure constants are antisymmetric in all indices if and only if the Lie algebra is a direct sum of simple compact Lie algebras and $\mathfrak u(1)$ Lie algebras. This is the case if and only if there is a real positive definite metric $g$ on $\mathfrak g$ satisfying the invariance condition
$$g([G_i, G_j], G_k) = g(G_i, [G_j, G_k])$$
in any basis. This last condition is necessary on physical grounds for non-Abelian gauge theories in quantum field theory. Thus one can produce an infinite list of possible gauge theories using the Cartan catalog of simple Lie algebras on their compact form (i.e., $\mathfrak{su}(N)$, $\mathfrak{so}(N)$, etc.). One such gauge theory is the $U(1) \times SU(2) \times SU(3)$ gauge theory of the standard model, with Lie algebra $\mathfrak u(1) \oplus \mathfrak{su}(2) \oplus \mathfrak{su}(3)$.

Killing form
The Killing form is a symmetric bilinear form on $\mathfrak g$ defined by
$$K(G_1, G_2) = \mathrm{trace}\big(\mathrm{ad}_{G_1}\,\mathrm{ad}_{G_2}\big).$$
Here $\mathrm{ad}_G$ is viewed as a matrix operating on the vector space $\mathfrak g$. The key fact needed is that if $\mathfrak g$ is semisimple, then, by Cartan's criterion, $K$ is non-degenerate. In such a case $K$ may be used to identify $\mathfrak g$ and $\mathfrak g^*$. If $\lambda \in \mathfrak g^*$, then there is a $\nu(\lambda) = G_\lambda \in \mathfrak g$ such that
$$\lambda(G) = K(G_\lambda, G)\quad \forall G \in \mathfrak g.$$
This resembles the Riesz representation theorem and the proof is virtually the same. The Killing form has the property
$$K([G_1, G_2], G_3) = K(G_1, [G_2, G_3]),$$
which is referred to as associativity. By defining $g_{ij} = K(G_i, G_j)$ and expanding the inner brackets in terms of structure constants, one finds that the Killing form satisfies the invariance condition of above.

Loop algebra
A loop group is taken as a group of smooth maps from the unit circle $S^1$ into a Lie group $G$, with the group structure defined by the group structure on $G$. The Lie algebra of a loop group is then a vector space of mappings from $S^1$ into the Lie algebra $\mathfrak g$ of $G$. Any subalgebra of such a Lie algebra is referred to as a loop algebra. Attention here is focused on polynomial loop algebras, whose elements are finite sums of the form
$$\sigma \mapsto \sum_n \theta^a_n e^{in\sigma}G_a.$$
To see this, consider elements $H(\sigma)$ near the identity in $G$ for $\sigma$ in the loop group, expressed in a basis $\{G_a\}$ for $\mathfrak g$:
$$H(\sigma) = e^{\theta^a(\sigma)G_a} \approx 1 + \theta^a(\sigma)G_a,$$
where the $\theta^a(\sigma)$ are real and small and the implicit sum is over the dimension of $\mathfrak g$. Now write
$$\theta^a(\sigma) = \sum_n \theta^a_n e^{in\sigma}$$
to obtain
$$H(\sigma) \approx 1 + \sum_n \theta^a_n e^{in\sigma}G_a.$$
Thus the functions
$$\sigma \mapsto e^{in\sigma}G_a$$
constitute the Lie algebra. A little thought confirms that these are loops in $\mathfrak g$ as $\sigma$ goes from $0$ to $2\pi$. The operations are the ones defined pointwise by the operations in $\mathfrak g$.
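The Killing-form statements above are easy to verify from the structure constants alone. A sketch of ours for su(2), where $K_{ij} = {C_{ik}}^l\,{C_{jl}}^k$:

```python
# Sketch (our own computation): the Killing form of su(2) from the
# structure constants C_ijk = eps_ijk,
#   K_ij = sum_{k,l} eps[i,k,l] * eps[j,l,k],
# equals -2*delta_ij: non-degenerate, and associative (invariant).
import numpy as np

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0

K = np.einsum('ikl,jlk->ij', eps, eps)
assert np.allclose(K, -2 * np.eye(3))            # K_ij = -2 delta_ij
assert abs(np.linalg.det(K)) > 1e-9              # non-degenerate

# Associativity: K([G_i,G_j], G_k) = K(G_i, [G_j,G_k]).
lhs = np.einsum('ijl,lk->ijk', eps, K)
rhs = np.einsum('jkl,il->ijk', eps, K)
assert np.allclose(lhs, rhs)
print("Killing form of su(2): K = -2*I, associative")
```

(The negative-definite sign is the compact-real-form convention; rescaling by a negative constant gives the positive definite invariant metric of the theorem above.)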
This algebra is isomorphic with the algebra
$$\mathbb C[t, t^{-1}] \otimes \mathfrak g,$$
where $\mathbb C[t, t^{-1}]$ is the algebra of Laurent polynomials. The Lie bracket is
$$[P \otimes G_1,\; Q \otimes G_2] = PQ \otimes [G_1, G_2].$$
In this latter view the elements can be considered as polynomials with (constant!) coefficients in $\mathfrak g$. In terms of a basis and structure constants,
$$[t^l \otimes G_i,\; t^m \otimes G_j] = t^{l+m} \otimes {C_{ij}}^k G_k.$$
It is also common to have the different notation
$$G_i^l \equiv t^l \otimes G_i,$$
where the omission of $\otimes$ should be kept in mind to avoid confusion; the elements really are functions $S^1 \to \mathfrak g$. The Lie bracket is then
$$[G_i^l, G_j^m] = {C_{ij}}^k G_k^{l+m},$$
which is recognizable as one of the commutation relations in an untwisted affine Kac–Moody algebra, to be introduced later, without the central term. With $l = m = 0$, a subalgebra isomorphic to $\mathfrak g$ is obtained. It generates (as seen by tracing backwards in the definitions) the set of constant maps from $S^1$ into $G$, which is obviously isomorphic with $G$ when $\exp$ is onto (which is the case when $G$ is compact). If $G$ is compact, then a basis $\{G_k\}$ for $\mathfrak g$ may be chosen such that the $G_k$ are skew-Hermitian. As a consequence,
$$G_i^{l\,\dagger} = -G_i^{-l}.$$
Such a representation is called unitary because the representatives are unitary. Here, the minus on the upper index is conventional, the summation convention applies, and the $l$ is (by the definition) buried in the factors $e^{il\sigma}$ in the right hand side.

Current algebra (physics)
Current algebras arise in quantum field theories as a consequence of global gauge symmetry. Conserved currents occur in classical field theories whenever the Lagrangian respects a continuous symmetry. This is the content of Noether's theorem. Most (perhaps all) modern quantum field theories can be formulated in terms of classical Lagrangians (prior to quantization), so Noether's theorem applies in the quantum case as well. Upon quantization, the conserved currents are promoted to position dependent operators on Hilbert space. These operators are subject to commutation relations, generally forming an infinite-dimensional Lie algebra. A model illustrating this is presented below. To enhance the flavor of physics, factors of $i$ will appear here and there as opposed to in the mathematical conventions.

Consider a column vector $\phi$ of scalar fields $\phi_1, \ldots, \phi_N$. Let the Lagrangian density be
$$\mathcal L = \partial_\mu\phi^\dagger\,\partial^\mu\phi - m^2\phi^\dagger\phi.$$
This Lagrangian is invariant under the transformation
$$\phi \mapsto e^{-i\alpha^aF^a}\phi,$$
where the $F^a$ are generators of either $U(N)$ or a closed subgroup thereof, satisfying
$$[F^a, F^b] = i{C^{ab}}_cF^c.$$
Noether's theorem asserts the existence of conserved currents,
$$J^{\mu a} = -i\big(\partial^\mu\phi^\dagger\,F^a\phi - \phi^\dagger F^a\,\partial^\mu\phi\big),$$
in whose time component the momenta $\pi = \partial\mathcal L/\partial\dot\phi$ canonically conjugate to $\phi$ appear. The reason these currents are said to be conserved is because
$$\partial_\mu J^{\mu a} = 0,$$
and consequently the charge
$$Q^a(t) = \int d^3x\, J^{0a}(t, \mathbf x)$$
associated to the charge density $J^{0a}$ is constant in time. This (so far classical) theory is quantized by promoting the fields and their conjugates to operators on Hilbert space and by postulating (bosonic quantization) the commutation relations
$$[\phi_i(t, \mathbf x),\, \pi_j(t, \mathbf y)] = i\delta_{ij}\delta^3(\mathbf x - \mathbf y),\qquad [\phi_i, \phi_j] = [\pi_i, \pi_j] = 0.$$
(There are alternative routes to quantization, e.g. one postulates the existence of creation and annihilation operators for all particle types with certain exchange symmetries based on which statistics, Bose–Einstein or Fermi–Dirac, the particles obey, in which case the above are derived for scalar bosonic fields using mostly Lorentz invariance and the demand for the unitarity of the S-matrix. In fact, all operators on Hilbert space can be built out of creation and annihilation operators; see, e.g., chapters 2–5 of a standard quantum field theory text.) The currents accordingly become operators
$$J^{\mu a}(t, \mathbf x) = -i\big(\partial^\mu\phi^\dagger\,F^a\phi - \phi^\dagger F^a\,\partial^\mu\phi\big).$$
They satisfy, using the above postulated relations, the definitions and integration over space, the commutation relations
$$[J^{0a}(t,\mathbf x),\, J^{0b}(t,\mathbf y)] = i{C^{ab}}_cJ^{0c}(t,\mathbf x)\,\delta^3(\mathbf x - \mathbf y),\qquad [Q^a, Q^b] = i{C^{ab}}_cQ^c,\qquad [Q^a, J^{\mu b}(t,\mathbf x)] = i{C^{ab}}_cJ^{\mu c}(t,\mathbf x),$$
where the speed of light and the reduced Planck constant have been set to unity.
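The polynomial loop algebra bracket is simple enough to implement directly. In the sketch below, which is our own (the dict encoding is an arbitrary choice), an element $\sum c\,t^n\otimes G_i$ of the su(2) loop algebra is a mapping from pairs $(n, i)$ to coefficients, and the Jacobi identity is confirmed on sample elements.

```python
# Sketch (hypothetical helper names): elements of the su(2) polynomial
# loop algebra as dicts {(n, i): coeff}; the bracket
#   [t^l G_i, t^m G_j] = eps_ijk t^{l+m} G_k
# extends bilinearly.
def eps(i, j, k):
    return ((i - j) * (j - k) * (k - i)) // 2

def bracket(A, B):
    out = {}
    for (l, i), a in A.items():
        for (m, j), b in B.items():
            for k in range(3):
                e = eps(i, j, k)
                if e:
                    key = (l + m, k)
                    out[key] = out.get(key, 0) + e * a * b
    return {key: c for key, c in out.items() if c}

def add(*els):
    out = {}
    for el in els:
        for key, c in el.items():
            out[key] = out.get(key, 0) + c
    return {key: c for key, c in out.items() if c}

A = {(2, 0): 1, (-1, 2): 3}     # t^2 G_0 + 3 t^-1 G_2, etc.
B = {(0, 1): 2}
C = {(1, 2): -1, (3, 0): 5}
jac = add(bracket(A, bracket(B, C)),
          bracket(B, bracket(C, A)),
          bracket(C, bracket(A, B)))
assert jac == {}                 # Jacobi identity holds exactly
print("loop-algebra Jacobi identity verified")
```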
The last commutation relation does not follow from the postulated commutation relations (these fix only $J^{0a}$, not $J^{ia}$), except for $\mu = 0$. For $\mu = 1, 2, 3$ the Lorentz transformation behavior is used to deduce the conclusion. The next commutator to consider is
$$[J^{0a}(t,\mathbf x),\, J^{ib}(t,\mathbf y)] = i{C^{ab}}_cJ^{ic}(t,\mathbf x)\,\delta^3(\mathbf x - \mathbf y) + \text{terms involving derivatives of } \delta^3(\mathbf x - \mathbf y).$$
The presence of the delta functions and their derivatives is explained by the requirement of microcausality, which implies that the commutator vanishes when $\mathbf x \neq \mathbf y$. Thus the commutator must be a distribution supported at $\mathbf x = \mathbf y$. The first term is fixed due to the requirement that the equation should, when integrated over $\mathbf x$, reduce to the last equation before it. The following terms are the Schwinger terms. They integrate to zero, but it can be shown quite generally that they must be nonzero.

Consider a conserved current
$$\partial_0J^0(t,\mathbf x) + \nabla\cdot\mathbf J(t,\mathbf x) = 0$$
with a generic Schwinger term in $[J^0(t,\mathbf x), J^i(t,\mathbf y)]$. By taking the vacuum expectation value (VEV) of the equal-time commutator of $J^0$ with the divergence equation,
$$\langle 0|\,[J^0(t,\mathbf x),\, \partial_0J^0(t,\mathbf y)]\,|0\rangle = -\langle 0|\,[J^0(t,\mathbf x),\, \nabla_{\mathbf y}\cdot\mathbf J(t,\mathbf y)]\,|0\rangle,$$
one finds
$$i\,\langle 0|\,J^0(t,\mathbf x)\,H\,J^0(t,\mathbf y) + J^0(t,\mathbf y)\,H\,J^0(t,\mathbf x)\,|0\rangle = -\nabla_{\mathbf y}\cdot\langle 0|\,[J^0(t,\mathbf x),\, \mathbf J(t,\mathbf y)]\,|0\rangle,$$
where Heisenberg's equation of motion $\partial_0J^0 = i[H, J^0]$ has been used, as well as $H|0\rangle = 0$ and its conjugate. Multiply this equation by $f(\mathbf x)f(\mathbf y)$ and integrate with respect to $\mathbf x$ and $\mathbf y$ over all space, using integration by parts, and one finds, with
$$F = \int d^3x\, f(\mathbf x)\,J^0(t, \mathbf x),$$
an expression for $\langle 0|FHF|0\rangle$ in terms of the Schwinger term. Now insert a complete set of states $\{|n\rangle\}$, $H|n\rangle = E_n|n\rangle$:
$$2\langle 0|FHF|0\rangle = 2\sum_n E_n\,|\langle 0|F|n\rangle|^2 > 0.$$
Here hermiticity of $F$ and the fact that not all matrix elements of $F$ between the vacuum state and the states from a complete set can be zero are used; since the left hand side is strictly positive, the Schwinger term cannot vanish.

Affine Kac–Moody algebra
Let $\mathfrak g_0$ be an $n$-dimensional complex simple Lie algebra with a dedicated, suitably normalized basis $\{G_1, \ldots, G_n\}$ such that the structure constants are antisymmetric in all indices, with commutation relations
$$[G_i, G_j] = {C_{ij}}^kG_k.$$
An untwisted affine Kac–Moody algebra $\mathfrak g$ is obtained by copying the basis for each $m \in \mathbb Z$ (regarding the copies as distinct), setting
$$\mathfrak g = \mathrm{span}_{\mathbb C}\{G_i^m\} \oplus \mathbb C C \oplus \mathbb C D$$
as a vector space and assigning the commutation relations
$$[G_i^m, G_j^n] = {C_{ij}}^kG_k^{m+n} + m\,\delta_{ij}\,\delta_{m+n,0}\,C,\qquad [C, G_i^m] = [C, D] = 0,\qquad [D, G_i^m] = m\,G_i^m.$$
If $C = D = 0$, then the subalgebra spanned by the $G_i^m$ is obviously identical to the polynomial loop algebra of above.

Witt algebra
The Witt algebra, named after Ernst Witt, is the complexification of the Lie algebra $\mathrm{Vect}\,S^1$ of smooth vector fields on the circle $S^1$. In coordinates, such vector fields may be written
$$X = f(\varphi)\frac{d}{d\varphi},$$
and the Lie bracket is the Lie bracket of vector fields, on $S^1$ simply given by
$$[X, Y] = \Big(f\frac{dg}{d\varphi} - g\frac{df}{d\varphi}\Big)\frac{d}{d\varphi}.$$
The algebra is denoted $W$. A basis for $W$ is given by the set
$$d_l = ie^{il\varphi}\frac{d}{d\varphi},\qquad l \in \mathbb Z.$$
This basis satisfies
$$[d_l, d_m] = (l - m)\,d_{l+m}.$$
This Lie algebra has a useful central extension, the Virasoro algebra. It has 3-dimensional subalgebras isomorphic with $\mathfrak{su}(1,1)$ and $\mathfrak{sl}(2,\mathbb R)$. For each $n \neq 0$, the set $\{d_0, d_{-n}, d_n\}$ spans such a subalgebra. For $n = 1$ one has
$$[d_0, d_{\pm 1}] = \mp d_{\pm 1},\qquad [d_1, d_{-1}] = 2d_0.$$
These are the commutation relations of $\mathfrak{su}(1,1)$ in a suitable basis. The groups $SU(1,1)$ and $SL(2,\mathbb R)$ are isomorphic under a conjugation by a fixed matrix (a Cayley-type map), and the same map holds at the level of Lie algebras due to the properties of the exponential map. A basis for $\mathfrak{su}(1,1)$ is given (see classical group) in terms of standard matrices; computing the brackets, the map preserves them, and there are thus Lie algebra isomorphisms between the real span of $\{d_0, d_{-1}, d_1\}$, $\mathfrak{su}(1,1)$ and $\mathfrak{sl}(2,\mathbb R)$. The same holds for any subalgebra spanned by $\{d_0, d_{-n}, d_n\}$; this follows from a simple rescaling of the elements (on either side of the isomorphisms).

Projective representation
If $M$ is a matrix Lie group, then elements $G$ of its Lie algebra $\mathfrak m$ can be given by
$$G = \frac{d}{dt}g(t)\Big|_{t=0},$$
where $g$ is a differentiable path in $M$ that goes through the identity element at $t = 0$. Commutators of elements of the Lie algebra can be computed using two paths $g_1, g_2$ as
$$[G_1, G_2] = \frac{\partial^2}{\partial s\,\partial t}\,g_1(s)\,g_2(t)\,g_1(s)^{-1}\,g_2(t)^{-1}\Big|_{s = t = 0}.$$
Likewise, given a group representation $\Pi$, its Lie algebra $\pi(\mathfrak m)$ is computed by
$$\pi(G) = \frac{d}{dt}\Pi\big(g(t)\big)\Big|_{t=0},$$
where $g(0) = e$ and $g'(0) = G$. Then there is a Lie algebra isomorphism between $\mathfrak m$ and $\pi(\mathfrak m)$ sending bases to bases, so that $\pi$ is a faithful representation of $\mathfrak m$.

If however $\Pi$ is an admissible set of representatives of a projective unitary representation, i.e. a unitary representation up to a phase factor, then the Lie algebra, as computed from the group representation, is not isomorphic to $\mathfrak m$.
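The Witt relations can be confirmed in the realization $d_n = -t^{n+1}\,d/dt$ acting on Laurent monomials, where $d_n\cdot t^k = -k\,t^{n+k}$. A short check of ours:

```python
# Sketch (our own check): realize d_n = -t^(n+1) d/dt on monomials t^k,
# so d_n . t^k = -k t^(n+k), and verify [d_l, d_m] = (l - m) d_{l+m}.
def d(n, poly):                       # poly: dict {power: coeff}
    out = {}
    for k, c in poly.items():
        if k:
            out[n + k] = out.get(n + k, 0) - k * c
    return {p: c for p, c in out.items() if c}

def sub(p, q):
    out = dict(p)
    for k, c in q.items():
        out[k] = out.get(k, 0) - c
    return {k: c for k, c in out.items() if c}

for l in range(-3, 4):
    for m in range(-3, 4):
        for k in range(-5, 6):
            tk = {k: 1}
            comm = sub(d(l, d(m, tk)), d(m, d(l, tk)))
            expect = {p: (l - m) * c
                      for p, c in d(l + m, tk).items() if (l - m) * c}
            assert comm == expect
print("Witt relations [d_l, d_m] = (l-m) d_{l+m} verified on monomials")
```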
For a projective representation $\Pi$, the multiplication rule reads
$$\Pi(g_1)\,\Pi(g_2) = \omega(g_1, g_2)\,\Pi(g_1g_2).$$
The function $\omega$, often required to be smooth, satisfies
$$\omega(g_1, g_2)\,\omega(g_1g_2, g_3) = \omega(g_2, g_3)\,\omega(g_1, g_2g_3),\qquad \omega(e, g) = \omega(g, e) = 1.$$
It is called a 2-cocycle on $G$. From the above equalities, $\Pi(e)\Pi(e) = \omega(e,e)\Pi(e)$, so one has $\omega(e, e) = 1$ because both $\Pi(e)$ and $\omega(e,e)\Pi(e)$ evaluate to the same operator at the identity. For an explanation of the phase factors $\omega$, see Wigner's theorem. The commutation relations
$$[G_i, G_j] = {C_{ij}}^kG_k$$
in $\mathfrak m$ for a basis become in $\pi(\mathfrak m)$
$$[\pi(G_i), \pi(G_j)] = {C_{ij}}^k\pi(G_k) + D_{ij}\,\mathrm{Id},$$
so in order for $\pi(\mathfrak m)$ to be closed under the bracket (and hence have a chance of actually being a Lie algebra) a central charge must be included.

Relativistic classical string theory
A classical relativistic string traces out a world sheet in spacetime, just like a point particle traces out a world line. This world sheet can locally be parametrized using two parameters $\tau$ and $\sigma$. Points $x^\mu$ in spacetime can, in the range of the parametrization, be written $x^\mu = X^\mu(\tau, \sigma)$. One uses a capital $X$ to denote points in spacetime actually being on the world sheet of the string. Thus the string parametrization is given by $(\tau, \sigma) \mapsto X^\mu(\tau, \sigma)$. The inverse of the parametrization provides a local coordinate system on the world sheet in the sense of manifolds.

The equations of motion of a classical relativistic string, derived in the Lagrangian formalism from the Nambu–Goto action, are
$$\frac{\partial\mathcal P^\tau_\mu}{\partial\tau} + \frac{\partial\mathcal P^\sigma_\mu}{\partial\sigma} = 0,$$
where $\mathcal P^\tau_\mu$ and $\mathcal P^\sigma_\mu$ are the momentum densities obtained by differentiating the Nambu–Goto Lagrangian density with respect to $\dot X^\mu$ and $X'^\mu$. A dot over a quantity denotes differentiation with respect to $\tau$ and a prime differentiation with respect to $\sigma$. A dot between quantities denotes the relativistic inner product. These rather formidable equations simplify considerably with a clever choice of parametrization called the light cone gauge. In this gauge, the equations of motion become the ordinary wave equation
$$\ddot X^\mu - X''^\mu = 0.$$
The price to be paid is that the light cone gauge imposes constraints, so that one cannot simply take arbitrary solutions of the wave equation to represent the strings. The strings considered here are open strings, i.e. they don't close up on themselves. This means that the Neumann boundary conditions
$$X'^\mu(\tau, 0) = X'^\mu(\tau, \pi) = 0$$
have to be imposed on the endpoints. With this, the general solution of the wave equation (excluding constraints) is given by
$$X^\mu(\tau, \sigma) = x_0^\mu + 2\alpha' p^\mu\tau + i\sqrt{2\alpha'}\sum_{n \neq 0}\frac{1}{n}\,\alpha_n^\mu\,e^{-in\tau}\cos n\sigma,$$
where $\alpha'$ is the slope parameter of the string (related to the string tension). The quantities $x_0^\mu$ and $p^\mu$ are (roughly) string position from the initial condition and string momentum. If all the $\alpha_n^\mu$ are zero, the solution represents the motion of a classical point particle.

This is rewritten, first defining
$$\alpha_0^\mu = \sqrt{2\alpha'}\,p^\mu,$$
and then writing
$$X^\mu(\tau, \sigma) = x_0^\mu + \sqrt{2\alpha'}\,\alpha_0^\mu\tau + i\sqrt{2\alpha'}\sum_{n \neq 0}\frac{1}{n}\,\alpha_n^\mu\,e^{-in\tau}\cos n\sigma.$$
In order to satisfy the constraints, one passes to light cone coordinates. For $I = 2, 3, \ldots, d$, where $d$ is the number of space dimensions, set
$$X^+ = \tfrac{1}{\sqrt 2}(X^0 + X^1),\qquad X^- = \tfrac{1}{\sqrt 2}(X^0 - X^1),\qquad X^I,\ I = 2, 3, \ldots, d.$$
Not all $\alpha_n^\mu$ are independent. Some are zero (hence missing in the equations above), and the "minus coefficients" satisfy
$$\sqrt{2\alpha'}\,\alpha_n^- = \frac{1}{p^+}L_n^\perp,\qquad L_n^\perp = \frac{1}{2}\sum_{p \in \mathbb Z}\alpha^I_{n-p}\alpha^I_p.$$
The quantity $L_n^\perp$ is given a name, the transverse Virasoro mode. When the theory is quantized, the alphas, and hence the $L_n^\perp$, become operators.

See also
Group cohomology
Group contraction (Inönü–Wigner contraction)
Group extension
Lie algebra cohomology

Remarks Notes References Books Journals (English translation; this can be found in Kac–Moody and Virasoro Algebras: A Reprint Volume for Physicists, open access) Web Lie groups Quantum field theory Lie algebras Mathematical physics Conformal field theory String theory
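That each mode term in the expansion solves the light-cone-gauge wave equation with Neumann ends is a two-line computation. The sympy sketch below is ours, with all constants suppressed, and confirms it for a generic integer mode $n$:

```python
# Sketch (our own check, single mode n, hypothetical constants set to 1):
# the term e^{-i n tau} cos(n sigma) in the open-string expansion solves
# the wave equation and obeys Neumann conditions at sigma = 0 and pi.
import sympy as sp

tau, sigma = sp.symbols('tau sigma', real=True)
n = sp.symbols('n', integer=True, nonzero=True)

X = sp.exp(-sp.I * n * tau) * sp.cos(n * sigma)

wave = sp.diff(X, tau, 2) - sp.diff(X, sigma, 2)
assert sp.simplify(wave) == 0                     # wave equation

Xprime = sp.diff(X, sigma)
assert sp.simplify(Xprime.subs(sigma, 0)) == 0    # Neumann at sigma = 0
assert sp.simplify(Xprime.subs(sigma, sp.pi)) == 0
print("mode solves the wave equation with Neumann ends")
```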
Lie algebra extension
[ "Physics", "Astronomy", "Mathematics" ]
8,731
[ "Quantum field theory", "Astronomical hypotheses", "Mathematical structures", "Lie groups", "Applied mathematics", "Theoretical physics", "Quantum mechanics", "Algebraic structures", "String theory", "Mathematical physics" ]
45,604,402
https://en.wikipedia.org/wiki/Woolrich%20Electrical%20Generator
The Woolrich Electrical Generator, now in Thinktank, Birmingham Science Museum, England, is the earliest electrical generator used in an industrial process. Built in February 1844 at the Magneto Works of Thomas Prime and Son, Birmingham, to a design by John Stephen Woolrich (1820–1850), it was used by the firm of Elkingtons for commercial electroplating. Plaque The generator stood for some time in the chapel of Aston Hall, accompanied by an inscribed plaque. Construction The generator in its surviving form consists of eight axial bobbins with a magnetic field applied by four iron horseshoe magnets, mounted in a rectangular wooden frame. The generator was fitted with a commutator, as electroplating requires direct current. John Stephen Woolrich The generator's designer, John Stephen Woolrich, was born in Lichfield, England in late 1820. The second son of John Woolrich (c.1791–1843) and his wife Mary Woolrich (formerly Egginton), he was baptised at St Mary's Church, Lichfield on 6 November 1820. In August 1842 he was granted patent number 9431 for the use of a magneto-electrical machine (instead of batteries) in electroplating, and the use of gold sulphite and silver sulphite as electrolytes. He offered to sell the rights to Elkingtons for the enormous sum of £15,000; they declined, and after some heated correspondence eventually, in May 1845, agreed to pay Woolrich £100 initially and then £400 annually for the rest of the term of the patent. Woolrich later relicensed the patent himself to use in his own Magneto-Plating and Gilding Works in Great Charles Street, Birmingham, and in 1849 was listed as a "chemist & magneto-plater & gilder", residing at 12 James Street, just off St Paul's Square in the Jewellery Quarter. He died at the age of 29 in early 1850, and was buried at St Bartholomew's Church, Edgbaston on 4 March 1850. The elder John Woolrich is listed in the United Kingdom Census 1841 as a "Chemist", and at the time of his death on 20 April 1843 was a lecturer in chemistry at the Royal School of Medicine and Surgery in Birmingham. He had a particular interest in electrochemistry, and in February 1819 wrote a letter entitled On Galvanic Shocks to the Annals of Philosophy, pointing out an error in the editor Thomas Thomson's book System of Chemistry. He was granted a number of patents for chemical processes, including one in 1836 for an improved method of producing "carbonate of baryta" (barium carbonate) and another in 1839 for producing "carbonate of lead, commonly called white lead". See also Dynamo Electromagnetic induction Faraday's law of induction References Electrical generators Collection of Thinktank, Birmingham 1844 in England 1844 in science
Woolrich Electrical Generator
[ "Physics", "Technology" ]
598
[ "Physical systems", "Electrical generators", "Machines" ]
52,429,771
https://en.wikipedia.org/wiki/WAVE%20regulatory%20complex
The WAVE regulatory complex (WRC, SCAR complex) is a five-subunit protein complex in the Wiskott-Aldrich syndrome protein (WASP) family involved in the formation of the actin cytoskeleton through interaction with the Arp2/3 complex. The holocomplex comprises WAVE1 (also known as WASF1), CYFIP1, ABI2, Nap1 and HSPC300 in its canonical form, or orthologues of these. Composition The proteins within the WRC form a CYFIP1-Nap1 heterodimer and a WAVE1-Abi2-HSPC300 heterotrimer, and following interaction with Rac1, the holocomplex has been observed in a CYFIP1-Nap1-Abi2 heterotrimer subcomplex and an active WAVE1-HSPC300 heterodimer subcomplex. Function WRC recruitment to the sites of actin nucleation events at the cell periphery is mediated by the binding of a number of ligands containing a conserved WRC interacting receptor sequence (WIRS), which binds to a conserved location shared across the surfaces of Abi2 and CYFIP1. The WRC is activated by interaction with the Rac1 (via the CYFIP1 component of the complex) and Arf small GTPases (such as ARF1, ARF5, and ARF6) or the similar protein ARL1, which causes dissociation of the CYFIP1-Nap1-Abi2 heterotrimer at the membrane periphery. This allows the V domain of the WAVE1 component to interact with the actin monomers while its CA domain interacts with the Arp2/3 complex, allowing the Arp2/3 complex to act as a nucleation core for the branching and extension of actin filaments. References Protein articles without symbol Proteins
WAVE regulatory complex
[ "Chemistry" ]
406
[ "Biomolecules by chemical classification", "Protein stubs", "Biochemistry stubs", "Molecular biology", "Proteins" ]
52,431,039
https://en.wikipedia.org/wiki/List%20of%20signalling%20pathways
In cell biology, there are a multitude of signalling pathways. Cell signalling is part of the molecular biology system that controls and coordinates the actions of cells. Akt/PKB signalling pathway AMPK signalling pathway cAMP-dependent pathway Eph/ephrin signalling pathway Hedgehog signalling pathway Hippo signalling pathway Insulin signal transduction pathway JAK-STAT signalling pathway MAPK/ERK signalling pathway mTOR signalling pathway Nodal signalling pathway Notch signalling pathway PI3K/AKT/mTOR signalling pathway TGF beta signalling pathway TLR signalling pathway VEGF signalling pathway Wnt signalling pathway References Cell signaling Signalling pathways
List of signalling pathways
[ "Chemistry" ]
126
[ "Molecular-biology-related lists", "Molecular biology" ]
52,432,122
https://en.wikipedia.org/wiki/Allam%20power%20cycle
The Allam Cycle or Allam-Fetvedt Cycle is a process for converting carbonaceous fuels into thermal energy, while capturing the generated carbon dioxide and water. The inventors are English engineer Rodney John Allam, American engineer Jeremy Eron Fetvedt, American scientist Dr. Miles R Palmer, and American businessperson and innovator G. William Brown, Jr. The Allam-Fetvedt Cycle was recognized by MIT Technology Review on the 2018 list of 10 Breakthrough Technologies. This cycle was validated at a 50 MWth natural gas fed test facility in La Porte, Texas in May 2018. Description The Allam-Fetvedt Cycle is a recuperated, high-pressure Brayton cycle employing a transcritical CO2 working fluid with an oxy-fuel combustion regime. This cycle begins by burning a gaseous fuel with oxygen and a hot, high-pressure, recycled supercritical CO2 working fluid in a combustor. The recycled CO2 stream serves the dual purpose of lowering the combustion flame temperature to a manageable level and diluting the combustion products such that the cycle working fluid is predominantly CO2. The pressure in the combustor can be as high as approximately 30 MPa. The combustion feedstock consists of approximately 95% recycled CO2 by mass. The combustor provides high-pressure exhaust that can be supplied to a turbine expander operating at a pressure ratio between 6 and 12. The expander discharge leaves as a subcritical mixture of predominantly CO2 commingled with combustion-derived water. This fluid enters an economizer heat exchanger, which cools the expander discharge to below 65 °C against the stream of CO2 that is recycled to the combustor. Upon exiting the economizer heat exchanger, the expander exhaust is further cooled to near ambient temperature by a central cooling system, enabling liquid water to be removed from the working fluid and recycled for beneficial use. The remaining working fluid of nearly pure CO2 then enters a compression and pumping stage. The compression system consists of a conventional inter-cooled centrifugal compressor with an inlet pressure below the critical pressure. The working fluid is compressed and then cooled to near ambient temperature in the compressor after-cooler. At this point, the combination of compressing and cooling the working fluid permits it to achieve a density in excess of 500 kg/m3. In this condition, the CO2 stream can be pumped to the high combustion pressure required using a multi-stage centrifugal pump.  Finally, the high-pressure working fluid is sent back through the economizer heat exchanger to be reheated and returned to the combustor. The net CO2 product derived from the addition of fuel and oxygen in the combustor is removed from the high-pressure stream; at this point, the CO2 product is at high pressure and high purity, ready for sequestration or utilization without requiring further compression. In order for the system to achieve high thermal efficiency, a close temperature approach is needed on the high-temperature side of the primary heat exchanger. Due to the cooling process employed at the compression and pumping stage, a large energy imbalance would typically exist in the cycle between the cooling expander exhaust flow and the reheating recycle flow. The Allam-Fetvedt Cycle corrects this imbalance through the incorporation of low-grade heat at the low-temperature end of the recuperative heat exchanger. Due to the low temperatures at the cool end of the cycle, this low-grade heat only needs to be in the range of 100 °C to 400 °C. 
A convenient source of this heat is the Air Separation Unit (ASU) required for the oxy-fuel combustion regime. When burning natural gas as a fuel, this basic configuration has been modeled to achieve an efficiency up to 60% (LHV) as a power cycle net of all parasitic loads, including the energy-intensive ASU. Despite its novelty, the components required by this cycle are commercially available, with the exception of the combustion turbine package. The turbine relies on proven technologies and approaches used by existing gas and steam turbine design tools. Applications Construction began in March 2016 in La Porte, Texas on a 50 MWth industrial test facility to showcase the Allam-Fetvedt Cycle, finishing in 2017. In 2018, the Allam-Fetvedt Cycle and supporting technologies were validated, allowing OEMs to certify components for use with future production plants. On November 15, 2021, at approximately 7:40 pm EST, the test facility successfully synchronized to the ERCOT grid, proving that the Allam-Fetvedt Cycle was capable of generating power at 60 Hz. This test facility is owned and operated by NET Power, which is owned by Constellation Energy Corporation, Occidental Petroleum (Oxy) Low Carbon Ventures, Baker Hughes and 8 Rivers Capital (the inventor of the technology). NET Power was awarded the 2018 International Excellence in Energy Breakthrough Technological Project of the Year at the Abu Dhabi International Petroleum Exhibition and Conference (ADIPEC). Patent history See also Oxy-fuel combustion process References External links Process diagram for natural gas Process diagram for coal Mass flow diagram Pressure and specific enthalpy diagram Energy conversion Thermodynamic cycles English inventions Power station technology Carbon capture and storage
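As a rough illustration of why the cycle's pump-based pressurization is attractive, one can estimate the ideal specific pump work w = dp / rho from the figures quoted above. The numbers in the snippet are assumptions for illustration only (inlet pressure, outlet pressure and density are not specified exactly in the text), and the function name is ours.

```python
# Back-of-envelope sketch (our own illustration; numbers are assumptions
# loosely based on the text: ~30 MPa combustor pressure, CO2 pumped at a
# density above 500 kg/m3 after compression and cooling). Ideal specific
# pump work w = dp / rho shows why pumping a dense fluid is cheap compared
# with compressing a gas through the same pressure rise.
def pump_work_kj_per_kg(p_in_mpa, p_out_mpa, rho_kg_m3):
    """Ideal (incompressible) pump work in kJ/kg."""
    return (p_out_mpa - p_in_mpa) * 1e3 / rho_kg_m3   # MPa * 1e3 = kPa

# Hypothetical state points: pump inlet ~8 MPa (after the inter-cooled
# compressor and after-cooler), outlet ~30 MPa, density ~700 kg/m3.
w_pump = pump_work_kj_per_kg(8.0, 30.0, 700.0)
print(f"ideal pump work ~ {w_pump:.0f} kJ/kg of recycled CO2")  # ~31 kJ/kg
```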
Allam power cycle
[ "Engineering" ]
1,074
[ "Geoengineering", "Carbon capture and storage" ]
52,432,145
https://en.wikipedia.org/wiki/Rodney%20John%20Allam
Rodney John Allam, MBE (born 1940 in St Helens, Lancashire) is an English chemical engineer and fellow of the Institution of Chemical Engineers who is credited with inventions related to power generation, notably the Allam power cycle, which is a generation process for fossil fuels, with integrated carbon dioxide capture. Career Allam was employed by Air Products & Chemicals for 44 years, most recently as Director of Technology Development. In 2004, he was appointed member of the Order of the British Empire for services to the environment. He has also been a visiting professor at the Imperial College of Science and Technology and a lead author of the IPCC special report on carbon dioxide capture and storage, released in 2005. In 2007, the IPCC, along with Al Gore, was awarded the Nobel Peace Prize. His work has included new processes and equipment for production of gases and cryogenic liquids, such as oxygen, nitrogen, argon, carbon monoxide, carbon dioxide, hydrogen and helium. Several of these gases are generally produced through air separation, which is also a necessary step in the practical application of the Allam cycle, in which gaseous fossil fuels, for example natural gas or gasified coal, are combusted with pure oxygen. A 50 MW demonstration plant being built in Texas is expected to start operating in 2017. In 2012, Allam was awarded the Global Energy Prize, for his work on processes and power generation, along with Russian scientists Valery Kostuk and Boris Katorgin. He subsequently served as chairman of the international award committee for the prize. Allam later worked for 8 Rivers Capital on, among other things, the commercialisation of the Allam cycle. See also Allam power cycle References External links Description of the Allam power cycle 1940 births Living people British chemical engineers 21st-century English engineers 20th-century British inventors Intergovernmental Panel on Climate Change lead authors Members of the Order of the British Empire Environmental engineers People from St Helens, Merseyside
Rodney John Allam
[ "Chemistry", "Engineering" ]
398
[ "Environmental engineers", "Environmental engineering" ]
34,111,023
https://en.wikipedia.org/wiki/Dynamical%20reduction
A dynamical reduction theory (DRT) is an extension of quantum mechanics (QM) that attempts to account for the collapse of the wave function. It is needed because standard QM does not account for the specific measured values of observable quantities, the definite events of the familiar Newtonian or classical realm, that we record in QM experiments. The reason QM does not account for measurements is that the time evolution of the quantum state of a system is governed by the Schrödinger equation, which is linear and deterministic: it evolves superpositions into superpositions and never singles out one outcome. Even if we include the quantum state of the measuring devices, and even if we include the quantum state of the surrounding universe, this gives no information about actual measurements, each of which always appears to choose a particular possible value. An example of a DRT is continuous spontaneous localization (CSL). See also Copenhagen interpretation Objective-collapse theory References Quantum mechanics
Dynamical reduction
[ "Physics" ]
185
[ "Theoretical physics", "Quantum mechanics", "Quantum physics stubs" ]