id (int64) | url (string) | text (string) | source (string) | categories (list) | token_count (int64) | subcategories (list) |
|---|---|---|---|---|---|---|
34,152,659 | https://en.wikipedia.org/wiki/IEEE%20Transactions%20on%20Terahertz%20Science%20and%20Technology | IEEE Transactions on Terahertz Science and Technology is a bimonthly peer-reviewed scientific journal published by IEEE. Sponsored by the IEEE Microwave Theory and Technology Society, it covers terahertz science, technology, instruments, and applications, with a focus on frequencies between 300 GHz and 10 THz. Its editor-in-chief is Nuria Llombart (Delft University of Technology).
According to the Journal Citation Reports, the journal has a 2022 impact factor of 3.2.
References
External links
Transactions on Terahertz Science and Technology
Electrical and electronic engineering journals
Electromagnetism journals
Bimonthly journals
Academic journals established in 2011
English-language journals
Terahertz technology | IEEE Transactions on Terahertz Science and Technology | [
"Physics",
"Engineering"
] | 150 | [
"Spectrum (physical sciences)",
"Terahertz technology",
"Electromagnetic spectrum",
"Electronic engineering",
"Electrical engineering",
"Electrical and electronic engineering journals"
] |
40,903,552 | https://en.wikipedia.org/wiki/Mathisson%E2%80%93Papapetrou%E2%80%93Dixon%20equations | In physics, specifically general relativity, the Mathisson–Papapetrou–Dixon equations describe the motion of a massive spinning body moving in a gravitational field. Other equations with similar names and mathematical forms are the Mathisson–Papapetrou equations and Papapetrou–Dixon equations. All three sets of equations describe the same physics.
These equations are named after Myron Mathisson, William Graham Dixon, and Achilles Papapetrou, who worked on them.
Throughout, this article uses the natural units c = G = 1, and tensor index notation.
Mathisson–Papapetrou–Dixon equations
The Mathisson–Papapetrou–Dixon (MPD) equations for a massive spinning body are

$$\frac{Dp^{\mu}}{ds} = -\tfrac{1}{2} R^{\mu}{}_{\nu\alpha\beta}\, u^{\nu} S^{\alpha\beta}, \qquad \frac{DS^{\mu\nu}}{ds} = p^{\mu} u^{\nu} - p^{\nu} u^{\mu}.$$

Here $s$ is the proper time along the trajectory, $p^{\mu}$ is the body's four-momentum

$$p^{\mu} = \int_{x^0=\text{const}} T^{\mu 0}\, d^3x,$$

the vector $u^{\mu}$ is the four-velocity of some reference point $X^{\mu}$ in the body, and the skew-symmetric tensor $S^{\mu\nu}$ is the angular momentum

$$S^{\mu\nu} = \int_{x^0=\text{const}} \left[(x^{\mu}-X^{\mu})\,T^{\nu 0} - (x^{\nu}-X^{\nu})\,T^{\mu 0}\right] d^3x$$

of the body about this point. In the time-slice integrals we are assuming that the body is compact enough that we can use flat coordinates within the body where the energy-momentum tensor $T^{\mu\nu}$ is non-zero.
As they stand, there are only ten equations to determine thirteen quantities. These quantities are the six components of $S^{\mu\nu}$, the four components of $p^{\mu}$ and the three independent components of $u^{\mu}$ (the four-velocity is unit-normalised, so only three of its components are independent). The equations must therefore be supplemented by three additional constraints which serve to determine which point in the body has velocity $u^{\mu}$. Mathisson and Pirani originally chose to impose the condition $S^{\mu\nu} u_{\nu} = 0$ which, although involving four components, contains only three constraints because $S^{\mu\nu} u_{\nu} u_{\mu}$ is identically zero. This condition, however, does not lead to a unique solution and can give rise to the mysterious "helical motions". The Tulczyjew–Dixon condition $S^{\mu\nu} p_{\nu} = 0$ does lead to a unique solution as it selects the reference point to be the body's center of mass in the frame in which its momentum is $(m, 0, 0, 0)$.
Accepting the Tulczyjew–Dixon condition $S^{\mu\nu} p_{\nu} = 0$, we can manipulate the second of the MPD equations into the form

$$\frac{DS^{\mu\nu}}{ds} = \frac{1}{p^{2}}\left(p^{\mu} S^{\nu\lambda}\frac{Dp_{\lambda}}{ds} - p^{\nu} S^{\mu\lambda}\frac{Dp_{\lambda}}{ds}\right), \qquad p^{2} = p_{\lambda}p^{\lambda}.$$
This is a form of Fermi–Walker transport of the spin tensor along the trajectory – but one preserving orthogonality to the momentum vector $p^{\mu}$ rather than to the tangent vector $u^{\mu}$. Dixon calls this M-transport.
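As a quick consistency check, the sketch below (my own construction, not from the cited references) verifies numerically, using flat Minkowski index gymnastics at a single point, that the M-transport form above preserves the Tulczyjew–Dixon condition $S^{\mu\nu} p_{\nu} = 0$; all numerical values are arbitrary assumptions.

```python
import numpy as np

# Minkowski metric with signature (-, +, +, +); natural units c = G = 1.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

p = np.array([2.0, 0.3, -0.1, 0.4])        # assumed four-momentum p^mu (timelike)
p_low = eta @ p                             # p_mu
p2 = p @ p_low                              # p^mu p_mu (nonzero)

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
S = A - A.T                                 # generic antisymmetric tensor S^{mu nu}

# Project S so that it satisfies the Tulczyjew-Dixon condition S^{mu nu} p_nu = 0.
P = np.eye(4) - np.outer(p, p_low) / p2     # projector orthogonal to p
S = P @ S @ P.T

f = rng.normal(size=4)                      # f^mu = Dp^mu/ds, an arbitrary "force"
f_low = eta @ f

# M-transport: DS^{mu nu}/ds = (p^mu S^{nu lam} f_lam - p^nu S^{mu lam} f_lam) / p^2
Sf = S @ f_low                              # S^{mu lam} f_lam
dS = (np.outer(p, Sf) - np.outer(Sf, p)) / p2

# d/ds (S^{mu nu} p_nu) = (DS/ds)^{mu nu} p_nu + S^{mu nu} Dp_nu/ds should vanish
residual = dS @ p_low + S @ f_low
print(np.allclose(S @ p_low, 0), np.allclose(residual, 0))   # True True
```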
See also
Introduction to the mathematics of general relativity
Geodesic equation
Pauli–Lubanski pseudovector
Test particle
Relativistic angular momentum
Center of mass (relativistic)
References
Notes
Selected papers
Equations
General relativity | Mathisson–Papapetrou–Dixon equations | [
"Physics",
"Mathematics"
] | 486 | [
"General relativity",
"Mathematical objects",
"Equations",
"Theory of relativity"
] |
32,536,445 | https://en.wikipedia.org/wiki/Melbourne%20Bioinformatics | Melbourne Bioinformatics (formerly the Victorian Life Sciences Computation Initiative, VLSCI) is a centre for computational life science expertise. It provides bioinformatics support for all researchers and students in a wide range of projects and services of local and national significance. Researchers can engage with Melbourne Bioinformatics through training and consulting with experts which can lead to project collaborations with academic staff within the University of Melbourne.
History
The VLSCI was established as part of the Victorian government's plans to support biotechnology, and was listed as a key infrastructure project in the Victorian Biotechnology Action Plan 2011. It was a $100m initiative of the Victorian Government in partnership with The University of Melbourne and the IBM Life Sciences Research Collaboratory, Melbourne. Other major stakeholders included key Victorian health and medical research institutions, major universities and public research organisations.
In 2015 the VLSCI transitioned to a new governance model after receiving a further $6.65m from the Victorian State Government. The first IBM Research Collaboratory for Life Sciences was co-located at VLSCI for 5 years. Since moving to the nearby offices of IBM Research Australia, Collaboratory staff have continued to work on a range of projects as part of a broader University of Melbourne/IBM partnership under the new name of Melbourne Bioinformatics.
Melbourne Bioinformatics's petascale high performance computation facility was accessible to researchers in Victoria by operating across various universities and research institutes in Melbourne. The clusters were in the top 500 worldwide in 2017. Technical experts were on staff to maximise the user experience, meet the skills gaps in research teams, build the necessary cross-disciplinary research collaborations, and provide skills to scale up projects to efficiently use the processing power being delivered. Ongoing skills development and training was provided in computational biology, molecular modelling and bioinformatics.
The Computers
At its peak, the VLSCI's Peak Computing Facility operated at 855 teraflops. The systems included 'Barcoo', an IBM iDataPlex x86, 'Merri', an IBM iDataPlex x86 and 'Avoca', comprising 4 racks of IBM Blue Gene/Q.
Peak Computing Facility
The VLSCI Peak Computing Facility (PCF) provided high-performance compute infrastructure and computational expertise to Life Sciences researchers across Victoria. The PCF had tightly coupled clusters with very fast disk subsystems, operating at a peak capacity of 855 teraflops. To help researchers maximize their use of compute time and get the most out of their allocated resources, the PCF had a team of system administrators, programmers and application specialists accessible through its help request system.
Life Sciences Computation Centre
The VLSCI Life Science Computation Centre (LSCC) was physically housed at the University of Melbourne and La Trobe University, and with many staff working at various research institutes including The Peter Doherty Institute for Infection and Immunity (Doherty Institute) for some portion of their week. The LSCC was a distributed pool of expertise and infrastructure for computational life science research, servicing life science research institutions across Victoria. It aimed to foster research collaboration and support to a relatively small number of specific external projects; act as a source of common resources, software platforms and expertise to support life science researchers; offer research training, education, and career development for bioinformaticians and computational biologists, to support the advancement of the Victorian computational life sciences research community; and support the advancement of life science computation as a whole in Victoria.
Galaxy Australia
Melbourne Bioinformatics, together with QFAB Bioinformatics, QCIF and the University of Queensland’s Research Computing Centre, jointly built and operates Galaxy Australia, which is a major feature of the Genomics Virtual Laboratory.
See also
National Computational Infrastructure
References
Bioinformatics organizations
University of Melbourne
Science and technology in Melbourne | Melbourne Bioinformatics | [
"Biology"
] | 772 | [
"Bioinformatics",
"Bioinformatics organizations"
] |
32,537,366 | https://en.wikipedia.org/wiki/Integrator%20workflow | Integrator workflow, also known as Integration Manager Workflow, is a method to handle source code contributions in work environments using distributed version control.
Scenario
Frequently, in a distributed team, each developer has write access to their own public repository and read access to everyone else’s. There is also a dedicated repository, the blessed repository, which contains the "reference" version of the project source code. To contribute, developers create their own public clone of the project and push their changes to it. They then request one or more maintainers of the blessed repository to pull in their changes, as sketched below.
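A rough sketch of the contributor-side steps, driven from Python via subprocess; the repository URLs, remote name, and branch are hypothetical placeholders, and git is assumed to be installed with push access to the personal fork.

```python
import subprocess

def git(*args, cwd=None):
    """Run a git command and fail loudly if it errors."""
    subprocess.run(["git", *args], cwd=cwd, check=True)

BLESSED = "https://example.org/project/blessed.git"      # read-only reference repo
MY_FORK = "git@example.org:alice/project.git"            # contributor's public clone

git("clone", BLESSED, "project")                         # start from the blessed repo
git("remote", "add", "myfork", MY_FORK, cwd="project")   # add the personal public repo
git("checkout", "-b", "feature-x", cwd="project")        # work on a topic branch
# ... edit files, then record and publish the work to the personal repository ...
git("commit", "-am", "Implement feature X", cwd="project")
git("push", "myfork", "feature-x", cwd="project")
# Finally, ask an integrator (out of band, or via a pull request on the hosting
# platform) to pull feature-x from myfork into the blessed repository.
```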
Implementations
GitHub
Bitbucket
CodePlex
References
Distributed version control systems | Integrator workflow | [
"Technology",
"Engineering"
] | 139 | [
"Computing stubs",
"Computer engineering stubs",
"Computer engineering"
] |
32,537,722 | https://en.wikipedia.org/wiki/Steam%20mill | A steam mill is a type of grinding mill using a stationary steam engine to power its mechanism.
And did those feet in ancient time, a poem associated with the Albion Flour Mills, the first steam mill in London, from around 1790
Aurora Steam Grist Mill, a historic grist mill located in Aurora, Cayuga County, New York, United States
Cincinnati Steam Paper Mill, the first steam-powered mill in Cincinnati, Ohio, United States
Sutherland Steam Mill Museum, a restored steam woodworking mill from the 1890s located in Denmark, Nova Scotia, Canada
References
External links
Grinding mills
Steam power | Steam mill | [
"Physics",
"Engineering"
] | 113 | [
"Physical quantities",
"Steam power",
"Power (physics)",
"Architecture stubs",
"Architecture"
] |
32,538,447 | https://en.wikipedia.org/wiki/Pyrrho%27s%20lemma | In statistics, Pyrrho's lemma is the result that if one adds just one extra variable as a regressor from a suitable set to a linear regression model, one can get any desired outcome in terms of the coefficients (signs and sizes), as well as predictions, the R-squared, the t-statistics, prediction- and confidence-intervals. The argument for the coefficients was advanced by Herman Wold and Lars Juréen but named, extended to include the other statistics and explained more fully by Theo Dijkstra. Dijkstra named it after the sceptic philosopher Pyrrho and concludes his article by noting that this lemma provides "some ground for a wide-spread scepticism concerning products of extensive datamining". One can only prove that a model 'works' by testing it on data different from the data that gave it birth.
The result has been discussed in the context of econometrics.
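A toy numerical illustration of the lemma's flavor (my own construction, not taken from Dijkstra's paper): choosing the single extra regressor as the residual of the data from any desired coefficient vector forces ordinary least squares to report exactly those coefficients, with a perfect fit.

```python
import numpy as np

rng = np.random.default_rng(42)
n, k = 100, 3
X = rng.normal(size=(n, k))                                 # original regressors
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=n)     # "true" data

beta_desired = np.array([10.0, 0.0, -7.0])   # any coefficients we would like to see
z = y - X @ beta_desired                      # the single extra regressor

Xz = np.column_stack([X, z])
beta_hat, *_ = np.linalg.lstsq(Xz, y, rcond=None)
resid = y - Xz @ beta_hat

print(np.round(beta_hat, 6))   # ~[10, 0, -7, 1]: the desired coefficients appear
print(np.allclose(resid, 0))   # True: perfect fit, so R-squared = 1
```

The point is not that practitioners do this deliberately, but that an unrestricted search over a rich enough set of candidate regressors can manufacture essentially any apparent result, which is why the article's closing remark about testing on fresh data matters.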
References
Theorems in statistics
Regression analysis
Lemmas | Pyrrho's lemma | [
"Mathematics"
] | 207 | [
"Mathematical theorems",
"Mathematical problems",
"Lemmas",
"Theorems in statistics"
] |
32,538,999 | https://en.wikipedia.org/wiki/Hiroshi%20Yasuda | Hiroshi Yasuda (born 1944) is an Emeritus Professor at the University of Tokyo and works as a Consultant for Nippon Telegraph and Telephone.
In the sphere of international standardization, together with Leonardo Chiariglione he founded the Moving Picture Experts Group which standardized MPEG-1 Audio Layer 3, better known as MP3.
Prof. Hiroshi Yasuda received his B.E., M.E. and Dr.E. degrees from the University of Tokyo, Japan in 1967, 1969, and 1972 respectively. Thereafter, he joined the Electrical Communication Laboratories of NTT in 1972, where he was involved in work on Video Coding, Facsimile Network, Image Processing, Telepresence, B-ISDN Network and Services, Internet and Computer Communication Applications. He worked four years (1988–1992) as the Executive Manager of Visual Media Lab. of NTT Human Interface Labs., served three years (1992–1995) as the Executive Manager of the System Services Department of NTT Business Communications Systems Headquarters, and became vice president and Director of NTT Information and Communication Systems Laboratories at Yokosuka in July 1995. After serving twenty-five years at NTT (1972–1997) he left to work at The University of Tokyo. From 2003 until 2005 he was acting director of The Center for Collaborative Research (CCR) and is now a Professor at Tokyo Denki University. He is a member of the IT Strategic Headquarters (Japan).
Professor Yasuda also served as the International Organization for Standardization's chairperson of ISO/IEC JTC 1/SC 29 (JPEG/MPEG Standardization) from 1991 to 1999.
He has served as a guest editor of the IEEE Journal on Selected Areas in Communications (SAC) several times, such as Vol. 11, No. 1, and served as the Exhibition Chair of the 1996 Multimedia Systems Conference sponsored by the Computer Society. He also served as the president of DAVIC (Digital Audio Video Council) from 1996 to 1998.
He has received numerous awards, including the Takayanagi Award in 1987, the Achievement Award of EICEJ in 1995, The EMMY from The National Academy of Television Arts and Science in 1995, the IEEE fellowship grade in 1998 for contributions to the international standardization activities on video coding technologies and the research and development of visual communications and multimedia communications systems, the Charles Proteus Steinmetz Award from IEEE in 2000, the Takayanagi Award in 2005 and The Medal with Purple Ribbon from The Emperor of Japan in 2009. He is a Life Fellow of IEEE, Fellow of EICEJ and IPSJ, and a member of the Television Institute. He wrote International Standardization of Multimedia Coding in 1991, MPEG/International Standardization of Multimedia Coding in 1994, The Base for the Digital Image Coding in 1995, The Text for Internet in 1996, The Text for MPEG in 2002 and The Text for Content Distribution in 2003.
References
1944 births
Japanese engineers
Japanese inventors
Living people
MPEG
Nippon Telegraph and Telephone
Academic staff of the University of Tokyo | Hiroshi Yasuda | [
"Technology"
] | 604 | [
"Multimedia",
"MPEG"
] |
32,541,936 | https://en.wikipedia.org/wiki/Ligand%20efficiency | Ligand efficiency is a measurement of the binding energy per atom of a ligand to its binding partner, such as a receptor or enzyme.
Ligand efficiency is used in drug discovery research programs to assist in narrowing focus to lead compounds with optimal combinations of physicochemical properties and pharmacological properties.
Mathematically, ligand efficiency (LE) can be defined as the ratio of Gibbs free energy (ΔG) to the number of non-hydrogen atoms of the compound:
LE = -(ΔG)/N
where ΔG = −RTlnKi and N is the number of non-hydrogen atoms. It can be transformed to the equation:
LE = 1.4(−log IC50)/N
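A minimal sketch of the formula above; the example IC50 and heavy-atom count are arbitrary assumptions chosen purely for illustration.

```python
import math

def ligand_efficiency(ic50_molar: float, n_heavy_atoms: int) -> float:
    """LE = 1.4 * (-log10 IC50) / N, roughly kcal/mol per heavy atom near 300 K."""
    p_ic50 = -math.log10(ic50_molar)
    return 1.4 * p_ic50 / n_heavy_atoms

# Hypothetical lead compound: IC50 = 50 nM, 28 non-hydrogen atoms
print(round(ligand_efficiency(50e-9, 28), 2))  # ~0.37 kcal/mol per heavy atom
```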
Other metrics
Some suggest that better metrics for ligand efficiency are the percentage/potency efficiency index (PEI), the binding efficiency index (BEI) and the surface-binding efficiency index (SEI), because they are easier to calculate and take into account the differences between elements in different rows of the periodic table. It is important to note that PEI is a relative measure for comparing compounds tested under the same conditions (e.g. a single-point assay), and values are not comparable across different inhibitor concentrations. Likewise, for BEI and SEI, consistent potency measurements must be used (e.g. always using pKi).
PEI = (% inhibition at a given compound concentration as fraction: 0 – 1.0) / (molecular weight, kDa)
BEI = (pKi, pKd, or pIC50) / (molecular weight, kDa)
SEI = (pKi, pKd, or pIC50) / (PSA/100 Å²)
where pKi, pKd and pIC50 are defined as −log(Ki), −log(Kd), and −log(IC50), respectively, with Ki and IC50 in mol/L.
The authors suggest plotting compounds' SEI and BEI values on a plane and optimizing compounds towards the diagonal, thereby improving both SEI and BEI, which together incorporate potency, molecular weight and PSA.
There are other metrics which can be useful during hit-to-lead optimization: group efficiency (GE), lipophilic efficiency/lipophilic ligand efficiency (LipE/LLE), ligand lipophilicity index (LLEAT), ligand-efficiency-dependent lipophilicity (LELP), fit quality scaled ligand efficiency (LEscale), and size-independent ligand efficiency (SILE).
Group efficiency (GE) is a metric used to estimate the binding efficiency of groups added to a ligand. Unlike ligand efficiency, which evaluates the efficiency of the entire molecule, group efficiency measures the relative change in the Gibbs free energy (ΔΔG) caused by addition or modification of groups, normalized by the change in the number of heavy atoms in those groups (ΔN), using the equation:
GE = -(ΔΔG)/ΔN
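A rough sketch of these metrics in code (my own illustration; the compound values below are assumptions, and pIC50 is used as the potency measure throughout):

```python
def bei(p_activity: float, mol_weight_da: float) -> float:
    """Binding efficiency index: potency per molecular weight in kDa."""
    return p_activity / (mol_weight_da / 1000.0)

def sei(p_activity: float, psa_a2: float) -> float:
    """Surface-binding efficiency index: potency per 100 A^2 of polar surface area."""
    return p_activity / (psa_a2 / 100.0)

def group_efficiency(delta_delta_g: float, delta_heavy_atoms: int) -> float:
    """GE = -ddG / dN for a group added to the ligand."""
    return -delta_delta_g / delta_heavy_atoms

# Hypothetical compound: pIC50 = 7.3, MW = 420 Da, PSA = 85 A^2;
# adding a 4-atom group improved the binding free energy by 1.6 kcal/mol.
print(round(bei(7.3, 420.0), 1))              # ~17.4
print(round(sei(7.3, 85.0), 1))               # ~8.6
print(round(group_efficiency(-1.6, 4), 2))    # 0.40 kcal/mol per added heavy atom
```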
See also
Drug design
Drug discovery hit to lead
References
Drug discovery
Medicinal chemistry | Ligand efficiency | [
"Chemistry",
"Biology"
] | 619 | [
"Life sciences industry",
"Drug discovery",
"Medicinal chemistry",
"nan",
"Biochemistry"
] |
32,544,339 | https://en.wikipedia.org/wiki/Fracking | Hydraulic fracturing is a well stimulation technique involving the fracturing of formations in bedrock by a pressurized liquid. The process involves the high-pressure injection of "fracking fluid" (primarily water, containing sand or other proppants suspended with the aid of thickening agents) into a wellbore to create cracks in the deep rock formations through which natural gas, petroleum, and brine will flow more freely. When the hydraulic pressure is removed from the well, small grains of hydraulic fracturing proppants (either sand or aluminium oxide) hold the fractures open.
Hydraulic fracturing began as an experiment in 1947, and the first commercially successful application followed in 1949. As of 2012, 2.5 million "frac jobs" had been performed worldwide on oil and gas wells, over one million of those within the U.S. Such treatment is generally necessary to achieve adequate flow rates in shale gas, tight gas, tight oil, and coal seam gas wells. Some hydraulic fractures can form naturally in certain veins or dikes. Drilling and hydraulic fracturing have made the United States a major crude oil exporter as of 2019, but leakage of methane, a potent greenhouse gas, has dramatically increased. Increased oil and gas production from the decade-long fracking boom has led to lower prices for consumers, with near-record lows of the share of household income going to energy expenditures.
Hydraulic fracturing is highly controversial. Its proponents highlight the economic benefits of more extensively accessible hydrocarbons (such as petroleum and natural gas), the benefits of replacing coal with natural gas, which burns more cleanly and emits less carbon dioxide (CO2), and the benefits of energy independence. Opponents of fracking argue that these are outweighed by the environmental impacts, which include groundwater and surface water contamination, noise and air pollution, the triggering of earthquakes, and the resulting hazards to public health and the environment. Research has found adverse health effects in populations living near hydraulic fracturing sites, including confirmation of chemical, physical, and psychosocial hazards such as pregnancy and birth outcomes, migraine headaches, chronic rhinosinusitis, severe fatigue, asthma exacerbations and psychological stress. Adherence to regulation and safety procedures is required to avoid further negative impacts.
The scale of methane leakage associated with hydraulic fracturing is uncertain, and there is some evidence that leakage may cancel out any greenhouse gas emissions benefit of natural gas relative to other fossil fuels.
Increases in seismic activity following hydraulic fracturing along dormant or previously unknown faults are sometimes caused by the deep-injection disposal of hydraulic fracturing flowback (a byproduct of hydraulically fractured wells), and produced formation brine (a byproduct of both fractured and non-fractured oil and gas wells). For these reasons, hydraulic fracturing is under international scrutiny, restricted in some countries, and banned altogether in others. The European Union is drafting regulations that would permit the controlled application of hydraulic fracturing.
Geology
Mechanics
Fracturing of rocks at great depth is frequently suppressed by pressure, due to the weight of the overlying rock strata and the cementation of the formation. This suppression is particularly significant in "tensile" (Mode 1) fractures, which require the walls of the fracture to move apart against this pressure. Fracturing occurs when the effective stress is overcome by the pressure of fluids within the rock: the minimum principal stress becomes tensile and exceeds the tensile strength of the material. Fractures formed in this way are generally oriented in a plane perpendicular to the minimum principal stress, and for this reason, hydraulic fractures in wellbores can be used to determine the orientation of stresses. In natural examples, such as dikes or vein-filled fractures, the orientations can be used to infer past states of stress.
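In code, the tensile failure criterion described above amounts to a simple comparison; the stress and strength figures below are made-up assumptions, not values from the article.

```python
# Tensile (Mode 1) failure check: a fracture opens when the fluid pressure
# exceeds the minimum principal stress plus the rock's tensile strength,
# i.e. when the effective minimum stress becomes tensile enough to fail.
sigma_min_psi = 5_500        # assumed minimum principal (confining) stress, psi
tensile_strength_psi = 700   # assumed tensile strength of the formation, psi
fluid_pressure_psi = 6_400   # assumed fluid pressure in the rock, psi

effective_stress_psi = sigma_min_psi - fluid_pressure_psi   # negative => tension
will_fracture = -effective_stress_psi > tensile_strength_psi
print(effective_stress_psi, will_fracture)   # -900 True
```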
Veins
Most mineral vein systems are a result of repeated natural fracturing during periods of relatively high pore fluid pressure. The effect of high pore fluid pressure on the formation process of mineral vein systems is particularly evident in "crack-seal" veins, where the vein material is part of a series of discrete fracturing events, and extra vein material is deposited on each occasion. One example of long-term repeated natural fracturing is in the effects of seismic activity. Stress levels rise and fall episodically, and earthquakes can cause large volumes of connate water to be expelled from fluid-filled fractures. This process is referred to as "seismic pumping".
Dikes
Minor intrusions in the upper part of the crust, such as dikes, propagate in the form of fluid-filled cracks. In such cases, the fluid is magma. In sedimentary rocks with a significant water content, fluid at fracture tip will be steam.
History
Precursors
Fracturing as a method to stimulate shallow, hard rock oil wells dates back to the 1860s. Dynamite or nitroglycerin detonations were used to increase oil and natural gas production from petroleum bearing formations. On 24 April 1865, US Civil War veteran Col. Edward A. L. Roberts received a patent for an "exploding torpedo". It was employed in Pennsylvania, New York, Kentucky, Oklahoma, Texas, and West Virginia using liquid and also, later, solidified nitroglycerin. Companies like Lighting Torpedo Company used this process in Oklahoma and Texas. Later still the same method was applied to water and gas wells. Stimulation of wells with acid, instead of explosive fluids, was introduced in the 1930s. Due to acid etching, fractures would not close completely resulting in further productivity increase.
20th century applications
Harold Hamm, Aubrey McClendon, Tom Ward and George P. Mitchell are each considered to have pioneered hydraulic fracturing innovations toward practical applications.
Oil and gas wells
The relationship between well performance and treatment pressures was studied by Floyd Farris of Stanolind Oil and Gas Corporation. This study was the basis of the first hydraulic fracturing experiment, conducted in 1947 at the Hugoton gas field in Grant County of southwestern Kansas by Stanolind. For the well treatment, of gelled gasoline (essentially napalm) and sand from the Arkansas River was injected into the gas-producing limestone formation at . The experiment was not very successful as the deliverability of the well did not change appreciably. The process was further described by J.B. Clark of Stanolind in his paper published in 1948. A patent on this process was issued in 1949 and an exclusive license was granted to the Halliburton Oil Well Cementing Company. On 17 March 1949, Halliburton performed the first two commercial hydraulic fracturing treatments in Stephens County, Oklahoma, and Archer County, Texas. Since then, hydraulic fracturing has been used to stimulate approximately one million oil and gas wells in various geologic regimes with good success.
In contrast with large-scale hydraulic fracturing used in low-permeability formations, small hydraulic fracturing treatments are commonly used in high-permeability formations to remedy "skin damage", a low-permeability zone that sometimes forms at the rock-borehole interface. In such cases the fracturing may extend only a few feet from the borehole.
In the Soviet Union, the first hydraulic proppant fracturing was carried out in 1952. Other countries in Europe and Northern Africa subsequently employed hydraulic fracturing techniques including Norway, Poland, Czechoslovakia (before 1989), Yugoslavia (before 1991), Hungary, Austria, France, Italy, Bulgaria, Romania, Turkey, Tunisia, and Algeria.
Massive fracturing
Massive hydraulic fracturing (also known as high-volume hydraulic fracturing) is a technique first applied by Pan American Petroleum in Stephens County, Oklahoma, US in 1968. The definition of massive hydraulic fracturing varies, but generally refers to treatments injecting over 150 short tons, or approximately 300,000 pounds (136 metric tonnes), of proppant.
American geologists gradually became aware that there were huge volumes of gas-saturated sandstones with permeability too low (generally less than 0.1 millidarcy) to recover the gas economically. Starting in 1973, massive hydraulic fracturing was used in thousands of gas wells in the San Juan Basin, Denver Basin, the Piceance Basin, and the Green River Basin, and in other hard rock formations of the western US. Other tight sandstone wells in the US made economically viable by massive hydraulic fracturing were in the Clinton-Medina Sandstone (Ohio, Pennsylvania, and New York), and Cotton Valley Sandstone (Texas and Louisiana).
Massive hydraulic fracturing quickly spread in the late 1970s to western Canada, Rotliegend and Carboniferous gas-bearing sandstones in Germany, Netherlands (onshore and offshore gas fields), and the United Kingdom in the North Sea.
Horizontal oil or gas wells were unusual until the late 1980s. Then, operators in Texas began completing thousands of oil wells by drilling horizontally in the Austin Chalk, and giving massive slickwater hydraulic fracturing treatments to the wellbores. Horizontal wells proved much more effective than vertical wells in producing oil from tight chalk; sedimentary beds are usually nearly horizontal, so horizontal wells have much larger contact areas with the target formation.
Hydraulic fracturing operations have grown exponentially since the mid-1990s, when technologic advances and increases in the price of natural gas made this technique economically viable.
Shales
Hydraulic fracturing of shales goes back at least to 1965, when some operators in the Big Sandy gas field of eastern Kentucky and southern West Virginia started hydraulically fracturing the Ohio Shale and Cleveland Shale, using relatively small fracs. The frac jobs generally increased production, especially from lower-yielding wells.
In 1976, the United States government started the Eastern Gas Shales Project, which included numerous public-private hydraulic fracturing demonstration projects. During the same period, the Gas Research Institute, a gas industry research consortium, received approval for research and funding from the Federal Energy Regulatory Commission.
In 1997, Nick Steinsberger, an engineer of Mitchell Energy (now part of Devon Energy), applied the slickwater fracturing technique, using more water and higher pump pressure than previous fracturing techniques, in the Barnett Shale of north Texas. In 1998, the new technique proved to be successful when the first 90 days' gas production from the well called S.H. Griffin No. 3 exceeded production of any of the company's previous wells. This new completion technique made gas extraction widely economical in the Barnett Shale, and was later applied to other shales, including the Eagle Ford and Bakken Shale. George P. Mitchell has been called the "father of fracking" because of his role in applying it in shales. The first horizontal well in the Barnett Shale was drilled in 1991, but horizontal drilling was not widely adopted in the Barnett until it was demonstrated that gas could be economically extracted from vertical wells in the Barnett.
As of 2013, massive hydraulic fracturing is being applied on a commercial scale to shales in the United States, Canada, and China. Several additional countries are planning to use hydraulic fracturing.
Process
According to the United States Environmental Protection Agency (EPA), hydraulic fracturing is a process to stimulate a natural gas, oil, or geothermal well to maximize extraction. The EPA defines the broader process to include acquisition of source water, well construction, well stimulation, and waste disposal.
Method
A hydraulic fracture is formed by pumping fracturing fluid into a wellbore at a rate sufficient to increase pressure at the target depth (determined by the location of the well casing perforations) beyond the fracture gradient (pressure gradient) of the rock. The fracture gradient is defined as the pressure increase per unit of depth needed to fracture the formation, and is usually measured in pounds per square inch per foot (psi/ft). The rock cracks, and the fracture fluid permeates the rock, extending the crack farther. Fractures remain localized because pressure drops off with frictional losses, which increase with distance from the well. Operators typically try to maintain "fracture width", or slow its decline following treatment, by introducing a proppant into the injected fluid: a material such as grains of sand, ceramic, or other particulate that prevents the fractures from closing when injection is stopped and pressure removed. Consideration of proppant strength and prevention of proppant failure becomes more important at greater depths, where pressure and stresses on fractures are higher. The propped fracture is permeable enough to allow the flow of gas, oil, salt water and hydraulic fracturing fluids to the well.
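A back-of-the-envelope sketch of the fracture-gradient arithmetic described above; the gradient and depth are assumed example values, not figures from the article.

```python
# Bottomhole pressure needed to initiate a fracture, from an assumed
# fracture gradient (psi/ft) and an assumed target depth (ft).
frac_gradient_psi_per_ft = 0.75   # assumed fracture gradient
depth_ft = 8_000                  # assumed depth of the perforated interval

fracture_pressure_psi = frac_gradient_psi_per_ft * depth_ft
print(f"~{fracture_pressure_psi:,.0f} psi at {depth_ft:,} ft")   # ~6,000 psi

# The surface treating pressure must also cover friction losses in the wellbore,
# less the hydrostatic head of the fluid column, which this sketch ignores.
```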
During the process, fracturing fluid leakoff (loss of fracturing fluid from the fracture channel into the surrounding permeable rock) occurs. If not controlled, it can exceed 70% of the injected volume. This may result in formation matrix damage, adverse formation fluid interaction, and altered fracture geometry, thereby decreasing efficiency.
The location of one or more fractures along the length of the borehole is strictly controlled by various methods that create or seal holes in the side of the wellbore. Hydraulic fracturing is performed in cased wellbores, and the zones to be fractured are accessed by perforating the casing at those locations.
Hydraulic-fracturing equipment used in oil and natural gas fields usually consists of a slurry blender, one or more high-pressure, high-volume fracturing pumps (typically powerful triplex or quintuplex pumps) and a monitoring unit. Associated equipment includes fracturing tanks, one or more units for storage and handling of proppant, high-pressure treating iron, a chemical additive unit (used to accurately monitor chemical addition), fracking hose (low-pressure flexible hoses), and many gauges and meters for flow rate, fluid density, and treating pressure. Chemical additives are typically 0.5% of the total fluid volume. Fracturing equipment operates over a range of pressures and injection rates, and can reach up to and .
Well types
A distinction can be made between conventional, low-volume hydraulic fracturing, used to stimulate high-permeability reservoirs for a single well, and unconventional, high-volume hydraulic fracturing, used in the completion of tight gas and shale gas wells. High-volume hydraulic fracturing usually requires higher pressures than low-volume fracturing; the higher pressures are needed to push out larger volumes of fluid and proppant that extend farther from the borehole.
Horizontal drilling involves wellbores with a terminal drillhole completed as a "lateral" that extends parallel with the rock layer containing the substance to be extracted. For example, laterals extend in the Barnett Shale basin in Texas, and up to in the Bakken formation in North Dakota. In contrast, a vertical well only accesses the thickness of the rock layer, typically . Horizontal drilling reduces surface disruptions as fewer wells are required to access the same volume of rock.
Drilling often plugs up the pore spaces at the wellbore wall, reducing permeability at and near the wellbore. This reduces flow into the borehole from the surrounding rock formation, and partially seals off the borehole from the surrounding rock. Low-volume hydraulic fracturing can be used to restore permeability.
Fracturing fluids
The main purposes of fracturing fluid are to extend fractures, add lubrication, change gel strength, and to carry proppant into the formation. There are two methods of transporting proppant in the fluid: high-rate and high-viscosity. High-viscosity fracturing tends to cause large dominant fractures, while high-rate (slickwater) fracturing causes small spread-out micro-fractures.
Water-soluble gelling agents (such as guar gum) increase viscosity and efficiently deliver proppant into the formation.
Fluid is typically a slurry of water, proppant, and chemical additives. Additionally, gels, foams, and compressed gases, including nitrogen, carbon dioxide and air, can be injected. Typically, 90% of the fluid is water and 9.5% is sand, with chemical additives accounting for about 0.5%. However, fracturing fluids have been developed using liquefied petroleum gas (LPG) and propane. This process is called waterless fracturing.
When propane is used, it is turned into vapor by the high pressure and high temperature. The propane vapor and natural gas both return to the surface and can be collected, making them easier to reuse or resell. Of the injected materials, only the propane returns to the surface; the other chemicals used do not.
The proppant is a granular material that prevents the created fractures from closing after the fracturing treatment. Types of proppant include silica sand, resin-coated sand, bauxite, and man-made ceramics. The choice of proppant depends on the type of permeability or grain strength needed. In some formations, where the pressure is great enough to crush grains of natural silica sand, higher-strength proppants such as bauxite or ceramics may be used. The most commonly used proppant is silica sand, though proppants of uniform size and shape, such as a ceramic proppant, are believed to be more effective.
The fracturing fluid varies depending on fracturing type desired, and the conditions of specific wells being fractured, and water characteristics. The fluid can be gel, foam, or slickwater-based. Fluid choices are tradeoffs: more viscous fluids, such as gels, are better at keeping proppant in suspension; while less-viscous and lower-friction fluids, such as slickwater, allow fluid to be pumped at higher rates, to create fractures farther out from the wellbore. Important material properties of the fluid include viscosity, pH, various rheological factors, and others.
Water is mixed with sand and chemicals to create hydraulic fracturing fluid. Approximately 40,000 gallons of chemicals are used per fracturing.
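A small arithmetic sketch tying the figures above together: with additives at roughly 0.5% of the total fluid and about 40,000 gallons of chemicals per job, the implied water and proppant volumes follow. This is illustrative arithmetic only, using the article's approximate percentages.

```python
chemicals_gal = 40_000          # per-job chemical volume quoted above
additive_fraction = 0.005       # ~0.5% of total fluid
water_fraction = 0.90
proppant_fraction = 0.095

total_fluid_gal = chemicals_gal / additive_fraction
print(f"implied total fluid : {total_fluid_gal:,.0f} gal")          # ~8,000,000
print(f"of which water      : {water_fraction * total_fluid_gal:,.0f} gal")
print(f"of which proppant   : {proppant_fraction * total_fluid_gal:,.0f} gal")
```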
A typical fracture treatment uses between 3 and 12 additive chemicals. Although there may be unconventional fracturing fluids, typical chemical additives can include one or more of the following:
Acids—hydrochloric acid or acetic acid is used in the pre-fracturing stage for cleaning the perforations and initiating fissure in the near-wellbore rock.
Sodium chloride (salt)—delays breakdown of gel polymer chains.
Polyacrylamide and other friction reducers decrease turbulence in fluid flow and pipe friction, thus allowing the pumps to pump at a higher rate without having greater pressure on the surface.
Ethylene glycol—prevents formation of scale deposits in the pipe.
Borate salts—used for maintaining fluid viscosity during the temperature increase.
Sodium and potassium carbonates—used for maintaining effectiveness of crosslinkers.
Glutaraldehyde—a biocide that prevents pipe corrosion from microbial activity.
Guar gum and other water-soluble gelling agents—increases viscosity of the fracturing fluid to deliver proppant into the formation more efficiently.
Citric acid—used for corrosion prevention.
Isopropanol—used to winterize the chemicals to ensure they do not freeze.
The most common chemical used for hydraulic fracturing in the United States in 2005–2009 was methanol, while some other most widely used chemicals were isopropyl alcohol, 2-butoxyethanol, and ethylene glycol.
Typical fluid types are:
Conventional linear gels. These gels are cellulose derivative (carboxymethyl cellulose, hydroxyethyl cellulose, carboxymethyl hydroxyethyl cellulose, hydroxypropyl cellulose, hydroxyethyl methyl cellulose), guar or its derivatives (hydroxypropyl guar, carboxymethyl hydroxypropyl guar), mixed with other chemicals.
Borate-crosslinked fluids. These are guar-based fluids cross-linked with boron ions (from aqueous borax/boric acid solution). These gels have higher viscosity at pH 9 onwards and are used to carry proppant. After the fracturing job, the pH is reduced to 3–4 so that the cross-links are broken, and the gel is less viscous and can be pumped out.
Organometallic-crosslinked fluids – zirconium, chromium, antimony, titanium salts – are known to crosslink guar-based gels. The crosslinking mechanism is not reversible, so once the proppant is pumped down along with cross-linked gel, the fracturing part is done. The gels are broken down with appropriate breakers.
Aluminium phosphate-ester oil gels. Aluminium phosphate and ester oils are slurried to form cross-linked gel. These are one of the first known gelling systems.
For slickwater fluids the use of sweeps is common. Sweeps are temporary reductions in the proppant concentration, which help ensure that the well is not overwhelmed with proppant. As the fracturing process proceeds, viscosity-reducing agents such as oxidizers and enzyme breakers are sometimes added to the fracturing fluid to deactivate the gelling agents and encourage flowback. Such oxidizers react with and break down the gel, reducing the fluid's viscosity and ensuring that no proppant is pulled from the formation. An enzyme acts as a catalyst for breaking down the gel. Sometimes pH modifiers are used to break down the crosslink at the end of a hydraulic fracturing job, since many require a pH buffer system to stay viscous. At the end of the job, the well is commonly flushed with water under pressure (sometimes blended with a friction reducing chemical.) Some (but not all) injected fluid is recovered. This fluid is managed by several methods, including underground injection control, treatment, discharge, recycling, and temporary storage in pits or containers. New technology is continually developing to better handle waste water and improve re-usability.
Fracture monitoring
Measurements of the pressure and rate during the growth of a hydraulic fracture, with knowledge of fluid properties and proppant being injected into the well, provides the most common and simplest method of monitoring a hydraulic fracture treatment. This data along with knowledge of the underground geology can be used to model information such as length, width and conductivity of a propped fracture.
Radionuclide monitoring
Injection of radioactive tracers along with the fracturing fluid is sometimes used to determine the injection profile and location of created fractures. Radiotracers are selected to have the readily detectable radiation, appropriate chemical properties, and a half life and toxicity level that will minimize initial and residual contamination. Radioactive isotopes chemically bonded to glass (sand) and/or resin beads may also be injected to track fractures. For example, plastic pellets coated with 10 GBq of Ag-110mm may be added to the proppant, or sand may be labelled with Ir-192, so that the proppant's progress can be monitored. Radiotracers such as Tc-99m and I-131 are also used to measure flow rates. The Nuclear Regulatory Commission publishes guidelines which list a wide range of radioactive materials in solid, liquid and gaseous forms that may be used as tracers and limit the amount that may be used per injection and per well of each radionuclide.
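A short sketch (my own illustration, not from the cited guidance) of why tracer half-life matters for residual contamination, using the widely quoted half-life of about 8 days for I-131; the starting activity is an arbitrary assumption.

```python
def remaining_activity(a0_gbq: float, days: float, half_life_days: float) -> float:
    """Radioactive decay: A(t) = A0 * 2^(-t / half_life)."""
    return a0_gbq * 2 ** (-days / half_life_days)

a0 = 10.0                      # assumed initial tracer activity, GBq
for t in (8, 30, 90):          # days after injection
    print(t, round(remaining_activity(a0, t, 8.02), 3))   # GBq remaining
```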
A new technique in well-monitoring involves fiber-optic cables outside the casing. Using the fiber optics, temperatures can be measured every foot along the well – even while the wells are being fracked and pumped. By monitoring the temperature of the well, engineers can determine how much hydraulic fracturing fluid different parts of the well use as well as how much natural gas or oil they collect, during hydraulic fracturing operation and when the well is producing.
Microseismic monitoring
For more advanced applications, microseismic monitoring is sometimes used to estimate the size and orientation of induced fractures. Microseismic activity is measured by placing an array of geophones in a nearby wellbore. By mapping the location of any small seismic events associated with the growing fracture, the approximate geometry of the fracture is inferred. Tiltmeter arrays deployed on the surface or down a well provide another technology for monitoring strain.
Microseismic mapping is very similar geophysically to seismology. In earthquake seismology, seismometers scattered on or near the surface of the earth record S-waves and P-waves that are released during an earthquake event. This allows the motion along the fault plane to be estimated and its location in the Earth's subsurface to be mapped. Hydraulic fracturing causes an increase in formation stress proportional to the net fracturing pressure, as well as an increase in pore pressure due to leakoff. Tensile stresses are generated ahead of the fracture's tip, generating large amounts of shear stress. The increases in pore water pressure and in formation stress combine and affect weaknesses near the hydraulic fracture, like natural fractures, joints, and bedding planes.
Different methods have different location errors and advantages. Accuracy of microseismic event mapping is dependent on the signal-to-noise ratio and the distribution of sensors. Accuracy of events located by seismic inversion is improved by sensors placed in multiple azimuths from the monitored borehole. In a downhole array location, accuracy of events is improved by being close to the monitored borehole (high signal-to-noise ratio).
Monitoring of microseismic events induced by reservoir stimulation has become a key aspect in the evaluation of hydraulic fractures and their optimization. The main goal of hydraulic fracture monitoring is to completely characterize the induced fracture structure and the distribution of conductivity within a formation. Geomechanical analysis, such as understanding a formation's material properties, in-situ conditions, and geometries, helps monitoring by providing a better definition of the environment in which the fracture network propagates. The next task is to know the location of proppant within the fracture and the distribution of fracture conductivity. This can be monitored using multiple types of techniques to finally develop a reservoir model that accurately predicts well performance.
Horizontal completions
Since the early 2000s, advances in drilling and completion technology have made horizontal wellbores much more economical. Horizontal wellbores allow far greater exposure to a formation than conventional vertical wellbores. This is particularly useful in shale formations which do not have sufficient permeability to produce economically with a vertical well. Such wells, when drilled onshore, are now usually hydraulically fractured in a number of stages, especially in North America. The type of wellbore completion is used to determine how many times a formation is fractured, and at what locations along the horizontal section.
In North America, shale reservoirs such as the Bakken, Barnett, Montney, Haynesville, Marcellus, and most recently the Eagle Ford, Niobrara and Utica shales are drilled horizontally through the producing intervals, completed and fractured. The method by which the fractures are placed along the wellbore is most commonly achieved by one of two methods, known as "plug and perf" and "sliding sleeve".
The wellbore for a plug-and-perf job is generally composed of standard steel casing, cemented or uncemented, set in the drilled hole. Once the drilling rig has been removed, a wireline truck is used to perforate near the bottom of the well, and then fracturing fluid is pumped. Then the wireline truck sets a plug in the well to temporarily seal off that section so the next section of the wellbore can be treated. Another stage is pumped, and the process is repeated along the horizontal length of the wellbore.
The wellbore for the sliding sleeve technique is different in that the sliding sleeves are included at set spacings in the steel casing at the time it is set in place. The sliding sleeves are usually all closed at this time. When the well is due to be fractured, the bottom sliding sleeve is opened using one of several activation techniques and the first stage gets pumped. Once finished, the next sleeve is opened, concurrently isolating the previous stage, and the process repeats. For the sliding sleeve method, wireline is usually not required.
These completion techniques may allow for more than 30 stages to be pumped into the horizontal section of a single well if required, which is far more than would typically be pumped into a vertical well that had far fewer feet of producing zone exposed.
Uses
Hydraulic fracturing is used to increase the rate at which substances such as petroleum or natural gas can be recovered from subterranean natural reservoirs. Reservoirs are typically porous sandstones, limestones or dolomite rocks, but also include "unconventional reservoirs" such as shale rock or coal beds. Hydraulic fracturing enables the extraction of natural gas and oil from rock formations deep below the earth's surface (generally ), which is greatly below typical groundwater reservoir levels. At such depth, there may be insufficient permeability or reservoir pressure to allow natural gas and oil to flow from the rock into the wellbore at high economic return. Thus, creating conductive fractures in the rock is instrumental in extraction from naturally impermeable shale reservoirs. Permeability is measured in the microdarcy to nanodarcy range. Fractures are a conductive path connecting a larger volume of reservoir to the well. So-called "super fracking" creates cracks deeper in the rock formation to release more oil and gas, and increases efficiency. The yield for typical shale bores generally falls off after the first year or two, but the peak producing life of a well can be extended to several decades.
Non-oil/gas uses
While the main industrial use of hydraulic fracturing is in stimulating production from oil and gas wells, hydraulic fracturing is also applied:
To stimulate groundwater wells
To precondition or induce rock cave-ins in mining
As a means of enhancing waste remediation, usually hydrocarbon waste or spills
To dispose of waste by injection deep into rock
To measure stress in the Earth
For electricity generation in enhanced geothermal systems
To increase injection rates for geologic sequestration of CO2
To store electrical energy, pumped storage hydroelectricity
Since the late 1970s, hydraulic fracturing has been used, in some cases, to increase the yield of drinking water from wells in a number of countries, including the United States, Australia, and South Africa.
Economic effects
Hydraulic fracturing has been seen as one of the key methods of extracting unconventional oil and unconventional gas resources. According to the International Energy Agency, the remaining technically recoverable resources of shale gas are estimated to amount to , tight gas to , and coalbed methane to . As a rule, formations of these resources have lower permeability than conventional gas formations. Therefore, depending on the geological characteristics of the formation, specific technologies such as hydraulic fracturing are required. Although there are also other methods to extract these resources, such as conventional drilling or horizontal drilling, hydraulic fracturing is one of the key methods making their extraction economically viable. The multi-stage fracturing technique has facilitated the development of shale gas and light tight oil production in the United States and is believed to do so in the other countries with unconventional hydrocarbon resources.
A large majority of studies indicate that hydraulic fracturing in the United States has had a strong positive economic benefit so far. The Brookings Institution estimates that the benefits of Shale Gas alone has led to a net economic benefit of $48 billion per year. Most of this benefit is within the consumer and industrial sectors due to the significantly reduced prices for natural gas. Other studies have suggested that the economic benefits are outweighed by the externalities and that the levelized cost of electricity (LCOE) from less carbon and water intensive sources is lower.
The primary benefit of hydraulic fracturing is to offset imports of natural gas and oil, where the cost paid to producers otherwise exits the domestic economy. However, shale oil and gas is highly subsidised in the US, and has not yet covered production costs – meaning that the cost of hydraulic fracturing is paid for in income taxes, and in many cases is up to double the cost paid at the pump.
Research suggests that hydraulic fracturing wells have an adverse effect on agricultural productivity in the vicinity of the wells. One paper found "that productivity of an irrigated crop decreases by 5.7% when a well is drilled during the agriculturally active months within 11–20 km radius of a producing township. This effect becomes smaller and weaker as the distance between township and wells increases." The findings imply that the introduction of hydraulic fracturing wells to Alberta cost the province $14.8 million in 2014 due to the decline in crop productivity.
The Energy Information Administration of the US Department of Energy estimates that 45% of US gas supply will come from shale gas by 2035 (with the vast majority of this replacing conventional gas, which has a lower greenhouse-gas footprint).
Public debate
Politics and public policy
Popular movement and civil society organizations
An anti-fracking movement has emerged both internationally with involvement of international environmental organizations and nations such as France and locally in affected areas such as Balcombe in Sussex where the Balcombe drilling protest was in progress during mid-2013. The considerable opposition against hydraulic fracturing activities in local townships in the United States has led companies to adopt a variety of public relations measures to reassure the public, including the employment of former military personnel with training in psychological warfare operations. According to Matt Pitzarella, the communications director at Range Resources, employees trained in the Middle East have been valuable to Range Resources in Pennsylvania, when dealing with emotionally charged township meetings and advising townships on zoning and local ordinances dealing with hydraulic fracturing.
There have been many protests directed at hydraulic fracturing. For example, ten people were arrested in 2013 during an anti-fracking protest near New Matamoras, Ohio, after they illegally entered a development zone and latched themselves to drilling equipment. In northwest Pennsylvania, there was a drive-by shooting at a well site, in which someone shot two rounds of a small-caliber rifle in the direction of a drilling rig. In Washington County, Pennsylvania, a contractor working on a gas pipeline found a pipe bomb that had been placed where a pipeline was to be constructed, which local authorities said would have caused a "catastrophe" had they not discovered and detonated it.
U.S. government and Corporate lobbying
The United States Department of State established the Global Shale Gas Initiative to persuade governments around the world to give concessions to the major oil and gas companies to set up fracking operations. A document from the United States diplomatic cables leak show that, as part of this project, U.S. officials convened conferences for foreign government officials that featured presentations by major oil and gas company representatives and by public relations professionals with expertise on how to assuage populations of target countries whose citizens were often quite hostile to fracking on their lands. The US government project succeeded as many countries on several continents acceded to the idea of granting concessions for fracking; Poland, for example, agreed to permit fracking by the major oil and gas corporations on nearly a third of its territory. The US Export-Import Bank, an agency of the US government, provided $4.7 billion in financing for fracking operations set up since 2010 in Queensland, Australia.
Alleged Russian state advocacy
In 2014 a number of European officials suggested that several major European protests against hydraulic fracturing (with mixed success in Lithuania and Ukraine) may be partially sponsored by Gazprom, Russia's state-controlled gas company. The New York Times suggested that Russia saw its natural gas exports to Europe as a key element of its geopolitical influence, and that this market would diminish if hydraulic fracturing is adopted in Eastern Europe, as it opens up significant shale gas reserves in the region. Russian officials have on numerous occasions made public statements to the effect that hydraulic fracturing "poses a huge environmental problem".
Current fracking operations
Hydraulic fracturing is currently taking place in the United States in Arkansas, California, Colorado, Louisiana, North Dakota, Oklahoma, Pennsylvania, Texas, Virginia, West Virginia, and Wyoming. Other states, such as Alabama, Indiana, Michigan, Mississippi, New Jersey, New York, and Ohio, are either considering or preparing for drilling using this method. Maryland and Vermont have permanently banned hydraulic fracturing, and New York and North Carolina have instituted temporary bans. New Jersey currently has a bill before its legislature to extend a 2012 moratorium on hydraulic fracturing that recently expired. Although a hydraulic fracturing moratorium was recently lifted in the United Kingdom, the government is proceeding cautiously because of concerns about earthquakes and the environmental effect of drilling. Hydraulic fracturing is currently banned in France and Bulgaria.
Documentary films
Josh Fox's 2010 Academy Award-nominated film Gasland became a center of opposition to hydraulic fracturing of shale. The movie presented problems with groundwater contamination near well sites in Pennsylvania, Wyoming and Colorado. Energy in Depth, an oil and gas industry lobbying group, called the film's facts into question. In response, a rebuttal of Energy in Depth's claims of inaccuracy was posted on Gasland's website. The Director of the Colorado Oil and Gas Conservation Commission (COGCC) offered to be interviewed as part of the film if he could review what was included from the interview in the final film, but Fox declined the offer. ExxonMobil, Chevron Corporation and ConocoPhillips aired advertisements during 2011 and 2012 that claimed to describe the economic and environmental benefits of natural gas and argue that hydraulic fracturing was safe.
The 2012 film Promised Land, starring Matt Damon, takes on hydraulic fracturing. The gas industry countered the film's criticisms of hydraulic fracturing with flyers, and Twitter and Facebook posts.
In January 2013, Northern Irish journalist and filmmaker Phelim McAleer released a crowdfunded documentary called FrackNation as a response to the statements made by Fox in Gasland, claiming it "tells the truth about fracking for natural gas". FrackNation premiered on Mark Cuban's AXS TV. The premiere corresponded with the release of Promised Land.
In April 2013, Josh Fox released Gasland 2, his "international odyssey uncovering a trail of secrets, lies and contamination related to hydraulic fracking". It argues that the gas industry's portrayal of natural gas as a clean and safe alternative to oil is a myth, and that hydraulically fractured wells inevitably leak over time, contaminating water and air, hurting families, and endangering the Earth's climate with the potent greenhouse gas methane.
In 2014, Scott Cannon of Video Innovations released the documentary The Ethics of Fracking. The film covers the political, spiritual, scientific, medical, and professional points of view on hydraulic fracturing. It also examines the way the gas industry portrays hydraulic fracturing in its advertising.
In 2015, the Canadian documentary film Fractured Land had its world premiere at the Hot Docs Canadian International Documentary Festival.
Research issues
Typically the funding source of the research studies is a focal point of controversy. Concerns have been raised about research funded by foundations and corporations, or by environmental groups, which can at times lead to at least the appearance of unreliable studies. Several organizations, researchers, and media outlets have reported difficulty in conducting and reporting the results of studies on hydraulic fracturing due to industry and governmental pressure, and expressed concern over possible censoring of environmental reports. Some have argued there is a need for more research into the environmental and health effects of the technique.
Health risks
There is concern over the possible adverse public health implications of hydraulic fracturing activity. A 2013 review on shale gas production in the United States stated, "with increasing numbers of drilling sites, more people are at risk from accidents and exposure to harmful substances used at fractured wells." A 2011 hazard assessment recommended full disclosure of chemicals used for hydraulic fracturing and drilling as many have immediate health effects, and many may have long-term health effects.
In June 2014 Public Health England published a review of the potential public health impacts of exposures to chemical and radioactive pollutants as a result of shale gas extraction in the UK, based on the examination of literature and data from countries where hydraulic fracturing already occurs. The executive summary of the report stated: "An assessment of the currently available evidence indicates that the potential risks to public health from exposure to the emissions associated with shale gas extraction will be low if the operations are properly run and regulated. Most evidence suggests that contamination of groundwater, if it occurs, is most likely to be caused by leakage through the vertical borehole. Contamination of groundwater from the underground hydraulic fracturing process itself (i.e. the fracturing of the shale) is unlikely. However, surface spills of hydraulic fracturing fluids or wastewater may affect groundwater, and emissions to air also have the potential to impact on health. Where potential risks have been identified in the literature, the reported problems are typically a result of operational failure and a poor regulatory environment."
A 2012 report prepared for the European Union Directorate-General for the Environment identified potential risks to humans from air pollution and ground water contamination posed by hydraulic fracturing. This led to a series of recommendations in 2014 to mitigate these concerns. A 2012 guidance for pediatric nurses in the US said that hydraulic fracturing had a potential negative impact on public health and that pediatric nurses should be prepared to gather information on such topics so as to advocate for improved community health.
A 2017 study in The American Economic Review found that "additional well pads drilled within 1 kilometer of a community water system intake increases shale gas-related contaminants in drinking water."
A 2022 study conducted by the Harvard T.H. Chan School of Public Health and published in Nature Energy found that elderly people living near or downwind of unconventional oil and gas development (UOGD), which involves extraction methods including fracking, are at greater risk of experiencing early death compared with elderly persons who do not live near such operations.
Statistics collected by the U.S. Department of Labor and analyzed by the U.S. Centers for Disease Control and Prevention show a correlation between drilling activity and the number of occupational injuries related to drilling and motor vehicle accidents, explosions, falls, and fires. Extraction workers are also at risk for developing pulmonary diseases, including lung cancer and silicosis (the latter because of exposure to silica dust generated from rock drilling and the handling of sand). The U.S. National Institute for Occupational Safety and Health (NIOSH) identified exposure to airborne silica as a health hazard to workers conducting some hydraulic fracturing operations. NIOSH and OSHA issued a joint hazard alert on this topic in June 2012.
Additionally, the extraction workforce is at increased risk for radiation exposure. Fracking activities often require drilling into rock that contains naturally occurring radioactive material (NORM), such as radon, thorium, and uranium.
Another report, published in a Canadian medical journal, identified 55 fracking-related chemicals that may cause cancer, including 20 that have been shown to increase the risk of leukemia and lymphoma. A Yale Public Health analysis warns that millions of people living within a mile of fracking wells may have been exposed to these chemicals.
Environmental effects
The potential environmental effects of hydraulic fracturing include air emissions and climate change, high water consumption, groundwater contamination, land use, risk of earthquakes, noise pollution, and various health effects on humans. Air emissions are primarily methane that escapes from wells, along with industrial emissions from equipment used in the extraction process. Modern UK and EU regulation requires zero emissions of methane, a potent greenhouse gas. Escape of methane is a bigger problem in older wells than in ones built under more recent EU legislation.
In December 2016 the United States Environmental Protection Agency (EPA) issued the "Hydraulic Fracturing for Oil and Gas: Impacts from the Hydraulic Fracturing Water Cycle on Drinking Water Resources in the United States (Final Report)." The EPA found scientific evidence that hydraulic fracturing activities can impact drinking water resources. A few of the main reasons why drinking water can be contaminated according to the EPA are:
Water removal to be used for fracking in times or areas of low water availability
Spills while handling fracking fluids and chemicals that result in large volumes or high concentrations of chemicals reaching groundwater resources
Injection of fracking fluids into wells when mishandling machinery, allowing gases or liquids to move to groundwater resources
Injection of fracking fluids directly into groundwater resources
Leak of defective hydraulic fracturing wastewater to surface water
Disposal or storage of fracking wastewater in unlined pits resulting in contamination of groundwater resources.
The lifecycle greenhouse gas emissions of shale oil are 21%-47% higher than those of conventional oil, while emissions from unconventional gas are from 6% lower to 43% higher than the emissions of conventional gas.
Hydraulic fracturing uses large volumes of water per well, with large projects using considerably more. Additional water is used when wells are refractured, so an average well requires still more water over its lifetime. According to the Oxford Institute for Energy Studies, greater volumes of fracturing fluids are required in Europe, where the shale depths average 1.5 times greater than in the U.S. Surface water may be contaminated through spillage and improperly built and maintained waste pits, and ground water can be contaminated if the fluid is able to escape the formation being fractured (through, for example, abandoned wells, fractures, and faults) or by produced water (the returning fluids, which also contain dissolved constituents such as minerals and brine waters). The possibility of groundwater contamination from brine and fracturing fluid leakage through old abandoned wells is low. Produced water is managed by underground injection, municipal and commercial wastewater treatment and discharge, self-contained systems at well sites or fields, and recycling to fracture future wells. Typically less than half of the produced water used to fracture the formation is recovered.
In the United States, over 12 million acres of land are being used for fossil fuel extraction, equivalent to about six Yellowstone National Parks. Land is also needed at each drill pad for surface installations. Well pad and supporting structure construction significantly fragments landscapes, which likely has negative effects on wildlife, and these sites need to be remediated after wells are exhausted. Research indicates that the resulting costs to ecosystem services (i.e., those processes that the natural world provides to humanity) have reached over $250 million per year in the U.S. Each well pad (on average 10 wells per pad) requires about 800 to 2,500 days of noisy activity during the preparatory and hydraulic fracturing process, which affects both residents and local wildlife. In addition, noise is created by the continuous truck traffic (sand, etc.) needed in hydraulic fracturing. Research is underway to determine whether human health has been affected by air and water pollution, and rigorous adherence to safety procedures and regulation is required to avoid harm and to manage the risk of accidents.
In July 2013, the US Federal Railroad Administration listed oil contamination by hydraulic fracturing chemicals as "a possible cause" of corrosion in oil tank cars.
Hydraulic fracturing has been sometimes linked to induced seismicity or earthquakes. The magnitude of these events is usually too small to be detected at the surface, although tremors attributed to fluid injection into disposal wells have been large enough to have often been felt by people, and to have caused property damage and possibly injuries. A U.S. Geological Survey reported that up to 7.9 million people in several states have a similar earthquake risk to that of California, with hydraulic fracturing and similar practices being a prime contributing factor.
Microseismic events are often used to map the horizontal and vertical extent of the fracturing. A better understanding of the geology of the area being fracked and used for injection wells can be helpful in mitigating the potential for significant seismic events.
People obtain drinking water from either surface water, which includes rivers and reservoirs, or groundwater aquifers, accessed by public or private wells. There are already numerous documented instances in which nearby groundwater has been contaminated by fracking activities, requiring residents with private wells to obtain outside sources of water for drinking and everyday use.
Per- and polyfluoroalkyl substances, also known as "PFAS" or "forever chemicals", have been linked to cancer and birth defects. Chemicals used in fracking persist in the environment, where they can eventually break down into PFAS. These chemicals can escape from drilling sites into the groundwater, and PFAS are able to leak into underground wells that store millions of gallons of wastewater.
Despite these health concerns and efforts to institute a moratorium on fracking until its environmental and health effects are better understood, the United States continues to rely heavily on fossil fuel energy. In 2017, 37% of annual U.S. energy consumption was derived from petroleum, 29% from natural gas, 14% from coal, and 9% from nuclear sources, with only 11% supplied by renewable energy, such as wind and solar power.
In 2022 the USA experienced a fracking boom, when the war in Ukraine led to a massive increase in approvals of new drilling. The planned drilling would release 140 billion tons of carbon, four times more than annual global emissions.
Regulations
Countries using or considering use of hydraulic fracturing have implemented different regulations, including developing federal and regional legislation, and local zoning limitations. In 2011, after public pressure, France became the first nation to ban hydraulic fracturing, based on the precautionary principle as well as the principle of preventive and corrective action of environmental hazards. The ban was upheld by an October 2013 ruling of the Constitutional Council. Some other countries such as Scotland have placed a temporary moratorium on the practice due to public health concerns and strong public opposition. Countries like South Africa have lifted their bans, choosing to focus on regulation instead of outright prohibition. Germany has announced draft regulations that would allow using hydraulic fracturing for the exploitation of shale gas deposits with the exception of wetland areas. In China, regulation on shale gas still faces hurdles, as it has complex interrelations with other regulatory regimes, especially trade. Many states in Australia have either permanently or temporarily banned fracturing for hydrocarbons. In 2019, hydraulic fracturing was banned in the UK.
The European Union has adopted a recommendation for minimum principles for using high-volume hydraulic fracturing. Its regulatory regime requires full disclosure of all additives. In the United States, the Ground Water Protection Council launched FracFocus.org, an online voluntary disclosure database for hydraulic fracturing fluids funded by oil and gas trade groups and the U.S. Department of Energy. Hydraulic fracturing is excluded from the Safe Drinking Water Act's underground injection control's regulation, except when diesel fuel is used. The EPA assures surveillance of the issuance of drilling permits when diesel fuel is employed.
In 2012, Vermont became the first state in the United States to ban hydraulic fracturing. On 17 December 2014, New York became the second state to issue a complete ban on any hydraulic fracturing due to potential risks to human health and the environment.
See also
Directional drilling
Environmental impact of electricity generation
Environmental effects of petroleum
Fracking by country
Fracking in the United States
Fracking in the United Kingdom
In situ leach
Nuclear power
Peak oil
Stranded asset
Shale oil extraction
Vaca Muerta
Notes and references
Explanatory notes
References
Further reading
Gallegos, T. J. and B. A. Varela (2015). Hydraulic Fracturing Distributions and Treatment Fluids, Additives, Proppants, and Water Volumes Applied to Wells Drilled in the United States from 1947 through 2010. U.S. Geological Survey.
Gamper-Rabindran, Shanti, ed. The Shale Dilemma: A Global Perspective on Fracking and Shale Development (U of Pittsburgh Press, 2018) online review
External links
Hydraulic Fracturing Litigation Summary (22 April 2021)
1947 introductions
Unconventional oil | Fracking | [
"Chemistry"
] | 10,594 | [
"Unconventional oil",
"Petroleum technology",
"Petroleum",
"Natural gas technology",
"Hydraulic fracturing"
] |
42,347,710 | https://en.wikipedia.org/wiki/Laser%20printing%20of%20single%20nanoparticles | The laser printing of single nanoparticles is a method of applying optical forces that direct single nanoparticles to targeted substrate regions. Van der Waals interactions cause attachment of the single nanoparticles to the substrate areas. This has been accomplished with gold and silicon nanoparticles.
References
Further reading
Nanoparticles
Lithography (microfabrication) | Laser printing of single nanoparticles | [
"Materials_science"
] | 78 | [
"Nanotechnology",
"Microtechnology",
"Lithography (microfabrication)"
] |
42,349,252 | https://en.wikipedia.org/wiki/C16H23NO4 | The molecular formula C16H23NO4 (molar mass: 293.36 g/mol) may refer to:
Cinamolol
N-t-BOC-MDMA | C16H23NO4 | [
"Chemistry"
] | 55 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
42,351,905 | https://en.wikipedia.org/wiki/Rossby%20wave%20instability | Rossby Wave Instability (RWI) is a concept related to astrophysical accretion discs. In non-self-gravitating discs, for example around newly forming stars, the instability can be triggered by an axisymmetric bump, at some radius , in the disc surface mass-density. It gives rise to exponentially growing non-axisymmetric perturbation in the vicinity of consisting of anticyclonic vortices. These vortices are regions of high pressure and consequently act to trap dust particles which in turn can facilitate planetesimal growth in proto-planetary discs. The Rossby vortices in the discs around stars and black holes may cause the observed quasi-periodic modulations of the disc's thermal emission.
Rossby waves, named after Carl-Gustaf Arvid Rossby, are important in planetary atmospheres and oceans and are also known as planetary waves. These waves have a significant role in the transport of heat from equatorial to polar regions of the Earth. They may have a role in the formation of the long-lived ( yr) Great Red Spot on Jupiter which is an anticyclonic vortex. The Rossby waves have the notable property of having the phase velocity opposite to the direction of motion of the atmosphere or disc in the comoving frame of the fluid.
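The retrograde phase velocity mentioned above can be illustrated with the classical barotropic beta-plane dispersion relation for planetary Rossby waves; this standard result is quoted here only as an illustration and is not specific to accretion discs. For a mean zonal flow $U$, planetary vorticity gradient $\beta$, and wavenumbers $k_x$, $k_y$,

$$\omega = U k_x - \frac{\beta k_x}{k_x^{2} + k_y^{2}}, \qquad \frac{\omega}{k_x} - U = -\frac{\beta}{k_x^{2} + k_y^{2}} < 0,$$

so the zonal phase speed measured in the frame comoving with the fluid is always westward (retrograde with respect to the planetary rotation) for any positive $\beta$.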
The theory of the Rossby wave instability in accretion discs was developed by Lovelace et al. and Li et al. for thin Keplerian discs with negligible self-gravity and earlier by Lovelace and Hohlfeld for thin disc galaxies where the self-gravity may or may not be important and where the rotation is in general non-Keplerian.
The Rossby wave instability occurs because of the local wave trapping in a disc. It is related to the Papaloizou and Pringle instability; where the wave is trapped between the inner and outer radii of a disc or torus.
References
Further reading
Astrophysics | Rossby wave instability | [
"Physics",
"Astronomy"
] | 401 | [
"Astronomical sub-disciplines",
"Astrophysics"
] |
42,352,871 | https://en.wikipedia.org/wiki/Hidden%20states%20of%20matter | A hidden state of matter is a state of matter which cannot be reached under ergodic conditions, and is therefore distinct from known thermodynamic phases of the material. Examples exist in condensed matter systems, and are typically reached by the non-ergodic conditions created through laser photo excitation.
Short-lived hidden states of matter have also been reported in crystals using lasers. Recently a persistent hidden state was discovered in a crystal of Tantalum(IV) sulfide (TaS2), where the state is stable at low temperatures.
A hidden state of matter is not to be confused with hidden order, which exists in equilibrium, but is not immediately apparent or easily observed.
Using ultrashort laser pulses impinging on solid state matter, the system may be knocked out of equilibrium so that not only are the individual subsystems out of equilibrium with each other but also internally. Under such conditions, new states of matter may be created which are not otherwise reachable under equilibrium, ergodic system evolution.
Such states are usually unstable and decay very rapidly, typically in nanoseconds or less. The difficulty is in distinguishing a genuine hidden state from one which is simply out of thermal equilibrium.
Probably the first instance of a photoinduced state is described for the organic molecular compound TTF-CA, which turns from neutral to ionic species as a result of excitation by laser pulses. However, a similar transformation is also possible by the application of pressure, so strictly speaking the photoinduced transition is not to a hidden state under the definition given in the introductory paragraph. A few further examples are given in the literature.
Photoexcitation has been shown to produce persistent states in vanadates and manganite materials,
leading to filamentary paths of a modified charge ordered phase which is sustained by a passing current. Transient superconductivity was also reported in cuprates.
A photoexcited transition to an H state
A hypothetical schematic diagram for the transition to an H state by photoexcitation is shown in the Figure. An absorbed photon excites an electron from the ground state G to an excited state E (red arrow). State E rapidly relaxes via Franck-Condon relaxation to an intermediate locally reordered state I. Through interactions with others of its kind, this state collectively orders to form a macroscopically ordered metastable state H, further lowering its energy as a result. The new state has a broken symmetry with respect to the G or E state, and may also involve further relaxation compared to the I state. The barrier E_B prevents state H from reverting to the ground state G. If the barrier is sufficiently large compared to the thermal energy k_BT, where k_B is the Boltzmann constant, the H state can be stable indefinitely.
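As a rough numerical illustration of the barrier argument above (a sketch only: the Arrhenius escape-rate form, the attempt time, and the barrier height are assumptions introduced here for illustration and are not taken from the article):

```python
import numpy as np

k_B = 8.617e-5      # Boltzmann constant [eV/K]
tau_0 = 1e-12       # assumed attempt time [s], of the order of a lattice-vibration period

def lifetime(E_B, T):
    """Arrhenius-type estimate of the metastable H-state lifetime, tau = tau_0 * exp(E_B / (k_B * T))."""
    return tau_0 * np.exp(E_B / (k_B * T))

# Assumed barrier of 0.5 eV, evaluated at room temperature and at liquid-nitrogen temperature:
for T in (300.0, 77.0):
    print(f"T = {T:5.1f} K  ->  estimated lifetime ~ {lifetime(0.5, T):.2e} s")
```

The steep temperature dependence illustrates why a barrier that permits rapid decay at room temperature can make the same hidden state effectively permanent at low temperature.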
References
Condensed matter physics
Engineering thermodynamics
Phases of matter | Hidden states of matter | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 591 | [
"Engineering thermodynamics",
"Phases of matter",
"Materials science",
"Thermodynamics",
"Condensed matter physics",
"Mechanical engineering",
"Matter"
] |
38,137,594 | https://en.wikipedia.org/wiki/David%20C.%20Muddiman | David C. Muddiman is an American chemist and distinguished professor of chemistry at North Carolina State University in Raleigh, North Carolina. His research is focused on developing innovative tools for mass spectrometry based proteomics, metabolomics, and glycomics as well as novel imaging mass spectrometry ionization methods.
He received his B.S. in Chemistry with a Minor in Business (1985) and a Ph.D. from the University of Pittsburgh in Bioanalytical Chemistry, and he carried out a post-doctoral fellowship at the Pacific Northwest National Laboratory (1995–1997). His independent academic career prior to his appointment at NC State University included positions at Virginia Commonwealth University (1997–2002) and the Mayo Clinic College of Medicine (2002–2005).
He has received several awards during his career including the Arthur F. Findeis Award from the American Chemical Society (2004), the NC State Alumni Association Outstanding Research Award (2009), and was the recipient of the Biemann Medal (2010).
References
External links
Current Research Page at NC State University
Living people
North Carolina State University faculty
University of Pittsburgh alumni
Year of birth missing (living people)
Place of birth missing (living people)
21st-century American chemists
Mass spectrometrists | David C. Muddiman | [
"Physics",
"Chemistry"
] | 260 | [
"Biochemists",
"Mass spectrometry",
"Spectrum (physical sciences)",
"Mass spectrometrists"
] |
38,138,287 | https://en.wikipedia.org/wiki/Beta%20function%20%28accelerator%20physics%29 | The beta function in accelerator physics is a function related to the transverse size of the particle beam at the location s along the nominal beam trajectory.
It is related to the transverse beam size as follows:

$$\sigma(s) = \sqrt{\epsilon\,\beta(s)}$$

where
$s$ is the location along the nominal beam trajectory
the beam is assumed to have a Gaussian shape in the transverse direction
$\sigma(s)$ is the width parameter of this Gaussian
$\epsilon$ is the RMS geometrical beam emittance, which is normally constant along the trajectory when there is no acceleration
Typically, separate beta functions are used for two perpendicular directions in the plane transverse to the beam direction (e.g. horizontal and vertical directions).
The beta function is one of the Courant–Snyder parameters (also called Twiss parameters).
Beta star
The value of the beta function at an interaction point is referred to as beta star.
The beta function is typically adjusted to have a local minimum at such points (in order to
minimize the beam size and thus maximise the interaction rate). Assuming that this point is
in a drift space, one can show that the evolution of the beta function around the
minimum point is given by:

$$\beta(z) = \beta^{*} + \frac{z^{2}}{\beta^{*}}$$

where $z$ is the distance along the nominal beam direction from the minimum point.
This implies that the smaller the beam size at the interaction
point, the faster the rise of the beta function (and thus the beam size) when going away from the interaction point.
In practice, the aperture of the beam line elements (e.g. focusing magnets) around the interaction point
limit how small beta star can be made.
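A minimal numerical sketch of the two relations above (the emittance and beta-star values are illustrative assumptions, not those of any particular accelerator):

```python
import numpy as np

eps = 3e-9        # assumed RMS geometric emittance [m*rad]
beta_star = 0.5   # assumed beta function at the interaction point [m]

def beta(z):
    """Beta function in a drift space around its minimum, beta* (z in metres)."""
    return beta_star + z**2 / beta_star

def sigma(z):
    """RMS transverse beam size, sigma = sqrt(eps * beta)."""
    return np.sqrt(eps * beta(z))

for z in (0.0, 0.5, 1.0, 2.0):   # distance from the interaction point [m]
    print(f"z = {z:3.1f} m   beta = {beta(z):5.2f} m   sigma = {sigma(z) * 1e6:6.1f} um")
```

The output shows the quadratic growth of the beta function, and hence of the beam size, away from the interaction point; making beta* smaller reduces the beam size at z = 0 but accelerates this growth, which is the trade-off described above.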
References
Accelerator physics | Beta function (accelerator physics) | [
"Physics"
] | 312 | [
"Applied and interdisciplinary physics",
"Accelerator physics",
"Experimental physics"
] |
38,145,479 | https://en.wikipedia.org/wiki/Jaynes%E2%80%93Cummings%E2%80%93Hubbard%20model | The Jaynes–Cummings–Hubbard (JCH) model is a many-body quantum system modeling the quantum phase transition of light. As the name suggests, the Jaynes–Cummings–Hubbard model is a variant on the Jaynes–Cummings model; a one-dimensional JCH model consists of a chain of N coupled single-mode cavities, each with a two-level atom. Unlike in the competing Bose–Hubbard model, Jaynes–Cummings–Hubbard dynamics depend on photonic and atomic degrees of freedom and hence require strong-coupling theory for treatment. One method for realizing an experimental model of the system uses circularly-linked superconducting qubits.
History
The combination of Hubbard-type models with Jaynes-Cummings (atom-photon) interactions near the photon blockade regime originally appeared in three, roughly simultaneous papers in 2006.
All three papers explored systems of interacting atom-cavity systems, and shared much of the essential underlying physics. Nevertheless, the term Jaynes–Cummings–Hubbard was not coined until 2008.
Properties
According to mean-field theory predictions of its phase diagram, the JCH model should exhibit Mott insulator and superfluid phases.
Hamiltonian
The Hamiltonian of the JCH model is ($\hbar = 1$):

$$H = \sum_{n=1}^{N} \left[ \omega_c\, a_n^\dagger a_n + \omega_a\, \sigma_n^+ \sigma_n^- \right] + \beta \sum_{n=1}^{N} \left( \sigma_n^+ a_n + \sigma_n^- a_n^\dagger \right) - \kappa \sum_{n=1}^{N} \left( a_{n+1}^\dagger a_n + a_n^\dagger a_{n+1} \right)$$

where $\sigma_n^{\pm}$ are Pauli (raising and lowering) operators for the two-level atom at the n-th cavity and $a_n^\dagger$ ($a_n$) creates (annihilates) a photon in that cavity. Here $\kappa$ is the tunneling rate between neighboring cavities, and $\beta$ is the vacuum Rabi frequency which characterizes the photon-atom interaction strength. The cavity frequency is $\omega_c$ and the atomic transition frequency is $\omega_a$. The cavities are treated as periodic, so that the cavity labelled by n = N+1 corresponds to the cavity n = 1. Note that the model exhibits quantum tunneling; this process is similar to the Josephson effect.
Defining the photonic and atomic excitation number operators as $\hat{N}_{\mathrm{ph}} = \sum_{n} a_n^\dagger a_n$ and $\hat{N}_{\mathrm{at}} = \sum_{n} \sigma_n^+ \sigma_n^-$, the total number of excitations is a conserved quantity, i.e., $\left[\hat{N}_{\mathrm{ph}} + \hat{N}_{\mathrm{at}},\, H\right] = 0$.
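A minimal numerical sketch of the Hamiltonian above for a small periodic chain, written with plain NumPy in a truncated photon Fock space (the parameter values are illustrative assumptions only):

```python
import numpy as np

def jch_hamiltonian(n_cav=3, n_max=2, omega_c=1.0, omega_a=1.0, beta=0.1, kappa=0.05):
    """JCH Hamiltonian for a periodic chain of n_cav cavities, each with at most
    n_max photons and one two-level atom (hbar = 1)."""
    a = np.diag(np.sqrt(np.arange(1, n_max + 1)), k=1)   # photon annihilation operator
    sm = np.array([[0.0, 1.0], [0.0, 0.0]])              # atomic lowering operator sigma^-
    id_ph, id_at = np.eye(n_max + 1), np.eye(2)
    d = (n_max + 1) * 2                                   # local (photon x atom) dimension

    def embed(local_op, site):
        """Place a local (photon x atom) operator on one site, identity elsewhere."""
        out = np.eye(1)
        for s in range(n_cav):
            out = np.kron(out, local_op if s == site else np.eye(d))
        return out

    H = np.zeros((d ** n_cav, d ** n_cav), dtype=complex)
    for n in range(n_cav):
        num_ph = np.kron(a.conj().T @ a, id_at)           # a_n^dag a_n
        num_at = np.kron(id_ph, sm.conj().T @ sm)         # sigma_n^+ sigma_n^-
        jc = np.kron(a, sm.conj().T)                      # a_n sigma_n^+
        H += omega_c * embed(num_ph, n) + omega_a * embed(num_at, n)
        H += beta * (embed(jc, n) + embed(jc, n).conj().T)
        hop = embed(np.kron(a.conj().T, id_at), n) @ embed(np.kron(a, id_at), (n + 1) % n_cav)
        H += -kappa * (hop + hop.conj().T)                # photon hopping between neighbours
    return H

H = jch_hamiltonian()
print(H.shape, np.allclose(H, H.conj().T))                # dimension and Hermiticity check
```

Because the total excitation number is conserved, the matrix can be diagonalized block by block within each excitation sector, which keeps the problem tractable for somewhat longer chains.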
Two-polariton bound states
The JCH Hamiltonian supports two-polariton bound states when the photon-atom interaction is sufficiently strong. In particular, the two polaritons associated with the bound states exhibit a strong correlation such that they stay close to each other in position space. This process is similar to the formation of a bound pair of repulsive bosonic atoms in an optical lattice.
Further reading
D. F. Walls and G. J. Milburn (1995), Quantum Optics, Springer-Verlag.
References
Quantum optics | Jaynes–Cummings–Hubbard model | [
"Physics"
] | 505 | [
"Quantum optics",
"Quantum mechanics"
] |
38,146,835 | https://en.wikipedia.org/wiki/Contagium%20vivum%20fluidum | Contagium vivum fluidum (Latin: "contagious living fluid") was a phrase first used to describe a virus, and underlined its ability to slip through the finest ceramic filters then available, giving it almost liquid properties. Martinus Beijerinck (1851–1931), a Dutch microbiologist and botanist, first used the term when studying the tobacco mosaic virus, becoming convinced that the virus had a liquid nature.
The word "virus", from the Latin for "poison", was originally used to refer to any infectious agent, and gradually became used to refer to infectious particles. Bacteria could be seen under microscope, and cultured on agar plates. In 1890, Louis Pasteur declared "tout virus est un microbe": "all infectious diseases are caused by microbes".
In 1892, Dmitri Ivanovsky discovered that the cause of tobacco mosaic disease could pass through Chamberland's porcelain filter. Infected sap, passed through the filter, retained its infectious properties. Ivanovsky thought the disease was caused by an extremely small bacterium, too small to see under the microscope, which secreted a toxin. It was this toxin, he thought, which passed through the filter. However, he was unable to culture the purported bacterium.
In 1898, Beijerinck independently found the cause of the disease could pass through porcelain filters. He disproved Ivanovsky's toxin theory by demonstrating infection in series. He found that although he could not culture the infectious agent, it would diffuse through an agar gel. This diffusion inspired him to put forward the idea of a non-cellular "contagious living fluid", which he called a "virus". This was somewhere between a molecule and a cell.
Ivanovsky, irked that Beijerinck had not cited him, demonstrated that particles of ink could also diffuse through agar gel, thus leaving the particulate or fluid nature of the pathogen unresolved. Beijerinck's critics including Ivanovsky argued that the idea of a "contagious living fluid" was a contradiction in terms. However, Beijerinck only used the phrase "contagium vivum fluidum" in the title of his paper, using the word "virus" throughout.
Other scientists began to identify other diseases caused by infectious agents which could pass through a porcelain filter. These became known as "filterable viruses", and later just "viruses". In 1923 Edmund Beecher Wilson wrote "We have now arrived at a borderland, where the cytologist and the colloidal chemist are almost within hailing distance of each other". In 1935 American biochemist and virologist Wendell Meredith Stanley was able to crystallize and isolate the tobacco mosaic virus. Stanley found the crystals were effectively living chemicals: they could be dissolved and would regain their infectious properties.
The tobacco mosaic virus was the first virus to be photographed with an electron microscope, in 1939. Over the second half of the twentieth century, more than 2,000 virus species infecting animals, plants and bacteria were discovered.
References
External links
A Contagium vivum fluidum as the Cause of the Mosaic Diseases of Tobacco Leaves – Martinus W. Beijerinck (1899)
Viruses
Martinus Beijerinck
Latin words and phrases
Biology in the Netherlands | Contagium vivum fluidum | [
"Biology"
] | 677 | [
"Viruses",
"Tree of life (biology)",
"Microorganisms"
] |
38,150,904 | https://en.wikipedia.org/wiki/Primordial%20element%20%28algebra%29 | In algebra, a primordial element is a particular kind of a vector in a vector space.
Definition
Let $V$ be a vector space over a field $\mathbb{F}$ and let $(e_i)_{i \in I}$ be an $I$-indexed basis of vectors for $V.$
By the definition of a basis, every vector $v \in V$ can be expressed uniquely as
$$v = \sum_{i \in I} c_i e_i$$
for some $I$-indexed family of scalars $(c_i)_{i \in I}$ where all but finitely many $c_i$ are zero.
Let
$$\operatorname{supp}(v) = \{\, i \in I : c_i \neq 0 \,\}$$
denote the set of all indices $i$ for which the expression of $v$ has a nonzero coefficient.
Given a subspace $W$ of $V,$ a nonzero vector $v \in W$ is said to be primordial if it has both of the following two properties:
$\operatorname{supp}(v)$ is minimal among the sets $\operatorname{supp}(w),$ where $w \in W$ and $w \neq 0,$ and
$c_i = 1$ for some index $i.$
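A small worked illustration of the support-minimality condition (using the notation reconstructed above; the space, basis, and subspace here are chosen purely for illustration):

$$V = F^{3}, \qquad \text{basis } (e_1, e_2, e_3), \qquad W = \operatorname{span}\{\, e_1 + e_2,\; e_2 + e_3 \,\}.$$

$$\operatorname{supp}(e_1 + e_2) = \{1,2\}, \quad \operatorname{supp}(e_2 + e_3) = \{2,3\}, \quad \operatorname{supp}(e_1 - e_3) = \{1,3\}, \quad \operatorname{supp}(e_1 + 2e_2 + e_3) = \{1,2,3\}.$$

No nonzero vector of $W$ is supported on a single index, so the first three supports are minimal and the corresponding vectors satisfy the first condition above, while $\{1,2,3\}$ is not minimal because it properly contains $\{1,2\}.$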
References
Vectors (mathematics and physics) | Primordial element (algebra) | [
"Mathematics"
] | 132 | [
"Mathematical structures",
"Vector spaces",
"Space (mathematics)"
] |
53,848,852 | https://en.wikipedia.org/wiki/Klaus%20M%C3%B8lmer | Klaus Mølmer is a Danish physicist who is currently a professor at the Niels Bohr Institute of the University of Copenhagen. From 2000 to 2022, he was a professor of physics at the University of Aarhus.
In 1999, Mølmer and Anders Sørensen proposed the Mølmer–Sørensen gate for trapped ion quantum computing, which was one of the first proposals for the implementation of a multi-qubit gate on a physical system.
Mølmer was awarded the status of Fellow in the American Physical Society, after he was nominated by their Division of Atomic, Molecular & Optical Physics in 2008, for his outstanding and insightful contributions to theoretical quantum optics, quantum information science and quantum atom optics, including the development of novel computational methods to treat open systems in quantum mechanics and theoretical proposals for the quantum logic gates with trapped ions.
Awards
Mølmer has been awarded several prizes, some of which are listed below.
Dirac Medal, University of New South Wales, 2023.
Villum Kann Rasmussens Årslegat 2012.
Fellow of the American Physical Society (APS), 2008.
The EliteForsk Award of the Ministry of Science, Technology and Innovation of Denmark, 2007.
Rigmor og Carl Holst-Knudens Videnskabspris (University of Aarhus biennial Research Award) 2004.
Biennial Award of the Danish Physical Society, NKT's forsker pris. 1999.
Annual Award of the Danish Optical Society, DOPS-prisen. 1998.
Rømer Fondets Legat. 1995.
PhD Award from the Danish Academy for Natural Sciences. 1993.
See also
Entanglement depth
Quantum jump method
Mølmer–Sørensen gate
References
Fellows of the American Physical Society
21st-century Danish physicists
Living people
Year of birth missing (living people)
University of Copenhagen
Quantum optics
Quantum physicists | Klaus Mølmer | [
"Physics"
] | 384 | [
"Quantum optics",
"Quantum physicists",
"Quantum mechanics"
] |
53,849,239 | https://en.wikipedia.org/wiki/Cary%2014%20Spectrophotometer | The Cary Model 14 UV-VIS Spectrophotometer was a double beam recording spectrophotometer designed to operate over the wide spectral range of ultraviolet, visible and near infrared wavelengths (UV/Vis/NIR). This included wavelengths ranging from 185 nanometers to 870 nanometers. (The Cary Model 14B, almost identical in exterior appearance, measured wavelengths from .5 to 6.0 microns.)
The Cary 14 spectrophotometer was first produced in 1954 by the Applied Physics Corporation, which later was named the Cary Instruments Corporation after co-founder Howard Cary. The instrument was a successor to the Cary 11, which was the first commercially available recording UV/Vis spectrophotometer. It was produced until 1980, and refurbished models can still be obtained.
Design and use
The double beam design of the Cary 14 provided rapid, simplified analysis by simultaneously measuring the transmittance of both the sample and the reference over the entire spectral range.
The optics of the Cary 14 were a key feature.
The double monochromator in particular was described and patented.
The Cary 14 was one of the first instruments to incorporate high-quality gratings into its monochromators.
To take readings in the ultraviolet or visible range, either a deuterium or tungsten lamp was used, with the light focussed into the entrance slit. The light passed through the first monochromator, which used a 30° Littrow prism, through the intermediate slit, and then into the second monochromator, which used an echelette grating with 6000 grooves/cm, to the exit slit. The beam from the monochromator then reflected from a rotating semicircular mirror and beam chopper, sending the light alternately into compartments for the sample and the reference, separated by dark periods. The beams from the sample and reference alternately registered on the single photomultiplier (pmt) detector, with the pmt output in the dark intervals subtracted from both measurements. The measured absorption or transmittance was calculated from the difference in the dark-corrected sample and reference measurements.
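A minimal sketch of the dark-corrected double-beam bookkeeping described above (the signal values are made-up illustrations, and the ratio-plus-logarithm form is one common way of obtaining transmittance and absorbance rather than a description of the instrument's actual electronics):

```python
import math

# Illustrative (made-up) detector readings from one chopper cycle:
sample_signal = 0.412      # photomultiplier output with the beam through the sample
reference_signal = 0.655   # output with the beam through the reference
dark_signal = 0.005        # output during the dark interval between the two

# Subtract the dark reading from both measurements, then form the ratio:
transmittance = (sample_signal - dark_signal) / (reference_signal - dark_signal)
absorbance = -math.log10(transmittance)
print(f"T = {transmittance:.3f}, A = {absorbance:.3f}")
```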
To take readings in the near infrared, a tungsten ribbon-filament lamp was used instead of the photomultiplier tube. An additional mirror was used to direct the light beam onto a lead sulfide photoconductive cell and reverse the light's path through the monochromators.
The instrument had a built-in chart recorder for displaying the analog signal on paper. By using a double-pen mechanism, an effective chart width of 20 inches could be obtained. The original hydrogen discharge lamp was water cooled. Samples and reference solvent were held in the sample chambers by a variety of means, most typically using 1 centimeter pathlength cuvettes made of glass or quartz.
By combining the Littrow prism and the echelette grating, the Cary design minimized noise and interference (stray light) while obtaining high resolution measurements over a very wide dynamic range.
When used in double-beam mode, the instrument was almost entirely free from Wood’s anomalies and other artifacts. A series of publications in the scholarly literature validated the optical quality of the Cary 14, including benchmarking with the Beckman DU Spectrophotometer, which was another leading spectrophotometer of the time.
The instrument was widely used for studies of chemical bonding, quantitative analysis, and rates of chemical reaction. The use of the instrument generally necessitated that substances being studied be in the solution state. Integrating sphere accessories were available which enabled diffuse reflectance measurements.
Production
The Cary 14 was produced until 1980. Its selling price in 1960 was approximately US $20,000. Cary Instruments replaced production of the Cary 14 with the Cary 17 beginning in 1970. Cary recording spectrophotometers, including the Cary 14, were contemporary to the single beam, non-scanning Spectronic 20 spectrophotometer. These instruments were complementary and were used in academic and analytical settings through the 1950s, 1960s, and 1970s.
Although the Cary 14 is out of production, refurbished versions of it that retain the original optics but with an air cooled deuterium lamp, a lead-sulphide IR detector, modernized, digital electronics and recording, automatic lamp and detector change at selected wavelengths, extensive accessories, and flexible operation automation that includes the ability to integrate the instrument into a larger system are commercially available as of 2017. Versions of modernized Cary 14 spectrophotometers extend the wavelength range to 2500 nanometers in the near infrared spectrum.
References
Spectrometers
Scientific instruments | Cary 14 Spectrophotometer | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 939 | [
"Spectrum (physical sciences)",
"Scientific instruments",
"Measuring instruments",
"Spectrometers",
"Spectroscopy"
] |
53,855,890 | https://en.wikipedia.org/wiki/Wah%20Chiu | Wah Chiu () is a Hong Kong-born American biophysicist, currently the Wallenberg-Bienenstock Chair Professor in the department of bioengineering, department of microbiology and immunology and the Photon Science Directorate of SLAC National Accelerator Laboratory at Stanford University. He is a Stanford Bio-X affiliated Faculty. He was formerly the Distinguished Service Professor and the Alvin Romansky Chair Professor at Baylor College of Medicine where he was the founding director of the National Center for Biomolecular Imaging, and has been active in the new cryo-EM techniques allowing much higher-resolution structures of large molecular complexes such as viruses and chaperonin.
Biography
Chiu was born in Hong Kong, and attended Pui Ching Middle School. He moved to the U.S. to study at University of California, Berkeley, where he received both his B.S. (1969) and Ph.D. (1975). He was elected an academician of Taiwan's Academia Sinica in 2008 and a member of the United States National Academy of Sciences in 2012. He has also received several honors including the Distinguished Science Award from the Microscopy Society of America and the Honorary Doctorate of Philosophy from the University of Helsinki, Finland in 2014.
References
Year of birth missing (living people)
Living people
Baylor College of Medicine faculty
American biochemists
University of California, Berkeley alumni
Hong Kong emigrants to the United States
Members of Academia Sinica
Members of the United States National Academy of Sciences
Chinese biochemists
American people of Chinese descent
Hong Kong scientists
Structural biologists | Wah Chiu | [
"Chemistry"
] | 318 | [
"Structural biologists",
"Structural biology"
] |
60,202,593 | https://en.wikipedia.org/wiki/Pole%20mass | In quantum field theory, the pole mass of an elementary particle is the limiting value of the rest mass of a particle, as the energy scale of measurement increases.
Running mass
In quantum field theory, quantities like coupling constant and mass "run" with the energy scale of high energy physics. The running mass of a fermion or massive boson depends on the energy scale at which the observation occurs, in a way described by a renormalization group equation (RGE) and calculated by a renormalization scheme such as the on-shell scheme or the minimal subtraction scheme. The running mass refers to a Lagrangian parameter whose value changes with the energy scale at which the renormalization scheme is applied. A calculation, typically done by a computerized algorithm intractable by paper calculations, relates the running mass to the pole mass. The algorithm typically relies on a perturbative calculation of the self energy.
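As a rough illustration of a running mass (not of the full pole-mass matching calculation the article refers to), the sketch below evolves a quark-like mass with the one-loop renormalization group equations in the minimal subtraction scheme; the one-loop coefficients are standard, but the input values, the fixed flavour number, and the neglect of higher orders and threshold effects are simplifying assumptions made here:

```python
import numpy as np

M_Z = 91.1876   # reference scale [GeV]
N_F = 5         # assumed number of active quark flavours

def alpha_s(mu, alpha_mz=0.118):
    """One-loop running strong coupling, normalized to alpha_s(M_Z) = alpha_mz."""
    beta0 = 11.0 - 2.0 * N_F / 3.0
    return alpha_mz / (1.0 + beta0 * alpha_mz / (2.0 * np.pi) * np.log(mu / M_Z))

def running_mass(mu, m_mz):
    """One-loop running mass m(mu) given m(M_Z) = m_mz; the exponent gamma0/(2*beta0) is 12/23 for 5 flavours."""
    beta0 = 11.0 - 2.0 * N_F / 3.0
    gamma0 = 8.0   # one-loop coefficient of the quark-mass anomalous dimension
    return m_mz * (alpha_s(mu) / alpha_s(M_Z)) ** (gamma0 / (2.0 * beta0))

# Assumed illustrative input: m(M_Z) = 2.9 GeV for a bottom-like quark.
for mu in (10.0, M_Z, 500.0, 1000.0):
    print(f"mu = {mu:7.1f} GeV   m(mu) = {running_mass(mu, 2.9):.3f} GeV")
```

The running mass decreases slowly as the scale increases; relating such a scheme-dependent running mass to the pole mass is the perturbative self-energy calculation referred to above.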
See also
Bare mass
Relativistic Breit–Wigner distribution
Infraparticle
References
Quantum field theory
Renormalization group | Pole mass | [
"Physics"
] | 216 | [
"Quantum field theory",
"Physical phenomena",
"Critical phenomena",
"Quantum mechanics",
"Renormalization group",
"Statistical mechanics",
"Quantum physics stubs"
] |
60,210,421 | https://en.wikipedia.org/wiki/Helena%20Nader | Helena Bonciani Nader (born November 5, 1947) is a Brazilian biomedical scientist based at the Federal University of São Paulo. She served as president of the Brazilian Society for the Advancement of Science from 2011 to 2017. She works in glycobiology, specialising in the characterisation of proteoglycans. She is a member of The World Academy of Sciences.
Early life and education
Nader was born in São Paulo, to a family of Syrian, Lebanese and Italian descent. She spent her childhood in São Paulo and Curitiba. She was a high school student in the United States. She studied biomedical sciences at the Federal University of São Paulo and graduated with a bachelor's degree in 1970. She simultaneously earned a bachelor's degree in education at the University of São Paulo, before beginning her doctoral studies in molecular biology. She was supervised by Carl Von Peter Dietrich. Nader earned a doctorate at the Federal University of São Paulo in 1974. She was a postdoctoral researcher at the University of Southern California.
Research and career
Nader returned to the Federal University of São Paulo, where she was made a professor in 1989. Nader studies glycobiology, investigating proteoglycans, a complex class of glycoconjugates. She studies how proteoglycans such as heparan sulfate are involved in hemostasis. Her work involves nuclear magnetic resonance spectroscopy and fragment analysis. She holds visiting professorships at Loyola University Medical Center and The W. Alton Jones Cell Science Center.
Nader is an advocate for equality, diversity and inclusion in science and engineering. She has been a member of the Brazilian Society for the Advancement of Science since 1969, and took part in resistance to the military dictatorship. She was appointed president of the Brazilian Society for the Advancement of Science in 2011. She was the third woman to hold the position, after Carolina Bori and Glaci Zancan. She held the position for three terms, handing over the position in 2017. During her time as president, she encouraged the allocation of oil royalties to science and technology.
On March 29, 2022, she was elected president of the Brazilian Academy of Sciences.
Awards and honours
2002 Class Commander of the National Order of Scientific Merit
2005 Honorary Professorship at the Federal University of Rio de Janeiro
2008 Grand Cross Class of the National Order of Scientific Merit
2009 President of the Brazilian Society of Biochemistry and Molecular Biology
2010 Moacyr Álvaro Gold Medal
2011-2017 President of the Brazilian Society for the Advancement of Science
2013 Brazilian Navy Merit Medal Tamandaré
2016 National Nuclear Energy Commission, Carneiro Felippe Medal
2018 Brazilian Society of Cell Biology, Classics in Cell Biology
2018 Federation of Experimental Biology Societies, Science Service Award
2020 Prêmio Almirante Álvaro Alberto, CNPq
Books
2015 Sulfated Polysaccharides (Biochemistry and Molecular Biology in the Post Genomic Era)
References
21st-century Brazilian women scientists
Brazilian biochemists
1947 births
Living people
Women biologists
Women biochemists
20th-century Brazilian scientists
20th-century Brazilian women scientists
21st-century Brazilian biologists
21st-century women scientists
People from São Paulo
University of São Paulo alumni
Academic staff of the Federal University of São Paulo
Brazilian people of Syrian descent
20th-century Brazilian biologists | Helena Nader | [
"Chemistry"
] | 653 | [
"Biochemists",
"Women biochemists"
] |
60,212,212 | https://en.wikipedia.org/wiki/Medical%20device%20design | Due to the many regulations in the industry, the design of medical devices presents significant challenges from both engineering and legal perspectives.
Medical device design in the United States
The United States medical device industry is one of the largest markets globally, exceeding $110 billion annually. In 2012 it represented 38% of the global market and more than 6500 medical device companies exist nationwide. These companies are primarily small-scale operations with fewer than 50 employees. Most medical device companies are located in the states of California, Florida, New York, Pennsylvania, Michigan, Massachusetts, Illinois, Minnesota, and Georgia. Washington, Wisconsin, and Texas also have high employment levels in the medical device industry. The industry is divided into branches: Electro-Medical Equipment, Irradiation Apparatuses, Surgical and Medical Instruments, Surgical Appliances and Supplies, and Dental Equipment and Supplies.
FDA regulation and oversight
Medical devices are defined by the US Food and Drug Administration (FDA) as any object or component used in diagnosis, treatment, prevention, or cure of medical conditions or diseases, or affects body structure or function through means other than chemical or metabolic reaction in humans or animals. This includes all medical tools, excluding drugs, ranging from tongue depressors to Computerized Axial Tomography (CAT) scanners to radiology treatments. Because of the wide variety of equipment classified as medical devices, the FDA has no single standard to which a specific device must be manufactured; instead they have created an encompassing guide that all manufacturers must follow. Manufacturers are required to develop comprehensive procedures within the FDA framework in order to produce a specific device to approved safety standards.
Pathway to clearance or approval
The US FDA allows for three regulatory pathways that allow the marketing of medical devices. The first is self-registration. The second, and by far the most common is the so-called 510(k) clearance process (named after the Food, Drug, and Cosmetic Act section that describes the process). A new medical device that can be demonstrated to be "substantially equivalent" to a previously legally marketed device can be "cleared" by the FDA for marketing as long as the general and special controls as described below are met. The vast majority of new medical devices (99%) enter the marketplace via this process. The 510(k) pathway rarely requires clinical trials.
The third regulatory pathway for new medical devices is the Premarket Approval process (PMA), described below, which is similar to the pathway for a new drug approval. Typically, clinical trials are required for this premarket approval pathway.
The FDA process between drugs and devices is different, with most devices requiring clearance for the market launch, not approval. Approval is required for the PMA process of Class III devices.
Timeline
In comparison to a device, a drug takes up to nine years longer to reach the market. It can take drugs up to twelve years to be granted FDA approval. In general, for class I, II and III devices, from the design process until the final FDA market clearance, it can take anywhere from three to seven years.
Requirements for testing
Class I
Class I are low risk of illness or injury devices.
Around seventy-five percent of Class I devices, and a small number of class II devices qualify for exempt status. This means there is no requirement for safety data.
Class II
Class II are devices with moderate risk.
Class I and Class II devices are subject to less stringent regulatory processes than Class III devices.
Class I or II devices are focused on registration, manufacturing, and labeling. In general they do not require clinical data.
Most class II devices go through a PMN (a 510[k]) clearance. The PMN will not require stringent clinical trial evidence.
Class III
Class III are devices which support or sustain human life, are of substantial importance in preventing impairment of human health, or present a potential, unreasonable risk of illness or injury.
All new devices by default are placed in the class III category. The FDA then requires these devices to undergo stringent clinical reviews. For these reviews, the FDA require some type of clinical evidence or trials. If the sponsor believes the device is low to moderate risk, the sponsor may apply to change this default classification. The FDA, upon review may then reclassify these devices as de novo. De novo devices require a less rigorous FDA regulatory process and the FDA treats de novo devices like class I and II devices.
Class III devices with predicates
Class III devices with predicates (devices with a substantially equivalent device already on the market) are reclassified as class I or II devices. This is done through a 513(g) pathway. Class III devices reclassified as class I or II are then subject to less stringent testing requirements. As reclassified class II devices they would require a PMN (510(k)) process, not the PMA process.
Regulatory Controls
General Controls
General controls include provisions that relate to:
adulteration
misbranding
device registration and listing
premarket notification
banned devices
notification, including repair, replacement, or refund
records and reports
restricted devices
good manufacturing practices
Special Controls
Special controls were established for cases in which patient safety and product effectiveness are not fully guaranteed by general controls. Special controls may include special labeling requirements, mandatory performance standards and postmarket surveillance. Special controls are specific to each device and classification guides are available for various branches of medical devices.
Premarket Approval
Premarket Approval is a scientific review to ensure the device's safety and effectiveness, in addition to the general controls of Class I.
Risk Classification
Under the Food, Drug, and Cosmetic Act, the U.S. Food and Drug Administration recognizes three classes of medical devices, based on the level of control necessary to assure safety and effectiveness. The classification procedures are described in the Code of Federal Regulations, Title 21, part 860 (usually known as 21 CFR 860). Devices are classified into three brackets:
Class I: General Controls
Class II: General Controls and Special Controls
Class III: General Controls and Premarket Approval
Regulations differ by class based on their complexity or the potential hazards in the event of malfunction. Class I devices are the least likely to cause major bodily harm or death in the event of failure, and are subjected to less stringent regulations than are devices categorized as Class II or Class III.
In the regulation process, 2021 statistics showed: 47% of devices were class I, 43% were class II and 10% were class III.
Class I: General controls
Class I devices are subject to the least regulatory control. Class I devices are subject to "General Controls" as are Class II and Class III devices.
General controls are the only controls regulating Class I medical devices. They state that Class I devices are not intended to be:
For use in supporting or sustaining life;
Of substantial importance in preventing impairment to human life or health; and
May not present an unreasonable risk of illness or injury.
Most Class I devices are exempt from premarket notification and a few are also exempted from most good manufacturing practices regulations.
Examples of Class I devices include hand-held surgical instruments, (elastic) bandages, examination gloves, bed-patient monitoring systems, medical disposable bedding, and some prosthetics such as hearing aids.
Class II: General controls and special controls
Class II devices are those for which general controls alone cannot assure safety and effectiveness, and existing methods are available that provide such assurances. Devices in Class II are held to a higher level of assurance and subject to stricter regulatory requirements than Class I devices, and are designed to perform as indicated without causing injury or harm to patient or user. In addition to complying with general controls, Class II devices are also subject to special controls.
Examples of Class II devices include acupuncture needles, powered wheelchairs, infusion pumps, air purifiers, and surgical drapes.
A few Class II devices are exempt from the premarket notification.
Class III: General controls and premarket approval
A Class III device is one for which insufficient information exists to assure safety and effectiveness solely through the general or special controls sufficient for Class I or Class II devices. These devices are considered high-risk and are usually those that support or sustain human life, are of substantial importance in preventing impairment of human health, pose a potential, unreasonable risk of injury or illness, or are of great significance in preventative care. For these reasons, Class III devices require premarket approval.
Prior to marketing a Class III device, the rights-holder(s) or person(s) with authorized access must seek FDA approval. The review process may exceed six months for final determination of safety by an FDA advisory committee. Many Class III devices have established guidelines for Premarket Approval (PMA) and increasingly, must comply with unique device identifier regulations. However, with ongoing technological advances many Class III devices encompass concepts not previously marketed. These devices may not fit the scope of established device categories and do not yet have developed FDA guidelines.
Examples of Class III devices that require a premarket notification include implantable pacemaker, pulse generators, HIV diagnostic tests, automated external defibrillators, and endosseous implants.
Nanomanufacturing
Nanomanufacturing techniques provide a means of manufacturing cellular-scale medical devices (<100μm). They are particularly useful in the context of medical research, where cellular-scale sensors can be produced that provide high-resolution measurements of cellular-scale phenomena. Common techniques in the area are direct-write nanopatterning techniques such as dip-pen nanolithography, electron-beam photolithography and microcontact printing, directed self-assembly methods, and Functional Nanoparticle Delivery (NFP), where nanofountain probes deliver liquid molecular material that is drawn through nanopattern channels by capillary action.
Additive manufacturing
Additive manufacturing (AM) processes are a dominant mode of production for medical devices that are used inside the body, such as implants, transplants and prostheses, for their ability to replicate organic shapes and enclosed volumes that are difficult to fabricate. The inability of donation systems to meet the demand for organ transplantation in particular has led to the rise of AM in medical device manufacturing.
Biocompatibility
The largest issue in integrating AM techniques into medical device manufacturing is biocompatibility. These issues arise from the stability of 3D printed polymers in the body and the difficulty of sterilizing regions between printed layers. In addition to the use of primary cleaners and solvents to remove surface impurities, which are commonly isopropyl alcohol, peroxides, and bleach, secondary solvents must be used in succession to remove the cleaning chemicals applied before them, a problem that increases with the porosity of the material used. Commonly used biocompatible AM materials include nylon and tissue material from the host patient.
Cybersecurity
Many medical devices have either been successfully attacked or had potentially deadly vulnerabilities demonstrated, including both in-hospital diagnostic equipment and implanted devices including pacemakers and insulin pumps. On 28 December 2016 the US Food and Drug Administration released its recommendations, which are not legally enforceable, for how medical device manufacturers should maintain the security of Internet-connected devices.
References
Medical devices
Product design
Production and manufacturing by product | Medical device design | [
"Engineering",
"Biology"
] | 2,289 | [
"Product design",
"Design",
"Medical devices",
"Medical technology"
] |
45,644,877 | https://en.wikipedia.org/wiki/J.%20L.%20Eve%20Construction | J. L. Eve Construction was a civil engineering company from south London.
History
The company was formed on 8 February 1930 by John Leonard Eve (3 February 1887 - 25 June 1954) from Aveley in Essex.
He grew up at Cranham Hall in Cranham in Essex, now part of London, where his father, Richard Newland Eve, had lived from 1896. His mother was Elizabeth Mary Manning, daughter of Abraham Manning of Moor Hall in Rainham. His parents married on 5 June 1873 at Aveley church.
His father, himself the son of William Eve of Manor House in North Ockendon, died on 19 October 1917, aged 72. John Leonard Eve's first wife, Nancy Gill, died on 18 November 1937, with the funeral at Hornchurch parish church. He remarried, to Doris Matthews, in 1940, and a son, David, was born on 16 February 1945.
In 1924 he was appointed Chief Engineer for the river crossings of the Scottish area of the Central Electricity Board (CEB), which existed from 1926 to 1947. He worked with Robert Chandler-Brown. The CEB's eventual successor, the Central Electricity Generating Board (CEGB), came into existence in 1957. J. L. Eve left a son and a daughter.
Chain Home and National Grid
In the 1930s the company built steel-lattice towers for the new National Grid and for the Chain Home transmitters. The electrical cable was often supplied by Pirelli UK of Eastleigh in Hampshire (now Prysmian Group).
The Air Ministry had contacted the company to build two test radar transmitters, one on the south coast and one on Orkney. After 1939, the company extended this work to over fifty radar sites. In 1952 it built the first part of the supergrid, a 45-mile line from Tilbury to Elstree for the British Electricity Authority, operating at 275 kV instead of 132 kV and using towers 136 ft tall instead of 85 ft.
Ownership
From 1982 to 1988 it was known as Eve Construction, and it joined the Unlisted Securities Market in September 1986. It was later known as Eve Group plc from April 1988, then as Eve Group Ltd, and as Babcock Networks Ltd from 2004. It was bought by the Peterhouse Group plc in January 2000. In the late 1990s the chief executive was Alan Robertson, with finance director Christopher Wigg.
Babcock Networks, its successor, is situated off the M1 at Sherwood Park at Annesley, next to E.ON UK; its training base is at the former RAF Newton in Nottinghamshire.
On a Thursday in March 2003, at 2.10 pm, Prince Andrew, Duke of York, visited Eve Transcom at Sherwood Park in Annesley, Nottinghamshire, near junction 27 of the M1, later visiting Carlsbro at 3.20 pm.
Sponsorship
From 1982 to 2000, it sponsored the Surrey Championship (cricket), being replaced by Castle Lager.
Structure
It was based at Minster House on Plough Lane in Tooting, south of Summerstown on the B235 and north of Haydons Road railway station (on the A218).
Divisions
Later divisions of Eve Group were:
Eve Arclive, formed on 18 June 1976 - electrical contracting, later it became Eve Power
Trakway, later known as Eve Trakway and now known as Live (Trakway), is based in Bramley Vale and Doe Lea in Ault Hucknall near Glapwell in Derbyshire, off the A617 east of the Heath Interchange (M1 junction 29), and supplies crowd control barriers and temporary fencing. It is owned by Ashtead Group, which trades as A-Plant. In 2002 Eve Trakway built the Super Fortress security fence for the Glastonbury Festival. Eve Construction Trakway was originally at Lower Heyford, then moved to Sutton-in-Ashfield in the 1970s.
Eve Telecom
Eve Transcom, comprised
Eve Transmission - carried out construction and repair of transmission lines for the National Grid
Eve Cellular - by late 1999, it had built over 7,000 mobile phone base stations throughout the 1990s
Eve Engineering Design Services
Eve Structures
Products
It built structural steel fabricated buildings or structures.
Electrical substations at Bolney, Brockley, Longford in 1970, Capenhurst, and Sundon in 1961
Transmitters
Angus transmitting station near Forfar, north of Dundee, in October 1965
Bilsdale transmitting station on the North York Moors
Divis transmitting station (500 ft), built in the mid-1950s, carries television for eastern Northern Ireland
Durris transmitting station; 38-year-old Thomas Sutherland of Blairgowrie died in its construction on 24 October 1966, falling 175 ft from 300 ft up the mast; the company had a regional office in Edinburgh
The original Emley Moor steel-tube mast, which collapsed on 19 March 1969; the company also built the 50-ton, 180 ft steel lattice placed on top of the current structure in December 1970
Meldrum transmitting station (500 ft) on Core Hill, built in the mid-1950s, carries national radio in north-east Scotland
Selkirk transmitting station in 1961/62, which is 925 ft above sea level
Skelton Transmitting Station, the tallest structure in the UK at 365 metres; built in the war for clandestine broadcasts, it lies a few miles west of the M6, north of Penrith
Start Point transmitting station on the most southern point of the Devon coast, in the late 1930s
Stockland Hill transmitting station, in the east of Devon, towards Dorset, for the IBA in 1961 for 405-line b/w television
Tacolneston transmitting station, built for the new BBC East services; the site was for many years known as the Norwich television transmitter
Woofferton transmitting station, at Woofferton in the south of Shropshire, on the Herefordshire boundary, important in clandestine broadcasts in the Second World War
Powerlines
Aust Severn Powerline Crossing (488 ft tall) - the longest powerline crossing in the United Kingdom at 1700 m (5,310 ft) between towers (built around 1955)
275kV line from Beauly to Kintore, Aberdeenshire in 1960, for the North of Scotland Hydro-Electric Board
Llantarnam to Crumlin
Melksham - Bramley 275kV in 1958
Poplar - Brimsdown 132kV in 1950
Berkeley - Gloucester 132kV in 1952
Pennc - Round Oak 132kV in 1956
Tilbury - Basildon 132 kV in 1957
Drax - Eggborough-Keadby 400kV in 1968
Drax - Thornton Junction 400kV in 1969
See also
Bierrum, (Danish) builder of Britain's cooling towers
Powerline river crossings in the United Kingdom
References
External links
Grace's Guide
Renovation Construction
J.L. Eve Construction Co. Ltd. at the British Film Institute
Structural steel
1930 establishments in England
British companies established in 1930
Companies based in the London Borough of Merton
Companies formerly listed on the London Stock Exchange
Construction and civil engineering companies established in 1930
Construction and civil engineering companies of the United Kingdom
National Grid (Great Britain)
Technology companies established in 1930 | J. L. Eve Construction | [
"Engineering"
] | 1,407 | [
"Structural engineering",
"Structural steel"
] |
45,647,580 | https://en.wikipedia.org/wiki/DREAM%20complex | The dimerization partner, RB-like, E2F and multi-vulval class B (DREAM) complex is a protein complex responsible for the regulation of cell cycle-dependent gene expression. The complex is evolutionarily conserved, although some of its components vary from species to species. In humans, the key proteins in the complex are RBL1 (p107) and RBL2 (p130), both of which are homologs of RB (p105) and bind repressive E2F transcription factors E2F4 and E2F5; DP1, DP2 and DP3, dimerization partners of E2F; and MuvB, which is a complex of LIN9/37/52/54 and RBBP4.
Discovery
Genes encoding the MuvB complex were originally identified from loss-of-function mutation studies in C. elegans. When mutated, these genes produced worms with multiple vulva-like organs, hence the name ‘Muv’. Three classes of Muv genes were classified, with class B genes encoding homologues of mammalian RB, E2F, and DP1, and others such as LIN-54, LIN-37, LIN-7 and LIN-52, whose functions were not yet understood.
Studies in Drosophila melanogaster ovarian follicle cells identified a protein complex that bound to repeatedly amplifying chorion genes. The complex included genes that had close homology with the MuvB genes, such as Mip130, Mip120 and Mip40. These Mip genes were identified as homologues of the MuvB genes LIN9, LIN54, and LIN37 respectively. Further studies in fly embryo nuclear extracts confirmed the coexistence of these proteins with others such as the RB homologues Rbf1 and Rbf2, and others like E2f and Dp. The protein complex was thus termed the Drosophila RBF, E2f2 and Mip (dREAM) complex. Disruption of the dREAM complex through RNAi knockdown of its components led to higher expression of E2f-regulated genes that are typically silenced, implicating dREAM in gene down-regulation. A testis-specific paralog of the Myb-MuvB/DREAM complex, known as tMAC (testis-specific meiotic arrest complex) and involved in meiotic arrest, was later found in Drosophila melanogaster.
A protein complex similar to dREAM was subsequently identified in C. elegans extract containing DP, RB, and MuvB, and was named as DRM. This complex included mammalian homologues of RB and DP, and other members of the MuvB complex.
The mammalian DREAM complex was identified following immunoprecipitation of p130 with mass-spectrometry analysis. The results showed that p130 was associated with E2F4, E2F5, the dimerization partner DP, and LIN9, LIN54, LIN37, LIN52, and RBBP4 that make up the MuvB complex. Immunoprecipitation of MuvB factors also revealed association of BMYB. Subsequent immunoprecipitation with BMYB yielded all the MuvB core proteins, but not other members of the DREAM complex – p130, p107, E2F4/5 and DP. This indicated that MuvB associated with BMYB to form the BMYB-MuvB complex or with p130/p107, E2F4/5 and DP to form the DREAM complex. The DREAM complex was found prevalent in quiescent or starved cells, and the BMYB-MuvB complex was found in actively dividing cells, hinting at separate functionalities of these two complexes.
MuvB-like complexes were also recently discovered in Arabidopsis that include E2F and MYB orthologs combined with LIN9 and LIN54 orthologs.
Function
The main function of the DREAM complex is to repress G1/S and G2/M gene expression during quiescence (G0). Entry into the cell cycle dissociates p130 from the complex and leads to subsequent recruitment of activating E2F proteins. This allows for the expression of E2F regulated late G1 and S phase genes. BMYB (MYBL2), which is repressed by the DREAM complex during G0 is also able to be expressed at this time, and binds to MuvB during S phase to promote the expression of key G2/M phase genes such as CDK1 and CCNB1. FOXM1 is then recruited in G2 to further promote gene expression (e.g. AURKA). During late S phase BMYB is degraded via CUL1 (SCF complex), while FOXM1 is degraded during mitosis by the APC/C. Near the end of the cell cycle, the DREAM complex is re-assembled by DYRK1A to repress G1/S and G2/M genes.
G0
During quiescence, the DREAM complex represses G1/S and G2/M gene expression. In mammalian systems, chromatin-immunoprecipitation (ChIP) studies have revealed that DREAM components are found together at promoters of genes that peak in G1/S or G2/M phase. Abrogation of the DREAM complex, on the other hand, led to increased expression of E2F-regulated genes normally repressed in the G0 phase. In contrast to mammalian cells, the fly dREAM complex was found at almost one-third of all promoters, which may reflect a broader role for dREAM in gene regulation, such as in the programmed cell death of neural precursor cells.
Docking of the DREAM complex to promoters is achieved by binding of LIN-54 to regions known as cell cycle genes homology regions (CHRs). These are specific sequences of nucleotides that are commonly found in the promoters of genes expressed during late S phase or G2/M phase. Docking can also be achieved via E2F proteins binding to sequences known as cell cycle-dependent element sites (CDEs). Some cell cycle-dependent genes have been found where both CHRs and CDEs are in proximity to one another. Because p130-E2F4 can form stable associations with the MuvB complex, the proximity of CHRs to CDEs suggests that the binding affinity of the DREAM complex for target genes is cooperatively improved by association with both binding sites.
When DREAM is docked onto the promoter, p130 is bound to LIN52, and this association inhibits LIN52 binding to chromatin modifier proteins. Therefore, unlike RB-E2F, the DREAM complex is unlikely to directly recruit chromatin modifiers to repress gene expression, although some associations have been suggested. DREAM complex may instead down-regulate gene expression by affecting nucleosome positioning. Compacted DNA at transcription start sites inhibit gene expression by blocking the docking of RNA polymerase. In worms for example, loss of a MuvB complex protein, LIN35, leads to loss of repressive histone associations and high expression of cell cycle dependent genes. However, direct evidence for the link between repressive histones and the DREAM complex remains to be elucidated.
G1/S
Like its counterpart, RB-E2F, the DREAM complex is also affected by similar growth stimuli and subsequent cyclin-CDK activity. Increasing cyclin D-CDK4 and cyclin E-CDK2 activity dissociates the DREAM complex from the promoter by phosphorylation of p130. Hyper-phosphorylated p130 is subsequently degraded and E2F4 exported from the nucleus. Once the repressive E2Fs are vacated, activating E2Fs bind to the promoter to up-regulate G1/S genes that promote DNA synthesis and transition of the cell cycle. BMYB is also up-regulated during this time, which then binds to genes that peak at G2/M phase. Binding of BMYB to late cell cycle genes is dependent on its association with the MuvB core to form the BMYB-MuvB complex, which is then able to up-regulate genes in the G2/M phase.
Late mitosis
Near the end of mitosis, p130 and p107 are dephosphorylated from their hyperphosphorylated state by the phosphatase PP2a. Inhibition of PP2a activity reduced promoter binding of some of the proteins of the DREAM complex in the subsequent G1 phase and de-repression of gene expression.
Other components have been shown to be phosphorylated for DREAM complex assembly to occur. Of these, LIN52 phosphorylation on its S28 residue is the most well-understood. Substitution of this serine to alanine led to reduced binding of the MuvB core to p130 and impaired the ability of cells to enter quiescence. This indicates that LIN52 S28 phosphorylation is required for proper association and function of the DREAM complex via binding with p130.
One known regulator of phosphorylation of the S28 residue is the DYRK1A. The loss of this kinase leads to decreased phosphorylation of the S28 residue and association of p130 with MuvB. DYRK1A was also found to degrade cyclin D1, which would increase p21 levels – both of which contribute to cell cycle exit.
The DREAM complex was also shown to regulate cytokinesis through GAS2L3.
Cancer therapy
Due to its regulatory role in the cell cycle, targeting the DREAM complex might enhance anticancer treatments such as imatinib.
See also
Pocket protein family
References
Further reading
Protein complexes
Cell cycle
Gene expression | DREAM complex | [
"Chemistry",
"Biology"
] | 2,078 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry",
"Cell cycle"
] |
45,650,148 | https://en.wikipedia.org/wiki/Saddlepoint%20approximation%20method | The saddlepoint approximation method, initially proposed by Daniels (1954) is a specific example of the mathematical saddlepoint technique applied to statistics, in particular to the distribution of the sum of independent random variables. It provides a highly accurate approximation formula for any PDF or probability mass function of a distribution, based on the moment generating function. There is also a formula for the CDF of the distribution, proposed by Lugannani and Rice (1980).
Definition
If the moment generating function of a random variable $X$ is written as $M(t) = E\left[e^{tX}\right]$ and the cumulant generating function as $K(t) = \log M(t)$, then the saddlepoint approximation to the PDF of the distribution is defined as:

$\hat{f}(x) = \frac{1}{\sqrt{2\pi K''(\hat{s})}} \exp\left(K(\hat{s}) - \hat{s}x\right)$

where higher order terms can be added to refine the approximation, and the saddlepoint approximation to the CDF is defined as:

$\hat{F}(x) = \Phi(\hat{w}) + \phi(\hat{w})\left(\frac{1}{\hat{w}} - \frac{1}{\hat{u}}\right), \qquad x \neq \mu,$

where $\hat{s}$ is the solution to $K'(\hat{s}) = x$, $\hat{w} = \operatorname{sgn}(\hat{s})\sqrt{2\left(\hat{s}x - K(\hat{s})\right)}$, $\hat{u} = \hat{s}\sqrt{K''(\hat{s})}$, and $\Phi$ and $\phi$ are the cumulative distribution function and the probability density function of a normal distribution, respectively, and $\mu$ is the mean of the random variable $X$:

$\mu = E[X]$.
When the distribution is that of a sample mean, Lugannani and Rice's saddlepoint expansion for the cumulative distribution function may be differentiated to obtain Daniels' saddlepoint expansion for the probability density function (Routledge and Tsao, 1997). This result establishes the derivative of a truncated Lugannani and Rice series as an alternative asymptotic approximation for the density function . Unlike the original saddlepoint approximation for , this alternative approximation in general does not need to be renormalized.
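The following is a minimal numerical sketch (not part of the original articles by Daniels or Lugannani and Rice) of the first-order saddlepoint density approximation, using a Gamma-distributed random variable as a test case because its cumulant generating function is available in closed form; the helper names `K`, `Kp`, `Kpp` and `saddlepoint_pdf` are illustrative choices.

```python
# Minimal sketch of the first-order saddlepoint density approximation.
# Test case: X ~ Gamma(shape=alpha, rate=1), whose cumulant generating function
# is K(s) = -alpha*log(1 - s) for s < 1. Names here are illustrative only.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import gamma

alpha = 5.0                                    # shape parameter of the test distribution
K   = lambda s: -alpha * np.log1p(-s)          # cumulant generating function
Kp  = lambda s: alpha / (1.0 - s)              # first derivative K'(s)
Kpp = lambda s: alpha / (1.0 - s) ** 2         # second derivative K''(s)

def saddlepoint_pdf(x):
    """First-order saddlepoint approximation to the density at x > 0."""
    # solve the saddlepoint equation K'(s) = x for s in (-inf, 1)
    s_hat = brentq(lambda s: Kp(s) - x, -100.0, 1.0 - 1e-12)
    return np.exp(K(s_hat) - s_hat * x) / np.sqrt(2.0 * np.pi * Kpp(s_hat))

for x in (2.0, 5.0, 10.0):
    print(f"x={x:5.1f}  saddlepoint={saddlepoint_pdf(x):.6f}  exact={gamma.pdf(x, a=alpha):.6f}")
```

For this particular example the approximation reproduces the exact Gamma density up to replacing the Gamma function by its Stirling approximation, which is the classical accuracy statement for the method.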
References
Asymptotic analysis
Perturbation theory | Saddlepoint approximation method | [
"Physics",
"Mathematics"
] | 298 | [
"Mathematical analysis",
"Asymptotic analysis",
"Quantum mechanics",
"Perturbation theory"
] |
45,652,259 | https://en.wikipedia.org/wiki/Degassed%20water | Degassed water is water subjected to a process of degassing, which essentially consists in the removal of gas dissolved in the liquid.
External links
Nature publication by Philip Ball
Journal of Physical Chemistry C publication
Gas-liquid separation | Degassed water | [
"Physics",
"Chemistry"
] | 47 | [
"Separation processes by phases",
"Materials stubs",
"Materials",
"Gas-liquid separation",
"Matter"
] |
51,024,643 | https://en.wikipedia.org/wiki/Curve%20complex | In mathematics, the curve complex is a simplicial complex C(S) associated to a finite-type surface S, which encodes the combinatorics of simple closed curves on S. The curve complex turned out to be a fundamental tool in the study of the geometry of the Teichmüller space, of mapping class groups and of Kleinian groups. It was introduced by W.J.Harvey in 1978.
Curve complexes
Definition
Let $S$ be a connected oriented surface of finite type. More specifically, let $S$ be a connected oriented surface of genus $g$ with $b$ boundary components and $n$ punctures.
The curve complex $C(S)$ is the simplicial complex defined as follows:
The vertices are the free homotopy classes of essential (neither homotopically trivial nor peripheral) simple closed curves on $S$;
If $\alpha_0, \ldots, \alpha_k$ represent distinct vertices of $C(S)$, they span a simplex if and only if they can be homotoped to be pairwise disjoint.
Examples
For surfaces of small complexity (essentially the torus, punctured torus, and four-holed sphere), with the definition above the curve complex has infinitely many connected components. One can give an alternate and more useful definition by joining vertices if the corresponding curves have minimal intersection number. With this alternate definition, the resulting complex is isomorphic to the Farey graph.
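As an illustrative sketch of the low-complexity case just described (the function names and the sample slopes below are our own choices, not standard notation): on the torus, isotopy classes of essential simple closed curves correspond to slopes $p/q$ in lowest terms, two slopes $(p,q)$ and $(r,s)$ have geometric intersection number $|ps - qr|$, and the alternate definition joins two vertices when this number takes its minimal value, 1, which yields the Farey graph.

```python
# Illustrative computation for the torus case: essential simple closed curves
# correspond to primitive slopes (p, q); the geometric intersection number of
# slopes (p, q) and (r, s) is |p*s - q*r|; the alternate definition joins two
# vertices exactly when this number is 1, which yields the Farey graph.
def intersection_number(a, b):
    (p, q), (r, s) = a, b
    return abs(p * s - q * r)

def farey_adjacent(a, b):
    return intersection_number(a, b) == 1

print(farey_adjacent((0, 1), (1, 0)))        # True: slope 0 and the "infinity" slope are joined
print(farey_adjacent((1, 2), (1, 3)))        # True: |1*3 - 2*1| = 1
print(intersection_number((1, 0), (2, 5)))   # 5: these two vertices are not joined by an edge
```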
Geometry of the curve complex
Basic properties
If $S$ is a compact surface of genus $g$ with $b$ boundary components, the dimension of $C(S)$ is equal to $3g - 4 + b$. In what follows, we will assume that $3g - 4 + b \ge 1$. The complex of curves is never locally finite (i.e. every vertex has infinitely many neighbors). A result of Harer asserts that $C(S)$ is in fact homotopically equivalent to a wedge sum of spheres.
Intersection numbers and distance on C(S)
The combinatorial distance on the 1-skeleton of $C(S)$ is related to the intersection number between simple closed curves on a surface, which is the smallest number of intersections of two curves in the given isotopy classes. For example
for any two nondisjoint simple closed curves . One can compare in the other direction but the results are much more subtle (for example there is no uniform lower bound even for a given surface) and harder to prove.
Hyperbolicity
It was proved by Masur and Minsky that the complex of curves is a Gromov hyperbolic space. Later work by various authors gave alternate proofs of this fact and better information on the hyperbolicity.
Relation with the mapping class group and Teichmüller space
Action of the mapping class group
The mapping class group of $S$ acts on the complex $C(S)$ in the natural way: it acts on the vertices by $\phi \cdot \alpha = \phi(\alpha)$ and this extends to an action on the full complex. This action allows one to prove many interesting properties of mapping class groups.
While the mapping class group itself is not a hyperbolic group, the fact that is hyperbolic still has implications for its structure and geometry.
Comparison with Teichmüller space
There is a natural map from Teichmüller space to the curve complex, which takes a marked hyperbolic structure to the collection of closed curves realising the smallest possible length (the systole). It allows one to read off certain geometric properties of the latter; in particular, it explains the empirical fact that, while Teichmüller space itself is not hyperbolic, it retains certain features of hyperbolicity.
Applications to 3-dimensional topology
Heegaard splittings
A simplex in $C(S)$ determines a "filling" of $S$ to a handlebody. Choosing two simplices in $C(S)$ thus determines a Heegaard splitting of a three-manifold, with the additional data of a Heegaard diagram (a maximal system of disjoint simple closed curves bounding disks for each of the two handlebodies). Some properties of Heegaard splittings can be read very efficiently off the relative positions of the simplices:
the splitting is reducible if and only if it has a diagram represented by simplices which have a common vertex;
the splitting is weakly reducible if and only if it has a diagram represented by simplices which are linked by an edge.
In general the minimal distance between simplices representing diagrams for the splitting can give information on the topology and geometry (in the sense of the geometrisation conjecture) of the manifold, and vice versa. A guiding principle is that the minimal distance of a Heegaard splitting is a measure of the complexity of the manifold.
Kleinian groups
As a special case of the philosophy of the previous paragraph, the geometry of the curve complex is an important tool to link combinatorial and geometric properties of hyperbolic 3-manifolds, and hence it is a useful tool in the study of Kleinian groups. For example, it has been used in the proof of the ending lamination conjecture.
Random manifolds
A possible model for random 3-manifolds is to take random Heegaard splittings. The proof that this model is hyperbolic almost surely (in a certain sense) uses the geometry of the complex of curves.
Notes
References
Harvey, W. J. (1981). "Boundary Structure of the Modular Group". Riemann Surfaces and Related Topics: Proceedings of the 1978 Stony Brook Conference.
Benson Farb and Dan Margalit, A primer on mapping class groups. Princeton Mathematical Series, 49. Princeton University Press, Princeton, NJ, 2012.
Topology
Geometric group theory | Curve complex | [
"Physics",
"Mathematics"
] | 1,071 | [
"Geometric group theory",
"Group actions",
"Topology",
"Space",
"Geometry",
"Spacetime",
"Symmetry"
] |
51,024,718 | https://en.wikipedia.org/wiki/Thurston%20norm | In mathematics, the Thurston norm is a function on the second homology group of an oriented 3-manifold introduced by William Thurston, which measures in a natural way the topological complexity of homology classes represented by surfaces.
Definition
Let $M$ be a differentiable 3-manifold and $c \in H_2(M;\mathbb{Z})$. Then $c$ can be represented by a smooth embedding $S \to M$, where $S$ is a (not necessarily connected) surface that is compact and without boundary. The Thurston norm of $c$ is then defined to be
$\|c\|_T = \min_S \sum_i \chi_-(S_i)$,
where the minimum is taken over all embedded surfaces $S$ (the $S_i$ being the connected components) representing $c$ as above, and $\chi_-(S_i) = \max(0, -\chi(S_i))$ is the absolute value of the Euler characteristic for surfaces which are not spheres (and 0 for spheres).
This function satisfies the following properties:
$\|n c\|_T = |n| \, \|c\|_T$ for $n \in \mathbb{Z}$ and $c \in H_2(M;\mathbb{Z})$;
$\|c + c'\|_T \le \|c\|_T + \|c'\|_T$ for $c, c' \in H_2(M;\mathbb{Z})$.
These properties imply that $\|\cdot\|_T$ extends to a function on $H_2(M;\mathbb{Q})$ which can then be extended by continuity to a seminorm on $H_2(M;\mathbb{R})$. By Poincaré duality, one can define the Thurston norm on $H^1(M;\mathbb{R})$.
When $M$ is compact with boundary, the Thurston norm is defined in a similar manner on the relative homology group $H_2(M, \partial M;\mathbb{R})$ and its Poincaré dual $H^1(M;\mathbb{R})$.
It follows from further work of David Gabai that one can also define the Thurston norm using only immersed surfaces. This implies that the Thurston norm is also equal to half the Gromov norm on homology.
Topological applications
The Thurston norm was introduced in view of its applications to fiberings and foliations of 3-manifolds.
The unit ball $B$ of the Thurston norm of a 3-manifold $M$ is a polytope with integer vertices. It can be used to describe the structure of the set of fiberings of $M$ over the circle: if $M$ can be written as the mapping torus of a diffeomorphism of a surface $S$, then the embedding $S \hookrightarrow M$ represents a class in a top-dimensional (or open) face of $B$; moreover, all other integer points lying over the same face are also represented by fibers of such fibrations.
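A small worked example, under the assumption that the manifold is the exterior of a fibered knot (the example is ours and is not taken from the sources cited here): if $M$ is the complement of a fibered knot whose fiber $S$ is a once-punctured surface of genus $g \ge 1$, then the fiber class generates $H_2(M, \partial M; \mathbb{Z}) \cong \mathbb{Z}$ and its Thurston norm can be computed directly, since fibers realize the norm of their class.

```latex
% Illustrative worked example (ours, not from the cited sources):
% M = exterior of a fibered knot, fiber S a once-punctured surface of genus g >= 1.
% Fibers realize the Thurston norm of their class, so
\[
  \chi(S) = 1 - 2g, \qquad \bigl\| [S] \bigr\|_T = \max\bigl(0,\, -\chi(S)\bigr) = 2g - 1 .
\]
% For the trefoil knot (g = 1) this gives a norm of 1, and [S] lies over a
% top-dimensional face of the unit ball, consistent with the fibration criterion above.
```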
Embedded surfaces which minimise the Thurston norm in their homology class are exactly the compact leaves of taut foliations of $M$.
Notes
References
Topology
3-manifolds
Differential geometry | Thurston norm | [
"Physics",
"Mathematics"
] | 431 | [
"Spacetime",
"Topology",
"Space",
"Geometry"
] |
51,030,981 | https://en.wikipedia.org/wiki/Calcium%20triplet | The infrared Ca II triplet, commonly known as the calcium triplet, is a triplet of three ionised calcium spectral lines at the wavelengths of 8498 Å, 8542 Å and 8662 Å (measured in air). The triplet has a strong emission, and is most prominently observed in the absorption of spectral type G, K and M stars.
See also
Fraunhofer lines
Infrared spectroscopy
References
Astronomical spectroscopy | Calcium triplet | [
"Physics",
"Chemistry"
] | 89 | [
"Astronomical spectroscopy",
"Spectroscopy",
"Spectrum (physical sciences)",
"Astrophysics"
] |
56,792,828 | https://en.wikipedia.org/wiki/Optical%20properties | The optical properties of a material define how it interacts with light. The optical properties of matter are studied in optical physics (a subfield of optics) and applied in materials science. The optical properties of matter include:
Refractive index
Dispersion
Transmittance and Transmission coefficient
Absorption
Scattering
Turbidity
Reflectance and Reflectivity (reflection coefficient)
Albedo
Perceived color
Fluorescence
Phosphorescence
Photoluminescence
Optical bistability
Dichroism
Birefringence
Optical activity
Photosensitivity
A basic distinction is between isotropic materials, which exhibit the same properties regardless of the direction of the light, and anisotropic ones, which exhibit different properties when light passes through them in different directions.
The optical properties of matter can lead to a variety of interesting optical phenomena.
Properties of specific materials
Optical properties of water and ice
Optical properties of carbon nanotubes
Crystal optics
Literature
Properties
Materials science | Optical properties | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 186 | [
"Applied and interdisciplinary physics",
"Optics",
"Materials science",
" molecular",
"nan",
"Atomic",
" and optical physics"
] |
58,697,793 | https://en.wikipedia.org/wiki/Daniela%20Bortoletto | Daniela Bortoletto is an Italian-British high energy physicist, head of Particle Physics at the University of Oxford and Nicholas Kurti Senior Research Fellow in Physics at Brasenose College, University of Oxford. She works in silicon detector development and was a co-discoverer of both the Higgs boson and the top quark.
Early life and education
Bortoletto grew up in the Italian Alps and studied at the University of Pavia, graduating summa cum laude with a bachelor's degree in physics. She was a member of Collegio Ghislieri in Pavia. She earned her PhD in 1989 at Syracuse University, under the supervision of Sheldon Stone.
Research
Bortoletto moved to Purdue University to pursue a postdoctoral fellowship. In 1994, she received an NSF Career Advancement Award and became an Alfred P. Sloan Research Fellow. As part of the CDF collaboration, she co-discovered the top quark in 1995. Two years later, she won an NSF Faculty Early Career Development Award. In 2004, she gained fellowship of the American Physical Society.
In 2010, Bortoletto became the E. M. Purcell Distinguished Professor of Physics at Purdue University. For seven years, she was the upgrade coordinator for the US CMS collaboration, part of the CMS experiment at the Large Hadron Collider at CERN. In 2013, she moved to the University of Oxford, and transferred from the CMS collaboration to the ATLAS collaboration, again working on the LHC. Her research focuses on silicon detector development. Bortoletto became a fellow of the Institute of Physics in 2015. She is an editor for the journal Nuclear Instruments and Methods in Physics Research Section A.
In 2015, Bortoletto set up the UK arm of the Conference for Undergraduate Women in Physics. The first five conferences were run by Bortoletto at Oxford from 2015 to 2019; the conference is annually held across the UK and Ireland.
Bortoletto was appointed Officer of the Order of the British Empire (OBE) in the 2024 New Year Honours for services to particle physics and gender equality.
References
21st-century Italian physicists
Living people
Fellows of Brasenose College, Oxford
Italian women physicists
Particle physicists
University of Pavia alumni
Syracuse University alumni
Fellows of the American Physical Society
Purdue University faculty
People associated with CERN
Fellows of the Institute of Physics
Italian academic journal editors
Year of birth missing (living people)
Place of birth missing (living people)
Sloan Research Fellows
Officers of the Order of the British Empire
21st-century British physicists
British women physicists
Italian emigrants to England
Naturalised citizens of the United Kingdom | Daniela Bortoletto | [
"Physics"
] | 536 | [
"Particle physicists",
"Particle physics"
] |
58,701,508 | https://en.wikipedia.org/wiki/Anne%20M.%20Leggett | Anne Marie Leggett (born May 28, 1947) is an American mathematical logician. She is an associate professor emerita of mathematics at Loyola University Chicago.
Leggett was the editor-in-chief of the bi-monthly newsletter of the Association for Women in Mathematics (AWM), a position she held continuously from 1977 until the January-February 2024 issue. Leggett described her tenure as AWM Newsletter Editor in the article This and That: My Time as AWM Newsletter Editor, which appeared in the volume Fifty Years of Women in Mathematics: Reminiscences, History, and Visions for the Future of AWM. She has served on the Executive Committee of the AWM since 1977 and on the AWM Policy and Advocacy Committee (2008-2015). With Bettye Anne Case, she is co-editor of the book Complexities: Women in Mathematics (Princeton University Press, 2005). Leggett received an Alpha Sigma Nu Book Award for Complexities in 2006.
Education and career
Leggett did her undergraduate studies at Ohio State University, and completed her Ph.D. in 1973 at Yale University. Her dissertation, Maximal -r.e. sets and their complements, was supervised by Manuel Lerman.
She became a C. L. E. Moore instructor at the Massachusetts Institute of Technology in 1973, and was also on the faculties of Western Illinois University and the University of Texas at Austin. In 1982, she married another mathematician, Gerard McDonald (1946–2012), and in 1983, they both joined the Loyola University Chicago faculty.
Recognition
Leggett was chosen to be part of the 2019 class of fellows of the Association for Women in Mathematics, "for extraordinary contributions in promoting opportunities for women in the mathematical sciences through AWM and as a teacher and scholar; for her amazing and steady work as editor of the AWM Newsletter since 1977; and for her invaluable leadership and guidance."
References
External links
Anne M. Leggett's Author Profile Page on MathSciNet
Living people
20th-century American mathematicians
21st-century American mathematicians
Mathematical logicians
Women logicians
Ohio State University alumni
Yale University alumni
Western Illinois University faculty
University of Texas at Austin faculty
Massachusetts Institute of Technology School of Science faculty
Loyola University Chicago faculty
Fellows of the Association for Women in Mathematics
20th-century American women mathematicians
21st-century American women mathematicians
1947 births | Anne M. Leggett | [
"Mathematics"
] | 491 | [
"Mathematical logic",
"Mathematical logicians"
] |
58,703,933 | https://en.wikipedia.org/wiki/Sit%20Kim%20Ping | Sit Kim Ping is a Singaporean biochemist and an Emeritus Professor at the Department of Biochemistry at the National University of Singapore. She was the Head of the Department of Biochemistry (part of the Yong Loo Lin School of Medicine) from 1996 to 2000.
Early life
Sit was born in 1941 and attended Tanjong Katong Girls' School. She studied science at the National University of Singapore and obtained first-class honours when she graduated top of her class. She obtained her PhD in biochemistry from McGill University.
National University of Singapore
Sit was instrumental in the development of the New Life Science Undergraduate Curriculum, and was awarded the Emeritus Professorship in 2008.
Research
Sit studied detoxification, namely the process of conjugation by which metabolic by-products are made soluble prior to excretion. She also studied metabolism within cancer cells and found evidence of aerobic respiration in their mitochondria, which contradicts the Warburg hypothesis.
Personal life
Sit is married to a clinician and has two children.
References
Biochemists
Women biochemists
Singaporean women scientists
20th-century women scientists
1941 births
Living people | Sit Kim Ping | [
"Chemistry",
"Biology"
] | 225 | [
"Biochemistry",
"Biochemists",
"Women biochemists"
] |
58,707,776 | https://en.wikipedia.org/wiki/Samaya%20Nissanke | Samaya Michiko Nissanke is an astrophysicist, associate professor in gravitational wave and multi-messenger astrophysics and the spokesperson for the GRAPPA Centre for Excellence in Gravitation and Astroparticle Physics at the University of Amsterdam. She works on gravitational-wave astrophysics and has played a founding role in the emerging field of multi-messenger astronomy. She played a leading role in the discovery paper of the first binary neutron star merger, GW170817, seen in gravitational waves and electromagnetic radiation.
In 2020, she was awarded the New Horizons in Physics Prize from the Breakthrough Prize Foundation with Jo Dunkley and Kendrick Smith for "the development of novel techniques to extract fundamental physics from astronomical data". She was awarded the 2021 Suffrage Science Award for Engineering and Physical Sciences for "outstanding science, science communication and support for women in STEM," nominated by Prof. Amina Helmi of the University of Groningen.
Early life and education
Nissanke was born in London to a Japanese mother and a Sri Lankan father. She completed her bachelor's and master's degrees in the Natural Sciences Tripos (Physics) at the University of Cambridge. She then joined the Paris Observatory for her postgraduate studies. Nissanke earned her PhD in analytical relativity at the Institut d'astrophysique de Paris in 2007 with a thesis titled Aspects théoriques de la forme des ondes gravitationnelles pour les phases spiralante et de fusion des systèmes binaires compacts (Theoretical aspects of the shape of gravitational waves for the spiraling and merging phases of compact binary systems).
Research
Nissanke completed her postdoctoral research at the Canadian Institute for Theoretical Astrophysics, the Jet Propulsion Lab, California Institute of Technology and Radboud University Nijmegen working on gravitational wave and electromagnetic emission from compact object mergers since 2007. She is a member of the Virgo collaboration and works with the BlackGEM, VLA, MeerKAT and LOFAR telescopes and was part of the group that discovered the radio counterpart to GW170817. She demonstrated it was possible to determine the Hubble constant using gravitational wave observations from merging neutron star binaries and how to identify the elusive electromagnetic counterparts of gravitational wave mergers.
Nissanke was working at Radboud as the group leader for the gravitational wave group when the first detection of gravitational waves was confirmed. In 2016 she was awarded Netherlands Organisation for Scientific Research (NWO) TOP and VIDI grants to study the birth of black holes and neutron star mergers. In June 2018 she joined the faculty at the Gravitational AstroParticle Physics Amsterdam (GRAPPA) Institute at the University of Amsterdam. She is the Astrophysics Working Group Chair of a European Cooperation in Science and Technology Action on Gravitational Waves.
Public engagement
Nissanke is a popular science communicator and has been interviewed by Scientific American, New Scientist, Nature, Vox Media, BBC Radio 4, BBC World Service and Die Zeit. She represented the Virgo Collaboration at the European Southern Observatory press conference in 2017, for the announcement of a merger of neutron stars. Before the detection of gravitational waves, Nissanke joined composer Arthur Jeffes at the Marshmallow Laser Feast to create a piece of music about merging neutron stars and black holes billions of years ago.
Awards and honours
As part of the LIGO Scientific and Virgo Collaborations, Nissanke was awarded the Special Breakthrough Prize in Fundamental Physics (2016) and the Gruber Prize in Cosmology (2016). In 2019, it was announced that Nissanke would receive the 2020 New Horizons in Physics Prize with Jo Dunkley and Kendrick Smith from the Breakthrough Prize Foundation. In 2021 Nissanke received a Suffrage Science award, nominated by Amina Helmi.
References
Living people
21st-century British physicists
Academic staff of Radboud University Nijmegen
Academic staff of the University of Amsterdam
Academic staff of the University of Toronto
Alumni of the University of Cambridge
Astroparticle physics
British astrophysicists
Dutch astrophysicists
English people of Japanese descent
English people of Sri Lankan descent
People educated at Westminster School
Scientists from London
Women astrophysicists
Year of birth missing (living people) | Samaya Nissanke | [
"Physics"
] | 844 | [
"Astroparticle physics",
"Particle physics",
"Astrophysics"
] |
58,709,102 | https://en.wikipedia.org/wiki/ASME%20Leonardo%20Da%20Vinci%20Award | The American Society of Mechanical Engineers Design and Engineering Division awards yearly the Leonardo Da Vinci Award to eminent engineers whose design or invention is recognized as an important advance in machine design. The award is named after Leonardo da Vinci.
Winners
See also
Other awards and medals of the ASME
ASME Achievement awards
ASME Medal - ASME highest award
List of engineering awards
References
Leonardo da Vinci
American Society of Mechanical Engineers
Awards established in 1978
American science and technology awards
1978 establishments in the United States
ASME Medals | ASME Leonardo Da Vinci Award | [
"Engineering"
] | 99 | [
"American Society of Mechanical Engineers",
"Mechanical engineering organizations"
] |
50,019,576 | https://en.wikipedia.org/wiki/Coex%20%28material%29 | Coex is a biopolymer with flame-retardant properties derived from the functionalization of cellulosic fibers such as cotton, linen, jute, cannabis, coconut, ramie, bamboo, raffia palm, stipa, abacà, sisal, nettle and kapok. The formation of coex has been proven possible on wood and semi-synthetic fibers such as cellulose acetate, cellulose triacetate, viscose, modal, lyocell and cupro.
The material is obtained by sulfation and phosphorylation reactions on glucan units linked to each other in position 1,4. Typical reaction locations are on the secondary and tertiary hydroxyl groups of the cellulosic fiber. The chemical modification of the cellulosic fibers does not involve physical and visual alterations compared to the starting material.
In 2015 the World Textile Information Network (WTiN) declared Coex the winner of the "Future Materials Award" as the best innovation in the Home Textile category.
Properties
Coex preserves the physical and chemical characteristics of the raw material from which it is formed. The main features of Coex materials are comfort, hydrophilicity, antistatic properties, mechanical resistance and versatility in the textile sector, like all natural and semi-synthetic cellulosic fibers.
Coex materials are resistant to moths, mildew and sunlight. The flame resistant nature of Coex is unique in that it acts as a barrier to the flames rather than only delaying the spread of fire; the biopolymer fibres carbonize and therefore extinguish the flame. The resulting products are hypoallergenic and biodegradable.
References
External links
Official Website
Super Absorbent Polymer
https://www.thomasnet.com/articles/plastics-rubber/plastic-coextrusion/
Organic polymers
Biomaterials
Brand name materials | Coex (material) | [
"Physics",
"Chemistry",
"Biology"
] | 393 | [
"Biomaterials",
"Organic polymers",
"Materials stubs",
"Biotechnology stubs",
"Organic compounds",
"Materials",
"Medical technology stubs",
"Matter",
"Medical technology"
] |
50,022,129 | https://en.wikipedia.org/wiki/Solar%20furnace%20of%20Uzbekistan | The solar furnace of Uzbekistan was built in 1981, and is located 45 kilometers away from Tashkent city. The furnace is the largest in Asia. It uses a curved mirror, or an array of mirrors, acting as a parabolic reflector, which can reach temperatures of up to 3,000 degrees Celsius. The solar furnace of Uzbekistan can be visited by the general public.
About
The heat produced by the solar furnace is considered to be very clean, without any pollutants. This energy can be used in different ways, such as hydrogen fuel production, foundry applications and high-temperature testing. The solar furnace of Uzbekistan is sometimes called the Sun Institute of Uzbekistan. The furnace is a complex optical and mechanical construction, with 63 flat mirrors automatically controlled to track the sun in unison and redirect the solar thermal energy towards the crucible. The furnace was first opened in May 1981 and is located 1,100 meters above sea level. The complex covers a large area in the mountains and consists of four subdivisions: the main building, the heliostat field, the concentrator and the manufacturing tower. The furnace took six years to complete, between 1981 and 1987. The site was chosen carefully because the sun shines there for 270 days a year. The small solar furnace at the complex has a diameter of 2 meters. The heliostat field currently consists of about 62 heliostats installed in a staggered order, using 12,090 mirror facets in total. The concentrator, with an area of 1,849 square meters, is the largest in the world; it uses 10,700 mirrors, and its southern part is covered with special sunscreens. The furnace is controlled by employees from the laboratory on the 6th floor, and the observation ground is located at the highest spot.
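A rough back-of-the-envelope estimate (the irradiance and efficiency figures below are illustrative assumptions, not values reported for this facility) of the thermal power such a concentrator can deliver to the focal zone:

```python
# Illustrative order-of-magnitude estimate of thermal power at the focus.
# Assumed values: direct normal irradiance of ~850 W/m^2 on a clear day and an
# overall optical efficiency of ~0.7 for the heliostat-plus-concentrator chain.
concentrator_area_m2 = 1849       # concentrator area quoted above
assumed_dni_w_per_m2 = 850        # assumed direct normal irradiance
assumed_optical_efficiency = 0.7  # assumed reflectance and spillage losses

power_mw = concentrator_area_m2 * assumed_dni_w_per_m2 * assumed_optical_efficiency / 1e6
print(f"approximate thermal power at the focus: {power_mw:.1f} MW")  # roughly 1 MW
```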
Location
The solar furnace of Uzbekistan is located in Tashkent region, Parkent city, Republic of Uzbekistan.
Links
Solar energy and technologies in Uzbekistan
Solar Furnace in Uzbekistan
Solar energy news in Uzbekistan
News about the solar institute in Uzbekistan
News about the solar technologies in Uzbekistan
See also
Solar power tower
Solar thermal energy
Odeillo solar furnace
References
Renewable energy power stations in Uzbekistan
Solar thermal energy
Buildings and structures in Uzbekistan
Industrial furnaces
1981 establishments in the Soviet Union
1980s establishments in Uzbekistan
Power stations built in the Soviet Union | Solar furnace of Uzbekistan | [
"Chemistry"
] | 515 | [
"Metallurgical processes",
"Industrial furnaces"
] |
50,025,232 | https://en.wikipedia.org/wiki/Luis%20%C3%81lvarez-Gaum%C3%A9 | Luis Álvarez-Gaumé (born 1955) is a Spanish theoretical physicist who works on string theory and quantum gravity.
Luis Álvarez-Gaumé obtained his PhD in 1981 from Stony Brook University and worked from 1981 to 1984 at Harvard University as a Junior Fellow, before he moved to Boston University to work as a professor. From 1986 until 2016, Álvarez-Gaumé was a permanent member of the CERN Theoretical Physics unit. In 2016, he became the director of the Simons Center for Geometry and Physics at Stony Brook.
In the 1980s, Álvarez-Gaumé made several important contributions to the field of string theory and its mathematical framework. Together with Edward Witten, he showed in 1983 that quantum field theories with chiral fields can have gravitational anomalies. Shortly after this, Michael Green and John Schwarz showed that such anomalies are avoided in various realizations of superstring theory. Álvarez-Gaumé is also known for a physical proof of the Atiyah–Singer index theorem using supersymmetry.
His work spans a range of subjects, including string perturbation theory at higher orders, quantum field theories on Riemann surfaces, quantum groups, as well as dualities in string theory and black holes in string theory.
In the 1990s, Álvarez-Gaumé studied supersymmetry breaking at low energies (in N = 2 SUSY gauge theories). He has also co-authored a textbook on quantum field theory.
References
External links
Álvarez-Gaumé's profile on INSPIRE-HEP
Living people
Spanish physicists
Theoretical physicists
Stony Brook University alumni
People associated with CERN
1955 births | Luis Álvarez-Gaumé | [
"Physics"
] | 321 | [
"Theoretical physics",
"Theoretical physicists"
] |
52,482,504 | https://en.wikipedia.org/wiki/Mid-Infrared%20Instrument | MIRI, or the Mid-Infrared Instrument, is an instrument on the James Webb Space Telescope. MIRI is a camera and a spectrograph that observes mid to long infrared radiation from 5 to 28 microns. It also has coronagraphs, especially for observing exoplanets. Whereas most of the other instruments on Webb can see from the start of near infrared, or even as short as orange visible light, MIRI can see longer wavelength light.
MIRI uses silicon arrays doped with arsenic to make observations at these wavelengths. The imager is designed for wide views but the spectrograph has a smaller view. Because it views the longer wavelengths it needs to be cooler than the other instruments (see Infrared astronomy), and it has an additional cooling system. The cooling system for MIRI includes a pulse tube precooler and a Joule-Thomson loop heat exchanger. This allowed MIRI to be cooled down to a temperature of 7 kelvins during operations in space.
MIRI was built by the MIRI Consortium, a group that consists of scientists and engineers from 10 different European countries (the United Kingdom, France, Belgium, the Netherlands, Germany, Spain, Switzerland, Sweden, Denmark, and Ireland) with the United Kingdom heading the European consortium, as well as a team from the Jet Propulsion Lab in California, and scientists from several U.S. institutions.
Overview
The spectrograph can observe wavelengths between 4.6 and 28.6 microns, and it has four separate channels, each with its own gratings and image slicers. The field of view of the spectrograph is 3.5 by 3.5 arcseconds.
The spectrograph is capable of low-resolution spectroscopy (LRS) with or without a slit, as well as medium-resolution spectroscopy (MRS) taken with an integral field unit (IFU). This means that MRS with the IFU creates an image cube. Similar to other IFUs this can be compared to an image that has a spectrum for each pixel.
The imager has a plate scale of 0.11 arcseconds/pixel and a field of view of 74 by 113 arcseconds. Earlier in development the field of view was going to be 79 by 102 arcseconds (1.3 by 1.7 arcmin). The imaging channel has ten filters available and the detectors are made of arsenic-doped silicon (Si:As). The detectors (one for the imager, and two for the spectrometer) each have a resolution of 1024x1024 pixels, and they are called Focal Plane Modules or FPMs.
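A quick consistency check of the numbers above (an informal calculation, not an official JWST figure): multiplying the detector size by the plate scale gives the angular span of the full array; the smaller 74-arcsecond dimension of the imaging field reflects the portion of the same array that is set aside for the coronagraph masks and the LRS slit.

```python
# Informal check: angular span of the full 1024 x 1024 Si:As array at the quoted
# plate scale, compared with the 74 x 113 arcsec science field of the imager.
pixels_per_side = 1024
plate_scale_arcsec = 0.11            # arcsec per pixel, as quoted above

full_span_arcsec = pixels_per_side * plate_scale_arcsec
print(f"full array span: {full_span_arcsec:.1f} arcsec per side")   # ~112.6 arcsec
print("imaging field:   74 x 113 arcsec (the rest of the array holds "
      "coronagraph masks and the LRS slit)")
```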
During 2013 and finishing in January 2014, MIRI was integrated into the Integrated Science Instrument Module (ISIM). MIRI successfully passed Cryo Vac 1 and Cryo Vac 2 tests as part of ISIM in the 2010s. MIRI was developed by an international consortium.
MIRI is attached to the ISIM by a carbon-fiber and plastic hexapod structure, which supports it while also helping to thermally isolate it (see also carbon fiber reinforced plastic).
Parts summary:
Spectrometer optics
Spectrometer Main Optics (SMO)
Spectrometer Pre Optics (SPO)
Focal Plane Arrays
Input-Optics Calibration Module (IOC)
Pick-off Mirror
Calibration source for Imager
Contamination Control Cover (CCC)
CFRP hexapod
Imager
Image slicers
Deck
Most of MIRI is located in the main ISIM structure, however the cryocooler is in region 3 of ISIM which is located in the spacecraft bus.
The imager module of MIRI also includes the Low Resolution Spectrometer that can perform long-slit and slitless spectroscopy from 5 to 12 μm light wavelength. The LRS uses Ge (germanium) and ZnS (zinc sulfide) prisms to cause spectroscopic dispersion.
Commissioning is complete as of the following dates:
Imaging, 06/17/2022
Low resolution spectroscopy, 06/24/2022
Medium resolution spectroscopy, 06/24/2022
Coronagraphic imaging, 06/29/2022
Cryocooler
To allow mid-infrared observations within the JWST, the MIRI instrument has an additional cooling system. It works roughly like most refrigerators or air conditioners: a fluid is brought down to a cold temperature in the warm section and sent back to the cold section, where it absorbs heat before returning to the condenser. One source of heat is the left-over heat of the spacecraft, but another is the spacecraft's own electronics, some of which are close to the actual instruments in order to process data from observations. Most of the electronics are in the much warmer spacecraft bus, but some needed to be much closer, and considerable effort went into reducing the heat they produce. By reducing how much heat the electronics generate on the cold side, less heat needs to be removed.
In this case the JWST cryocooler resides in the spacecraft bus and it has lines of coolant that run to the MIRI instrument, chilling it. The cryocooler has a heat radiator on the spacecraft bus to emit the heat it collects. In this case the cooling system uses helium gas as the refrigerant.
The James Webb Space Telescope's cryocooler is based originally on the TRW ACTDP cryocooler. However, the JWST version had to be developed to handle higher thermal loads. It uses a multi-stage pulse tube refrigerator as a precooler, together with a linear-motion Oxford-style compressor that drives a Joule–Thomson (J-T) loop heat exchanger. Its target is to cool the MIRI instrument down to 6 kelvins (−448.87 °F, or −267.15 °C). The ISIM is at about 40 K (due to the sunshield) and there is a dedicated MIRI radiation shield beyond which the temperature is 20 K.
Filters
MIRI imaging has 10 filters available for observations.
F560W - Broadband Imaging
F770W - PAH, broadband imaging
F1000W - Silicate, broadband imaging
F1130W - PAH, broadband imaging
F1280W - Broadband imaging
F1500W - Broadband imaging
F1800W - Silicate, broadband imaging
F2100W - Broadband imaging
F2550W - Broadband imaging
F2550WR - Redundant filter, risk reduction
FND - For bright target acquisition
Opaque - Darks
MIRI Coronagraphic imaging has 4 filters available for observations.
F1065C - useful for ammonia and silicates
F1140C
F1550C
F2300C
The low-resolution spectrometer (LRS) uses a double zinc sulfide/germanium (ZnS/Ge) prism. The slit mask has a filter that blocks light with a wavelength shorter than 4.5 μm. LRS covers 5 to 14 μm.
The medium-resolution spectrometer (MRS) has 4 channels that are observed simultaneously. Each channel is however further divided into 3 different spectral settings (called short, medium and long). In one observation MIRI can only observe one of those three settings. An observation that aims to observe the entire spectrum has to carry out 3 separate observations of the individual settings. MRS covers 4.9 to 27.9 μm.
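A small illustration of the observing arithmetic described above (the channel numbers and setting labels are descriptive, not official keyword names): because the four channels are exposed simultaneously, a full 4.9-27.9 μm spectrum requires one exposure per grating setting, giving three exposures and twelve sub-bands in total.

```python
# Bookkeeping sketch of the MRS observing scheme: 4 channels x 3 grating settings.
channels = [1, 2, 3, 4]                       # read out simultaneously
settings = ["short", "medium", "long"]        # one exposure per setting

sub_bands = [(ch, st) for st in settings for ch in channels]
print(f"exposures needed for full coverage: {len(settings)}")     # 3
print(f"total sub-bands collected:          {len(sub_bands)}")    # 12
```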
Diagrams
See also
Spitzer Space Telescope (NASA's mid-infrared space telescope launched in 2003, it could not see as deep into the infrared when its coolant supply was depleted in 2009)
Wide-field Infrared Survey Explorer (infrared survey telescope)
List of largest infrared telescopes (includes examples of space observatories that have designed for similar wavelengths)
Jovian Infrared Auroral Mapper (IR Imaging spectrometer on Juno Jupiter orbiter)
Infrared Array Camera (Spitzer near to mid infrared camera)
References
External links
ESA - MIRI - the mid-infrared instrument on JWST
Presentation on MIRI's coronographs (.pdf)
The Mid-Infrared Instrument for JWST, II: Design and Build - Wright, et al (long paper on Miri)
MIRI Encyclopedia at University of Arizona
NASA - JWST Cryocooler
Spectrographs
Space imagers
James Webb Space Telescope instruments | Mid-Infrared Instrument | [
"Physics",
"Chemistry"
] | 1,733 | [
"Spectrographs",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
52,484,092 | https://en.wikipedia.org/wiki/Flumedroxone%20acetate | Flumedroxone acetate, sold under the brand names Demigran and Leomigran, is a progestin medication which is or has been used as an antimigraine agent. It is taken by mouth.
Medical uses
Flumedroxone acetate has been assessed in over 1,000 patients for the treatment of migraine, with effectiveness ranging from excellent to less than that of the reference antimigraine drug methysergide. Other progestogens including medroxyprogesterone acetate, lynestrenol, allylestrenol, dydrogesterone, and normethandrone have also been found to be effective for migraine in a high percentage of women.
Side effects
In accordance with its progestogenic activity, flumedroxone acetate produces menstrual irregularities, namely polymenorrhea, and breast tension as side effects in women.
Pharmacology
Pharmacodynamics
Flumedroxone acetate is said to have weak or slight progestogenic activity without other hormonal activity, including no estrogenic, antiestrogenic, androgenic, anabolic, or glucocorticoid activity.
Chemistry
Flumedroxone acetate, also known as 6α-(trifluoromethyl)-17α-acetoxyprogesterone or as 6α-(trifluoromethyl)-17α-acetoxypregn-4-ene-3,20-dione, is a synthetic pregnane steroid and a derivative of progesterone and 17α-hydroxyprogesterone. It is specifically a derivative of 17α-hydroxyprogesterone with a trifluoromethyl group at the C6α position and an acetate ester attached to the C17α hydroxyl group. The medication is the C17α acetate ester of flumedroxone (6α-(trifluoromethyl)-17α-hydroxyprogesterone) and the C6α trifluoromethyl derivative of hydroxyprogesterone acetate (17α-acetoxyprogesterone).
History
Flumedroxone acetate was introduced for medical use in the 1960s.
Society and culture
Generic names
Flumedroxone is the INN and BAN of the free alcohol form of the drug. Flumedroxone acetate is also known by its developmental code name WG-537.
Brand names
Flumedroxone acetate is or has been marketed under the brand names Demigran and Leomigran.
Availability
Flumedroxone acetate is or has been marketed in Europe.
See also
Fluorometholone acetate
Mometasone furoate
References
Abandoned drugs
Acetate esters
Antimigraine drugs
Diketones
Pregnanes
Progestogen esters
Progestogens
Trifluoromethyl compounds | Flumedroxone acetate | [
"Chemistry"
] | 637 | [
"Drug safety",
"Abandoned drugs"
] |
52,487,607 | https://en.wikipedia.org/wiki/Vaginogram | A vaginogram is a medical imaging method in which a radiocontrast agent is injected while X-ray pictures are taken, to visualize structures of the vagina. It has been used to visualize ureterovaginal fistulas.
References
Gynaecology
Medical imaging
Medical physics
Radiology | Vaginogram | [
"Physics"
] | 65 | [
"Applied and interdisciplinary physics",
"Medical physics"
] |
52,492,399 | https://en.wikipedia.org/wiki/Etiocholanolone%20glucuronide | Etiocholanolone glucuronide (ETIO-G) is an endogenous, naturally occurring metabolite of testosterone. It is formed in the liver from etiocholanolone by UDP-glucuronyltransferases. ETIO-G has much higher water solubility than etiocholanolone and is eventually excreted in the urine via the kidneys. Along with androsterone glucuronide, it is one of the major inactive metabolites of testosterone.
See also
3α,5β-Androstanediol
5β-Dihydrotestosterone
Androstanediol glucuronide
References
External links
Metabocard for Etiocholanolone Glucuronide (HMDB04484) - Human Metabolome Database
Etiocholanes
Glucuronide esters
Human metabolites
Steroid esters | Etiocholanolone glucuronide | [
"Chemistry",
"Biology"
] | 202 | [
"Biochemistry stubs",
"Biotechnology stubs",
"Biochemistry"
] |
39,589,389 | https://en.wikipedia.org/wiki/Frederick%20W.%20Stavely | Frederick W. Stavely (1894-1976) was a chemical research scientist who discovered polyisoprene.
Career
In 1950, Stavely served as chairman of the American Chemical Society Rubber Division.
In 1953, Stavely was working at the Firestone Tire & Rubber Company when he discovered polyisoprene. At the time he was investigating the reaction of butyl lithium on butadiene and discovered that polymerization of isoprene with metallic lithium produced polyisoprene (dubbed coral rubber because of its appearance) with a high cis content.
High cis content is associated with enhanced strain crystallization, important during World War II because other synthetics did not exhibit the crystallization effect.
In 1972, Stavely received the Charles Goodyear Medal in recognition of this discovery.
References
Polymer scientists and engineers
1894 births
1976 deaths
U.S. Synthetic Rubber Program
Tire industry people
Bridgestone people
20th-century American chemists | Frederick W. Stavely | [
"Chemistry",
"Materials_science"
] | 189 | [
"Polymer scientists and engineers",
"Physical chemists",
"Polymer chemistry"
] |
39,590,258 | https://en.wikipedia.org/wiki/Bridge%20bearing | A bridge bearing is a component of a bridge which typically provides a resting surface between bridge piers and the bridge deck. The purpose of a bearing is to allow controlled movement and thereby reduce the stresses involved. Possible causes of movement are thermal expansion and contraction, creep, shrinkage, or fatigue due to the properties of the material used for the bearing. External sources of movement include the settlement of the ground below, thermal expansion, and seismic activity. There are several different types of bridge bearings which are used depending on a number of different factors including the bridge span, loading conditions, and performance specifications. The oldest form of bridge bearing is simply two plates resting on top of each other. A common form of modern bridge bearing is the elastomeric bridge bearing. Another type of bridge bearing is the mechanical bridge bearing. There are several types of mechanical bridge bearing, such as the pinned bearing, which in turn includes specific types such as the rocker bearing, and the roller bearing. Another type of mechanical bearing is the fixed bearing, which allows rotation, but not other forms of movement.
History
The first bridge bearings to be used were plane bearings in the early 1800s, which included sliding bearings or roller bearings. Plane bearings allowed horizontal movement in one direction, and could therefore transfer horizontal load. Rotating bearings were used in the late 1800s and early 1900s and included rocker bearings, knuckle bearings, and ball bearings. Rotating bearings allowed movement in both the horizontal and vertical directions. Both plane bearings and roller bearings were made of metal. In the mid-1900s, deformation bearings began to be used, which were made of rubber. Deformation bearings primarily include elastomeric bearings, the most common type of bridge bearing used today.
Types of bridge bearings
Rocker bearings
Rocker bearings have curved surfaces that allow rocking. As the bridge expands, the bearing rocks to allow movement in the horizontal direction. Rocker bearings are primarily made of steel. Rocker bearings tend to be used for highway bridges.
Elastomeric bearings
Elastomeric bridge bearings are the most popular type of bridge bearing used today. They are made of rubber and do not have any moving parts, because the rubber itself allows movement in the bridge. Elastomeric bearings can be made at a low cost, and do not need to be maintained, like other forms of bearings that have moving parts and are made of metal. Elastomeric bearings can be reinforced with steel to make them stronger if needed.
Sliding bearings
Sliding bearings have both a flat sliding surface to allow horizontal movement and a spherical surface to allow rotation. Although they used to be made of metal, sliding bearings now tend to be made of Teflon.
Spherical bearings
As the name suggests, spherical bearings are in the shape of a sphere. These bearings only allow rotation, and prevent movement in the horizontal and vertical directions.
Functions of bridge bearings
They are one of the most important components of bridges.
They transfer forces from the bridge superstructure to the substructure. There are mainly two types of loads: vertical loads, such as the structure's weight and vehicle loads, and lateral loads, including earthquake and wind forces.
They permit movements such as translation and rotation between the girders and pier caps of bridges to accommodate effects such as thermal expansion.
Neoprene bearing pads (rubber-like structures), a special type of bridge bearing, dissipate energy through deformation.
They simplify the load transfer mechanism, making analysis easier.
See also
Expansion joint
Pier (bridge structure)
References
Civil engineering
Bearings (mechanical)
Bridge components
Architectural elements | Bridge bearing | [
"Technology",
"Engineering"
] | 706 | [
"Building engineering",
"Construction",
"Architectural elements",
"Civil engineering",
"Bridge components",
"Components",
"Architecture"
] |
39,592,391 | https://en.wikipedia.org/wiki/Alkaline%20water%20electrolysis | Alkaline water electrolysis is a type of electrolysis that is characterized by having two electrodes operating in a liquid alkaline electrolyte. Commonly, a solution of potassium hydroxide (KOH) or sodium hydroxide (NaOH) at 25-40 wt% is used. These electrodes are separated by a diaphragm, separating the product gases and transporting the hydroxide ions (OH−) from one electrode to the other. A recent comparison showed that state-of-the-art nickel based water electrolysers with alkaline electrolytes lead to competitive or even better efficiencies than acidic polymer electrolyte membrane water electrolysis with platinum group metal based electrocatalysts.
The technology has a long history in the chemical industry. The first large-scale demand for hydrogen emerged in late 19th century for lighter-than-air aircraft, and before the advent of steam reforming in the 1930s, the technique was competitive.
Hydrogen-based technologies have evolved significantly since the initial discovery of hydrogen and its early application as a buoyant gas approximately 250 years ago. In 1804, the Swiss inventor Francois Isaac de Rivaz secured a patent for the inaugural hydrogen-powered vehicle. This prototype, equipped with a four-wheel design, utilised an internal combustion engine (ICE) fuelled by a mixture of hydrogen and oxygen gases. The hydrogen fuel was stored in a balloon, and ignition was achieved through an electrical starter known as a Volta starter. The combustion process propelled the piston within the cylinder, which, upon descending, activated a wheel through a ratchet mechanism. This invention could be viewed as an early embodiment of a system comprising hydrogen storage, conduits, valves, and a conversion device.
Approximately four decades after the military scientist Ritter developed the first electrolyser, the chemists Schoenbein and Sir Grove independently identified and demonstrated the fuel cell concept, around the year 1839; this technology operates in reverse to electrolysis. This discovery marked a significant milestone in the field of hydrogen technology, demonstrating the potential for hydrogen as a source of clean energy.
Structure and materials
The electrodes are typically separated by a thin porous foil, commonly referred to as diaphragm or separator. The diaphragm is non-conductive to electrons, thus avoiding electrical shorts between the electrodes while allowing small distances between the electrodes. The ionic conductivity is supplied by the aqueous alkaline solution, which penetrates in the pores of the diaphragm. Asbestos diaphragms have been used for a long time due to their effective gas separation, low cost, and high chemical stability; however, their use is restricted by the Rotterdam Convention. The state-of-the-art diaphragm is Zirfon, a composite material of zirconia and Polysulfone. The diaphragm further avoids the mixing of the produced hydrogen and oxygen at the cathode and anode, respectively. The thickness of asbestos diaphragms ranges from 2 to 5 mm, while Zirfon diaphragms range from 0.2 to 0.5 mm.
Typically, Nickel based metals are used as the electrodes for alkaline water electrolysis. Considering pure metals, Ni is the least active non-noble metal. The high price of good noble metal electrocatalysts such as platinum group metals and their dissolution during the oxygen evolution is a drawback. Ni is considered as more stable during the oxygen evolution, but stainless steel has shown good stability and better catalytic activity than Ni at high temperatures during the Oxygen Evolution Reaction (OER).
High surface area Ni catalysts can be achieved by dealloying of Nickel-Zinc or Nickel-Aluminium alloys in alkaline solution, commonly referred to as Raney nickel. In cell tests the best performing electrodes thus far reported consisted of plasma vacuum sprayed Ni alloys on Ni meshes
and hot dip galvanized Ni meshes. The latter approach might be interesting for large scale industrial manufacturing as it is cheap and easily scalable, but unfortunately, all the strategies show some degradation.
Electrochemistry
Anode reaction
In alkaline media the oxygen evolution reaction involves multiple adsorbed species (O, OH, OOH, and OO−) and multiple steps, as sketched below. Steps 4 and 5 often occur in a single step, but there is evidence suggesting that they occur separately at pH 11 and higher.
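The individual steps are not written out in the text here; one commonly quoted five-step sequence for the alkaline OER, consistent with the adsorbed species listed above (a textbook-style sketch rather than necessarily the exact scheme the original source intended), is:

    1. * + OH− → OH* + e−
    2. OH* + OH− → O* + H2O + e−
    3. O* + OH− → OOH* + e−
    4. OOH* + OH− → OO−* + H2O
    5. OO−* → O2 + * + e−

Summing the five steps gives the overall anode half-reaction 4 OH− → O2 + 2 H2O + 4 e−.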
where the * indicates a species adsorbed on the surface of the catalyst.
Cathode reaction
The hydrogen evolution reaction in alkaline conditions starts with water adsorption and dissociation in the Volmer step, followed by hydrogen desorption in either the Tafel step or the Heyrovsky step.
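For reference (these are standard textbook forms, not written out explicitly in the text above), the alkaline hydrogen evolution steps are:

    Volmer:    H2O + * + e− → H* + OH−
    Heyrovsky: H* + H2O + e− → H2 + OH− + *
    Tafel:     2 H* → H2 + 2 *

The overall cathode half-reaction is 2 H2O + 2 e− → H2 + 2 OH−, which together with the anode half-reaction above gives the net cell reaction 2 H2O → 2 H2 + O2.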
Advantages compared to PEM water electrolysis
In comparison to Proton exchange membrane electrolysis, the advantages of alkaline water electrolysis are mainly:
Has a longer track record of industrial use, proven reliability, and lower initial costs, making it a more mature option for large-scale hydrogen production.
Higher durability due to an exchangeable electrolyte and lower dissolution of anodic catalyst.
Unlike PEM electrolysis, alkaline electrolysis does not require expensive or scarce precious metals like platinum or iridium for the electrodes. This reduces the overall cost and material dependencies.
Disadvantage
One disadvantage of alkaline water electrolysers is the low-performance profiles caused by the commonly-used thick diaphragms that increase ohmic resistance, the lower intrinsic conductivity of OH− compared to H+, and the higher gas crossover observed for highly porous diaphragms.
References
Chemical processes
Electrochemistry
Electrolysis
Industrial gases
Hydrogen production | Alkaline water electrolysis | [
"Chemistry"
] | 1,170 | [
"Chemical processes",
"Electrochemistry",
"Industrial gases",
"nan",
"Electrolysis",
"Chemical process engineering"
] |
39,594,285 | https://en.wikipedia.org/wiki/Spherical%20surface%20acoustic%20wave%20%28SAW%29%20sensor | Spherical surface acoustic wave sensors use a type of surface acoustic wave (SAW) that travels along the surface of an elastic medium with exponentially decaying amplitude along depth. MEMS-IDT technology allows SAW devices to be used to sense various gases. A sensitivity of 10 ppm to hydrogen has been obtained using a spherical ball SAW device.
Principles
Conventional planar SAW sensors are based on the principle that parameters such as the amplitude, speed, and phase of the surface acoustic wave change on adsorption of gas molecules. A limitation of planar SAW-based sensors is that the change in these parameters is very small, because the planar device offers only a limited path to the surface acoustic wave. In spherical sensors the surface acoustic wave makes several round trips along the equator of a ball, which offers a much longer path, so even a small change in the wave parameters is amplified over the multiple turns; this increases the sensitivity of the sensor considerably.
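As a rough illustration (a back-of-the-envelope relation, not taken from the source), a wave circulating on a ball of diameter D covers a path L = NπD after N turns, and the accumulated phase shift caused by a small fractional velocity change Δv/v grows with that path, roughly Δφ ≈ (2πfL/v)(Δv/v) for operating frequency f and unperturbed wave velocity v. Many turns around the equator therefore multiply the measurable response compared with a single pass along a planar delay line.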
References
Microtechnology
Sensors | Spherical surface acoustic wave (SAW) sensor | [
"Materials_science",
"Technology",
"Engineering"
] | 193 | [
"Sensors",
"Materials science",
"Microtechnology",
"Measuring instruments"
] |
48,649,482 | https://en.wikipedia.org/wiki/Towed%20glider%20air-launch%20system | Towed glider air-launch system (abbv. TGALS) is a NASA-designed two-stage air-launched reusable launch system currently in development at NASA's Armstrong Flight Research Center. The system uses a glider, tow plane, and rocket and is designed to carry small satellites to orbit. Both the glider and tow plane are reusable.
The system, compared to other designs such as Swiss Space Systems' SOAR spaceplane and Virgin Galactic's SpaceShipTwo vehicle, is launched from a glider. This design emulates an air-launched multistage rocket with two recoverable stages: the tow plane and the glider itself.
Design
The system comprises three large components: a tow plane, a glider, and a rocket. The tow plane, a conventional small aircraft, carries the glider up to altitude before releasing the towline and flying back. The glider, carrying its own hybrid or solid rocket motor, will ignite its engine to climb higher than the tow plane's maximum altitude. Following burnout of its rocket, the glider will jettison the third (exclusively rocket-powered) stage of the system. This rocket stage will then carry its satellite payload into low Earth orbit.
NASA's concept aims to create a platform that can launch 15 times the mass of the glider, compared to 0.7 times for other air-launch reusable spaceflight systems. According to Aero News, the advantage of the system is that gliders don't carry engines with them and have longer, lighter wings, resulting in lower total mass. NASA discussed the advantages of this system in a report:
The TGALS demonstration's goal is to provide proof-of-concept of a towed, airborne launch platform. Distinct advantages are believed possible in cost, logistic efficiency, and performance when utilizing a towed, high lift-to-drag launch platform as opposed to utilizing a traditional powered 'mothership' launch platform. The project goal is to examine the performance advantage, as well as the operational aspects, of a towed, airborne launch system.
According to Gerrard Budd, the development manager of NASA Armstrong Center's air launch program, "[NASA] thinks that [the glider] is the optimization for air launch", comparing the design of the system to other air launch systems in development.
Tow plane
Original testing of the glider was performed using NASA Armstrong DROID Aircraft. Due to a combination of the projected mass, and thus drag of the glider with a rocket aboard, as well as the operating altitude required for testing, a larger tow aircraft was built. Dubbed the "Micro Cub", it is a heavily modified Hempel 60% Super Cub using a JetCat SPT 15 turbo prop.
Glider
The glider design is based on a twin fuselage. NASA engineers plan to suspend the rocket stage below the center section of the glider wing. The glider will carry its own small rocket motor which will light for about 20 seconds after release from the tow plane to maintain velocity while climbing. The glider will then glide at a 70-degree angle.
Rocket
Test flights
On 21 October 2014, NASA conducted the maiden test flight of a one-third scale model of the twin fuselage glider using NASA Armstrong's DROID aircraft, an autonomous airplane. The flight was successful.
Future
NASA plans to test the feasibility of releasing a small unpowered rocket from the one-third scale glider, followed by mounting a small rocket motor on the glider to test the feasibility of a rocket-assisted glider design. Further plans involve the construction of a full-scale platform. The project has obtained funding for NASA's 2015 financial year through the Game Changing Development program following the first successful test flight of the one-third scale glider.
References
Air launch to orbit
Rocketry
Space access | Towed glider air-launch system | [
"Engineering"
] | 761 | [
"Rocketry",
"Aerospace engineering"
] |
36,725,358 | https://en.wikipedia.org/wiki/Indacrinone | Indacrinone is a loop diuretic. It can be used in patients of gout with hypertension as an antihypertensive because it decreases reabsorption of uric acid, while other diuretics increase it.
Chirality and biological activity
Indacrinone is a chiral drug with one chiral center and hence exists as a pair of mirror-image enantiomers. The (R)-enantiomer, the eutomer, is the diuretic, whereas the mirror-image (S)-enantiomer counteracts a side effect of the eutomer. Here both enantiomers contribute to the overall desired effect in different ways.
As indicated earlier, the (R)-enantiomer is the pharmacologically active diuretic. Like most other diuretics, the (R)-isomer has the undesirable side effect of causing uric acid retention. The (S)-enantiomer, the distomer, assists uric acid excretion (a uricosuric effect) and therefore antagonizes the undesirable side effect of the eutomer (uric acid retention). This affords a good argument for marketing a racemic mixture, but studies indicate that a 9:1 mixture of the two enantiomers provides the optimal therapeutic value.
Synthesis
The Friedel-Crafts acylation of 2,3-dichloroanisole [1984-59-4] (1) with phenylacetyl chloride [103-80-0] (2) gives 2,3-dichloro-4-phenylacetylanisole [59043-83-3] (3). A variation of the Mannich reaction is performed employing tetramethyldiaminomethane [51-80-9] (this is an aminal of dimethylamine and formaldehyde). The intermediate reaction product (5), which is not isolated, would undergo a β-Hydride elimination with concomitant loss of dimethylamine and formation of the corresponding enone, 2,3-Dichloro-4-(2-phenylacryloyl)anisole (PC10924810) (6). Acid catalyzed (H2SO4) intramolecular cyclization gives the indanone (PC10990444) (7). This is O-demethylated under acidic conditions to give 2-Phenyl-5-hydroxy-6,7-dichloro-1-indanone, PC12774089 (8). The phenol thus obtained is then alkylated on oxygen by iodoacetic acid [64-69-7] (9) affording PC20520826 (10). Alkylation with iodomethane [74-88-4] in the presence of sodium hydride completed the synthesis of indacrinone (11).
See also
Chiral drugs
Chirality
Eudisimic ratio
References
Diuretics
Carboxylic acids
Chloroarenes | Indacrinone | [
"Chemistry"
] | 684 | [
"Carboxylic acids",
"Functional groups"
] |
36,726,699 | https://en.wikipedia.org/wiki/Fault%20reporting | Fault reporting is a maintenance concept that increases operational availability and that reduces operating cost by three mechanisms:
Reduce labor-intensive diagnostic evaluation
Eliminate diagnostic testing down-time
Provide notification to management for degraded operation
That is a prerequisite for condition-based maintenance.
Active redundancy can be integrated with fault reporting to reduce the down time to a few minutes per year.
History
Formal maintenance philosophies are required by organizations whose primary responsibility is to ensure systems are ready when expected, such as space agencies and military.
Labor-intensive planned maintenance began during the rise of the Industrial Revolution and depends upon periodic diagnostic evaluation based upon calendar dates, distance, or use. The intent is to accomplish diagnostic evaluations that indicate when maintenance is required to prevent inconvenience and safety issues that will occur when critical equipment failures occur during use.
The electronic revolution allowed inexpensive sensors and controls to be integrated into most equipment. That includes diagnostic indicators, fluid sensors, temperature sensors, ignition sensors, exhaust monitoring, voltage sensors, and similar monitoring equipment that indicates when maintenance is required. Sensor displays are often located in inaccessible locations that cannot be observed during normal operation. Labor-intensive periodic maintenance is often required to inspect indicators.
Some organizations have eliminated most labor-intensive periodic maintenance and diagnostic down time by implementing designs that bring all sensor status to fault indicators near users.
Principle
Maintenance requires three actions.
Fault discovery
Fault isolation
Fault recovery
Fault discovery requires diagnostic maintenance, which requires system down time and labor costs.
Down time and cost requirements associated with diagnostics are eliminated for every item that satisfies the following criteria.
Automated diagnostic
Instrumented for remote viewing
Displayed in the vicinity of supervisory personnel
Implementation
Fault reporting is an optional feature that can be forwarded to remote displays using a simple configuration setting in all modern computing equipment. The system reporting levels that are appropriate for condition-based maintenance are critical, alert, and emergency, which indicate software termination due to failure. Specific failure reporting, such as interface failure, can be integrated into applications linked with these reporting systems (see the sketch after the list below). There is no development cost if they are incorporated into designs.
Syslog
Event Log
Power distribution unit
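A minimal sketch of this idea in Python (for illustration only: the collector host "loghost", the UDP port 514, and the logger name "pump_station" are invented assumptions, and any reachable syslog listener could stand in for them) forwards a critical fault to a remote collector using only the standard library:

    import logging
    import logging.handlers

    # Forward faults to a remote syslog collector rather than a local-only panel.
    logger = logging.getLogger("pump_station")  # hypothetical equipment name
    handler = logging.handlers.SysLogHandler(address=("loghost", 514))  # assumed collector
    logger.addHandler(handler)
    logger.setLevel(logging.WARNING)

    # Critical-level events are the kind appropriate for condition-based
    # maintenance: they indicate a sustained need for a maintenance action.
    logger.critical("coolant pump 2: loss of flow, standby unit engaged")

Windows Event Log forwarding serves the same purpose; the design point in both cases is that the indication reaches supervisory personnel without a manual inspection round.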
Other kinds of fault reporting involves painting green, yellow, and red zones onto temperature gages, pressure gages, flow gages, vibration sensors, strain gages, and similar sensors. Remote viewing can be implemented using a video camera.
Benefits
The historical approach to fault discovery is periodic diagnostic testing; fault reporting eliminates the operational availability penalty that this down time imposes.
Fault reporting eliminates maintenance costs associated with manual diagnostic testing.
Labor is eliminated in redundant designs by using the fault discovery and fault isolation functions to automatically reconfigure equipment for degraded operation.
Maintenance savings can be re-allocated to upgrades and improvements that increase organizational competitiveness.
Problems
Faults that do not trigger a sustained requirement for fault isolation and fault recovery actions should not be displayed for management action.
For example, lighting up a fault indicator in situations where human intervention is not required induces breakage by causing maintenance personnel to perform work when nothing is actually broken.
Another example is that enabling fault reporting for Internet packet-delivery failures increases network loading when the network is already busy, which can cause a total network outage.
See also
Active redundancy
Operational availability
References
Maintenance
Engineering concepts
Reliability engineering | Fault reporting | [
"Engineering"
] | 652 | [
"Systems engineering",
"Reliability engineering",
"nan",
"Maintenance",
"Mechanical engineering"
] |
36,726,860 | https://en.wikipedia.org/wiki/Borane%E2%80%93tetrahydrofuran | Borane–tetrahydrofuran is an adduct derived from borane and tetrahydrofuran (THF). These solutions, which are colorless, are used for reductions and hydroboration, reactions that are useful in the synthesis of organic compounds. A common alternative to BH3·THF is borane–dimethylsulfide, which has a longer shelf life and effects similar transformations.
Preparation and uses
The complex is commercially available but can also be generated by the dissolution of diborane in THF. Alternatively, it can be prepared by the oxidation of sodium borohydride with iodine in THF.
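A commonly cited stoichiometry for the iodine route (a standard textbook equation, included here for illustration rather than taken from the text above) is:

    2 NaBH4 + I2 → B2H6 + 2 NaI + H2

with the diborane generated in situ being captured by the solvent as two equivalents of BH3·THF.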
The complex can reduce carboxylic acids to alcohols and is a common route for the reduction of amino acids to amino alcohols (e.g. valinol). It adds across alkenes to give organoboron compounds that are useful intermediates. The following organoboron reagents are prepared from borane-THF: 9-borabicyclo[3.3.1]nonane, Alpine borane, diisopinocampheylborane. It is also used as a source of borane (BH3) for the formation of adducts.
Safety
The solution is highly sensitive to air, requiring the use of air-free techniques.
See also
Ammonia borane
Borane dimethylsulfide
Borane tert-butylamine
References
Boranes
Reagents for organic chemistry | Borane–tetrahydrofuran | [
"Chemistry"
] | 314 | [
"Reagents for organic chemistry"
] |
36,727,085 | https://en.wikipedia.org/wiki/Active%20redundancy | Active redundancy is a design concept that increases operational availability and that reduces operating cost by automating most critical maintenance actions.
This concept is related to condition-based maintenance and fault reporting.
History
The initial requirement began with military combat systems during World War I. The approach used for survivability was to install thick armor plate to resist gun fire and install multiple guns.
This became unaffordable and impractical during the Cold War when aircraft and missile systems became common.
The new approach was to build distributed systems that continue to work when components are damaged. This depends upon very crude forms of artificial intelligence that perform reconfiguration by obeying specific rules. An example of this approach is the AN/UYK-43 computer.
Formal design philosophies involving active redundancy are required for critical systems where corrective labor is undesirable or impractical to correct failure during normal operation.
Commercial aircraft are required to have multiple redundant computing systems, hydraulic systems, and propulsion systems so that a single in-flight equipment failure will not cause loss of life.
A more recent outcome of this work is the Internet, which relies on a backbone of routers that provide the ability to automatically re-route communication without human intervention when failures occur.
Satellites placed into orbit around the Earth must include massive active redundancy to ensure operation will continue for a decade or longer despite failures induced by normal failure, radiation-induced failure, and thermal shock.
This strategy now dominates space systems, aircraft, and missile systems.
Principle
Maintenance requires three actions, which usually involve down time and high priority labor costs:
Automatic fault detection
Automatic fault isolation
Automatic reconfiguration
Active redundancy eliminates down time and reduces manpower requirements by automating all three actions. This requires some amount of automated artificial intelligence.
N stands for the amount of equipment needed to carry the load. The amount of excess capacity affects overall system reliability by limiting the effects of failure.
For example, if it takes two generators to power a city, then "N+1" would be three generators to allow a single failure. Similarly, "N+2" would be four generators, which would allow one generator to fail while a second generator has already failed.
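A minimal sketch of how this excess capacity translates into operational availability (Python is used purely for illustration; the assumptions that units fail independently and that each generator has an availability of 0.99 are invented examples, not figures from the text):

    from math import comb

    def system_availability(a_unit: float, needed: int, installed: int) -> float:
        """Probability that at least `needed` of `installed` identical units are up."""
        return sum(
            comb(installed, k) * a_unit**k * (1 - a_unit)**(installed - k)
            for k in range(needed, installed + 1)
        )

    a = 0.99  # assumed availability of a single generator
    print(system_availability(a, needed=2, installed=2))  # N:   about 0.980
    print(system_availability(a, needed=2, installed=3))  # N+1: about 0.9997
    print(system_availability(a, needed=2, installed=4))  # N+2: about 0.999996

The jump from N to N+1 is what makes the extra generator worthwhile even though it sits idle most of the time.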
Active redundancy improves operational availability as follows.
Passive components
Active redundancy in passive components requires redundant components that share the burden when failure occurs, like in cabling and piping.
This allows forces to be redistributed across a bridge to prevent failure if a vehicle ruptures a cable.
This allows water flow to be redistributed through pipes when a limited number of valves are shut or pumps shut down.
Active components
Active redundancy in active components requires reconfiguration when failure occurs. Computer programming must recognize the failure and automatically reconfigure to restore operation.
All modern computers provide the following when an existing feature is enabled via fault reporting.
Automatic fault detection
Automatic fault isolation
Mechanical devices must reconfigure, such as transmission settings on hybrid vehicles that have redundant propulsion systems. The petroleum engine will start up when battery power fails.
Electrical power systems must perform two actions to prevent total system failure when smaller failures occur, such as when a tree falls across a power line. Power systems incorporate communication, switching, and automatic scheduling that allows these actions to be automated.
Shut down the damaged power line to isolate the failure
Adjust generator settings to prevent voltage and frequency excursions
Benefits
This is the only known strategy that can achieve high availability.
Detriments
This maintenance philosophy requires custom development with extra components.
See also
Operational availability
Fault reporting
Data center tiers
Availability zone
References
Maintenance
Engineering concepts
Reliability engineering
Safety
Fault-tolerant computer systems | Active redundancy | [
"Technology",
"Engineering"
] | 744 | [
"Systems engineering",
"Reliability engineering",
"Computer systems",
"Fault-tolerant computer systems",
"Mechanical engineering",
"Maintenance",
"nan"
] |
36,728,197 | https://en.wikipedia.org/wiki/Zymoscope | Measuring instruments
Yeasts | Zymoscope | [
"Technology",
"Engineering",
"Biology"
] | 6 | [
"Yeasts",
"Fungi",
"Measuring instruments"
] |
36,728,961 | https://en.wikipedia.org/wiki/Aircraft%20engine%20starting | Many variations of aircraft engine starting have been used since the Wright brothers made their first powered flight in 1903. The methods used have been designed for weight saving, simplicity of operation and reliability. Early piston engines were started by hand. Geared hand starting, electrical and cartridge-operated systems for larger engines were developed between the First and Second World Wars.
Gas turbine aircraft engines such as turbojets, turboshafts and turbofans often use air/pneumatic starting, with the use of bleed air from built-in auxiliary power units (APUs) or external air compressors now seen as a common starting method. Often only one engine needs be started using the APU (or remote compressor). After the first engine is started using APU bleed air, cross-bleed air from the running engine can be used to start the remaining engine(s).
Piston engines
Hand starting/propeller swinging
Hand starting of aircraft piston engines by swinging the propeller is the oldest and simplest method, the absence of any onboard starting system giving an appreciable weight saving. Positioning of the propeller relative to the crankshaft is arranged such that the engine pistons pass through top dead centre during the swinging stroke.
As the ignition system is normally arranged to produce sparks before top dead centre there is a risk of the engine kicking back during hand starting. To avoid this problem one of the two magnetos used in a typical aero engine ignition system is fitted with an 'impulse coupling', this spring-loaded device delays the spark until top dead centre and also increases the rotational speed of the magneto to produce a stronger spark. When the engine fires, the impulse coupling no longer operates and the second magneto is switched on.
As aero engines grew bigger in capacity (during the interwar period), single-person propeller swinging became physically difficult; ground crew personnel would join hands and pull together as a team or use a canvas sock fitted over one propeller blade, the sock having a length of rope attached to the propeller tip end. Note that this is different from the manual "turning over" of a radial piston engine, which is done to release oil that has become trapped in the lower cylinders prior to starting, to avoid engine damage. The two appear similar, but while hand starting involves a sharp, strong "yank" on the prop to start the engine, turning over is simply done by turning the prop through a certain set amount.
Accidents have occurred during lone-pilot hand starting when throttle settings were high, brakes were not applied, or wheel chocks were not used, all resulting in the aircraft moving off without the pilot at the controls. "Turning the engine" with the ignition switches accidentally left "on" can also cause injury, as the engine can start unexpectedly when a spark plug fires. If the switch is not in the start position, the spark will occur before the piston reaches top dead center, which can force the propeller to kick back violently.
Hucks starter
The Hucks starter (invented by Bentfield Hucks during WWI) is a mechanical replacement for the ground crew. Based on a vehicle chassis the device uses a clutch driven shaft to turn the propeller, disengaging as the engine starts. A Hucks starter is used regularly at the Shuttleworth Collection for starting period aircraft.
Pull cord
Self-sustaining motor gliders (often known as 'turbos') are fitted with small two-stroke engines with no starting system, for ground testing a cord is wrapped around the propeller boss and pulled rapidly in conjunction with operating decompressor valves. These engines are started in flight by operating the decompressor and increasing airspeed to windmill the propeller. Early variants of the Slingsby Falke motor glider use a cockpit mounted pull start system.
Electric starter
Aircraft began to be equipped with electrical systems around 1930, powered by a battery and small wind-driven generator. The systems were initially not powerful enough to drive starter motors. Introduction of engine-driven generators solved the problem.
Introduction of electric starter motors for aero engines increased convenience at the expense of extra weight and complexity. They were a necessity for flying boats with high mounted, inaccessible engines. Powered by an onboard battery, ground electrical supply or both, the starter is operated by a key or switch in the cockpit. The key system usually facilitates switching of the magnetos.
In cold ambient conditions the friction caused by viscous engine oil causes a high load on the starting system. Another problem is the reluctance of the fuel to vaporise and combust at low temperatures. Oil dilution systems were developed (mixing fuel with the engine oil), and engine pre-heaters were used (including lighting fires under the engine). The Ki-Gass priming pump system was used to assist starting of British engines.
Aircraft fitted with variable-pitch propellers or constant speed propellers are started in fine pitch to reduce air loads and current in the starter motor circuit.
Many light aircraft are fitted with a 'starter engaged' warning light in the cockpit, a mandatory airworthiness requirement to guard against the risk of the starter motor failing to disengage from the engine.
Coffman starter
The Coffman starter was an explosive cartridge operated device, the burning gases either operating directly in the cylinders to rotate the engine or operating through a geared drive. First introduced on the Junkers Jumo 205 diesel engine in 1936 the Coffman starter was not widely used by civil operators due to the expense of the cartridges.
Pneumatic starter
In 1920 Roy Fedden designed a piston engine gas starting system, used on the Bristol Jupiter engine in 1922. A system used in early Rolls-Royce Kestrel engines ducted high-pressure air from a ground unit through a camshaft-driven distributor to the cylinders via non-return valves; the system had disadvantages that were only overcome by conversion to electric starting.
In-flight starting
When a piston engine needs to be started in flight the electric starter motor can be used. This is a normal procedure for motor gliders that have been soaring with the engine turned off. During aerobatics with earlier aircraft types it was not uncommon for the engine to cut during manoeuvres due to carburettor design. With no electric starter installed, engines can be restarted by diving the aircraft to increase airspeed and the rotation speed of the 'windmilling' propeller.
Inertia starter
An aero engine inertia starter uses a pre-rotated flywheel to transfer kinetic energy to the crankshaft, normally through reduction gears and a clutch to prevent over-torque conditions. Three variations have been used, hand driven, electrically driven and a combination of both. When the flywheel is fully energised either a manual cable is pulled or a solenoid is used to engage the starter.
Gas turbine engines
Starting of a gas turbine engine requires rotation of the compressor to a speed that provides sufficient pressurised air to the combustion chambers. The starting system has to overcome the inertia of the compressor and friction loads; the system remains in operation after combustion starts and is disengaged once the engine has reached self-idling speed.
Electric starter
Two types of electrical starter motor can be used: direct-cranking (which disengages once the engine is running, as with internal combustion engines) and the starter-generator system (permanently engaged).
Hydraulic starter
Small gas turbine engines, particularly turboshaft engines used in helicopters and cruise missile turbojets can be started by a geared hydraulic motor using oil pressure from a ground supply.
Air-start
With air-start systems, gas turbine engine compressor spools are rotated by the action of a large volume of compressed air acting directly on the compressor blades or driving the engine through a small, geared turbine motor. These motors can weigh up to 75% less than an equivalent electrical system.
The compressed air can be supplied from an on-board auxiliary power unit (APU), a portable gas generator used by ground crew or by cross feeding bleed air from a running engine in the case of multi-engined aircraft.
The Turbomeca Palouste gas generator was used to start the Spey engines of the Blackburn Buccaneer. The de Havilland Sea Vixen was equipped with its own Palouste in a removable underwing container to facilitate starting when away from base. Other military aircraft types using ground supplied compressed air for starting include the Lockheed F-104 Starfighter and variants of the F-4 Phantom using the General Electric J79 turbojet engine.
Combustion starters
AVPIN starter
Versions of the Rolls-Royce Avon turbojet engine used a geared turbine starter motor that burned isopropyl nitrate as the fuel. In military service this monofuel had the NATO designation of S-746 AVPIN. For starting, a measured amount of fuel was introduced into the starter combustion chamber and then ignited electrically, the hot gases spinning the turbine at high revolutions with the exhaust exiting overboard.
Cartridge starter
Similar in operating principle to the piston engine Coffman starter, an explosive cartridge drives a small turbine engine which is connected by gears to the compressor shaft.
Fuel/air turbine starter (APU)
Developed for short-haul airliners, and fitted to most civil and military aircraft requiring self-contained starting systems, these units are known by various names including Auxiliary Power Unit (APU), Jet Fuel Starter (JFS), Air Start Unit (ASU) or Gas Turbine Compressor (GTC).
Comprising a small gas turbine which is electrically started, these devices provide compressed bleed air for engine starting and often also provide electrical and hydraulic power for ground operations without the need to run the main engines.
ASUs are used today in civil and military ground support to provide main engine start (MES) capability and pneumatic bleed-air support for environmental control system (ECS) cooling and heating.
Internal combustion engine starter
An interesting feature shared by all three German jet engine designs that saw production of any kind before May 1945 (the BMW 003, Junkers Jumo 004 and Heinkel HeS 011 axial-flow turbojets) was the starter system, which consisted of a Riedel 10 hp (7.5 kW) flat twin two-stroke air-cooled engine hidden in the intake, essentially functioning as a pioneering example of an auxiliary power unit (APU) for starting a jet engine. On the Jumo 004, a hole in the extreme nose of the intake diverter contained a D-shaped manual pull-cord handle which started the piston engine, which in turn rotated the compressor. Two small petrol/oil mix tanks were fitted in the annular intake.
The Lockheed SR-71 Blackbird used two Buick Nailhead V8s as starter motors, mounted on an AG-330 Start Kart trolley; these were later replaced with big-block Chevrolet 454 V8 engines.
In-flight restart
Gas turbine engines can be shut down in flight, intentionally by the crew to save fuel or during a flight test, or unintentionally due to fuel starvation or flameout after a compressor stall.
Sufficient airspeed is used to 'windmill' the compressor, then fuel and ignition are switched on; an on-board auxiliary power unit may be used at high altitudes where the air density is lower.
During zoom climb operations of the Lockheed NF-104A the jet engine was shut down during the climb and was restarted using the windmill method on descent through denser air.
Pulse jet starting
Pulse jet engines are uncommon aircraft powerplants. However, the Argus As 014 used to power the V-1 flying bomb and Fieseler Fi 103R Reichenberg was a notable exception.
In this pulse jet, three air nozzles in the front section were connected to an external high-pressure air source; butane from an external supply was used for starting, and ignition was accomplished by a spark plug located behind the shutter system, with electricity to the plug supplied from a portable starting unit.
Once the engine started and the temperature rose to the minimum operating level, the external air hose and connectors were removed, and the resonant design of the tailpipe kept the pulse jet firing. Each cycle or pulse of the engine began with the shutters open; fuel was injected behind them and ignited, and the resulting expansion of gases forced the shutters closed. As the pressure in the engine dropped following combustion, the shutters reopened and the cycle was repeated, roughly 40 to 45 times per second. The electrical ignition system was used only to start the engine; heating of the tailpipe skin maintained combustion.
See also
Index of aviation articles
References
Notes
Bibliography
Bowman, Martin W. Lockheed F-104 Starfighter. Ramsbury, Marlborough, Wiltshire, UK: Crowood Press Ltd., 2000. .
Federal Aviation Administration, Airframe & Powerplant Mechanics Powerplant Handbook U.S Department of Transportation, Jeppesen Sanderson, 1976.
Gunston, Bill. Development of Piston Aero Engines. Cambridge, England. Patrick Stephens Limited, 2006.
Gunston, Bill. The Development of Jet and Turbine Aero Engines. Cambridge, England. Patrick Stephens Limited, 1997.
Hardy, Michael. Gliders & Sailplanes of the World. London: Ian Allan, 1982. .
Jane's Fighting Aircraft of World War II. London. Studio Editions Ltd, 1998.
Lumsden, Alec. British Piston Engines and their Aircraft. Marlborough, Wiltshire: Airlife Publishing, 2003. .
Rubbra, A.A. Rolls-Royce Piston Aero Engines - a designer remembers: Historical Series no 16 :Rolls-Royce Heritage Trust, 1990.
Stewart, Stanley. Flying the Big Jets. Shrewsbury, England. Airlife Publishing Ltd, 1986.
Thom, Trevor. The Air Pilot's Manual 4-The Aeroplane-Technical. Shrewsbury, Shropshire, England. Airlife Publishing Ltd, 1988.
Williams, Neil. Aerobatics, Shrewsbury, England: Airlife Publishing Ltd., 1975
Starting systems
Engine starting | Aircraft engine starting | [
"Engineering"
] | 2,787 | [
"Systems engineering",
"Aircraft systems"
] |
36,729,549 | https://en.wikipedia.org/wiki/Rolling-wave%20planning | Rolling-wave planning is the process of project planning in waves as the project proceeds and later details become clearer; similar to the techniques used in agile software development approaches like Scrum.
Work to be done in the near term is based on high-level assumptions; also, high-level milestones are set. As the project progresses, the risks, assumptions, and milestones originally identified become more defined and reliable. One would use rolling-wave planning in an instance where there is an extremely tight schedule or timeline to adhere to, as more thorough planning would place the schedule into an unacceptable negative schedule variance.
The concepts of rolling-wave planning and progressive elaboration are techniques covered in the Project Management Body of Knowledge.
References
External links
Rolling Wave Planning
Rolling Wave Planning in Project Management
Rolling Wave Planning
PMBOK 4 - This Time It's Iterative
PMBOK Define Activities Process
Schedule (project management) | Rolling-wave planning | [
"Physics"
] | 184 | [
"Spacetime",
"Physical quantities",
"Time",
"Schedule (project management)"
] |
40,910,086 | https://en.wikipedia.org/wiki/Weyl%E2%80%93Lewis%E2%80%93Papapetrou%20coordinates | In general relativity, the Weyl–Lewis–Papapetrou coordinates are used in solutions to the vacuum region surrounding an axisymmetric distribution of mass–energy. They are named for Hermann Weyl, Thomas Lewis, and Achilles Papapetrou.
Details
The square of the line element is of the form:

    ds^2 = -e^{2\nu}\,dt^2 + \rho^2 B^2 e^{-2\nu}\,(d\phi - \omega\,dt)^2 + e^{2(\lambda-\nu)}\,(d\rho^2 + dz^2)

where (t, ρ, φ, z) are the cylindrical Weyl–Lewis–Papapetrou coordinates in 3+1-dimensional spacetime, and λ, ν, ω, and B are unknown functions of the spatial non-angular coordinates ρ and z only. Different authors define the functions of the coordinates differently.
See also
Introduction to the mathematics of general relativity
Stress–energy tensor
Metric tensor (general relativity)
Relativistic angular momentum
Weyl metrics
References
Further reading
Selected papers
Selected books
Metric tensors
Spacetime
Coordinate charts in general relativity
General relativity
Gravity | Weyl–Lewis–Papapetrou coordinates | [
"Physics",
"Mathematics",
"Engineering"
] | 161 | [
"Tensors",
"Vector spaces",
"Coordinate systems",
"Space (mathematics)",
"General relativity",
"Metric tensors",
"Relativity stubs",
"Theory of relativity",
"Spacetime",
"Coordinate charts in general relativity"
] |
40,911,082 | https://en.wikipedia.org/wiki/Icosahedral%20twins | An icosahedral twin is an atomic structure found in atomic clusters and also in nanoparticles with some thousands of atoms. Their atomic structure is slightly different from what is found in bulk materials, and contains five-fold symmetries. They have been analyzed in many areas of science including crystal growth, crystallography, chemical physics, surface science and materials science, as well as sometimes being considered beautiful due to their high symmetry.
The simplest form of these clusters is twenty interlinked tetrahedral crystals joined along triangular (e.g. cubic-(111)) faces, although more complex variants of the outer surface also occur. A related structure has five units similarly arranged with twinning, which were known as "fivelings" in the 19th century, more recently as "decahedral multiply twinned particles", "pentagonal particles" or "star particles". A variety of different methods (e.g. condensing metal nanoparticles in argon, deposition on a substrate, wet chemical synthesis) lead to the icosahedral form, and they also occur in virus capsids.
These forms occur at small sizes where they have a lower total surface energy than other configurations. This is balanced by an elastic deformation (strain) energy, which dominates at larger sizes. This leads to a competition between different forms as a function of size, and often there is a population of different shapes.
Shape and energetics
In a large particle the energy is dominated by the bulk bonding. The energy of the external surface where the atoms have less bonding is less important. The overall shape is the one which minimizes the total surface energy, the solution of which is the Wulff construction. When the size is reduced a significant fraction of the atoms are at the surface, and hence the total surface energy starts to become comparable to the bulk bonding energy. Icosahedral arrangements, typically because of their smaller total surface energy, can be preferred for small nanoparticles. For face centered cubic (fcc) materials such as gold or silver these structures can be considered as being built from twenty different single crystal units all with three twin facets arranged in icosahedral symmetry, and mainly the low energy {111} external facets. An fcc single crystal has both {111} and {100} surface facets, and perhaps {110} if the energy of the latter is low enough. In contrast icosahedral twins normally have {111} and perhaps {110}, none of the higher energy {100}.
The external surface shape for given values of the surface energy can be generated from a modified Wulff construction, and is also not always that of a simple icosahedron; there can be additional facets leading to a more spherical shape as illustrated in the figure. Depending upon the relative energies of the {111} and {110} facets, the shape can range from an icosahedron (on the left of the figure) with small dents at the five-fold axes (due to the twin boundary energy) when {111} is significantly lower in energy, to (going to the right in the figure) a truncated icosahedron or an icosidodecahedron when the {111} and {110} are similar, and a regular dodecahedron when {110} is significantly lower in energy. These different shapes have been found in experiments where the relative surface energies are changed with surface adsorbates. There are several software codes that can be used to calculate the shape as a function of the energy of different surface facets.
With just tetrahedra these structures cannot fill space and there would be gaps as shown in the figure, so there are some distortions of the atomic positions, that is, elastic deformation to close these gaps. These deformations cost energy, and this strain energy competes with the gain in total surface energy. Roland De Wit pointed out that these can be thought of in terms of disclinations, an approach later extended to three dimensions by Elisabeth Yoffe. This leads to a compression in the center of the particles, and an expansion at the surface.
At small sizes the surface energy often dominates over the strain energy, with icosahedral forms often the most stable ones. At larger sizes the energy to distort becomes larger than the gain in surface energy, and a single crystal with a Wulff construction shape is lowest in energy. The size when the icosahedra become less energetically stable is typically 10-30 nanometers in diameter, but it does not always happen that the shape changes and the particles can grow to micron sizes.
The most common approach to understand the formation of these particles, first used by Shozo Ino in 1969, is to look at the energy as a function of size comparing these icosahedral twins, decahedral nanoparticles and single crystals. The total energy for each type of particle can be written as the sum of three terms:
E_total = E_surface + E_strain + E_coupling

for a volume V, where E_surface is the surface energy, E_strain is the disclination strain energy to close the gap, and E_coupling is a coupling term for the effect of the strain on the surface energy via the surface stress, which can be a significant contribution. The sum of these three terms is compared to the total surface energy of a single crystal (which has no strain), and to similar terms for a decahedral particle. Of the three, the icosahedral particles have both the lowest total surface energy and the largest strain energy for a given volume. Hence the icosahedral particles are more stable at very small sizes, the decahedral at intermediate sizes, then single crystals. At large sizes the strain energy can become very large, so it is energetically favorable to have dislocations and/or a grain boundary instead of a distributed strain.
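A minimal numerical sketch of this size competition (Python is used purely for illustration; the V**(2/3) scaling of the surface term, the linear scaling of the strain term, and every coefficient below are invented assumptions, and the decahedral case and the coupling term are omitted):

    def total_energy(volume, surface_coeff, strain_coeff):
        # Surface term grows as V**(2/3); distributed strain grows with volume.
        return surface_coeff * volume ** (2.0 / 3.0) + strain_coeff * volume

    # Icosahedral twin: lowest surface energy but largest strain energy.
    # Single crystal: higher surface energy but no strain.
    for v in (1e2, 1e4, 1e6):  # arbitrary volume units
        e_ico = total_energy(v, surface_coeff=0.8, strain_coeff=0.02)
        e_sc = total_energy(v, surface_coeff=1.0, strain_coeff=0.0)
        print(v, "icosahedral" if e_ico < e_sc else "single crystal")

For any coefficients ordered this way, the icosahedral form wins at sufficiently small volumes and the unstrained single crystal wins at sufficiently large volumes, which is the qualitative behaviour described above.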
There is no general consensus on the exact sizes when there is a transition in which type of particle is lowest in energy, as these vary with material and also the environment such as gas and temperature; the coupling surface stress term and also the surface energies of the facets are very sensitive to these. In addition, as first described by Michael Hoare and P Pal and R. Stephen Berry and analyzed for these particles by Pulickel Ajayan and Laurence Marks as well as discussed by others such as Amanda Barnard, David J. Wales, Kristen Fichthorn and Francesca Baletto and Riccardo Ferrando, at very small sizes there will be a statistical population of different structures so many different ones will exist at the same time. In many cases nanoparticles are believed to grow from a very small seed without changing shape, and hence what is found reflects the distribution of coexisting structures.
For systems where icosahedral and decahedral morphologies are both relatively low in energy, the competition between these structures has implications for structure prediction and for the global thermodynamic and kinetic properties. These result from a double funnel energy landscape where the two families of structures are separated by a relatively high energy barrier at the temperature where they are in thermodynamic equilibrium. This arises for a cluster of 75 atoms with the Lennard-Jones potential, where the global potential energy minimum is decahedral, and structures based upon incomplete Mackay icosahedra are also low in potential energy, but higher in entropy. The free energy barrier between these families is large compared to the available thermal energy at the temperature where they are in equilibrium. An example is shown in the figure, with probability in the lower part and energy above with axes of an order parameter and temperature . At low temperature the 75 atom decahedral cluster (Dh) is the global free energy minimum, but as the temperature increases the higher entropy of the competing structures based on incomplete icosahedra (Ic) causes the finite system analogue of a first-order phase transition; at even higher temperatures a liquid-like state is favored.
Ubiquity
Most modern analysis of these shapes in nanoparticles started with the observation of icosahedral and decahedral particles by Shozo Ino and Shiro Ogawa in 1966-67, and independently but slightly later (which they acknowledged) in work by John Allpress and John Veysey Sanders. In both cases these were for vacuum deposition of metal onto substrates in very clean (ultra-high vacuum) conditions, where nanoparticle islands of size 10-50 nm were formed during thin film growth. Using transmission electron microscopy and diffraction these authors demonstrated the presence of the units in the particles, and also the twin relationships. They called the five-fold and icosahedral crystals multiply twinned particles (MTPs). In the early work near-perfect icosahedron shapes were formed, so they were called icosahedral MTPs, the names connecting to the icosahedral (Ih) point group symmetry. These forms occur for elemental nanoparticles as well as alloys and colloidal crystals. A related form also exists in icosahedral viruses as shown in the electron micrograph images.
Quasicrystals are un-twinned structures with long-range rotational order but no translational periodicity, which some initially tried to explain away as icosahedral twinning. There are also icosahedral-like minerals such as pyrite, where the crystals are called pyritohedra. These form large crystals, but they do not have twinning and the lengths of the sides are not all the same.
See also
Crystal twinning
Icosahedron
Nanomaterial based catalyst
Nanotechnology
Quasicrystals
Self-assembly of nanoparticles
References
External links
Code from the group of Emilie Ringe which calculates thermodynamic and kinetic shapes for decahedral particles and also does optical simulations.
Code from J M Rahm and P Erhart which calculates thermodynamic shapes, both continuum and atomistic.
The code can be used to generate thermodynamic Wulff shapes including twinning.
Web page using the WulffPack code and was used for the different icosahedral shapes.
Chemical physics
Condensed matter physics
Crystallography
Materials science
Mineralogy
Nanoparticles
Physical chemistry
Solid-state chemistry | Icosahedral twins | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,080 | [
"Applied and interdisciplinary physics",
"Phases of matter",
"Materials science",
"Chemical physics",
"Crystallography",
"Condensed matter physics",
"nan",
"Physical chemistry",
"Matter",
"Solid-state chemistry"
] |
40,912,430 | https://en.wikipedia.org/wiki/C20H20O6 | The molecular formula C20H20O6 (molar mass: 356.37 g/mol, exact mass: 356.1260 u) may refer to:
Balanophonin, a neo-lignan
Pluviatilol, a lignan
Molecular formulas | C20H20O6 | [
"Physics",
"Chemistry"
] | 73 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
40,914,007 | https://en.wikipedia.org/wiki/Diafiltration | Diafiltration is a dilution process that involves the removal or separation of components of a solution (permeable molecules such as salts, small proteins, and solvents) on the basis of their molecular size, by using filters permeable to small molecules, in order to obtain a purified solution.
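For constant-volume (continuous) diafiltration, a standard textbook relation (included here for illustration; it is not stated in the source text) gives the remaining concentration of a species that passes the filter as

    C / C0 = exp(−N · S)

where C0 is the starting concentration, N is the number of diafiltration volumes of wash buffer added (buffer volume divided by the retentate volume), and S is the sieving coefficient of the species (S = 1 for a freely permeating salt). Washing with five diavolumes of buffer therefore leaves roughly exp(−5), or about 0.7%, of a freely permeating salt.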
References
Further reading
External links
Diafiltration for Desalting or Buffer Exchange
Mobius Ultra/ Diafiltration Solutions
Membrane technology | Diafiltration | [
"Chemistry"
] | 80 | [
"Membrane technology",
"Separation processes"
] |
40,914,034 | https://en.wikipedia.org/wiki/Phosvitin | Phosvitin is one of the egg (commonly hen's egg) yolk phosphoproteins known for being the most phosphorylated protein found in nature. Phosvitin isolation was first described by Mecham and Olcott in the year 1949. Recently it has been shown that phosvitin orchestrates nucleation and growth of biomimetic bone like apatite.
Structure
As the most phosphorylated natural protein, phosvitin contains 123 phosphoserine residues, accounting for 56.7% of its total 217 amino acid residues. The structure of phosvitin largely consists of stretches of 4-12 consecutive serine residues, interspersed with amino acid residues such as lysine (6.9%), histidine (6.0%), and arginine (5.1%), among others in smaller quantities. A structural model of phosvitin can be adapted from the AlphaFold-generated structure of the parent protein vitellogenin (Gene: VTG2; Uniprot: P02845; residues 1-1850). Phosvitin is one of four proteins cleaved from vitellogenin and is unstructured at neutral pH. Despite phosvitin only accounting for 16% of total proteins in egg yolk, it alone accounts for 60% of the total yolk phosphoproteins as well as 90% of the total yolk phosphorus.
Function
Due to phosvitin's polyanionic character, the protein performs functions such as metal chelation, emulsification, and nutrient sequestration for the growing embryo. Additionally, recent research has shown that the disordered secondary structure of phosvitin orchestrates the nucleation and growth of biomimetic bone-like apatite.
References
Further reading
Glycoproteins
Phosphoproteins | Phosvitin | [
"Chemistry"
] | 420 | [
"Biochemistry stubs",
"Glycobiology",
"Glycoproteins",
"Protein stubs"
] |
40,914,232 | https://en.wikipedia.org/wiki/Whiting%20event | A whiting event is a phenomenon that occurs when a suspended cloud of fine-grained calcium carbonate precipitates in water bodies, typically during summer months, as a result of photosynthetic microbiological activity or sediment disturbance. The phenomenon gets its name from the white, chalky color it imbues to the water. These events have been shown to occur in temperate waters as well as tropical ones, and they can span for hundreds of meters. They can also occur in both marine and freshwater environments. The origin of whiting events is debated among the scientific community, and it is unclear if there is a single, specific cause. Generally, they are thought to result from either bottom sediment re-suspension or by increased activity of certain microscopic life such as phytoplankton. Because whiting events affect aquatic chemistry, physical properties, and carbon cycling, studying the mechanisms behind them holds scientific relevance in various ways.
Characteristics
Whiting event clouds consist of calcium carbonate polymorphs; aragonite tends to be the dominant precipitate, but some studies in oligotrophic and mesotrophic lakes show calcite is favored. Whiting events have been observed in tropical and temperate waters, and they can potentially cover hundreds of meters. They tend to occur more often in summer months, as warmer waters promote calcium carbonate precipitation, and in hard waters. Whitings are typically characterized by cloudy, white patches of water, but they can also be tanner in hue in very shallow waters (less than 5m deep). In some cases, the whiting might be cryptic (not visible at the surface), but still generate calcium carbonate. These shallow water whiting events also tend to last less than a day in comparison to deeper water events that can last for several days up to several months. Regardless of the event's lifespan, the clouds it produces increase turbidity and hamper light penetration.
Potential causes
Some debate exists surrounding the exact cause of whiting events, and although much research exists on the subject, there is still no definitive consensus on the chemical mechanisms behind them. The three most commonly suggested causes for the phenomenon are microbiological processes, re-suspension of marine or bottom sediments, and spontaneous direct precipitation from the water. Of these three, the last has been ruled unlikely due to the unfavorable reaction kinetics of spontaneous calcium carbonate precipitation. It is also possible for more than one of these factors to contribute to whiting events in the same region.
Microbiological activity
Substantial findings indicate photosynthetic picoplankton, picocyanobacteria, and phytoplankton activity creates favorable conditions for carbonate precipitation. This link arises as a result of planktonic blooms being observed coinciding with the events. Subsequently, via photosynthesis, these organisms uptake inorganic carbon, raise water pH, and alter water alkalinity, which promotes calcium carbonate precipitation. The thermodynamic influence of inorganic carbon on whiting calcium carbonate production is shown in the equation below. Furthermore, cases exist in which the type of calcium carbonate found in the whiting cloud matches the type found on local cyanobacteria membranes. It's hypothesized that the extracellular polymeric substances (EPS) these microorganisms produce can act as seed crystals that provide a start for the precipitation process. Current research on the specifics of these EPS and the exact physiological mechanisms of the microorganisms' carbon uptake, however, are limited.
2 HCO3−(aq) + Ca2+(aq) ⇌ CaCO3(s) + H2O + CO2
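One back-of-the-envelope way to express whether such precipitation is thermodynamically favoured is the saturation state Ω = [Ca2+][CO3^2-]/Ksp, with values above 1 indicating supersaturation. A minimal Python sketch with illustrative (not measured) concentrations; it uses concentrations rather than activities, so it overstates Ω for real seawater:

def saturation_state(ca, co3, ksp=10 ** -8.48):
    # Omega = [Ca2+][CO3^2-]/Ksp; the default Ksp is a commonly quoted
    # value for calcite near 25 C (order-of-magnitude illustration only).
    return (ca * co3) / ksp

# illustrative surface-seawater-like values, mol/L
print(saturation_state(ca=1.0e-2, co3=2.0e-4))  # >> 1, i.e. supersaturated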
Sediment re-suspension
In shallower waters, evidence supports that the activity of local fishermen and of marine life such as fish and certain shark species can disturb bottom sediments containing calcium carbonate particles and lead to their suspension. In addition, since microorganisms affect water chemistry in observable ways and require certain nutrient levels to thrive, whiting events occurring in nutrient-poor waters, where no significant alkalinity difference exists between whiting and non-whiting waters, support the idea of sediment re-suspension as a primary cause.
Relevance
Whiting events have a unique effect on the waters around them. The fact that calcium carbonate clouds increase turbidity and light reflectance holds implications for organisms and processes that depend on light. In addition, whiting events can function as a transport mechanism for organic carbon to the benthic zone, which is relevant to nutrient cycling. The cyanobacteria abundant clouds also hold the potential to act as a means to study the microorganism's role in carbon cycling (especially in relation to climate change) and their possible role in finding petroleum source rocks.
References
Further reading
Biogeochemistry
Geobiology
Geochemical processes
Geochemistry
Geology
Phenomena
Water | Whiting event | [
"Chemistry",
"Biology",
"Environmental_science"
] | 1,016 | [
"Hydrology",
"Environmental chemistry",
"Chemical oceanography",
"Geochemical processes",
"Biogeochemistry",
"nan",
"Water",
"Geobiology"
] |
32,555,013 | https://en.wikipedia.org/wiki/Aanval | Aanval is a commercial SIEM product designed specifically for use with Snort, Suricata, and Syslog data. Aanval has been in active development since 2003 and remains one of the longest running Snort capable SIEM products in the industry. Aanval is Dutch for "attack".
History
Aanval was created by Loyal Moses in 2003 but was not made publicly available until March 2004, when it was released under the private commercial license C1-RA1008. Throughout the lifecycle of the software it has also been referred to as OpenAanval or ComAanval in addition to Aanval.
Aanval provided AJAX-style security event monitoring and reporting from a web browser. Since its creation, it has developed into an intrusion detection, correlation and threat management console with a specific focus on normalizing Snort, Suricata, and Syslog data.
Several information security related books have been published that include details and references to Aanval, including "Linux Server Security, Second Edition" by O'Reilly Media, "Security Log Management" by O'Reilly Media, "Snort: IDS and IPS Toolkit" by O'Reilly Media and in 2010 "Unix and Linux System Administration Handbook, Fourth Edition" by O'Reilly Media.
See also
Snort
Intrusion detection system (IDS)
Intrusion prevention system (IPS)
Network intrusion detection system (NIDS)
Sguil
References
External links
Aanval wiki
Snort homepage
OISF homepage
Computer security software | Aanval | [
"Engineering"
] | 313 | [
"Cybersecurity engineering",
"Computer security software"
] |
34,164,016 | https://en.wikipedia.org/wiki/Ganoderma%20curtisii | Ganoderma curtisii is a wood-decaying polypore whose distribution is primarily in the Southeastern United States. Craig and Levetin claim to have observed it in Oklahoma.
Taxonomic history
The name was originally established by Miles Berkeley in 1849 as Polyporus curtisii, and later transferred to the genus Ganoderma by William Alphonso Murrill in 1908. This species is tentative and is a subject of debate as to its viability as a distinct species from North American specimens described as G. lucidum (G. sessile), which is much more widely distributed throughout the US. There is also debate about the identities of several species that resemble G. lucidum and G. tsugae.
One reason for an alleged synonymy between G. sessile and G. curtisii is overlap in habitat, decaying hardwoods. According to Volk, Gilbertson and Ryvarden, authors of North American Polypores, it is not considered a separate species from G. lucidum. Bessette et al., authors of Mushrooms of the Southeastern United States, echo this and list it as a synonym to G. lucidum. Paul Stamets considers G. lucidum and G. curtisii to both be members of a tight-knit species complex.
However, several recent molecular studies have shown Ganoderma curtisii to be genetically distinct from Ganoderma lucidum, calling into doubt the synonymy of the two species and supporting previous mycologists' opinion that it is a distinct species. The same studies support the idea that G. lucidum sensu stricto is actually absent from the North American continent and that the mushroom widely called G. lucidum in North America is instead G. sessile, a member of the Ganoderma resinaceum complex, with Ganoderma curtisii as a separate species.
Description
This polypore bears a marked resemblance to G. lucidum and generally has a stipe, sometimes lacking the characteristic red to purple varnished appearance that G. lucidum possesses. The flesh is spongy in pore tissue and firm in the stipe. The pores bruise brown when damaged.
Its habitat of choice is decaying stumps and roots of hardwoods, which aligns perfectly with that of G. sessile.
References
External links
G. curtisii – Images at Mushroom Observer
Fungi of North America
Fungi described in 1849
curtisii
Taxa named by Miles Joseph Berkeley
Fungus species | Ganoderma curtisii | [
"Biology"
] | 511 | [
"Fungi",
"Fungus species"
] |
34,167,136 | https://en.wikipedia.org/wiki/Neuroleptic-induced%20deficit%20syndrome | Neuroleptic-induced deficit syndrome (NIDS) is a psychopathological syndrome that develops in some patients who take high doses of an antipsychotic for an extended time. It is most often caused by high-potency typical antipsychotics, but can also be caused by high doses of many atypicals, especially those closer in profile to typical ones (that have higher D2 dopamine receptor affinity and relatively low 5-HT2 serotonin receptor binding affinity), like paliperidone and amisulpride.
Symptoms
Neuroleptic-induced deficit syndrome is principally characterized by the same symptoms that constitute the negative symptoms of schizophrenia: emotional blunting, apathy, hypobulia, anhedonia, indifference, difficulty or total inability in thinking, difficulty or total inability in concentrating, lack of initiative, attention deficits, and desocialization. This can easily lead to misdiagnosis and mistreatment: instead of decreasing the antipsychotic, the doctor may increase its dose to try to "improve" what they perceive to be negative symptoms of schizophrenia, rather than antipsychotic side effects. The concept of neuroleptic-induced deficit syndrome was initially described for schizophrenia, and it has rarely been reported in other mental disorders. In recent years, atypical neuroleptics have more often been prescribed to patients with bipolar disorder, so some studies of neuroleptic-induced deficit syndrome in bipolar disorder patients are now available.
There are significant difficulties in the differential diagnosis between primary negative symptoms and neuroleptic-induced deficit syndrome (secondary negative symptoms), as well as depression.
Case
A Japanese man who was being treated for schizophrenia exhibited neuroleptic-induced deficit syndrome and obsessive–compulsive symptoms. His symptoms improved remarkably after the course of antipsychotics was stopped and the antidepressant fluvoxamine was introduced. He had been misdiagnosed with schizophrenia; the correct diagnosis was obsessive–compulsive disorder.
References
Adverse effects of psychoactive drugs
Antipsychotics
Psychopathological syndromes | Neuroleptic-induced deficit syndrome | [
"Chemistry"
] | 445 | [
"Drug safety",
"Adverse effects of psychoactive drugs"
] |
34,170,016 | https://en.wikipedia.org/wiki/Artin%20conductor | In mathematics, the Artin conductor is a number or ideal associated to a character of a Galois group of a local or global field, introduced by as an expression appearing in the functional equation of an Artin L-function.
Local Artin conductors
Suppose that L is a finite Galois extension of the local field K, with Galois group G. If χ is a character of G, then the Artin conductor of χ is the number
f(χ) = Σi≥0 (gi/g0) (χ(1) − χ(Gi)),
where Gi is the i-th ramification group (in lower numbering), of order gi, and χ(Gi) is the average value of χ on Gi. By a result of Artin, the local conductor is an integer. Heuristically, the Artin conductor measures how far the action of the higher ramification groups is from being trivial. In particular, if χ is unramified, then its Artin conductor is zero. Thus if L is unramified over K, then the Artin conductors of all χ are zero.
The wild invariant or Swan conductor of the character is
sw(χ) = Σi≥1 (gi/g0) (χ(1) − χ(Gi)),
in other words, the sum of the higher-order terms with i > 0.
Global Artin conductors
The global Artin conductor of a representation χ of the Galois group G of a finite extension L/K of global fields is an ideal of K, defined to be
f(χ) = Πp p^f(χ,p),
where the product is over the primes p of K, and f(χ,p) is the local Artin conductor of the restriction of χ to the decomposition group of some prime of L lying over p. Since the local Artin conductor is zero at unramified primes, the above product need only be taken over primes that ramify in L/K.
Artin representation and Artin character
Suppose that L is a finite Galois extension of the local field K, with Galois group G. The Artin character aG of G is the character
aG = Σχ f(χ) χ,
where the sum is over the irreducible characters χ of G, and the Artin representation AG is the complex linear representation of G with this character. A direct construction of the Artin representation has been asked for. It has been shown that the Artin representation can be realized over the local field Ql, for any prime l not equal to the residue characteristic p, and that it can be realized over the corresponding ring of Witt vectors. It cannot in general be realized over the rationals or over the local field Qp, suggesting that there is no easy way to construct the Artin representation explicitly.
Swan representation
The Swan character swG is given by
swG = aG − rG + 1,
where rG is the character of the regular representation and 1 is the character of the trivial representation. The Swan character is the character of a representation of G. It has been shown that there is a unique projective representation of G over the l-adic integers whose character is the Swan character.
Applications
The Artin conductor appears in the conductor-discriminant formula for the discriminant of a global field.
The optimal level in the Serre modularity conjecture is expressed in terms of the Artin conductor.
The Artin conductor appears in the functional equation of the Artin L-function.
The Artin and Swan representations are used to define the conductor of an elliptic curve or abelian variety.
Notes
References
Number theory
Representation theory
Zeta and L-functions | Artin conductor | [
"Mathematics"
] | 640 | [
"Fields of abstract algebra",
"Discrete mathematics",
"Representation theory",
"Number theory"
] |
57,240,727 | https://en.wikipedia.org/wiki/Kapitza%20instability | In fluid dynamics, the Kapitza instability is an instability that occurs in fluid films flowing down walls. The instability is characterised by the formation of capillary waves on the free surface of the film. The instability is named after Pyotr Kapitsa, who described and analysed the instability in 1948. The free surface waves are known as roll waves.
Roll waves in granular materials
A similar instability has been observed in granular flow, and this instability can be predicted using the Saint-Venant equations with appropriate modifications accounting for the frictional properties of granular flows.
References
Fluid dynamic instabilities | Kapitza instability | [
"Chemistry"
] | 126 | [
"Fluid dynamic instabilities",
"Fluid dynamics"
] |
60,215,472 | https://en.wikipedia.org/wiki/Aluminum%20internal%20combustion%20engine | An aluminum internal combustion engine is an internal combustion engine made mostly from aluminum metal alloys.
Many internal combustion engines use cast iron and steel extensively for their strength and low cost. Aluminum offers lighter weight at the expense of strength, hardness and often cost. However, with care it can be substituted for many of the components and is widely used. Aluminum crank cases, cylinder blocks, heads and pistons are commonplace. The first airplane engine to fly, in the Wright Flyer of 1903, had an aluminum cylinder block.
All-aluminum engines are rare, as the material is difficult to use in more highly stressed components such as connecting rods and crankshafts. The BSA A10 motorcycle engine had aluminum conrods, while the Škoda 935 Dynamic auto engine had an aluminum crankshaft.
Russian Aluminum ICE project
An aircraft engine made 90 percent from aluminum alloys was developed by scientists and engineers at Novosibirsk State Technical University (NSTU). Work on it was carried out over four years.
While working on this engine, the NSTU engineers applied developments from the Institute of Inorganic Chemistry SB RAS. The designers were assisted by the scientists Alexei Rogov and Olga Terleeva.
The crankshaft and the main engine gearbox are made of aluminum. This reduces the mass by 40-50 percent compared with conventional steel engines, while maintaining the same power.
A prototype engine was tested on ordinary AI-95 gasoline. Testing ran throughout 2018 and was completed in early 2019. As a result, the high performance characteristics of the heavy-duty coating with which the aluminum parts are treated were confirmed.
According to the professor of the Aircraft and Helicopter Engineering Department of the Faculty of Aircraft of NSTU, Ilya Zverkov, this engine was developed for the aircraft Yak-52 by order of the Russian Aviation Revival Foundation, which is based at the Mochishche airfield near Novosibirsk.
References
External links
The world's first aluminum engine assembled in Novosibirsk (in Russian)
In Novosibirsk, scientists built and ran the world's first aluminum engine (in Russian)
Internal combustion engine | Aluminum internal combustion engine | [
"Technology",
"Engineering"
] | 499 | [
"Internal combustion engine",
"Combustion engineering",
"Engines"
] |
60,216,579 | https://en.wikipedia.org/wiki/Cephalodiscidae%20mitochondrial%20code | The Cephalodiscidae mitochondrial code (translation table 33) is a genetic code used by the mitochondrial genome of Cephalodiscidae (Pterobranchia). The Pterobranchia are one of the two groups in the Hemichordata which together with the Echinodermata and Chordata form the major clades of deuterostomes.
Code 33 is very similar to the mitochondrial code 24 for the Pterobranchia, which also belong to the Hemichordata, except that it uses UAA for tyrosine rather than as a stop codon.
This code shares with many other mitochondrial codes the reassignment of the UGA STOP to tryptophan, and AGG and AGA to an amino acid other than arginine. However, the assignment of AGG to lysine in pterobranchian mitogenomes is not found elsewhere in deuterostome mitochondria but it occurs in some taxa of Arthropoda.
The code
AAs = FFLLSSSSYYY*CCWWLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSSKVVVVAAAADDEEGGGG
Starts = ---M-------*-------M---------------M---------------M------------
Base1 = TTTTTTTTTTTTTTTTCCCCCCCCCCCCCCCCAAAAAAAAAAAAAAAAGGGGGGGGGGGGGGGG
Base2 = TTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGG
Base3 = TCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAG
Bases: adenine (A), cytosine (C), guanine (G) and thymine (T) or uracil (U).
Amino acids: Alanine (Ala, A), Arginine (Arg, R), Asparagine (Asn, N), Aspartic acid (Asp, D), Cysteine (Cys, C), Glutamic acid (Glu, E), Glutamine (Gln, Q), Glycine (Gly, G), Histidine (His, H), Isoleucine (Ile, I), Leucine (Leu, L), Lysine (Lys, K), Methionine (Met, M), Phenylalanine (Phe, F), Proline (Pro, P), Serine (Ser, S), Threonine (Thr, T), Tryptophan (Trp, W), Tyrosine (Tyr, Y), Valine (Val, V).
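The rows above can be turned directly into a lookup table. The minimal Python sketch below builds translation table 33 from those four strings and translates a short, made-up codon sequence, illustrating the reassignments discussed in the text (TGA → Trp, AGG → Lys, TAA → Tyr on the coding strand):

AAS   = "FFLLSSSSYYY*CCWWLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSSKVVVVAAAADDEEGGGG"
BASE1 = "TTTTTTTTTTTTTTTTCCCCCCCCCCCCCCCCAAAAAAAAAAAAAAAAGGGGGGGGGGGGGGGG"
BASE2 = "TTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGG"
BASE3 = "TCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAG"

# codon -> amino acid (one-letter code); '*' marks the remaining stop codon
TABLE_33 = {b1 + b2 + b3: aa for aa, b1, b2, b3 in zip(AAS, BASE1, BASE2, BASE3)}

def translate(dna):
    # Translate a coding-strand DNA string using mitochondrial table 33.
    return "".join(TABLE_33[dna[i:i + 3]] for i in range(0, len(dna) - 2, 3))

# made-up example containing the reassigned codons
print(translate("ATGTGAAGGTAA"))  # -> MWKY (Met, Trp, Lys, Tyr)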
Differences from the standard code
See also
List of genetic codes
References
Molecular genetics
Gene expression
Protein biosynthesis | Cephalodiscidae mitochondrial code | [
"Chemistry",
"Biology"
] | 683 | [
"Protein biosynthesis",
"Gene expression",
"Molecular genetics",
"Biosynthesis",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
60,218,169 | https://en.wikipedia.org/wiki/Cell%20cycle%20withdrawal | Cell cycle withdrawal refers to the natural stoppage of the cell cycle during cell division. When cells divide, many internal or external factors can lead to a stoppage of division. This stoppage can be permanent or temporary, and can occur in any one of the four cycle phases (G1, S, G2 and M), depending on the status of the cells or the activities they are undergoing. During the process, all cell duplication processes, including mitosis, meiosis and DNA replication, are paused. The mechanisms involve the proteins and DNA sequences inside cells.
Permanent cell cycle withdrawal
Permanent cell cycle withdrawal refers to the permanent stoppage of cell division. In organisms, cells do not divide endlessly; certain mechanisms, mostly based on a programmed failure in DNA synthesis, prevent cells from dividing indefinitely. This mechanism keeps cells from over-dividing. The process also enables cells to proceed to senescence, a later stage of cell life and growth.
Mechanism
Permanent cell cycle withdrawal is brought about mainly by the gradual wearing away of DNA sequences during S phase, the second stage of the cell cycle, during which DNA is replicated. This loss occurs in the end sequences of the linear chromosome, called telomeres.
Telomeres are sequences of repetitive nucleotides that do not encode useful genetic information. During replication, the DNA replication enzymes are unable to copy the very end of each telomere, so these terminal sequences, located at the end of the telomere and the chromosome, are gradually lost. Once all of them have been worn away, the useful genetic information in the cell's chromosomes would also begin to be lost. This prevents cells from dividing further, withdrawing them from the cell division cycle. Telomeres therefore act as a buffer allowing cells to continue dividing; when the telomeres are worn out, cells lose their ability to divide.
Not all cells carry out cell cycle withdrawal. In some cells, such as germ cells, stem cells and white blood cells, the withdrawal process does not occur. This ensures that these cells continue dividing for body growth or reproduction. Such behaviour is brought about by the presence of telomerase, which catalyses the addition of nucleotide sequences to the ends of telomeres. It replenishes the telomere repeats lost during DNA replication, providing enough telomere sequence that the useful DNA content is not damaged and allowing such cells to divide continuously. Some other cells lack the withdrawal mechanism simply because they do not divide at all: mature red blood cells, for example, contain no genetic material and hence undergo neither the cell cycle nor its withdrawal.
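A toy model of the buffer described above: each division trims a fixed number of terminal repeats unless telomerase restores them, and division stops once the buffer is exhausted. All numbers in the Python sketch below are illustrative placeholders, not measured values:

def divisions_until_withdrawal(telomere_bp=10_000, loss_per_division=100,
                               telomerase_gain=0):
    # Count divisions before the telomere buffer runs out (toy model).
    divisions = 0
    while telomere_bp > 0:
        net_loss = loss_per_division - telomerase_gain
        if net_loss <= 0:          # telomerase keeps pace: no withdrawal
            return float("inf")
        telomere_bp -= net_loss
        divisions += 1
    return divisions

print(divisions_until_withdrawal())                     # somatic-like cell: 100
print(divisions_until_withdrawal(telomerase_gain=100))  # stem-like cell: inf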
Some organisms also lack this withdrawal mechanism; prokaryotic organisms are an example. The DNA in these organisms takes the form of circular chromosomes, meaning there are no "ends" in their DNA. The wearing away of DNA therefore does not occur, the genetic information remains the same, and no withdrawal happens. This prevents cell division from stopping in prokaryotic organisms, or, even worse, their withdrawal from the basic reproductive processes of prokaryotic cells.
Significance
Cell cycle withdrawal is significant in several respects, one of which is preventing unlimited cell division in somatic cells. This keeps cells from accumulating excessively inside an organism's body, ensuring that cells in different organs are maintained in suitable proportions for optimal function. Stopping exponential cell growth also helps prevent cell growth diseases, such as tumours or cancer, from occurring. Studies have linked abnormal replenishment of telomeres and overactivity of telomerase with cancer growth. Here, telomeres act as a barrier against abnormal cell division, providing a stable environment for body functions. The withdrawal process also prevents diseased cells, or cells with mutated or damaged DNA, from continuing to divide and increasing the proportion of abnormal cells inside the body. It further allows these cells to stop their functions and differentiation and to undergo the programmed cell death process called apoptosis.
Furthermore, the withdrawal process allows cells to reach the later parts of their life, namely senescence and natural apoptosis. During normal body activities, cells divide, grow and differentiate into different cell types serving different functions. After this active period, body cells enter senescence: they grow old and several functions are lost in the process. As these aged cells with limited functions are inefficient at performing body activities, they are programmed for self-destruction in the presence of apoptotic signals, such as caspase proteins and Bcl-2 family regulatory proteins. Before this happens, cell cycle withdrawal ensures that these aged cells do not divide into daughter cells before death, helping to keep the cell population in the organism able to perform body activities efficiently.
Temporary cell cycle withdrawal
Temporary cell cycle withdrawal, also known as cell cycle arrest, refers to a short-term stoppage of cell division. It happens frequently in organisms' bodies, mainly because of abnormalities in growth factors or in the replication of DNA. In these cases, the withdrawal starts when an abnormality is detected and ends once the detected errors have been repaired. This process makes sure that cells are functioning properly after dividing and prevents mutations from arising.
Mechanism
The mechanism is brought about by positive and negative regulators, with specific checkpoints that signal the cell cycle to stop. The cell cycle continues only when a go-ahead signal is received at the checkpoints, meaning that the stages of the cell cycle are operating as usual.
Cyclins and cyclin-dependent kinases (CDKs) are the major positive regulators and act throughout the cell cycle. The CDKs hold cells back from progressing through the cycle if the required cyclin is not detected during cell division. Three classes of cyclins, namely G1/S, S and M cyclins, appear at different stages of the cycle. A CDK detects the presence of a cyclin by binding to it, and the resulting complex acts on target proteins to move the cell cycle forward. If a cyclin is absent, the preceding process in the cell cycle has not yet finished, and the cycle halts until that process is complete. The detection of G1/S, S and M cyclins takes place in G1 phase, at the end of G1 phase, and at the end of G2 phase, respectively.
There are two main types of negative regulators that arrest the cell cycle and must be removed for the process to resume. The first is the retinoblastoma protein, which prevents the cell from growing too large and prohibits a premature transition from G1 to S phase. It functions by binding to transcription factors, for example E2F, so that DNA cannot be replicated until the cell has grown to a certain extent and the retinoblastoma protein has been phosphorylated. Another negative regulator is p53, which halts the cell cycle upon detection of DNA damage so as to provide time for repair. This regulator can also induce apoptosis when the DNA damage is too extensive to be repaired.
Checkpoints in the cell cycle include DNA replication checkpoints and spindle assembly checkpoints. DNA replication checkpoints are located at the G1, S and G2 phases to check whether the DNA is normal, and they withdraw the cell from the cycle if the DNA is damaged or has undergone incomplete replication. The spindle assembly checkpoints, on the other hand, ensure that the chromosomes are segregated properly by microtubules during mitotic cell division. If errors occur when the microtubules attach to the centromere, the centre of a chromosome, the cell cycle halts until the error is corrected. Possible errors include microtubules not attaching properly to the centromere, or chromosomes not being segregated equally.
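The regulator and checkpoint logic described in this section can be caricatured as a gate at each phase: the cycle advances only when the required cyclin is present and no damage or spindle flag is raised. The Python sketch below is a deliberately simplified illustration; the phase-to-cyclin mapping and the rules are schematic, not a faithful model of the real signalling network:

PHASES = ["G1", "S", "G2", "M"]
REQUIRED_CYCLIN = {"G1": "G1/S cyclin", "S": "S cyclin", "G2": "M cyclin", "M": None}

def next_phase(phase, cyclins_present, dna_damaged=False, spindle_error=False):
    # Return the next phase, or the same phase if a checkpoint arrests the cycle.
    if dna_damaged and phase in ("G1", "S", "G2"):
        return phase                      # DNA replication checkpoints: arrest
    if phase == "M" and spindle_error:
        return phase                      # spindle assembly checkpoint: arrest
    needed = REQUIRED_CYCLIN[phase]
    if needed is not None and needed not in cyclins_present:
        return phase                      # CDK lacks its cyclin partner: arrest
    return PHASES[(PHASES.index(phase) + 1) % 4]

print(next_phase("G1", {"G1/S cyclin"}))                  # -> S
print(next_phase("G2", {"M cyclin"}, dna_damaged=True))   # -> G2 (arrested)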
Significance
The significance of cell cycle arrest is to ensure that cells do not undergo improper division. Once such a division is attempted, the cell cycle automatically stops until repairs have been made, or the cell proceeds directly to apoptosis if the damage is irreparable. Like permanent cell cycle withdrawal, this mechanism prevents damaged cells from continuing to develop or, even worse, dividing and spreading.
References
Cell cycle
Cell biology | Cell cycle withdrawal | [
"Biology"
] | 1,772 | [
"Cell biology",
"Cell cycle",
"Cellular processes"
] |
60,219,966 | https://en.wikipedia.org/wiki/Aafje%20Looijenga-Vos | Aafje Looijenga-Vos (29 April 1928 in Marum – 4 November 2018 in Amersfoort) was a Dutch crystallographer. She was a professor for general chemistry and later for structural chemistry at the University of Groningen.
Life
She studied chemistry at the University of Groningen from 1946 to 1952. As early as 1948, she met other crystallographers during the first Congress of the International Union of Crystallography (IUCr). In 1952, she started her PhD in the group of Eelco Wiebenga. Her PhD thesis dealt with the structures of P4S10 and P4S7 and was titled "De kristalstructuur van P4S10 and P4S7". She defended the thesis in 1955. For the first two years after defending her thesis, she worked at crystallographic institutions in the UK (Glasgow, Leeds, Oxford, and Cambridge), where she also worked with Dorothy Hodgkin. In 1962, she became a professor of general chemistry and, in 1967, a professor of structural chemistry at the University of Groningen. She was the secretary of the IUCr Commission on the International Tables and was involved in realising the 1983 edition of Volume A of the International Tables for Crystallography. In 1982, she married Hans Looijenga, a widower with eight children. She was survived by 21 grandchildren and 4 great-grandchildren.
Research
Together with Philip Coppens, she performed neutron diffraction experiments on cyanuric acid crystals. She also worked on direct methods together with Isabella Karle. She was also involved in the development of the CAD-4, the Computer Automated Diffractometer with 4 circles. Furthermore, her research dealt with the following topics: cyclophosphazenes, electron-density distribution studies based on high-resolution data at low temperature, and relations between structure and both electrical and magnetic properties of morpholinium-TCNQ compounds.
Selected publications
Awards
Since 1980, she was a member of the Royal Netherlands Academy of Arts and Sciences.
References
1928 births
2018 deaths
University of Groningen alumni
People from Marum
Members of the Royal Netherlands Academy of Arts and Sciences
20th-century Dutch chemists
Dutch women chemists
Academic staff of the University of Groningen
20th-century Dutch women scientists
Crystallographers | Aafje Looijenga-Vos | [
"Chemistry",
"Materials_science"
] | 479 | [
"Crystallographers",
"Crystallography"
] |
60,220,086 | https://en.wikipedia.org/wiki/Stibiotantalite | Stibiotantalite is a tantalate mineral found in complex granite pegmatites. Stibiotantalite constitutes the tantalum endpoint of a solid solution series with its niobium analogue stibiocolumbite.
It is translucent to transparent and moderately hard (5.5 on the Mohs scale), and appears yellow to dark brown, reddish brown or greenish brown, with an adamantine luster.
Stibiotantalite is found in veins and walls associated with tin mines. It is a fairly rare to rare mineral. Due to its relative softness, it is more likely to be found in mineral collections than in jewelry.
Occurrence
Stibiotantalite has been found in several countries, but the most significant are Mozambique, Sri Lanka, and the USA. It occurs in complex granite pegmatites.
Mozambique and the USA have the most localities where Stibiotantalite has been found. Mozambique has 6, while the USA has 18. California alone has 15 of these localities.
Use
Stibiotantalite is primarily used as a gemstone. These gems are vivid, shiny, and golden-brown. Cut stones range between 0.73 and 6.34 carats, with an average of 3.54 carats. The gems are similar in appearance to sphalerite, but brown rather than orange. While stibiotantalite and tantalite are similar, stibiotantalite is softer, brighter, and heavier.
Etymology
The "stibio-" prefix is a reference to its antimony content. The tantalite suffix shows that the mineral is actually very similar to tantalite.
References
Gemstones
Tantalate minerals
Antimony minerals
Minerals described in 1893 | Stibiotantalite | [
"Physics"
] | 347 | [
"Materials",
"Gemstones",
"Matter"
] |
60,220,120 | https://en.wikipedia.org/wiki/Hardware%20security%20bug | In digital computing, hardware security bugs are hardware bugs or flaws that create vulnerabilities affecting computer central processing units (CPUs), or other devices which incorporate programmable processors or logic and have direct memory access, which allow data to be read by a rogue process when such reading is not authorized. Such vulnerabilities are considered "catastrophic" by security analysts.
Speculative execution vulnerabilities
Starting in 2017, a series of security vulnerabilities were found in the implementations of speculative execution on common processor architectures which effectively enabled an elevation of privileges.
These include:
Foreshadow
Meltdown
Microarchitectural Data Sampling
Spectre
SPOILER
Pacman
Intel VISA
In 2019, researchers discovered that a manufacturer debugging mode, known as VISA, existed as an undocumented feature on Intel Platform Controller Hubs, the chipsets included on most Intel-based motherboards, which have direct memory access. Because the mode could be made accessible on a normal motherboard, it could possibly lead to a security vulnerability.
See also
Hardware security
Security bug
Computer security
Threat (computer)
References
Computer security exploits
Hardware bugs
Side-channel attacks
2018 in computing | Hardware security bug | [
"Technology",
"Engineering"
] | 232 | [
"Computing stubs",
"Computer engineering stubs",
"Computer engineering",
"Computer security exploits"
] |
60,220,535 | https://en.wikipedia.org/wiki/Single-stranded%20RNA%20virus | Single-stranded RNA virus refers to RNA viruses with single-stranded RNA genomes. There are two kinds:
Negative-sense single-stranded RNA virus
Positive-sense single-stranded RNA virus
See also
Double-stranded RNA viruses
DNA virus
Riboviria
RNA viruses | Single-stranded RNA virus | [
"Biology"
] | 55 | [
"Viruses",
"Riboviria"
] |
42,360,188 | https://en.wikipedia.org/wiki/Algorithmic%20logic | Algorithmic logic is a calculus of programs that allows the expression of semantic properties of programs by appropriate logical formulas. It provides a framework that enables proving such formulas from the axioms of program constructs, such as assignment, iteration and composition instructions, and from the axioms of the data structures in question.
The formalized language of algorithmic logic (and of algorithmic theories of various data structures) contains three types of well-formed expressions:
terms, i.e. expressions denoting operations on elements of data structures;
formulas, i.e. expressions denoting the relations among elements of data structures;
programs, i.e. algorithms, expressions describing the computations.
For semantics of terms and formulas consult pages on first-order logic and Tarski's semantics. The meaning of a program is the set of possible computations of the program.
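As a schematic illustration (the concrete syntax differs between presentations of algorithmic logic), a single algorithmic formula can assert both termination and a property of the final state. Interpreted over the natural numbers, the expression

x := 1; while x < n do x := x + x od  (x ≥ n)

is a formula stating that the program to the left of the final parenthesised condition terminates and that, on termination, x ≥ n holds; such compositions of a program with a formula are exactly the kind of semantic property the calculus is designed to prove from the axioms of the program constructs and of the underlying data structure.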
Algorithmic logic is one of many logics of programs.
Another logic of programs is dynamic logic.
Bibliography
[Banachowski et al.]
Algorithms
Theoretical computer science | Algorithmic logic | [
"Mathematics"
] | 228 | [
"Theoretical computer science",
"Algorithms",
"Mathematical logic",
"Applied mathematics",
"Mathematical logic stubs"
] |
42,362,439 | https://en.wikipedia.org/wiki/Best%20Illusion%20of%20the%20Year%20Contest | The Best Illusion of the Year Contest is an annual recognition of the world's illusion creators awarded by the Neural Correlate Society. The contest was created in 2005 by professors Susana Martinez-Conde and Stephen Macknik as part of the European conference on Visual Perception in La Coruna, Spain. It has since transitioned to an online contest where everyone in the world is invited to submit illusions and vote for the winner.
The contest decides on the most impressive perceptual or cognitive illusion of the year (unpublished, or published no earlier than the year prior to the most recent competition). An illusion is a perceptual or cognitive experience that does not match the physical reality (i.e. the perception of motion where no such motion physically exists).
As human experience is generated indirectly by brain mechanisms that interact with the physical reality, the study of illusions offers insight into the neural bases of perception and cognition. The community includes neuroscientists, ophthalmologists, neurologists, and visual artists that create illusions to help discover the neural underpinnings of illusory perception.
The Best Illusion of the Year Contest consists of three stages: submission, initial review, and voting of winners. The initial review is conducted by a panel of judges who are world experts in the science, art, and science education. The judge panel narrows the submissions to the Top Ten finalists, and viewers from all over the world can vote for the winner online. The top three winners receive cash awards.
Neural Correlate Society
The Neural Correlate Society (NCS) is a nonprofit 501(c)3 organization that promotes research into the neural basis of perception and cognition. The organization serves a community of neuroscientists, ophthalmologists, neurologists, and artists who use a variety of methods to help discover the underpinnings of the human experience.
The NCS hosts a variety of events, including the Best Illusion of the Year Contest, that highlight important new discoveries to the public.
Champions of Illusion
The Illusions
Award Recipients
The following table details the first, second, and third place recipients from each year of the contest since its inception.
References
Optical illusions
Competitions in the United States
Optical phenomena
Neuroscience awards | Best Illusion of the Year Contest | [
"Physics",
"Technology"
] | 455 | [
"Physical phenomena",
"Optical illusions",
"Optical phenomena",
"Science and technology awards",
"Neuroscience awards"
] |
42,363,409 | https://en.wikipedia.org/wiki/Memory%20operations%20per%20second | Memory operations per second or MOPS is a metric for an expression of the performance capacity of semiconductor memory. It can also be used to determine the efficiency of RAM in the Windows operating environment. MOPS can be affected by multiple applications being open at once without adequate job scheduling.
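A rough way to estimate such a figure in practice is to time a large batch of memory reads and writes and divide by the elapsed time. The Python sketch below is illustrative only; interpreter overhead dominates, so it reports far fewer operations per second than the underlying memory could sustain:

import array, time

def estimate_mops(n=2_000_000):
    # Very rough memory-operations-per-second estimate: n writes plus n reads.
    buf = array.array("q", bytes(8 * n))   # n zero-initialised 64-bit slots
    start = time.perf_counter()
    for i in range(n):
        buf[i] = i        # one memory write per iteration
    total = 0
    for i in range(n):
        total += buf[i]   # one memory read per iteration
    elapsed = time.perf_counter() - start
    return (2 * n) / elapsed / 1e6          # millions of memory ops per second

print(f"~{estimate_mops():.1f} MOPS (dominated by interpreter overhead)")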
References
Computer performance
Units of frequency | Memory operations per second | [
"Mathematics",
"Technology"
] | 63 | [
"Computer performance",
"Quantity",
"Units of frequency",
"Units of measurement"
] |
45,657,826 | https://en.wikipedia.org/wiki/Journal%20of%20Experiments%20in%20Fluid%20Mechanics | The Journal of Experiments in Fluid Mechanics is a bimonthly peer-reviewed scientific journal covering fluid dynamics. It was established in 1987 and is published by the China Aerodynamics Research Society. The editor-in-chief is Jialing Le. The journal publishes articles in Chinese and English.
External links
Fluid dynamics journals
Multilingual journals
Bimonthly journals
Academic journals published by learned and professional societies
Academic journals established in 1987 | Journal of Experiments in Fluid Mechanics | [
"Chemistry"
] | 85 | [
"Fluid dynamics journals",
"Fluid dynamics"
] |
45,658,840 | https://en.wikipedia.org/wiki/Planetary%20oceanography | Planetary oceanography, also called astro-oceanography or exo-oceanography, is the study of oceans on planets and moons other than Earth. Unlike other planetary sciences like astrobiology, astrochemistry, and planetary geology, it only began after the discovery of underground oceans in Saturn's moon Titan and Jupiter's moon Europa. This field remains speculative until further missions reach the oceans beneath the rock or ice layer of the moons. There are many theories about oceans or even ocean worlds of celestial bodies in the Solar System, from oceans made of liquid carbon with floating diamonds in Neptune to a gigantic ocean of liquid hydrogen that may exist underneath Jupiter's surface.
Early in their geologic histories, Mars and Venus are theorized to have had large water oceans. The Mars ocean hypothesis suggests that nearly a third of the surface of Mars was once covered by water, and a runaway greenhouse effect may have boiled away the global ocean of Venus. Compounds such as salts and ammonia, when dissolved in water, will lower water's freezing point, so that water might exist in large quantities in extraterrestrial environments as brine, or convecting ice. Unconfirmed oceans are speculated to exist beneath the surfaces of many dwarf planets and natural satellites; notably, the ocean of the moon Europa is estimated to have over twice the water volume of Earth's. The Solar System's giant planets are also thought to have liquid atmospheric layers of yet-to-be-confirmed compositions. Oceans may also exist on exoplanets and exomoons, including surface oceans of liquid water within a circumstellar habitable zone. Ocean planets are a hypothetical type of planet with a surface completely covered with liquid.
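The freezing-point depression mentioned above can be estimated for dilute solutions with the colligative relation ΔTf = i·Kf·m. A minimal Python sketch with illustrative numbers; the linear approximation breaks down for the very concentrated brines actually expected in icy-moon interiors:

def freezing_point_c(molality, kf=1.86, van_t_hoff=2.0):
    # Approximate freezing point (deg C) of a dilute aqueous solution.
    # kf = 1.86 K*kg/mol for water; van_t_hoff ~ 2 for a dissolved 1:1 salt.
    return 0.0 - van_t_hoff * kf * molality

print(freezing_point_c(1.0))   # about -3.7 deg C for a 1 mol/kg 1:1 salt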
Extraterrestrial oceans may be composed of water, or other elements and compounds. The only confirmed large, stable bodies of extraterrestrial surface liquids are the lakes of Titan, which are made of hydrocarbons instead of water. However, there is strong evidence for the existence of subsurface water oceans elsewhere in the Solar System. The best-established candidates for subsurface water oceans in the Solar System are Jupiter's moons Europa, Ganymede, and Callisto, and Saturn's moons Enceladus and Titan.
Although Earth is the only known planet with large stable bodies of liquid water on its surface, and the only such planet in the Solar System, other celestial bodies are thought to have large oceans. In June 2020, NASA scientists reported that it is likely that exoplanets with oceans may be common in the Milky Way galaxy, based on mathematical modeling studies.
The inner structure of gas giants remain poorly understood. Scientists suspect that, under extreme pressure, hydrogen would act as a supercritical fluid, hence the likelihood of oceans of liquid hydrogen deep in the interior of gas giants like Jupiter. Oceans of liquid carbon have been hypothesized to exist on ice giants, notably Neptune and Uranus. Magma oceans exist during periods of accretion on any planet and some natural satellites when the planet or natural satellite is completely or partly molten.
Extraterrestrial oceans
Planets
The gas giants, Jupiter and Saturn, are thought to lack surfaces and instead have a stratum of liquid hydrogen; however their planetary geology is not well understood. The possibility of the ice giants Uranus and Neptune having hot, highly compressed, supercritical water under their thick atmospheres has been hypothesised. Although their composition is still not fully understood, a 2006 study by Wiktorowicz and Ingersall ruled out the possibility of such a water "ocean" existing on Neptune, though oceans of metallic liquid carbon are possible.
The Mars ocean hypothesis suggests that nearly a third of the surface of Mars was once covered by water, though the water on Mars is no longer oceanic (much of it residing in the ice caps). The possibility continues to be studied along with reasons for their apparent disappearance. Some astronomers now propose that Venus may have had liquid water and perhaps oceans for over 2 billion years.
Natural satellites
A global layer of liquid water thick enough to decouple the crust from the mantle is thought to be present on the natural satellites Titan, Europa, Enceladus, Ganymede, and Triton; and, with less certainty, in Callisto, Mimas, Miranda, and Ariel. A magma ocean is thought to be present on Io. Geysers or fumaroles have been found on Saturn's moon Enceladus, possibly originating from an ocean about beneath the surface ice shell. Other icy moons may also have internal oceans, or may once have had internal oceans that have now frozen.
Large bodies of liquid hydrocarbons are thought to be present on the surface of Titan, although they are not large enough to be considered oceans and are sometimes referred to as lakes or seas. The Cassini–Huygens space mission initially discovered only what appeared to be dry lakebeds and empty river channels, suggesting that Titan had lost what surface liquids it might have had. Later flybys of Titan provided radar and infrared images that showed a series of hydrocarbon lakes in the colder polar regions. Titan is thought to have a subsurface liquid-water ocean under the ice in addition to the hydrocarbon mix that forms atop its outer crust.
Dwarf planets and trans-Neptunian objects
Ceres appears to be differentiated into a rocky core and icy mantle and may harbour a liquid-water ocean under its surface.
Not enough is known of the larger trans-Neptunian objects to determine whether they are differentiated bodies capable of supporting oceans, although models of radioactive decay suggest that Pluto, Eris, Sedna, and Orcus have oceans beneath solid icy crusts approximately thick. In June 2020, astronomers reported evidence that the dwarf planet Pluto may have had a subsurface ocean, and consequently may have been habitable, when it was first formed.
Extrasolar
Some planets and natural satellites outside the Solar System are likely to have oceans, including possible water ocean planets similar to Earth in the habitable zone or "liquid-water belt". However, the detection of oceans, even by spectroscopy, is likely to be extremely difficult and inconclusive.
Theoretical models have been used to predict, with high probability, that GJ 1214 b, detected by transit, is composed of an exotic form of ice VII making up 75% of its mass, which would make it an ocean planet.
Other possible candidates are merely speculative, based on their mass and position in the habitable zone, and little is actually known of their composition. Some scientists speculate that Kepler-22b may be an "ocean-like" planet. Models have been proposed for Gliese 581 d that could include surface oceans. Gliese 436 b is speculated to have an ocean of "hot ice". Exomoons orbiting planets, particularly gas giants within their parent star's habitable zone, may theoretically have surface oceans.
Terrestrial planets will acquire water during their accretion, some of which will be buried in the magma ocean but most of it will go into a steam atmosphere, and when the atmosphere cools it will collapse on to the surface forming an ocean. There will also be outgassing of water from the mantle as the magma solidifies—this will happen even for planets with a low percentage of their mass composed of water, so "super-Earth exoplanets may be expected to commonly produce water oceans within tens to hundreds of millions of years of their last major accretionary impact."
Non-water surface liquids
Oceans, seas, lakes and other bodies of liquids can be composed of liquids other than water, for example the hydrocarbon lakes on Titan. The possibility of seas of nitrogen on Triton was also considered but ruled out. There is evidence that the icy surfaces of the moons Ganymede, Callisto, Europa, Titan and Enceladus are shells floating on oceans of very dense liquid water or water–ammonia solution.
Extrasolar terrestrial planets that are extremely close to their parent star will be tidally locked and so one half of the planet will be a magma ocean. It is also possible that terrestrial planets had magma oceans at some point during their formation as a result of giant impacts. Hot Neptunes close to their star could lose their atmospheres via hydrodynamic escape, leaving behind their cores with various liquids on the surface. Where there are suitable temperatures and pressures, volatile chemicals that might exist as liquids in abundant quantities on planets (thalassogens) include ammonia, argon, carbon disulfide, ethane, hydrazine, hydrogen, hydrogen cyanide, hydrogen sulfide, methane, neon, nitrogen, nitric oxide, phosphine, silane, sulfuric acid, and water.
Supercritical fluids, although not liquids, do share various properties with liquids. Underneath the thick atmospheres of the planets Uranus and Neptune, it is expected that these planets are composed of oceans of hot high-density fluid mixtures of water, ammonia and other volatiles. The gaseous outer layers of Jupiter and Saturn transition smoothly into oceans of supercritical hydrogen. The atmosphere of Venus is 96.5% carbon dioxide, and is a supercritical fluid at the surface.
See also
Extraterrestrial liquid water
Lava planet
List of largest lakes and seas in the Solar System
Magma ocean
Ocean world
References
Space science
Oceanography
Extraterrestrial water | Planetary oceanography | [
"Physics",
"Astronomy",
"Environmental_science"
] | 1,906 | [
"Hydrology",
"Applied and interdisciplinary physics",
"Outer space",
"Oceanography",
"Space science"
] |
45,662,064 | https://en.wikipedia.org/wiki/Electrical%20aerosol%20spectrometer | Electrical aerosol spectrometry (EAS) is a technique for measurement of the number-size distribution of aerosol using a combination of electrical charging and multiple solid state electrometer detectors. The technique combines both diffusion and field charging regimes to cover the diameter range 10 nm to 10 μm.
Subsequent developments of the technique enable measurements faster than 1 Hz, although in each case with a reduced size range.
Aerosol charging
High charging efficiency allows sufficient charge to be placed on individual particles for the use of electrometer detectors to be practicable, while the use of parallel electrometer detectors allows real-time measurement of the size/number spectrum with output data as fast as 0.25 Hz.
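To first order, the conversion from an electrometer reading to a number concentration in such instruments is N = I / (n·e·Q), where I is the measured current, n the mean number of elementary charges per particle in that size channel, e the elementary charge, and Q the aerosol flow through the detector. The Python sketch below uses illustrative values; a real instrument applies a full charging-efficiency and transfer-function inversion rather than this single-channel estimate:

E_CHARGE = 1.602e-19  # elementary charge, coulombs

def number_concentration(current_a, mean_charges, flow_lpm):
    # First-order estimate of particle number concentration, per cm^3.
    flow_cm3_per_s = flow_lpm * 1000.0 / 60.0
    return current_a / (mean_charges * E_CHARGE * flow_cm3_per_s)

# e.g. 10 fA on a channel whose particles carry ~1.5 charges, 1 L/min sample flow
print(f"{number_concentration(10e-15, 1.5, 1.0):.3g} per cm^3")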
Unlike in SMPS-type devices, multiple charging is an inherent issue across almost the entire size range of EAS-type devices. Accurate characterization of the electrical charging of the aerosol is therefore an essential component of device design.
Calibration
Techniques for the traceable calibration of such devices are established, and result in good agreement (subject to suitable signal levels) with slower but more sensitive scanning mobility particle sizers.
Applications
Applications include the measurement of engine exhaust, cigarette smoke, and ambient/atmospheric studies.
The technique is particularly appropriate for situations where aerosol concentrations are changing on a timescale of 1 s or faster.
References
Spectrometers
Aerosols
Aerosol measurement | Electrical aerosol spectrometer | [
"Physics",
"Chemistry"
] | 278 | [
"Spectrum (physical sciences)",
"Colloids",
"Aerosols",
"Spectrometers",
"Spectroscopy"
] |
53,861,145 | https://en.wikipedia.org/wiki/Brian%20Hanley%20%28microbiologist%29 | Brian P. Hanley (born c. 1957) is an American microbiologist and founder of Butterfly Sciences. He is known for self-experimenting with gene therapy to try to improve health span.
Biography
Early in his research career, Hanley’s areas of study were biodefense and terrorism. He contributed chapters to two books about these subjects. Hanley obtained a PhD in Microbiology from University of California, Davis in 2009. The same year, he founded Butterfly Sciences in Davis, California to develop a gene therapy to treat HIV AIDS using a combination of GHRH and an intracellular vaccine.
After founding Butterfly Sciences, Hanley continued publishing scholarly literature in multiple fields that examined economic topics such as banking, cryptocurrency and climate.
During the COVID-19 pandemic, with Steve Keen and George Church, Hanley also contributed to literature about public health strategy in response to the pandemic.
Self-experimentation
Hanley could not raise money for his company Butterfly Sciences and decided to obtain proof of concept by testing gene therapy on himself. Hanley said: “I wanted to prove it, I wanted to do it for myself, and I wanted to make progress.” He designed the plasmid containing a gene coding for growth hormone–releasing hormone and had it made by a scientific supply company for around $10,000. However, the total cost of development was over $500,000.
He said that he corresponded with the FDA prior to starting his self-experimentation, and that the FDA told him he needed to file and get approval for an Investigational New Drug (IND) application, but Hanley did not agree that he needed FDA approval and proceeded without it. Hanley later co-authored a 2019 paper on self-experimentation, ethics and law with George Church, which bears out his position regarding the necessity of an IND. He did not perform any animal tests before testing the plasmid on himself, but nonetheless won institutional review board (IRB) approval for his proposed clinical research plans.
A physician assisted in administration of the plasmid to Hanley's thigh using electroporation. The plasmids were administered twice: once in summer 2015 and a second larger dose in July 2016.
Hanley claims the treatment has helped him. He reported that his white blood cell count and testosterone increased and his LDL levels dropped.
A researcher at George Church’s Harvard University laboratory observed the experiment and Hanley’s blood was then studied. The scientific results were published in December 2021, coauthored with George Church.
Transgender research
Hanley published an article in 2011 providing a biological explanation for transgender identity and homosexuality.
Selected publications
Brian P Hanley (2014) Radiation – Exposure and its treatment: A modern handbook
See also
References
External links
Official website of Butterfly Sciences
American transhumanists
1950s births
People from Davis, California
Living people
American microbiologists
Biogerontologists
Gene therapy
University of California, Davis alumni | Brian Hanley (microbiologist) | [
"Engineering",
"Biology"
] | 604 | [
"Gene therapy",
"Genetic engineering"
] |
53,868,622 | https://en.wikipedia.org/wiki/Juansher%20Chkareuli | Juansher Chkareuli (; born January 13, 1940, Tbilisi) is a Georgian theoretical physicist working in particle physics, Head of Particle Physics Department at Andronikashvili Institute of Physics of Tbilisi State University and Professor at Institute of Theoretical Physics of Ilia State University in Tbilisi.
Academic career
He studied at Tbilisi State University and the Lebedev Physical Institute (Moscow), and received an MSc in Theoretical Physics in 1965. He completed his PhD in 1970 and DSc in 1985 at the Andronikashvili Institute of Physics (Tbilisi) and the Joint Institute for Nuclear Research (Russia).
Subsequently, he worked as Principal Research Fellow at Andronikashvili Institute of Physics (1985–present); Professor of Theoretical Physics at Tbilisi State University (1986-1990); Professor of Theoretical Physics at Ilia State University (2006–present).
In 1991–2012 he was also a visiting research professor at many leading centers in high energy physics, including the European Organization for Nuclear Research (CERN) in Geneva, the International Center for Theoretical Physics (ICTP) in Trieste, the Max Planck Institute in Munich, the University of Glasgow, the University of Maryland, the University of Melbourne, and the Institute of High Energy Physics in Beijing.
J.L. Chkareuli is primarily known for his works on family symmetries, extended grand unified theories and emergent gauge and gravity theories. These developments include: An introduction of the chiral family symmetry SU(3) for quark-lepton generations and its application to the flavor mixing of quarks and leptons; A novel missing VEV mechanism in the supersymmetric SU(8) grand unified theory suggesting a simultaneous solution to the gauge hierarchy problem and unification of flavor; New nonlinear sigma models for emergent gauge and gravity theories leading to dynamical generation of local internal and spacetime symmetries with gauge fields and gravitons as massless vector/tensor Goldstone bosons.
He is also known as President of the Georgian Physical Society (1993–99), and an organizer and co-organizer of some notable conferences and workshops on high energy physics – Annual Georgian Winter School on particle physics and cosmology (Bakuriani, Georgia, 1970–1993) which was one of the most popular scientific meetings in the former Soviet Union; International seminar "Standard Model and Beyond" (Tbilisi, 1996); International conference "Low dimensional physics and gauge principles" (Yerevan & Tbilisi, 2011) and others.
Honours and awards
Royal Society Fellowship (1993–94); Royal Society Joint Project Grant (1999–2000); Georgia–US Bilateral Grant (2003–2005); Member of the American Physical Society (1993); Fellow of the Institute of Physics (UK, 2000). Listed in biographical dictionaries including «Who's Who in Science and Engineering» (2008), Marquis Who's Who, NY, and «2000 Outstanding Scientists 2008/2009» (2010), International Biographical Centre, Cambridge.
References
External links
J.L. Chkareuli at Center for Elementary Particle Physics, Ilia State University
J.L. Chkareuli at Andronikashvili Institute of Physics, Tbilisi State University
Scientific publications of J.L. Chkareuli on INSPIRE-HEP
J.L. Chkareuli on Google Scholar
1940 births
Living people
Scientists from Tbilisi
Physicists from Georgia (country)
Theoretical physicists
Particle physicists
People associated with CERN | Juansher Chkareuli | [
"Physics"
] | 696 | [
"Theoretical physics",
"Particle physicists",
"Particle physics",
"Theoretical physicists"
] |
53,869,832 | https://en.wikipedia.org/wiki/Limits%20of%20stability | Limits of Stability (LoS) are a concept in balance and stability, defined as the points at which the center of gravity (CoG) approaches the limits of the base of support (BoS) and requires a corrective strategy to bring the center of mass (CoM) back within the BoS. In simpler terms, LoS represents the maximum distance an individual can intentionally sway in any direction without losing balance or needing to take a step. The typical range of stable swaying is approximately 12.5° in the front-back (antero-posterior) direction and 16° in the side-to-side (medio-lateral) direction. This stable swaying area is often referred to as the 'Cone of Stability', which varies depending on the specific task being performed.
When the CoG moves beyond the BoS, the individual must take a step or hold onto an external support to maintain balance and prevent a fall.
These stability limits are perceived rather than solely physiological; they represent the subject's readiness to adjust their CoG position.
Clinical significance
Limits of Stability (LoS) is a significant variable in assessing stability and voluntary motor control in dynamic states. It provides valuable information by tracking the instantaneous change in the center of mass (COM) velocity and position. LoS is a useful measure for evaluating postural instability and identifying individuals at higher risk of falling, making it a valuable screening tool.
Individuals with decreased LoS are more susceptible to falling when shifting their bodyweight forward, backward, or sideways, thus increasing their risk of injuries. A restricted LoS can significantly affect an individual's ability to respond to balance control tests and react to perturbations. This reduction in LoS may be attributed to various factors, including weakness in ankle and foot muscles, musculoskeletal issues in the lower limbs, or a self-imposed perceptual limit that leads the individual to resist larger displacements.
These impairments can be correlated with medical examination findings and serve as an essential outcome measure for rehabilitation of specific underlying body impairments.
From a clinical perspective, individuals with better LoS can perform complex mobility tasks without support and are more capable of tolerating environmental challenges.
Possible causes of LoS impairment
Impaired cognitive processing: This is often associated with aging and can result in attention deficits.
Neuromuscular impairments: Conditions such as bradykinesia (slowness of movement), ataxia (lack of muscle coordination), and poor motor control can affect attention and cognitive functions.
Musculoskeletal impairments: Weakness, limited range of motion (ROM), and pain in the musculoskeletal system can also impact attention and cognitive abilities.
Emotional Overlay: Emotions like fear or anxiety can influence cognitive processing and attention.
Aphysiology: This refers to exaggeration or poor effort in performing tasks, which can affect attention and cognitive performance.
Limits of Stability Testing
Various tools such as the Functional Reach Test (FRT) and Limits Of Stability (LOS) test have been used to assess LoS.
Functional Reach Test (FRT): This test is commonly used to assess balance and LoS in the forward direction. It is cost-effective and easy to administer. However, it only measures LoS in the forward direction and is performed in a standing posture with the feet in a static position.
Limits Of Stability (LOS) Test: This is a more advanced tool compared to FRT and is used to measure balance under multi-directional conditions. In this test, the subject stands on force plates and intentionally shifts their body weight in the cued direction.
Parameters measured in LOS test
Reaction Time (RT): The time taken by an individual to start shifting their center of gravity (COG) from the static position after receiving a cue, measured in seconds.
Movement Velocity (MVL): The average speed at which the COG shifts.
EndPoint Excursions (EPE): The distance willingly covered by the subject in their very first attempt towards the target, expressed as a percentage.
Maximum Excursions (MXE): The amount of distance the subject actually covered or moved their COG.
Directional Control (DCL): A comparison of the amount of movement in the intended direction (towards the target) with the amount of extraneous movement away from the target, expressed as a percentage. A simplified computational sketch of these parameters follows this list.
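The parameters above amount to simple arithmetic on the recorded centre-of-gravity trajectory. The sketch below illustrates, under stated assumptions, how reaction time, movement velocity, maximum excursion and directional control could be derived for a single trial; the function name, the coordinate convention (target on the positive x axis, positions in percent of the theoretical LOS distance) and the trajectory format are illustrative assumptions, not the definitions used by any particular commercial system.

```python
# Hypothetical sketch: deriving LOS trial parameters from a COG trajectory.
# trajectory: list of (x, y) positions in % of the theoretical LOS distance,
# sampled at sample_rate_hz; the target is assumed to lie on the +x axis.

def los_parameters(trajectory, sample_rate_hz, cue_index):
    # Reaction time: delay between the cue and the first forward COG movement.
    start = next((i for i in range(cue_index, len(trajectory) - 1)
                  if trajectory[i + 1][0] > trajectory[i][0]), cue_index)
    reaction_time = (start - cue_index) / sample_rate_hz

    xs = [p[0] for p in trajectory[start:]]
    ys = [p[1] for p in trajectory[start:]]
    duration = len(xs) / sample_rate_hz
    movement_velocity = (max(xs) - xs[0]) / duration    # average on-axis speed
    maximum_excursion = max(xs)                         # furthest point reached (MXE)

    # Directional control: on-target movement versus extraneous lateral movement.
    on_target = sum(max(xs[i + 1] - xs[i], 0.0) for i in range(len(xs) - 1))
    off_target = sum(abs(ys[i + 1] - ys[i]) for i in range(len(ys) - 1))
    directional_control = 100.0 * (on_target - off_target) / on_target
    return reaction_time, movement_velocity, maximum_excursion, directional_control
```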
Interpretation of LOS Results
The ability to move around without falling is essential for performing activities of daily living (ADLs). Patients who exhibit delays in reaction time, decreased movement velocity, restricted Limits of Stability (LoS) boundary or cone of stability, or uncontrolled center of gravity (CoG) movement are at a higher risk of falling. A delayed reaction time may indicate cognitive processing issues, while reduced movement velocities may indicate higher-level central nervous system deficits. Reduced endpoint excursions, excessively larger maximum excursions, and poor directional control are all indicative of motor control abnormalities.
A LoS score close to 100 represents minimal sway and hence a reduced risk of falling, while scores close to 0 imply a higher risk of falling.
Validity and reliability of LOS
The LOS test has been validated for use across multiple patient populations, including community-dwelling elderly individuals, those with neurological disorders, and those with back and knee injuries. A study conducted by Wernick-Robinson and collaborators in 1999 on the test-retest reliability suggests that using the amount of distance covered in the functional reach test alone may not be an adequate measure of dynamic balance. The study also highlights that for a better evaluation of postural control, additional assessment of movement strategies is indispensable.
Similarly, another study conducted by Brouwer et al. also claims that the Limits of Stability (LOS) test is a reliable measure for balance testing in healthy populations.
Functional Impact and Implications of LOS
The capability of moving around without falling is necessary for activities of daily living (ADLs). Instability during weight-shifting activities or the inability to perform certain weight transfer tasks, such as bending forward to take objects from a shelf or leaning backward to rinse hair in the shower, can result from a restricted LoS boundary. The ability to voluntarily move the COG to positions within the Limits of Stability (LOS) with control is fundamental to independence and safety in mobility tasks, such as reaching for objects, transitioning from seated to standing positions (or standing to seated), and walking.
The LoS can be indicative of fall risks in various populations, including the elderly, individuals with movement disorders, and those with neurological impairments. Ensuring adequate control and movement within the LoS is crucial for maintaining independence and safety in everyday mobility tasks.
References
Geriatrics
Biomechanics
Medical tests | Limits of stability | [
"Physics"
] | 1,343 | [
"Biomechanics",
"Mechanics"
] |
53,874,077 | https://en.wikipedia.org/wiki/Proof%20of%20secure%20erasure | In computer security, proof of secure erasure (PoSE) or proof of erasure is a remote attestation protocol, by which an embedded device proves to a verifying party, that it has just erased (overwritten) all its writable memory. The purpose is to make sure that no malware remains in the device. After that typically a new software is installed into the device.
Overview
The verifying party may be called the verifier, the device being erased the prover.
The verifier must know the device's writable memory size from a trusted source and the device must not be allowed to communicate with other parties during execution of the protocol, which proceeds as follows. The verifier constructs a computational problem, which cannot be solved (in reasonable time or at all) using less than the specified amount of memory, and sends it to the device. The device responds with the solution and the verifier checks its correctness.
Protocol constructions
Naive approach
In the simplest implementation the verifier sends a random message as large as the device's memory to the device, which is expected to store it. After the device has received the complete message, it is required to send it back. Security of this approach is obvious, but it includes transfer of a huge amount of data (twice the size of the device's memory).
This can be halved if the device responds with just a hash of the message. To prevent the device from computing it on the fly without actually storing the message, the hash function is parametrized by a random value sent to the device after the message.
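A minimal sketch of this naive exchange is given below, assuming an abstract reliable channel object with send/receive methods and a device whose writable memory is modelled as a bytearray; the memory size, the use of SHA-256 and the channel interface are illustrative assumptions rather than part of any published protocol.

```python
# Naive proof of secure erasure (sketch): the verifier fills the device's
# memory with random data, then reveals a random parameter (nonce) and checks
# a keyed hash of the stored data, so the device cannot hash on the fly.

import hashlib
import os

MEMORY_SIZE = 1 << 20          # assumed size of the device's writable memory

def verifier(channel):
    challenge = os.urandom(MEMORY_SIZE)     # random message as large as memory
    channel.send(challenge)
    nonce = os.urandom(32)                  # hash parameter sent only afterwards
    channel.send(nonce)
    expected = hashlib.sha256(nonce + challenge).digest()
    return channel.receive() == expected    # True -> memory was overwritten

def prover(channel, memory):                # memory: bytearray of MEMORY_SIZE
    memory[:] = channel.receive()           # store (and thereby erase) everything
    nonce = channel.receive()
    channel.send(hashlib.sha256(bytes(nonce) + bytes(memory)).digest())
```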
Communication-efficient constructions
Avoiding the huge data transfer requires a suitable (as stated in Overview) computational problem, whose description is short. Dziembowski et al. achieve this by constructing what they call an (m − δ, ε)-uncomputable hash function, which can be computed in quadratic time using memory of size m, but with memory of size m − δ it can be computed with at most a negligible probability ε.
Communication- and time-efficient constructions
Karvelas and Kiayias claim to have designed the first PoSE with quasilinear time and sublinear communication complexity.
Relation to proof of space
Proof of space is a protocol similar to proof of secure erasure in that both require the prover to dedicate a specific amount of memory to convince the verifier. Nevertheless, there are important differences in their design considerations.
Because the purpose of proof of space is similar to proof of work, the verifier's time complexity must be very small. While such property may be useful for proof of secure erasure as well, it is not fundamental to its usefulness.
Proof of secure erasure on the other hand requires the prover to be unable to convince the verifier using less than the specified amount of memory. Even this may be useful for the other protocol, however proof of space is not harmed if the prover may succeed even with significantly less space.
References
Data erasure
Computer security procedures
Cryptographic protocols
Communications protocols | Proof of secure erasure | [
"Technology",
"Engineering"
] | 638 | [
"Computer standards",
"Computer security procedures",
"Cybersecurity engineering",
"Communications protocols"
] |
43,785,137 | https://en.wikipedia.org/wiki/EBeam | eBeam was an interactive whiteboard system developed by Luidia, Inc. that transformed any standard whiteboard or other surface into an interactive display and writing surface.
Luidia's eBeam hardware and software products allowed text, images, and video to be projected onto a variety of surfaces, where an interactive stylus or marker could be used to add notes, access control menus, manipulate images, and create diagrams and drawings. The presentations, notes, and images could be saved and emailed to class or meeting participants, as well as shared in real-time either on local networks or over the Internet.
History
An eBeam demo was given at the Apple Expo 2002 in Paris, France.
The production of eBeam hardware was discontinued in 2020. As of June 2022, Luidia has ceased all operations.
Technology
Luidia's eBeam technology was originally developed and patented by engineers at Electronics for Imaging Inc. (Nasdaq: EFII), a Foster City, California, developer of digital print server technology. Luidia was spun off from EFI in July 2003 with venture funding from Globespan Capital Partners and Silicom Ventures.
See also
Office equipment
Display technology
Educational technology
References
External links
Electronics for Imaging (EFI)
Luidia Inc. - eBeam
Luidia website
Google Books results
Review in PC Mag
Review in InfoWorld
VEngineers Co. Ltd (Mauritius)
Office equipment
Display technology | EBeam | [
"Engineering"
] | 288 | [
"Electronic engineering",
"Display technology"
] |
43,785,666 | https://en.wikipedia.org/wiki/Kinyoun%20stain | The Kinyoun method or Kinyoun stain (cold method), developed by Joseph J. Kinyoun, is a procedure used to stain acid-fast species of the bacterial genus Mycobacterium. It is a variation of a method developed by Robert Koch in 1882. Certain species of bacteria have a waxy lipid called mycolic acid, in their cell walls which allow them to be stained with Acid-Fast better than a Gram-Stain. The unique ability of mycobacteria to resist decolorization by acid-alcohol is why they are termed acid-fast. It involves the application of a primary stain (basic fuchsin), a decolorizer (acid-alcohol), and a counterstain (methylene blue). Unlike the Ziehl–Neelsen stain (Z-N stain), the Kinyoun method of staining does not require heating. In the Ziehl–Neelsen stain, heat acts as a physical mordant while phenol (carbol of carbol fuchsin) acts as the chemical mordant.
Modification
The Kinyoun method can be modified as a weak acid-fast stain, which uses 0.5–1.0% sulfuric acid instead of hydrochloric acid. The weak acid-fast stain, in addition to staining Mycobacteria, will also stain organisms that are not able to retain the carbol fuchsin after decolorization with HCl, such as Nocardia species and Cryptosporidium.
See also
Auramine-rhodamine stain
References
Bacteriology
Staining
Microscopy | Kinyoun stain | [
"Chemistry",
"Biology"
] | 344 | [
"Staining",
"Microbiology techniques",
"Cell imaging",
"Microscopy"
] |
46,968,364 | https://en.wikipedia.org/wiki/Inferring%20horizontal%20gene%20transfer | Horizontal or lateral gene transfer (HGT or LGT) is the transmission of portions of genomic DNA between organisms through a process decoupled from vertical inheritance. In the presence of HGT events, different fragments of the genome are the result of different evolutionary histories. This can therefore complicate investigations of the evolutionary relatedness of lineages and species. Also, as HGT can bring into genomes radically different genotypes from distant lineages, or even new genes bearing new functions, it is a major source of phenotypic innovation and a mechanism of niche adaptation. For example, of particular relevance to human health is the lateral transfer of antibiotic resistance and pathogenicity determinants, leading to the emergence of pathogenic lineages.
Inferring horizontal gene transfer through computational identification of HGT events relies upon the investigation of sequence composition or evolutionary history of genes. Sequence composition-based ("parametric") methods search for deviations from the genomic average whereas evolutionary history-based ("phylogenetic") approaches identify genes whose evolutionary history significantly differs from that of the host species. The evaluation and benchmarking of HGT inference methods typically rely upon simulated genomes, for which the true history is known. On real data, different methods tend to infer different HGT events, and as a result it can be difficult to ascertain all but simple and clear-cut HGT events.
Overview
Horizontal gene transfer was first observed in 1928, in Frederick Griffith's experiment: showing that virulence was able to pass from virulent to non-virulent strains of Streptococcus pneumoniae, Griffith demonstrated that genetic information can be horizontally transferred between bacteria via a mechanism known as transformation. Similar observations in the 1940s and 1950s showed evidence that conjugation and transduction are additional mechanisms of horizontal gene transfer.
To infer HGT events, which may not necessarily result in phenotypic changes, most contemporary methods are based on analyses of genomic sequence data. These methods can be broadly separated into two groups: parametric and phylogenetic methods. Parametric methods search for sections of a genome that significantly differ from the genomic average, such as GC content or codon usage. Phylogenetic methods examine evolutionary histories of genes involved and identify conflicting phylogenies. Phylogenetic methods can be further divided into those that reconstruct and compare phylogenetic trees explicitly, and those that use surrogate measures in place of the phylogenetic trees.
The main feature of parametric methods is that they only rely on the genome under study to infer HGT events that may have occurred on its lineage. It has been a considerable advantage at the early times of the sequencing era, when few closely related genomes were available for comparative methods. However, because they rely on the uniformity of the host's signature to infer HGT events, not accounting for the host's intra-genomic variability will result in overpredictions—flagging native segments as possible HGT events. Similarly, the transferred segments need to exhibit the donor's signature and to be significantly different from the recipient's. Furthermore, genomic segments of foreign origin are subject to the same mutational processes as the rest of the host genome, and so the difference between the two tends to vanish over time, a process referred to as amelioration. This limits the ability of parametric methods to detect ancient HGTs.
Phylogenetic methods benefit from the recent availability of many sequenced genomes. Indeed, as for all comparative methods, phylogenetic methods can integrate information from multiple genomes, and in particular integrate them using a model of evolution. This lends them the ability to better characterize the HGT events they infer—notably by designating the donor species and time of the transfer. However, models have limits and need to be used cautiously. For instance, the conflicting phylogenies can be the result of events not accounted for by the model, such as unrecognized paralogy due to duplication followed by gene losses. Also, many approaches rely on a reference species tree that is supposed to be known, when in many instances it can be difficult to obtain a reliable tree. Finally, the computational costs of reconstructing many gene/species trees can be prohibitively expensive. Phylogenetic methods tend to be applied to genes or protein sequences as basic evolutionary units, which limits their ability to detect HGT in regions outside or across gene boundaries.
Because of their complementary approaches—and often non-overlapping sets of HGT candidates—combining predictions from parametric and phylogenetic methods can yield a more comprehensive set of HGT candidate genes. Indeed, combining different parametric methods has been reported to significantly improve the quality of predictions. Moreover, in the absence of a comprehensive set of true horizontally transferred genes, discrepancies between different methods might be resolved through combining parametric and phylogenetic methods. However, combining inferences from multiple methods also entails a risk of an increased false-positive rate.
Parametric methods
Parametric methods to infer HGT use characteristics of the genome sequence specific to particular species or clades, also called genomic signatures. If a fragment of the genome strongly deviates from the genomic signature, this is a sign of a potential horizontal transfer. For example, because bacterial GC content falls within a wide range, GC content of a genome segment is a simple genomic signature. Commonly used genomic signatures include nucleotide composition, oligonucleotide frequencies, or structural features of the genome.
To detect HGT using parametric methods, the host's genomic signature needs to be clearly recognizable. However, the host's genome is not always uniform with respect to the genome signature: for example, GC content of the third codon position is lower close to the replication terminus and GC content tends to be higher in highly expressed genes. Not accounting for such intra-genomic variability in the host can result in over-predictions, flagging native segments as HGT candidates. Larger sliding windows can account for this variability at the cost of a reduced ability to detect smaller HGT regions.
Just as importantly, horizontally transferred segments need to exhibit the donor's genomic signature. This might not be the case for ancient transfers where transferred sequences are subjected to the same mutational processes as the rest of the host genome, potentially causing their distinct signatures to "ameliorate" and become undetectable through parametric methods. For example, Bdellovibrio bacteriovorus, a predatory δ-Proteobacterium, has homogeneous GC content, and it might be concluded that its genome is resistant to HGT. However, subsequent analysis using phylogenetic methods identified a number of ancient HGT events in the genome of B. bacteriovorus. Similarly, if the inserted segment was previously ameliorated to the host's genome, as is the case for prophage insertions, parametric methods might miss predicting these HGT events. Also, the donor's composition must significantly differ from the recipient's to be identified as abnormal, a condition that might be missed in the case of short- to medium-distance HGT, which are the most prevalent. Furthermore, it has been reported that recently acquired genes tend to be AT-richer than the recipient's average, which indicates that differences in GC-content signature may result from unknown post-acquisition mutational processes rather than from the donor's genome.
Nucleotide composition
Bacterial GC content falls within a wide range, with Ca. Zinderia insecticola having a GC content of 13.5% and Anaeromyxobacter dehalogenans having a GC content of 75%. Even within a closely related group of α-Proteobacteria, values range from approximately 30% to 65%. These differences can be exploited when detecting HGT events as a significantly different GC content for a genome segment can be an indication of foreign origin.
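As an illustration, a GC-content scan can be implemented in a few lines: windows whose composition deviates strongly from the genome-wide mean are flagged as possible acquisitions. The window size, step and z-score cut-off below are arbitrary illustrative choices, not values prescribed by any particular published method.

```python
# Sketch of a parametric GC-content scan over a genome string of A/C/G/T.

from statistics import mean, stdev

def gc_fraction(seq):
    return (seq.count("G") + seq.count("C")) / len(seq)

def gc_outlier_windows(genome, window=5000, step=500, z_cutoff=3.0):
    starts = list(range(0, len(genome) - window + 1, step))
    values = [gc_fraction(genome[s:s + window]) for s in starts]
    mu, sigma = mean(values), stdev(values)
    return [(s, v) for s, v in zip(starts, values)
            if abs(v - mu) > z_cutoff * sigma]   # candidate atypical regions
```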
Oligonucleotide spectrum
The oligonucleotide spectrum (or k-mer frequencies) measures the frequency of all possible nucleotide sequences of a particular length in the genome. It tends to vary less within genomes than between genomes and therefore can also be used as a genomic signature. A deviation from this signature suggests that a genomic segment might have arrived through horizontal transfer.
The oligonucleotide spectrum owes much of its discriminatory power to the number of possible oligonucleotides: if n is the size of the vocabulary and w is the oligonucleotide size, the number of possible distinct oligonucleotides is n^w; for example, there are 4^5 = 1024 possible pentanucleotides. Some methods can capture the signal recorded in motifs of variable size, thus capturing rare but discriminative motifs along with more frequent, common ones.
Codon usage bias, a measure related to codon frequencies, was one of the first detection methods used in methodical assessments of HGT. This approach requires a host genome which contains a bias towards certain synonymous codons (different codons which code for the same amino acid) which is clearly distinct from the bias found within the donor genome. The simplest oligonucleotide used as a genomic signature is the dinucleotide, for example the third nucleotide in a codon and the first nucleotide in the following codon represent the dinucleotide least restricted by amino acid preference and codon usage.
It is important to optimise the size of the sliding window in which to count the oligonucleotide frequency: a larger sliding window will better buffer variability in the host genome at the cost of being worse at detecting smaller HGT regions. A good compromise has been reported using tetranucleotide frequencies in a sliding window of 5 kb with a step of 0.5 kb.
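A sketch of such a scan is shown below: tetranucleotide frequencies are computed for the whole genome and for each sliding window, and windows are ranked by how far their spectrum lies from the genomic signature. The Manhattan distance used here is only one of several reasonable choices of dissimilarity measure.

```python
# Sketch of an oligonucleotide-spectrum (k-mer) deviation scan.

from collections import Counter

def kmer_freqs(seq, k=4):
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {kmer: n / total for kmer, n in counts.items()}

def signature_deviations(genome, window=5000, step=500, k=4):
    genome_sig = kmer_freqs(genome, k)
    scored = []
    for start in range(0, len(genome) - window + 1, step):
        win_sig = kmer_freqs(genome[start:start + window], k)
        keys = set(genome_sig) | set(win_sig)
        dist = sum(abs(genome_sig.get(x, 0.0) - win_sig.get(x, 0.0)) for x in keys)
        scored.append((start, dist))
    return sorted(scored, key=lambda t: t[1], reverse=True)  # most atypical first
```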
A convenient method of modelling oligonucleotide genomic signatures is to use Markov chains. The transition probability matrix can be derived for endogenous vs. acquired genes, from which Bayesian posterior probabilities for particular stretches of DNA can be obtained.
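A minimal sketch of that idea follows: first-order (dinucleotide) transition probabilities are estimated separately from sequences believed to be native and sequences believed to be acquired, and a candidate segment is scored by its log-odds under the two models. The first-order assumption, the pseudocount and the fixed DNA alphabet are simplifications.

```python
# Sketch of a Markov-chain log-odds score for a candidate genomic segment.

import math

ALPHABET = "ACGT"

def transition_probs(training_sequences, pseudocount=1.0):
    counts = {a: {b: pseudocount for b in ALPHABET} for a in ALPHABET}
    for seq in training_sequences:
        for a, b in zip(seq, seq[1:]):
            if a in counts and b in counts[a]:
                counts[a][b] += 1
    return {a: {b: n / sum(row.values()) for b, n in row.items()}
            for a, row in counts.items()}

def log_odds(segment, native_model, acquired_model):
    score = 0.0
    for a, b in zip(segment, segment[1:]):
        if a in ALPHABET and b in ALPHABET:
            score += math.log(acquired_model[a][b] / native_model[a][b])
    return score   # positive values favour the "acquired" model
```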
Structural features
Just as the nucleotide composition of a DNA molecule can be represented by a sequence of letters, its structural features can be encoded in a numerical sequence. The structural features include interaction energies between neighbouring base pairs, the angle of twist that makes two bases of a pair non-coplanar, or DNA deformability induced by the proteins shaping the chromatin.
The autocorrelation analysis of some of these numerical sequences shows characteristic periodicities in complete genomes. In fact, after detecting archaea-like regions in the thermophilic bacterium Thermotoga maritima, periodicity spectra of these regions were compared to the periodicity spectra of the homologous regions in the archaeon Pyrococcus horikoshii. The revealed similarities in the periodicity were strong supporting evidence for a case of massive HGT between the bacterial and archaeal kingdoms.
Genomic context
The existence of genomic islands, short (typically 10–200kb long) regions of a genome which have been acquired horizontally, lends support to the ability to identify non-native genes by their location in a genome. For example, a gene of ambiguous origin which forms part of a non-native operon could be considered to be non-native. Alternatively, flanking repeat sequences or the presence of nearby integrases or transposases can indicate a non-native region. A machine-learning approach combining oligonucleotide frequency scans with context information was reported to be effective at identifying genomic islands. In another study, the context was used as a secondary indicator, after removal of genes which are strongly thought to be native or non-native through the use of other parametric methods.
Phylogenetic methods
The use of phylogenetic analysis in the detection of HGT was advanced by the availability of many newly sequenced genomes. Phylogenetic methods detect inconsistencies in gene and species evolutionary history in two ways: explicitly, by reconstructing the gene tree and reconciling it with the reference species tree, or implicitly, by examining aspects that correlate with the evolutionary history of the genes in question, e.g., patterns of presence/absence across species, or unexpectedly short or distant pairwise evolutionary distances.
Explicit phylogenetic methods
The aim of explicit phylogenetic methods is to compare gene trees with their associated species trees. While weakly supported differences between gene and species trees can be due to inference uncertainty, statistically significant differences can be suggestive of HGT events. For example, if two genes from different species share the most recent ancestral connecting node in the gene tree, but the respective species are spaced apart in the species tree, an HGT event can be invoked. Such an approach can produce more detailed results than parametric approaches because the involved species, time and direction of transfer can potentially be identified.
As discussed in more detail below, phylogenetic methods range from simple methods merely identifying discordance between gene and species trees to mechanistic models inferring probable sequences of HGT events. An intermediate strategy entails deconstructing the gene tree into smaller parts until each matches the species tree (genome spectral approaches).
Explicit phylogenetic methods rely upon the accuracy of the input rooted gene and species trees, yet these can be challenging to build. Even when there is no doubt in the input trees, the conflicting phylogenies can be the result of evolutionary processes other than HGT, such as duplications and losses, causing these methods to erroneously infer HGT events when paralogy is the correct explanation. Similarly, in the presence of incomplete lineage sorting, explicit phylogeny methods can erroneously infer HGT events. That is why some explicit model-based methods test multiple evolutionary scenarios involving different kinds of events, and compare their fit to the data given parsimonious or probabilistic criteria.
Tests of topologies
To detect sets of genes that fit poorly to the reference tree, one can use statistical tests of topology, such as the Kishino–Hasegawa (KH), Shimodaira–Hasegawa (SH), and Approximately Unbiased (AU) tests. These tests assess the likelihood of the gene sequence alignment when the reference topology is given as the null hypothesis.
The rejection of the reference topology is an indication that the evolutionary history for that gene family is inconsistent with the reference tree. When these inconsistencies cannot be explained using a small number of non-horizontal events, such as gene loss and duplication, an HGT event is inferred.
One such analysis checked for HGT in groups of homologs of the γ-Proteobacterial lineage. Six reference trees were reconstructed using either the highly conserved small subunit ribosomal RNA sequences, a consensus of the available gene trees or concatenated alignments of orthologs. The failure to reject the six evaluated topologies, and the rejection of seven alternative topologies, was interpreted as evidence for a small number of HGT events in the selected groups.
Tests of topology identify differences in tree topology taking into account the uncertainty in tree inference but they make no attempt at inferring how the differences came about. To infer the specifics of particular events, genome spectral or subtree pruning and regraft methods are required.
Genome spectral approaches
In order to identify the location of HGT events, genome spectral approaches decompose a gene tree into substructures (such as bipartitions or quartets) and identify those that are consistent or inconsistent with the species tree.
Bipartitions
Removing one edge from a reference tree produces two unconnected sub-trees, each a disjoint set of nodes—a bipartition. If a bipartition is present in both the gene and the species trees, it is compatible; otherwise, it is conflicting. These conflicts can indicate an HGT event or may be the result of uncertainty in gene tree inference. To reduce uncertainty, bipartition analyses typically focus on strongly supported bipartitions such as those associated with branches with bootstrap values or posterior probabilities above certain thresholds. Any gene family found to have one or several conflicting, but strongly supported, bipartitions is considered as an HGT candidate.
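The sketch below illustrates the bipartition comparison on toy four-taxon trees, assuming Biopython is available for Newick parsing; real analyses would additionally filter bipartitions by bootstrap support or posterior probability, which is reduced here to an optional confidence threshold.

```python
# Sketch of a bipartition-based conflict check between a gene tree and a
# species tree (Biopython assumed available for Newick parsing).

from io import StringIO
from Bio import Phylo

def bipartitions(tree, taxa, min_support=None):
    """Non-trivial bipartitions, each encoded by a canonical side (a frozenset)."""
    parts = set()
    for clade in tree.get_nonterminals():
        if min_support is not None and (clade.confidence or 0) < min_support:
            continue   # ignore weakly supported branches
        side = frozenset(leaf.name for leaf in clade.get_terminals())
        if 1 < len(side) < len(taxa):
            parts.add(min(side, frozenset(taxa) - side, key=sorted))
    return parts

species_tree = Phylo.read(StringIO("((A,B),(C,D));"), "newick")
gene_tree = Phylo.read(StringIO("((A,C),(B,D));"), "newick")
taxa = {leaf.name for leaf in species_tree.get_terminals()}

conflicts = bipartitions(gene_tree, taxa) - bipartitions(species_tree, taxa)
print(conflicts)   # non-empty: the gene tree groups taxa the species tree does not
```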
Quartet decomposition
Quartets are trees consisting of four leaves. In bifurcating (fully resolved) trees, each internal branch induces a quartet whose leaves are either subtrees of the original tree or actual leaves of the original tree. If the topology of a quartet extracted from the reference species tree is embedded in the gene tree, the quartet is compatible with the gene tree. Conversely, incompatible strongly supported quartets indicate potential HGT events. Quartet mapping methods are much more computationally efficient and naturally handle heterogeneous representation of taxa among gene families, making them a good basis for developing large-scale scans for HGT, looking for highways of gene sharing in databases of hundreds of complete genomes.
Subtree pruning and regrafting
A mechanistic way of modelling an HGT event on the reference tree is to first cut an internal branch—i.e., prune the tree—and then regraft it onto another edge, an operation referred to as subtree pruning and regrafting (SPR). If the gene tree was topologically consistent with the original reference tree, the editing results in an inconsistency. Similarly, when the original gene tree is inconsistent with the reference tree, it is possible to obtain a consistent topology by a series of one or more prune and regraft operations applied to the reference tree. By interpreting the edit path of pruning and regrafting, HGT candidate nodes can be flagged and the host and donor genomes inferred. To avoid reporting false positive HGT events due to uncertain gene tree topologies, the optimal "path" of SPR operations can be chosen among multiple possible combinations by considering the branch support in the gene tree. Weakly supported gene tree edges can be ignored a priori or the support can be used to compute an optimality criterion.
Because conversion of one tree to another by a minimum number of SPR operations is NP-Hard, solving the problem becomes considerably more difficult as more nodes are considered. The computational challenge lies in finding the optimal edit path, i.e., the one that requires the fewest steps, and different strategies are used in solving the problem. For example, the HorizStory algorithm reduces the problem by first eliminating the consistent nodes; recursive pruning and regrafting reconciles the reference tree with the gene tree and optimal edits are interpreted as HGT events. The SPR methods included in the supertree reconstruction package SPRSupertrees substantially decrease the time of the search for the optimal set of SPR operations by considering multiple localised sub-problems in large trees through a clustering approach. The T-REX (webserver) includes a number of HGT detection methods (mostly SPR-based) and allows users to calculate the bootstrap support of the inferred transfers.
Model-based reconciliation methods
Reconciliation of gene and species trees entails mapping evolutionary events onto gene trees in a way that makes them concordant with the species tree. Different reconciliation models exist, differing in the types of event they consider to explain the incongruences between gene and species tree topologies. Early methods exclusively modelled horizontal transfers (T). More recent ones also account for duplication (D), loss (L), incomplete lineage sorting (ILS) or homologous recombination (HR) events. The difficulty is that by allowing for multiple types of events, the number of possible reconciliations increases rapidly. For instance, a conflicting gene tree topology might be explained in terms of a single HGT event or multiple duplication and loss events. Both alternatives can be considered plausible reconciliations depending on the frequency of these respective events along the species tree.
Reconciliation methods can rely on a parsimonious or a probabilistic framework to infer the most likely scenario(s), where the relative cost/probability of D, T, L events can be fixed a priori or estimated from the data. The space of DTL reconciliations and their parsimony costs—which can be extremely vast for large multi-copy gene family trees—can be efficiently explored through dynamic programming algorithms. In some programs, the gene tree topology can be refined where it was uncertain to fit a better evolutionary scenario as well as the initial sequence alignment. More refined models account for the biased frequency of HGT between closely related lineages, reflecting the loss of efficiency of HR with phylogenetic distance, for ILS, or for the fact that the actual donor of most HGT belongs to extinct or unsampled lineages. Further extensions of DTL models are being developed towards an integrated description of the genome evolution processes. In particular, some of them consider horizontal transfer at multiple scales—modelling independent evolution of gene fragments or recognising co-evolution of several genes (e.g., due to co-transfer) within and across genomes.
Implicit phylogenetic methods
In contrast to explicit phylogenetic methods, which compare the agreement between gene and species trees, implicit phylogenetic methods compare evolutionary distances or sequence similarity. Here, an unexpectedly short or long distance from a given reference compared to the average can be suggestive of an HGT event. Because tree construction is not required, implicit approaches tend to be simpler and faster than explicit methods.
However, implicit methods can be limited by disparities between the underlying correct phylogeny and the evolutionary distances considered. For instance, the most similar sequence as obtained by the highest-scoring BLAST hit is not always the evolutionarily closest one.
Top sequence match in a distant species
A simple way of identifying HGT events is by looking for high-scoring sequence matches in distantly related species. For example, an analysis of the top BLAST hits of protein sequences in the bacterium Thermotoga maritima revealed that most hits were in archaea rather than closely related bacteria, suggesting extensive HGT between the two; these predictions were later supported by an analysis of the structural features of the DNA molecule.
However, this method is limited to detecting relatively recent HGT events. Indeed, if the HGT occurred in the common ancestor of two or more species included in the database, the closest hit will reside within that clade and therefore the HGT will not be detected by the method. Thus, the threshold of the minimum number of foreign top BLAST hits to observe to decide a gene was transferred is highly dependent on the taxonomic coverage of sequence databases. Therefore, experimental settings may need to be defined in an ad-hoc way.
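A screen of this kind can be sketched as follows, assuming BLAST results in the standard tabular format (-outfmt 6, with the best hit listed first for each query) and a user-supplied mapping from subject identifiers to taxonomic groups; the file layout assumption and the mapping are illustrative.

```python
# Sketch of a "top hit in a distant taxon" screen over tabular BLAST output.

def top_hits(blast_tab_path):
    best = {}
    with open(blast_tab_path) as handle:
        for line in handle:
            query, subject = line.rstrip("\n").split("\t")[:2]
            best.setdefault(query, subject)   # assumes best hit comes first per query
    return best

def foreign_top_hits(blast_tab_path, subject_to_group, self_group):
    """Queries whose best hit falls outside the query's own taxonomic group."""
    return {q: s for q, s in top_hits(blast_tab_path).items()
            if subject_to_group.get(s) not in (self_group, None)}
```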
Discrepancy between gene and species distances
The molecular clock hypothesis posits that homologous genes evolve at an approximately constant rate across different species. If one only considers homologous genes related through speciation events (referred to as “orthologous" genes), their underlying tree should by definition correspond to the species tree. Therefore, assuming a molecular clock, the evolutionary distance between orthologous genes should be approximately proportional to the evolutionary distances between their respective species. If a putative group of orthologs contains xenologs (pairs of genes related through an HGT), the proportionality of evolutionary distances may only hold among the orthologs, not the xenologs.
Simple approaches compare the distribution of similarity scores of particular sequences and their orthologous counterparts in other species; HGT are inferred from outliers. The more sophisticated DLIGHT ('Distance Likelihood-based Inference of Genes Horizontally Transferred') method considers simultaneously the effect of HGT on all sequences within groups of putative orthologs: if a likelihood-ratio test of the HGT hypothesis versus a hypothesis of no HGT is significant, a putative HGT event is inferred. In addition, the method allows inference of potential donor and recipient species and provides an estimation of the time since the HGT event.
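The simple outlier version of this idea can be sketched as follows: gene distances are regressed against species distances through the origin, and pairs with unusually large residuals are reported. The least-squares fit and z-score cut-off are crude stand-ins for the likelihood framework of methods such as DLIGHT.

```python
# Sketch of a distance-discrepancy screen over putative orthologue pairs.

def distance_outliers(pairs, z_cutoff=3.0):
    """pairs: list of (species_distance, gene_distance) tuples."""
    slope = sum(s * g for s, g in pairs) / sum(s * s for s, _ in pairs)
    residuals = [g - slope * s for s, g in pairs]
    mu = sum(residuals) / len(residuals)
    sigma = (sum((r - mu) ** 2 for r in residuals) / len(residuals)) ** 0.5
    return [pair for pair, r in zip(pairs, residuals)
            if abs(r - mu) > z_cutoff * sigma]   # candidate xenolog pairs
```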
Phylogenetic profiles
A group of orthologous or homologous genes can be analysed in terms of the presence or absence of group members in the reference genomes; such patterns are called phylogenetic profiles. To find HGT events, phylogenetic profiles are scanned for an unusual distribution of genes. Absence of a homolog in some members of a group of closely related species is an indication that the examined gene might have arrived via an HGT event. For example, the three facultatively symbiotic Frankia sp. strains are of strikingly different sizes: 5.43 Mbp, 7.50 Mbp and 9.04 Mbp, depending on their range of hosts. Marked portions of strain-specific genes were found to have no significant hit in the reference database, and were possibly acquired by HGT transfers from other bacteria. Similarly, the three phenotypically diverse Escherichia coli strains (uropathogenic, enterohemorrhagic and benign) share about 40% of the total combined gene pool, with the other 60% being strain-specific genes and consequently HGT candidates. Further evidence for these genes resulting from HGT was their strikingly different codon usage patterns from the core genes and a lack of gene order conservation (order conservation is typical of vertically evolved genes). The presence/absence of homologs (or their effective count) can thus be used by programs to reconstruct the most likely evolutionary scenario along the species tree. Just as with reconciliation methods, this can be achieved through parsimonious or probabilistic estimation of the number of gain and loss events. Models can be complexified by adding processes, like the truncation of genes, but also by modelling the heterogeneity of rates of gain and loss across lineages and/or gene families.
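A simple profile-based screen can be sketched as follows: gene families present in the genome of interest but absent from all of its close relatives, while present in some distant group, are flagged as possible acquisitions. The profile encoding and the group definitions are illustrative and would in practice come from an orthology database.

```python
# Sketch of a phylogenetic-profile screen for candidate acquisitions.

def candidate_acquisitions(profiles, close_relatives, distant_group):
    """profiles: dict mapping each gene family of the genome of interest to the
    set of other genomes in which a homologue is found."""
    candidates = []
    for family, genomes in profiles.items():
        absent_in_relatives = not (genomes & set(close_relatives))
        present_in_distant = bool(genomes & set(distant_group))
        if absent_in_relatives and present_in_distant:
            candidates.append(family)
    return candidates
```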
Clusters of polymorphic sites
Genes are commonly regarded as the basic units transferred through an HGT event. However, it is also possible for HGT to occur within genes. For example, it has been shown that horizontal transfer between closely related species results in more exchange of ORF fragments, a type of transfer called gene conversion, mediated by homologous recombination. The analysis of a group of four Escherichia coli and two Shigella flexneri strains revealed that the sequence stretches common to all six strains contain polymorphic sites, consequences of homologous recombination. Clusters with an excess of polymorphic sites can thus be used to detect tracks of DNA recombined with a distant relative. This method of detection is, however, restricted to the sites in common to all analysed sequences, limiting the analysis to a group of closely related organisms.
Evaluation
The existence of the numerous and varied methods to infer HGT raises the question of how to validate individual inferences and of how to compare the different methods.
A main problem is that, as with other types of phylogenetic inferences, the actual evolutionary history cannot be established with certainty. As a result, it is difficult to obtain a representative test set of HGT events. Furthermore, HGT inference methods vary considerably in the information they consider and often identify inconsistent groups of HGT candidates: it is not clear to what extent taking the intersection, the union, or some other combination of the individual methods affects the false positive and false negative rates.
Parametric and phylogenetic methods draw on different sources of information; it is therefore difficult to make general statements about their relative performance. Conceptual arguments can however be invoked. While parametric methods are limited to the analysis of single or pairs of genomes, phylogenetic methods provide a natural framework to take advantage of the information contained in multiple genomes. In many cases, segments of genomes inferred as HGT based on their anomalous composition can also be recognised as such on the basis of phylogenetic analyses or through their mere absence in genomes of related organisms. In addition, phylogenetic methods rely on explicit models of sequence evolution, which provide a well-understood framework for parameter inference, hypothesis testing, and model selection. This is reflected in the literature, which tends to favour phylogenetic methods as the standard of proof for HGT. The use of phylogenetic methods thus appears to be the preferred standard, especially given that the increase in computational power coupled with algorithmic improvements has made them more tractable, and that the ever denser sampling of genomes lends more power to these tests.
Considering phylogenetic methods, several approaches to validating individual HGT inferences and benchmarking methods have been adopted, typically relying on various forms of simulation. Because the truth is known in simulation, the number of false positives and the number of false negatives are straightforward to compute. However, simulated data do not trivially resolve the problem because the true extent of HGT in nature remains largely unknown, and specifying rates of HGT in the simulated model is always hazardous. Nonetheless, studies involving the comparison of several phylogenetic methods in a simulation framework can provide a quantitative assessment of their respective performances, and thus help the biologist choose appropriate tools objectively.
Standard tools to simulate sequence evolution along trees such as INDELible or PhyloSim can be adapted to simulate HGT. HGT events cause the relevant gene trees to conflict with the species tree. Such HGT events can be simulated through subtree pruning and regrafting rearrangements of the species tree. However, it is important to simulate data that are realistic enough to be representative of the challenge provided by real datasets, and simulation under complex models are thus preferable. A model was developed to simulate gene trees with heterogeneous substitution processes in addition to the occurrence of transfer, and accounting for the fact that transfer can come from now extinct donor lineages. Alternatively, the genome evolution simulator ALF directly generates gene families subject to HGT, by accounting for a whole range of evolutionary forces at the base level, but in the context of a complete genome. Given simulated sequences which have HGT, analysis of those sequences using the methods of interest and comparison of their results with the known truth permits study of their performance. Similarly, testing the methods on sequence known not to have HGT enables the study of false positive rates.
Simulation of HGT events can also be performed by manipulating the biological sequences themselves. Artificial chimeric genomes can be obtained by inserting known foreign genes into random positions of a host genome. The donor sequences are inserted into the host unchanged or can be further evolved by simulation, e.g., using the tools described above.
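A benchmark of this kind can be sketched as follows: foreign genes are inserted at random positions of a host genome and the true insertion coordinates are recorded, so that predictions of any detection method can later be scored against them. Nested insertions and post-insertion amelioration are ignored for simplicity.

```python
# Sketch: build an artificial chimeric genome with known insertion coordinates.

import random

def make_chimeric_genome(host, foreign_genes, seed=0):
    rng = random.Random(seed)
    genome, truth = host, []
    for gene in foreign_genes:
        pos = rng.randrange(len(genome) + 1)
        genome = genome[:pos] + gene + genome[pos:]
        # shift previously recorded intervals lying downstream of this insertion
        truth = [(s + len(gene), e + len(gene)) if s >= pos else (s, e)
                 for s, e in truth]
        truth.append((pos, pos + len(gene)))
    return genome, truth
```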
One important caveat to simulation as a way to assess different methods is that simulation is based on strong simplifying assumptions which may favour particular methods.
See also
Index of evolutionary biology articles
Horizontal gene transfer
Horizontal gene transfer in evolution
Phylogenetic tree
Phylogenetic network
Bioinformatics
Comparative genomics
Homology (biology)
References
Computational biology | Inferring horizontal gene transfer | [
"Biology"
] | 6,356 | [
"Computational biology"
] |
46,969,064 | https://en.wikipedia.org/wiki/Device-to-device | Device-to-Device (D2D) communication in cellular networks is defined as direct communication between two mobile users without traversing the Base Station (BS) or core network. D2D communication is generally non-transparent to the cellular network and it can occur on the cellular frequencies (i.e., inband) or unlicensed spectrum (i.e., outband).
In a traditional cellular network, all communications must go through the BS even if communicating parties are in range for proximity-based D2D communication. Communication through BS suits conventional low data rate mobile services such as voice call and text messaging in which users are seldom close enough for direct communication. However, mobile users in today's cellular networks use high data rate services (e.g., video sharing, gaming, proximity-aware social networking) in which they could potentially be in range for direct communications (i.e., D2D). Hence, D2D communications in such scenarios can greatly increase the spectral efficiency of the network. The advantages of D2D communications go beyond spectral efficiency; they can potentially improve throughput, energy efficiency, delay, and fairness.
Data delivery in non-cooperative D2D communication
Existing data delivery protocols in D2D communications mainly assume that mobile nodes willingly participate in data delivery, share their resources with each other, and follow the rules of underlying networking protocols. Nevertheless, rational nodes in real-world scenarios have strategic interactions and may act selfishly for various reasons (such as resource limitations, the lack of interest in data, or social preferences).
D2D applications
D2D Communications is used for
Local services: In local services, user data is transmitted directly between the terminals without involving the network side, e.g. social media apps based on proximity services.
Emergency communications: In the case of natural disasters such as hurricanes and earthquakes, the traditional communication network may not work because of the damage caused. An ad hoc network established via D2D can be used for communication in such situations.
IoT enhancement: Combining D2D with the Internet of things (IoT) can create a truly interconnected wireless network. An example of D2D-based IoT enhancement is vehicle-to-vehicle (V2V) communication in the Internet of Vehicles (IoV). When running at high speed, a vehicle can warn nearby vehicles in D2D mode before it changes lanes or slows down.
See also
Machine to machine
Peer-to-peer
References
Wireless networking
Telecommunications infrastructure | Device-to-device | [
"Technology",
"Engineering"
] | 523 | [
"Wireless networking",
"Computer networks engineering"
] |
46,970,434 | https://en.wikipedia.org/wiki/Allomyces%20reticulatus | Allomyces reticulatus is a species of fungus from the United States.
References
Fungi described in 1974
Fungi of the United States
Blastocladiomycota
Fungi without expected TNC conservation status
Fungus species | Allomyces reticulatus | [
"Biology"
] | 45 | [
"Fungus stubs",
"Fungi",
"Fungus species"
] |
46,971,160 | https://en.wikipedia.org/wiki/Hentriacontanonaene | Hentriacontanonaene is a long-chain polyunsaturated hydrocarbon produced by numerous gamma-proteobacteria primarily from the marine environment. Hentriacontanonaene was originally isolated from bacterial isolates from Antarctic sea ice cores. All isolated bacteria that produced hentriacontanonaene also produced the polyunsaturated fatty acids eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA). Given its polyunsaturated nature it has been proposed that this molecule is produced as part of a response to maintain optimal membrane fluidity.
Biosynthesis
The biosynthesis of this compound was initially identified by its similarity to other known pathways found in bacteria that produce similar long-chain hydrocarbons. Production of monounsaturated and tri-unsaturated long-chain hydrocarbons in various microbial lineages has been attributed to the oleABCD gene cluster. In this pathway, two acyl-CoA or acyl-ACP molecules are condensed using a non-decarboxylative Claisen condensation to yield a β-keto-thioester. Hydrolysis from the enzyme is followed by reduction of the β-keto group to an alcohol, catalyzed by the NADPH-dependent reductase OleD. The remaining steps include decarboxylation and dehydration, which might be combined as a single decarboxylation-elimination step. The exact roles of OleB and OleC in this pathway are unknown; however, deletion of oleC yielded a strain that produced a mono-ketone product without the completed olefin.
The overall unsaturation of the compound is determined by the acyl precursors and it has been hypothesized that condensation of two 16:4(n-3) acyl chains by OleABCD yields hentriacontanonaene. A polyketide-like pathway responsible for the production of eicosapentaenoic acid provides the polyunsaturated precursor for hentriacontanonaene.
References
Unsaturated compounds
Hydrocarbons
Lipids
Polyenes | Hentriacontanonaene | [
"Chemistry"
] | 455 | [
"Hydrocarbons",
"Biomolecules by chemical classification",
"Organic compounds",
"Unsaturated compounds",
"Lipids"
] |
46,971,882 | https://en.wikipedia.org/wiki/Elastin-like%20polypeptides | Elastin-like polypeptides (ELPs) are synthetic biopolymers with potential applications in the fields of cancer therapy, tissue scaffolding, metal recovery, and protein purification. For cancer therapy, the addition of functional groups to ELPs can enable them to conjugate with cytotoxic drugs. Also, ELPs may be able to function as polymeric scaffolds, which promote tissue regeneration. This capacity of ELPs has been studied particularly in the context of bone growth. ELPs can also be engineered to recognize specific proteins in solution. The ability of ELPs to undergo morphological changes at certain temperatures enables specific proteins that are bound to the ELPs to be separated out from the rest of the solution via experimental techniques such as centrifugation.
The general structure of polymeric ELPs is (VPGXG)n, where the monomeric unit is Val-Pro-Gly-X-Gly, and the "X" denotes a variable amino acid that can have consequences on the general properties of the ELP, such as the transition temperature (Tt). Specifically, the hydrophilicity or hydrophobicity and the presence or absence of a charge on the guest residue play a great role in determining the Tt. Also, the solubilization of the guest residue can affect the Tt. The "n" denotes the number of monomeric units that comprise the polymer. In general, these polymers are linear below the Tt, but aggregate into spherical clumps above the Tt.
Structure
Although engineered and modified in a laboratory setting, ELPs share structural characteristics with intrinsically disordered proteins (IDPs) naturally found in the body, such as tropoelastin, from which ELPs were given their name. The repeat sequences found in the biopolymer give each ELP a distinct structure, as well as influence the lower critical solution temperature (LCST), also referred to commonly as the Tt. It is at this temperature that the ELPs move from a linear, relatively disordered state to a more densely aggregated, partially ordered state. Although given as a single temperature, Tt, the ELP phase change process generally begins and ends within a temperature range of approximately 2 °C. Also, Tt is altered by the addition of unique proteins to the free ELPs.
Tropoelastin
Tropoelastin is a protein, approximately 72 kDa in size, that comes together via cross-links to form elastin in the extracellular matrix of the cell. The cross-link formation process is mediated by lysyl oxidase. One of the major reasons that elastin can withstand high levels of stress in the body without experiencing any physical deformation is that the underlying tropoelastin contains domains that are highly hydrophobic. These hydrophobic domains, consisting overwhelmingly of alanine, proline, glycine, and valine, tend towards instability and disorderliness, ensuring that the elastin does not lock into any specific conformation. Thus, ELPs consisting of the Val-Pro-Gly-X-Gly monomeric units, which bear resemblance to the repetitive tropoelastin hydrophobic domains, are highly disordered below their Tt. Even above their Tt in their aggregated state, ELPs are only partially ordered. This is due to the fact that the proline and glycine amino acids are present in high amounts in the ELP. Glycine, due to the lack of a bulky side chain, enables the biopolymer to be flexible, and proline prevents the formation of stable hydrogen bonds in the ELP backbone. It is important to note, however, that certain segments of the ELP may be able to form instantaneous type II β turns, but these turns are not long-lasting and do not resemble true β sheets when the NMR chemical shifts are compared.
Amyloid formation
Although ELPs generally form reversible spherical aggregates due to their proline and glycine content, there is a possibility that, under certain conditions such as exceedingly high temperatures, ELPs will form amyloids, or irreversible aggregates of insoluble protein. It is also believed that changes in the ELP backbone leading to a reduction in the proline and glycine content may lead to ELPs with a greater propensity for the amyloid state. As amyloids are implicated in the progression of Alzheimer's disease as well as in prion-based diseases, such as Creutzfeldt-Jakob disease (CJD), modeling of ELP amyloid formation may be useful from a biomedical standpoint.
Tt dependence on ELP structure
The transition temperature of an ELP depends to a certain extent on the identity of the "X" residue found at the fourth position of the pentapeptide monomeric unit. Residues that are highly hydrophobic, such as leucine and phenylalanine, tend to decrease the transition temperature. On the other hand, residues that are highly hydrophilic, such as serine and glutamine, tend to increase the transition temperature. The presence of a potentially charged residue at the "X" position will determine how the ELP responds to varying pHs, with glutamic acid and aspartic acid raising the Tt at pH values in which the residues are deprotonated and lysine and arginine raising the Tt at pH values in which the residues are protonated. The pH needs to be compatible with the charged states of these amino acids in order to raise the Tt. Also higher molecular mass ELPs and higher concentrations of ELPs in solution make it much easier for the polymer to form aggregates, in effect lowering the experimental Tt.
Tt theoretical model
Oftentimes, ELPs are not used in isolation, but are rather fused with other proteins to become functionally active. The structure of these other proteins will have a certain effect on transition temperature. It is important to be able to predict the transition temperature that these fusion proteins will have relative to the free ELPs, as this temperature will determine the fused protein's applicability and phase transition. A theoretical model is available that relates the change in Tt of the fused protein to the varying ratios of each individual amino acid found in the fused protein. The model involves calculating a surface index (SI) associated with each amino acid and then extrapolating, based on the ratio of each amino acid present in the fused protein, the total change in the Tt associated with the fusion protein, ΔTt,fusion:
SI = (ASAXAA / ASAp) × Ttc
where ASAp refers to the area of the entire fused protein that is available to the solvent that is being used, ASAXAA refers to the area of the guest residue on the ELP that is available to the solvent, and Ttc is the transition temperature that is unique to the amino acid. Summing up the contribution of each potential guest residue (XAA) will yield an SI index that is directly proportional to ΔTt,fusion. It was found that the amino acids that are charged under a physiological pH of 7.4 have the greatest impact on the overall SI of a fused protein. This is due to the fact that they are more accessible to water-containing solvents, thereby increasing the ASAXAA and also have high Ttc values. Hence, knowledge of the transition temperature of a fused protein is highly dependent on the presence of these charged residues.
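The following is a minimal sketch of how the per-residue contributions described above could be summed into an SI index. All numerical inputs and the proportionality constant linking SI to ΔTt,fusion are placeholder assumptions chosen only to illustrate the bookkeeping; they are not parameters from the published model.

```python
# Minimal sketch of the surface-index (SI) summation described above.
# All numerical inputs and the proportionality constant are placeholder
# assumptions for illustration, not values from the original model.

def surface_index(counts, asa_xaa, ttc, asa_protein):
    """Sum per-residue contributions SI_i = (ASA_XAA / ASA_P) * Ttc,
    weighted by how often each guest residue occurs in the fusion protein."""
    return sum(n * (asa_xaa[res] / asa_protein) * ttc[res]
               for res, n in counts.items())

# Hypothetical inputs: solvent-accessible areas (A^2), residue-specific
# transition temperatures (degrees C), and guest-residue counts.
asa_xaa = {"E": 140.0, "K": 160.0, "V": 60.0}
ttc     = {"E": 250.0, "K": 200.0, "V": 25.0}
counts  = {"E": 4, "K": 2, "V": 30}

si = surface_index(counts, asa_xaa, ttc, asa_protein=12000.0)
delta_tt_fusion = 0.01 * si  # assumed constant linking SI to delta-Tt,fusion
print(f"SI = {si:.2f}, estimated delta Tt,fusion ~ {delta_tt_fusion:.2f} C")
```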
Synthesis
Because ELPs are protein-based biopolymers, synthesis involves manipulation of genes to continually express the monomeric repeat unit. Various techniques have been employed in the production of ELPs of various sizes, including unidirectional ligation or concatemerization, overlap extension polymerase chain reaction (OEPCR), and recursive directional ligation (RDL). Also, ELPs can be experimentally modified through conjugation with other polymers or through SpyTag/SpyCatcher reaction, allowing for the synthesis of copolymers with unique morphology.
Concatemerization
The concatemerization process generates libraries of concatemers for the ELPs. Concatemers are oligomeric products of ligating a single gene with itself. This results in repeat segments of a gene, all of which can be transcribed and translated immediately to produce the ELP of interest. A major problem with this synthetic route is that the number of gene repeat segments ligated together to form the concatemer cannot be controlled, leading to ELPs of different sizes, from which the ELP of a desired size must be isolated.
Overlap extension polymerase chain reaction (OEPCR)
The OEPCR method uses a small amount of the gene encoding the monomeric ELP unit and leads to the amplification of this segment to a great extent. This amplification is due to the fact that the initial segment added to the reaction functions as a template, from which identical gene segments can be synthesized. The process will result in the production of double-stranded DNA encoding the ELP of interest. One major bottleneck associated with this method is the potentially low fidelity associated with the Taq polymerase used. This might lead to replication from the template in which the wrong nucleotides are incorporated into the growing DNA strand.
Recursive directional ligation (RDL)
In recursive directional ligation, the gene encoding the monomer is inserted into a plasmid with restriction sites that are recognized by at least two endonucleases. The endonucleases will cut the plasmid, releasing the gene of interest. Then, this single gene is inserted into a recipient plasmid vector already containing one copy of the ELP monomer gene via digestion of the recipient plasmid with the same restriction endonucleases used on the donor plasmid and a subsequent ligation step. From this process, a sequence of two ELP monomer genes is retrieved. RDL allows for the controlled synthesis of ELP gene oligomers, in which single gene segments are sequentially added. However, the restriction endonucleases used are limited to those that do not cut within the ELP monomer gene itself, as this would lead to loss of crucial nucleotides and a potential frameshift mutation in the protein.
Synthetic conjugation
ELPs can be synthetically conjugated to poly(ethylene glycol) by adding a cyclooctyne functional motif to the poly(ethylene glycol) and an azide group to the ELP. Through a cycloaddition reaction involving both of the functional groups and manipulation of the solvent pH, diblock and star polymers can be formed. Rather than forming the canonical spherical clumps above the transition temperature, this specific conjugated ELP forms a micelle with amphiphilic properties, in which the polar head groups face outward and the hydrophobic domains face inward. Such micelles may be helpful in delivering nonpolar drugs to the body.
Applications
Due to the unique temperature-dependent phase transition experienced by ELPs, in which they move from a linear state to a spherical aggregate state above their Tt, as well as the ability of ELPs to be easily conjugated with other compounds, these biopolymers hold numerous applications. Some of these applications involve ELP use in protein purification, cancer therapy, and tissue scaffolding.
Protein purification
The ELP can be conjugated to a functional group that can bind to a protein of interest. At temperatures below the Tt, the ELP will bind to the ligand in its linear form. In this linear state, the ELP-protein complex cannot easily be distinguished from the extraneous proteins in the solution. However, once the solution is heated to a temperature exceeding the Tt, the ELP will form spherical clumps. These clumps will then settle to the bottom of the solution tube following centrifugation, carrying the protein of interest. The proteins that are not needed will be found in the supernatant, which can be physically separated from the spherical aggregates. To ensure that there are few impurities in the ELP-protein complex isolated, the solution can be cooled below the Tt, enabling the ELPs to once again assume their linear structure. From this point, hot and cold centrifugation cycles can be repeated, and then the protein of interest can be eluted from the ELPs via the addition of a salt.
Tissue scaffolding
The temperature-based phase behavior of ELPs can be utilized to produce stiff networks that may be compatible with cellular regeneration applications. At high concentrations (weight percent exceeding 15%), the ELP transition from a linear state to a spherical aggregate state above the transition temperature is arrested, leading to the formation of brittle gels. These otherwise brittle networks can then be modified chemically, via oxidative coupling, to yield hydrogels which can sustain high levels of mechanical stress and strain. Also, the modified gel networks contain pores, through which important cell-sustaining compounds can easily be delivered. Such strong hydrogels, when bathed in minimal cell media, have been found to promote the growth of human mesenchymal stem cell populations. The ability of these arrested ELP networks to promote cell growth may prove indispensable in the production of tissue scaffolds that promote cartilage production, for example. Such an intervention may prove useful in the treatment of bone disease and rheumatoid arthritis.
Drug delivery
ELPs modified with certain functional groups have the capacity to be conjugated with drugs, including chemotherapeutic agents. Together, the ELP-drug complex can be taken up by tumor cells to a greater extent, promoting the cytotoxic activity of the drug. The reason that the complexes preferentially target the tumor cells is that these cells tend to be associated with more permeable blood vessels and also possess a weaker lymphatic presence. This essentially means that the drugs can cross over from the vessels to the tumor cells more frequently and can remain in the vessels for a longer period of time, without being filtered out. The phase transition associated with ELPs can also be used to promote tumor cell uptake of the drug. By locally heating tumor cell regions, the ELP-drug complex will aggregate into spherical clumps. If this ELP-drug complex is engineered to expose functional domains in the spherical clump shape that are recognized by tumor cell surfaces, then this cell surface interaction would promote uptake of the drug, as the tumor cell would mistake the ELP-drug complex for a harmless substance.
Metal recovery
A recent study highlights the first report of a thermo-responsive, rare-earth element (REE)-selective protein. The ELP and the REE-binding domain are genetically fused to form a REE-selective and thermo-responsive genetically encoded ELP called RELP for the selective extraction and recovery of total REEs. RELP provides a selective and repeatable biosorption platform for REE recovery. The authors highlighted that the technology can be adapted to recover other precious metals and commodities.
References
Polymers
Polymer chemistry | Elastin-like polypeptides | [
"Chemistry",
"Materials_science",
"Engineering"
] | 3,131 | [
"Polymers",
"Materials science",
"Polymer chemistry"
] |
46,974,031 | https://en.wikipedia.org/wiki/Des%281-3%29IGF-1 | des(1-3)IGF-1 is a naturally occurring, endogenous protein, as well as drug, and truncated analogue of insulin-like growth factor 1 (IGF-1). des(1-3)IGF-1 lacks the first three amino acids at the N-terminus of IGF-1 (for a total of 67 amino acids, relative to the 70 of IGF-1). As a result of this difference, it has considerably reduced binding to the insulin-like growth factor-binding proteins (IGFBPs) and enhanced potency (about 10-fold in vivo) relative to IGF-1.
The amino acid sequence of des(1-3)IGF-1 is TLCGAELVDA LQFVCGDRGF YFNKPTGYGS SSRRAPQTGI VDECCFRSCD LRRLEMYCAP LKPAKSA.
See also
IGF-1 LR3
Mecasermin
Mecasermin rinfabate
Insulin-like growth factor 2
References
Growth hormones
Insulin-like growth factor receptor agonists
Recombinant proteins | Des(1-3)IGF-1 | [
"Biology"
] | 237 | [
"Recombinant proteins",
"Biotechnology products"
] |
46,977,372 | https://en.wikipedia.org/wiki/Adarme | The adarme is an antiquated Spanish unit of mass, equal to three tomines. The term derives from the Arabic درهم (dirham), parallel with the drachm and the Greek drachma, and persists in Spanish as an idiom for something insignificant or which exists only in small quantity.
References
Units of mass
Spanish customary measurements
Obsolete units of measurement | Adarme | [
"Physics",
"Mathematics"
] | 74 | [
"Obsolete units of measurement",
"Matter",
"Quantity",
"Units of mass",
"Mass",
"Units of measurement"
] |
38,157,005 | https://en.wikipedia.org/wiki/VNREDSat-1 | VNREDSat-1 (short for Vietnam Natural Resources, Environment and Disaster Monitoring Satellite, also VNREDSat-1A) is the first optical Earth Observing satellite of Vietnam; its primary mission is to monitor and study the effects of climate change, predict and take measures to prevent natural disasters, and optimise the management of Vietnam's natural resources.
Satellite
The VNREDSat-1 was built in Toulouse by EADS Astrium. During the project, 15 Vietnamese engineers were integrated into and trained by the Astrium team. The VNREDSat-1 system is based on the Astrium operational AstroSat100 satellite, used for the SSOT programme developed with Chile and the ALSAT-2 satellite system developed with Algeria. The 120 kg satellite images at 2.5 m resolution in panchromatic mode and 10 m in multi-spectral mode (four bands) with a 17.5 km swath, and orbits at 600–700 km in a Sun-synchronous orbit.
Launch
The satellite was launched from ELV at the Guiana Space Centre by the Vega VV02 rocket at 02:06:31 UTC on 7 May 2013 together with the PROBA-V and ESTCube-1 satellites.
References
2013 in Vietnam
Spacecraft launched in 2013
Earth imaging satellites
Satellites of Vietnam
Spacecraft launched by Vega rockets | VNREDSat-1 | [
"Astronomy"
] | 274 | [
"Astronomy stubs",
"Spacecraft stubs"
] |
38,157,205 | https://en.wikipedia.org/wiki/Power-Packer | Power-Packer is a Netherlands-based producer and distributor of electro-hydraulic motion control systems for customers on a global basis, and is part of the CentroMotion organization. Their market includes original equipment manufacturers and suppliers in different markets, such as automotive, medical, commercial vehicles and others. Power-Packer has headquarters in the Netherlands and the United States, and manufacturing plants in those countries as well as Turkey, France, Mexico, Brazil, China and India.
Power-Packer specializes in solutions for motion control.
Markets and solutions
Power-Packer is a manufacturer of motion control systems for a variety of markets and applications. Power-Packer's key product areas include:
Automotive: drive systems of convertible roofs, trunklid, tailgate actuation
Truck: cabtilt, latches, levelling equipment and air-powered landing gear for semi-trailers
Medical: adjustment systems for beds, stretchers, patient lifts, operating tables
Recreational vehicles: roof lift systems, levelling systems, steps and trays, slide-out systems
It also has certification from the International Automotive Task Force under IATF 16949. This is a certification of quality control and assurance.
Timeline
1970: Foundation of Power-Packer Europa B.V.
1973: Development of cab tilt systems for trucks
1981: Power-Packer invents Regenerative Hydraulic Lost Motion
1988: Start two cylinder system production
1989: Factory opened in Korea
1999: Factory acquired in Turkey
2000: Factory opened in Brazil
2000: Power-Packer becomes part of the Actuant Group
2002: Start Hycab production
2003: Start C-Hydraulic Lost Motion production
2004: Factory opened in China
2004: Acquisition of Yvel, France
2006: Introduction Quick Generator
2009: High temperature development
2010: Development of crash cab tilt cylinder
References
External links
Manufacturing companies of the Netherlands
Motion control
Oldenzaal | Power-Packer | [
"Physics",
"Engineering"
] | 378 | [
"Physical phenomena",
"Motion (physics)",
"Automation",
"Motion control"
] |
38,157,620 | https://en.wikipedia.org/wiki/Glass%20poling | Glass poling is the physical process through which the distribution of the electrical charges is changed. In principle, the charges are randomly distributed and no permanent electric field exists inside the glass.
When the charges are moved and fixed at a place then a permanent field will be recorded in the glass. This electric field will permit various optical functions in the glass, impossible otherwise. The resulting effect would be like having positive and negative poles as in a battery, but inside an optical fibre.
The effect is a change in the optical fibre's properties. For instance, glass poling makes it possible to realize second-harmonic generation, which converts input light into light at twice the original frequency and half the wavelength. A near-infrared radiation at around 1030 nm, for example, can be converted by this process to a 515 nm wavelength, corresponding to green light.
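For reference, the wavelength and photon-energy bookkeeping behind this example is simply frequency doubling (photon energy estimated as E ≈ 1240 eV·nm / λ):

```latex
\omega_{\text{out}} = 2\,\omega_{\text{in}}, \qquad
\lambda_{\text{out}} = \tfrac{1}{2}\,\lambda_{\text{in}}, \qquad
E_{\text{out}} = 2\,E_{\text{in}}
\quad\Rightarrow\quad
1030\ \text{nm}\ (\approx 1.20\ \text{eV}) \;\longrightarrow\; 515\ \text{nm}\ (\approx 2.41\ \text{eV})
```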
Glass poling also allows for the creation of the linear electro-optic effect that can be used for other functions like light modulation.
Glass poling thus relies on recording an electric field which breaks the original symmetry of the material. Poling of glass is done by applying a high voltage to the medium while exciting it with heat, ultraviolet light or some other source of energy. Heat permits the charges to move by diffusion, and the high voltage gives a direction to the charge displacement.
Optical poling of silica fibers allows for second-harmonic generation through the creation of a self-organized periodic distribution of charges at the core-cladding interface.
UV poling received much attention because of the high non-linearity reported, but interest dwindled when various groups failed to reproduce the results.
Thermal poling
Strong electric fields are created by thermal poling of silica, subjecting the glass simultaneously to temperatures of around 280 °C and a bias of a few kilovolts for several minutes. Cations are mobile at elevated temperature (e.g., Na+) and are displaced by the poling field from the anode side of the sample. This creates a region a few micrometers thick, of high electrical resistivity and depleted of positive ions, near the anodic surface. The depleted region is negatively charged, and if the sample is cooled to room temperature while the poling voltage is on, the distribution of charges becomes frozen. After poling, positive charge attracted to the anodic surface and negative charge inside the glass create a recorded field that can reach 10^9 V/m. More detailed studies show that there is little or no accumulation of cations near the cathode electrode, and that the layer nearest to the anode suffers partial neutralization if poling persists for an excessively long time. The process of glass poling is very similar to the one used for anodic bonding, where the recorded electric field bonds the glass sample to the anode.
In thermal poling, one exploits effects of nonlinear optics created by the strong recorded field. An effective second-order optical non-linearity arises from χ(2)eff ~ 3 χ(3) Erec. In silica glass, the non-linear coefficient induced is ~1 pm/V, while in fibers it is a fraction of this value. The use of fibers with internal electrodes makes it possible to pole the fibers to make them exhibit the linear electro-optic effect and then control the refractive index with the application of voltage, for switching and modulation. The recorded field in a poled fiber can be erased by exposing the poled fiber from the side to UV radiation.
This makes it possible to artificially create an electric-field grating with arbitrary period, which satisfies the condition necessary for quasi-phase-matching. Periodic poling is used for efficient frequency-doubling in optical fibers.
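Returning to the relation χ(2)eff ~ 3 χ(3) Erec quoted above, a rough order-of-magnitude check is possible. Assuming a typical literature value of χ(3) ≈ 2 × 10^-22 m²/V² for fused silica (an assumed figure, not one stated in this article) and the recorded field of 10^9 V/m:

```latex
\chi^{(2)}_{\mathrm{eff}} \approx 3\,\chi^{(3)} E_{\mathrm{rec}}
  \approx 3 \times \left(2\times10^{-22}\ \mathrm{m^2/V^2}\right)
  \times \left(10^{9}\ \mathrm{V/m}\right)
  = 6\times10^{-13}\ \mathrm{m/V} \approx 0.6\ \mathrm{pm/V}
```

which is consistent with the ~1 pm/V induced coefficient quoted for silica glass.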
References
Glass physics
Nonlinear optics | Glass poling | [
"Physics",
"Materials_science",
"Engineering"
] | 768 | [
"Glass engineering and science",
"Glass physics",
"Condensed matter physics"
] |
52,496,661 | https://en.wikipedia.org/wiki/Govindasamy%20Mugesh | Govindasamy Mugesh (born 1970) is an Indian inorganic and physical chemist, a professor and the head of the Mugesh Laboratory attached to the department of Inorganic and Physical Chemistry at the Indian Institute of Science. He is known for his studies on the mechanism of thyroid hormone action and is an elected fellow of the Indian Academy of Sciences, Indian National Science Academy, Royal Society of Chemistry and the National Academy of Sciences, India. The Council of Scientific and Industrial Research, the apex agency of the Government of India for scientific research, awarded him the Shanti Swarup Bhatnagar Prize for Science and Technology, one of the highest Indian science awards, in 2012, for his contributions to chemical sciences. In 2019, he was awarded the Infosys Prize in Physical Sciences for his seminal work in the chemical synthesis of small molecules and nanomaterials for biomedical applications.
Biography
Born on 29 May 1970 in the south Indian state of Tamil Nadu, G. Mugesh completed his graduate studies in chemistry at the University of Madras in 1990. After obtaining a master's degree from the Bharathidasan University in 1993, he enrolled for his doctoral degree at the Indian Institute of Technology, Mumbai under the guidance of H. B. Singh to secure a PhD in 1998. Remaining at the institute, he did his post doctoral studies there till 2000 and moved to continue his studies at the laboratories of :de:Wolf-Walther du Mont of Brunswick University of Technology and :de:Helmut Sies of University of Düsseldorf on an Alexander von Humboldt fellowship till 2001. After obtaining a Skaggs Postdoctoral Fellowship, he moved to the US to work with K. C. Nicolaou at the Scripps Research Institute. On his return to India in 2002, he joined the Indian Institute of Science as an assistant professor where he rose in ranks to become an associate professor in 2006 and a professor at the department of inorganic and physical chemistry in 2012. At IISc, he heads the Mugesh Laboratory attached to it.
Legacy
Mugesh is known to have carried out extensive research on the chemistry of thyroid hormone metabolism, and his work has assisted in widening the understanding of organic/inorganic synthesis and enzyme mimetic studies. He is credited with the development of therapeutic protocols for endothelial dysfunction and neurodegenerative diseases and with notable work on β-lactamase-based antibiotic resistance. His research has been documented by way of a number of peer-reviewed articles; ResearchGate and Google Scholar, two online repositories of scientific articles, have listed 151 and 153 of them respectively. He has done many clinical trials, including one on a compound developed by him for use as an anti-thyroid agent. He has also been associated with science journals such as Organic and Biomolecular Chemistry, ACS Omega of the American Chemical Society, Bioorganic Chemistry of Elsevier and Scientific Reports of Nature Publishing Group as a member of their editorial boards. He serves as the vice president of the Asian Chemical Editorial Society (ACES), which publishes three science journals, viz. Chemistry – An Asian Journal, Asian Journal of Organic Chemistry, and ChemNanoMat.
Awards and honors
Mugesh received the International Award for Young Chemists of the International Union of Pure and Applied Chemistry (IUPAC) and the Alexander von Humboldt Foundation Equipment Award in 2005. The World Academy of Sciences selected him as a Young Affiliate in 2008 and he received the UKIERI Standard Research Award in 2009. The year 2010 brought him four awards, viz. the Bronze Medal of the Chemical Research Society of India, Young Scientist Award of InterAcademy Panel,
RSC-West India Young Scientist Award and the Award for Excellence in Drug Research of Central Drug Research Institute. The next year, he was chosen for AstraZeneca Excellence in Chemistry Award and the Council of Scientific and Industrial Research awarded him the Shanti Swarup Bhatnagar Prize, one of the highest Indian science awards, in 2012. He was awarded the Asian Rising Star Commemorative Plaque at the 15th Asian Chemical Congress organized by the Federation of Asian Chemical Societies (FACS) in 2013 and he was selected for the ISCB Award for Excellence by the Indian Society of Chemists and Biologists in 2016.
He has been a holder of an Alexander von Humboldt fellowship during his post-doctoral days, the Swarnajayanthi Fellowship (2006–07), the Ramanna Fellowship (2008–09) and the J. C. Bose National Fellowship (2015) of the Department of Science and Technology, and the Invitation Fellowship of the Japan Society for the Promotion of Science (2016). Mugesh was elected as a fellow by the Indian Academy of Sciences and the National Academy of Sciences, India in 2010, by the Royal Society of Chemistry in 2013, and by the Indian National Science Academy in 2016. The Dr. R. A. Mashelkar Endowment Lecture of the National Chemical Laboratory (2014) and the Prof. S. K. Pradhan Endowment Lecture of the Institute of Chemical Technology (2014) feature among the several guest lectures he has delivered.
See also
K. C. Nicolaou
References
Recipients of the Shanti Swarup Bhatnagar Award in Chemical Science
1970 births
Living people
Fellows of the Indian Academy of Sciences
Fellows of the Indian National Science Academy
Fellows of the National Academy of Sciences, India
Scientists from Tamil Nadu
Tamil chemists
University of Madras alumni
IIT Bombay alumni
Academic staff of the Indian Institute of Science
Indian physical chemists
Indian inorganic chemists
Fellows of the Royal Society of Chemistry
20th-century Indian chemists | Govindasamy Mugesh | [
"Chemistry"
] | 1,118 | [
"Inorganic chemists",
"Indian inorganic chemists"
] |
52,498,587 | https://en.wikipedia.org/wiki/Open%20Problems%20in%20Mathematics | Open Problems in Mathematics is a book, edited by John Forbes Nash Jr. and Michael Th. Rassias, published in 2016 by Springer (). The book consists of seventeen expository articles, written by outstanding researchers, on some of the central open problems in the field of mathematics. The book also features an Introduction on John Nash: Theorems and Ideas, by Mikhail Leonidovich Gromov. According to the editors’ Preface, each article is devoted to one open problem or a “constellation of related problems”.
Choice of problems
Nash and Rassias write in the preface of the book that the open problems presented “were chosen for a variety of reasons. Some were chosen for their undoubtable importance and applicability, others because they constitute intriguing curiosities which remain unexplained mysteries on the basis of current knowledge and techniques, and some for more emotional reasons. Additionally, the attribute of a problem having a somewhat vintage flavor was also influential” in their decision process.
Table of contents
Preface, by John F. Nash Jr. and Michael Th. Rassias
A Farewell to “A Beautiful Mind and a Beautiful Person”, by Michael Th. Rassias
Introduction, John Nash: Theorems and Ideas, by Mikhail Leonidovich Gromov
P =? NP, by Scott Aaronson
From Quantum Systems to L-Functions: Pair Correlation Statistics and Beyond, by Owen Barrett, Frank W. K. Firk, Steven J. Miller, and Caroline Turnage-Butterbaugh
The Generalized Fermat Equation, by Michael Bennett, Preda Mihăilescu, and Samir Siksek
The Conjecture of Birch and Swinnerton-Dyer, by John H. Coates
An Essay on the Riemann Hypothesis, by Alain Connes
Navier–Stokes Equations: A Quick Reminder and a Few Remarks, by Peter Constantin
Plateau’s Problem, by Jenny Harrison and Harrison Pugh
The Unknotting Problem, by Louis Kauffman
How Can Cooperative Game Theory Be Made More Relevant to Economics?: An Open Problem, by Eric Maskin
The Erdős–Szekeres Problem, by Walter Morris and Valeriu Soltan
Novikov’s Conjecture, by Jonathan Rosenberg
The Discrete Logarithm Problem, by René Schoof
Hadwiger’s Conjecture, by Paul Seymour
The Hadwiger–Nelson Problem, by Alexander Soifer
Erdős’s Unit Distance Problem, by Endre Szemerédi
Goldbach’s Conjectures: A Historical Perspective, by Robert Charles Vaughan
The Hodge Conjecture, by Claire Voisin
References
2016 non-fiction books
Books about mathematics
Unsolved problems in mathematics | Open Problems in Mathematics | [
"Mathematics"
] | 540 | [
"Unsolved problems in mathematics",
"Mathematical problems"
] |
52,499,698 | https://en.wikipedia.org/wiki/Upconverting%20nanoparticles | Upconverting nanoparticles (UCNPs) are nanoscale particles (diameter 1–100 nm) that exhibit photon upconversion. In photon upconversion, two or more incident photons of relatively low energy are absorbed and converted into one emitted photon with higher energy. Generally, absorption occurs in the infrared, while emission occurs in the visible or ultraviolet regions of the electromagnetic spectrum. UCNPs are usually composed of rare-earth based lanthanide- or actinide-doped transition metals and are of particular interest for their applications in in vivo bio-imaging, bio-sensing, and nanomedicine because of their highly efficient cellular uptake and high optical penetrating power with little background noise in the deep tissue level. They also have potential applications in photovoltaics and security, such as infrared detection of hazardous materials.
Before 1959, the anti-Stokes shift was believed to describe all situations in which emitted photons have higher energies than the corresponding incident photons. An anti-Stokes shift occurs when a thermally excited ground state is electronically excited, leading to a shift of only a few kBT, where kB is the Boltzmann constant, and T is temperature. At room temperature, kBT is 25.7 meV. In 1959, Nicolaas Bloembergen proposed an energy diagram for crystals containing ionic impurities. Bloembergen described the system as having excited-state emissions with energy differences much greater than kBT, in contrast to the anti-Stokes shift.
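That thermal-energy figure follows directly from the Boltzmann constant expressed in electron-volt units at a room temperature of about 298 K (the temperature is an assumed value, chosen to reproduce the quoted number):

```latex
k_B T \approx \left(8.617\times10^{-5}\ \mathrm{eV\,K^{-1}}\right) \times 298\ \mathrm{K} \approx 25.7\ \mathrm{meV}
```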
Advances in laser technology in the 1960s allowed the observation of non-linear optical effects such as upconversion. This led to the experimental discovery of photon upconversion in 1966 by François Auzel. Auzel showed that a photon of infrared light could be upconverted into a photon of visible light in ytterbium–erbium and ytterbium–thulium systems. In a transition-metal lattice doped with rare-earth metals, an excited-state charge transfer exists between two excited ions. Auzel observed that this charge transfer allows an emission of photon with much higher energy than the corresponding absorbed photon. Thus, upconversion can occur through a stable and real excited state, supporting Bloembergen's earlier work. This result catapulted upconversion research in lattices doped with rare-earth metals. One of the first examples of efficient lanthanide doping, the Yb/Er-doped fluoride lattice, was achieved in 1972 by Menyuk et al.
Physics
Photon upconversion belongs to a larger class of processes by which light incident on a material induces anti-Stokes emission. Multiple quanta of energy such as photons or phonons are absorbed, and a single photon with the summed energy is emitted. It is important to make the distinction between photon upconversion, where real metastable excited states allow for sequential absorption, and other nonlinear processes like second-harmonic generation or two-photon excited fluorescence which involve virtual intermediate states such as the "simultaneous" absorption of two or more photons. It is also distinct from more weakly anti-Stokes processes like thermoluminescence or anti-Stokes Raman emission, which are due to initial thermal population of low-lying excited states and consequently show emission energies only a few kBT above the excitation. Photon upconversion is distinctly characterized by emission-excitation differences of 10–100 kBT and an observable fluorescence lifetime after the excitation source has been switched off.
Photon upconversion relies on metastable states to facilitate sequential energy absorption. Therefore, a necessary condition for upconverting systems is the existence of optically active long-lived excited states. This role is traditionally filled by lanthanide metal ions embedded in an insulating host lattice. Generally in the +3 oxidation state, these ions have 4f^n electronic configurations and typically exhibit f-f transitions. These 4f orbitals allow for complex electronic structures and a large number of possible electronic excited states with similar energies. When embedded in bulk crystals or nanostructures, the energies of these excited states will further split under the crystal field, generating a series of states with many closely spaced energies. The 4f shell is localized near the core of the ion and is therefore non-bonding, while the 5s and 5p shells provide further shielding from the exterior crystal field. Thus, the coupling of electronic excited states to the surrounding lattice is weak, leading to long excited state lifetimes and sharp optical lineshapes.
The physical processes responsible for upconversion in nanoparticles are the same as those in bulk crystals on the microscopic level, although total efficiency and other ensemble effects will have unique considerations in the nanoparticle case. The processes contributing to upconversion may be grouped according to the number of ions involved. The two most common processes by which upconversion can occur in lanthanide-doped nanoscale materials are excited state absorption (ESA) and energy transfer upconversion (ETU).
A single ion in the lattice sequentially absorbs two photons and emits a photon of higher energy as it returns to the ground state. ESA is most common when dopant concentrations are low and energy-transfer is not probable. Since ESA is a process where two photons must be absorbed at a single lattice site, coherent pumping and high intensity are much more important (but not necessarily required) than for ETU. Because of its single-ion nature, ESA does not depend on the lanthanide ion concentration.
Two-ion processes are usually dominated by energy transfer upconversion (ETU). This is characterized by the successive transfer of energy from singly excited ions (sensitizers/donors), to the ion which eventually emits (activators/acceptors). This process is commonly portrayed as the optical excitation of the activator followed by further excitation to the final fluorescing state due to energy transfer from a sensitizer. While this depiction is valid, the more strongly contributing process is the sequential excitation of the activator by two or more different sensitizer ions.
The upconversion process is said to be cooperative when there are one or more elementary steps (sensitization or luminescence) in the process which involve multiple lanthanide ions. In cooperative sensitization process, two ions in their excited state simultaneously decay to their ground states, generating a higher energy photon. Similarly, in cooperative luminescence, two excited state ions transfer their energy to a neighboring ion in one elementary step.
Energy migration-mediated upconversion (EMU) involves four types of luminescent ion centers with different roles. They are located in separate layers of a core-shell structure of the nanomaterial to inhibit relaxation processes between ions. In this case, low-energy photons are absorbed in an ETU process that populates an excited state of another ion. Energy from this state can transfer to an adjacent ion through a core-shell interface and is then emitted.
Recently, moving forward in the challenge of designing particles with tunable emissions, important progress in synthesis of high-quality nano-structured crystals has enabled new pathways for photon upconversion. This includes the possibility of creating particles with core/shell structures, allowing upconversion through interfacial energy transfer (IET), upon which the interactions between typical lanthanide donor-acceptor pairs including Yb-Er, Yb-Tm, Yb-Ho, Gd-Tb, Gd-Eu and Nd-Yb can be precisely controlled on the nanoscale.
Photon avalanche occurs under conditions of a high ESA/GSA ratio and efficient cross-relaxation (CR). Ions, typically Tm3+, Pr3+, Nd3+, initially in the ground state weakly absorb energy from the excitation source. This ground-state absorption (GSA) raises the ions to intermediate excited states. The excited-state ions absorb the energy more strongly than the ground-state ions. Highly excited ions undergo CR with neighboring ground-state ions, producing two ions in intermediate excited states. Further cycles of ESA and CR exponentially increase the number of intermediate excited ions. They eventually relax back to the ground state, emitting a large number of photons. Such systems with high nonlinearity exhibit a steep increase in emission intensity with increasing excitation power.
The mechanism for photon upconversion in lanthanide-doped nanoparticles is essentially the same as in bulk material, but some surface and size-related effects have been shown to have important consequences. While quantum confinement is not expected to have an effect on the energy levels in lanthanide ions since the 4f electrons are sufficiently localized, other effects have been shown to have important consequences on emission spectra and efficiency of UCNPs. Radiative relaxation is in competition with non-radiative relaxation, so phonon density of states becomes an important factor. In addition, phonon-assisted processes are important in bringing the energy states of the f orbitals in range so that energy transfer can occur. In nanocrystals, low frequency phonons do not occur in the spectrum, so the phonon band becomes a discrete set of states. With non-radiative relaxation decreasing the lifetimes of excited states and phonon-assistance increasing the probability of energy transfer, the effects of size are complicated because these effects compete with one another. Surface-related effects can also have a large influence on luminescence color and efficiency. Surface ligands on nanocrystals can have large vibrational energy levels, which can significantly contribute to phonon-assisted effects.
Lanthanide-doped upconversion nanocrystals have found broad applications across various fields. However, their low conversion efficiency remains a significant challenge. In recent decades, researchers have developed innovative solutions to synthesize upconversion nanocrystals with greatly improved efficiency. One frequently used approach is surface passivation, which aims to reduce quenching from surface impurities, ligands, and solvent molecules through multiphonon relaxation. Additionally, techniques such as organic dye sensitization, surface plasmon coupling, dielectric superlensing modulation, and resonant dielectric metasurface modulation have been widely employed for nanophotonic control. These techniques play a crucial role in enhancing upconversion luminescence, contributing to the overall improvement in efficiency.
Chemistry
The chemical composition of upconverting nanoparticles, UCNPs, directly influences their conversion efficiency and spectral characteristics. Primarily, three compositional parameters influence the particles' performance: the host lattice, activator ions, and sensitizer ions.
The host lattice provides structure for both the activator and sensitizer ions and acts as a medium that conducts energy transfer. This host lattice has to satisfy three requirements: low lattice phonon energies, high chemical stability, and low symmetry of the lattice. The major mechanism responsible for reduced upconversion is nonradiative phonon relaxation. Generally, if large numbers of phonons are needed to convert excitation energy into phonon energy, the efficiency of the nonradiative process is lowered. Low phonon energies in the host lattice prevent this loss, improving the conversion efficiency of incorporated activator ions. The lattice must also be stable under chemical and photochemical conditions, as these are the environments the conversion will take place within. Finally, this host lattice ought to have low symmetry, allowing for a slight relaxation of the Laporte selection rules. The normally forbidden transitions lead to an increase in the f-f intermixing and thus enhancement of the upconversion efficiency.
Other considerations about the host lattice include the choice of cations and anions. Importantly, cations should have similar radii to the intended dopant ions: for example, when using lanthanide dopant ions, certain alkaline-earth (Ca2+), rare-earth (Y3+), and transition-metal ions (Zr4+) all fulfill this requirement, as does Na+. Similarly, the choice of anion is important as it significantly affects the phonon energies and chemical stability. Heavy halides like Cl− and Br− have the lowest phonon energies and so are the least likely to promote nonradiative decay pathways. However, these compounds are generally hygroscopic and thus not suitably stable. Oxides, on the other hand, can be quite stable but have high phonon energies. Fluorides provide a balance between the two, having both stability and suitably low phonon energies. As such, it is evident why some of the most popular and efficient UCNP compositions are NaYF4:Yb/Er and NaYF4:Yb/Tm.
Choice of activator dopant ions is influenced by comparing relative energy levels: The energy difference between the ground state and the intermediate state should be similar to the difference between the intermediate state and the excited emission state. This minimizes non-radiative energy loss and facilitates both absorption and energy transfer. Generally, UCNPs contain some combination of rare-earth elements (Y, Sc, and the lanthanides), such as Er3+, Tm3+, and Ho3+ ions, since they have several levels that follow this "ladder" pattern especially well.
Lanthanide dopants are used as activator ions because they have multiple 4f excitation levels and completely filled 5s and 5p shells, which shield their characteristic 4f electrons, thus producing sharp f-f transition bands. These transitions provide substantially longer lasting excited states, since they are Laporte forbidden, thus allowing longer time necessary for the multiple excitations required for upconversion.
The concentration of activator ions in UCNPs is also critically important, as this determines the average distance between the activator ions and therefore affects how easily energy is exchanged. If the concentration of activators is too high and energy transfer too facile, cross-relaxation may occur, reducing emission efficiency.
Efficiency of UCNPs doped with only activators is usually low, due to their low absorption cross section and necessarily low concentration. Sensitizer ions are doped into the host lattice along with the activator ions in UCNPs to facilitate energy transfer upconversion. The most commonly used sensitizer ion is trivalent Yb3+. This ion provides a much larger absorption cross-section for incoming near-IR radiation, while only displaying a single excited 4f state. Since the energy gap between the ground level and this excited state matches well with the "ladder" gaps in the common activator ions, resonant energy transfer occurs between the two dopant types.
Typical UCNPs are doped with approximately 20 mol% sensitizer ions and less than 2 mol% activator ions. These concentrations allow adequate distance between activators, avoiding cross-relaxation, and still absorb enough excitation radiation through the sensitizers to be efficient. Currently, other types of sensitizers are being developed to increase the spectral range available for upconversion, such as semi-conductor nanocrystal-organic ligand hybrids.
Synthesis
UCNP synthesis focuses on controlling several aspects of the nanoparticles – the size, shape, and phase. Control over each of these aspects may be achieved through different synthetic pathways, of which co-precipitation, hydro(solvo)thermal, and thermolysis are the most common. Different synthetic methods have different advantages and disadvantages, and the choice of synthesis must balance simplicity/ease of process, cost, and ability to achieve desired morphologies. Generally, solid-state synthesis techniques are the easiest for controlling the composition of the nanoparticles, but not the size or surface chemistry. Liquid-based syntheses are efficient and typically better for the environment.
Co-precipitation is the simplest and most economical method, in which components of the nanocrystal are mixed together in solution and allowed to precipitate. This method yields nanoparticles with a narrow size distribution (around 100 nm), but without the precision of more intricate methods, thereby requiring more post-synthesis work-up. The NPs can be improved with an annealing step at high temperatures, but this often leads to aggregation, limiting applications. Common co-precipitation-synthesized NPs include rare-earth-doped NaYF4 nanoparticles prepared in the presence of ethylenediaminetetraacetic acid (EDTA) and LaYbEr prepared in NaF and organic phosphates (capping ligands).
Hydro(solvo)thermal, also known as hydrothermal/solvothermal, methods are implemented in sealed containers at higher temperatures and pressures in an autoclave. This method allows precise control over shape and size (monodisperse), but at the cost of long synthesis times and the inability to observe growth in real-time. More specialized techniques include sol-gel processing (hydrolysis and polycondensation of metal alkoxides), and combustion (flame) synthesis, which are rapid, non-solution phase pathways. Efforts to develop water-soluble and "green" total syntheses are also being explored, with the first of these methods implementing polyethylenimine (PEI)-coated nanoparticles.
Thermal decomposition uses high-temperature solvents to decompose molecular precursors into nuclei, which grow at roughly the same rate, yielding high-quality, monodisperse NPs. Growth is guided by precursor decomposition kinetics and Ostwald ripening, allowing for fine control over particle size, shape and structure by temperature and by reactant addition and identity.
Molecular mass
For many chemical and biological applications, it is useful to quantify the concentration of upconversion nanoparticles in terms of molecular mass. For this purpose, each nanoparticle can be considered a macromolecule. To calculate the molecular mass of a nanoparticle, the size of the nanoparticle, the size and shape of the unit cell structure, and the unit cell elemental composition must be known. These parameters can be obtained from transmission electron microscopy and X-ray diffraction respectively. From this, the number of unit cells in a nanoparticle, and thus the total mass of the nanoparticle, can be estimated.
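A minimal sketch of that estimate is given below, assuming a spherical particle and illustrative unit-cell parameters loosely modeled on a hexagonal β-NaYF4-like cell (roughly 0.11 nm³ per cell and about 280 g/mol of formula mass per cell); all numbers are assumptions for illustration, not measured values.

```python
import math

# Minimal sketch: estimate the molecular mass of a spherical upconversion
# nanoparticle from its diameter and unit-cell parameters.
# The example inputs below are illustrative assumptions, not measured values.

def nanoparticle_molar_mass(diameter_nm, cell_volume_nm3, cell_mass_g_mol):
    """Return the estimated mass of one nanoparticle in g/mol (Da)."""
    particle_volume = (math.pi / 6.0) * diameter_nm ** 3   # sphere volume, nm^3
    n_cells = particle_volume / cell_volume_nm3            # unit cells per particle
    return n_cells * cell_mass_g_mol                       # g/mol per particle

# Example: a 25 nm particle with a ~0.11 nm^3 hexagonal cell of ~280 g/mol
mass_da = nanoparticle_molar_mass(25.0, 0.11, 280.0)
print(f"~{mass_da:.2e} Da per nanoparticle")   # on the order of 10^7 Da
```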
Post-synthetic modification
As the size of the crystal decreases, the ratio of surface area to volume increases dramatically, exposing dopant ions to quenching by surface impurities, ligands, and solvents. Therefore, nano-sized particles are inferior to their bulk counterparts in upconversion efficiency. Experimental investigation reveals the dominant role of ligands in the non-radiative relaxation process. There are several ways to increase the efficiencies of upconverting nanoparticles. These include shell growth, ligand exchange and bilayer formation.
It has been shown that the introduction of an inert shell of a crystalline material around each doped NP serves as an effective way to isolate the core from the surrounding and surface deactivators, thus increasing upconverting efficiency. For example, 8 nm NaYF4 Yb3+/Tm3+ UCNPs coated with a 1.5 nm thick NaYF4 shell, show 30-fold enhancement of the upconverting luminescence. The shell can be grown epitaxially using two general approaches: i) using molecular precursors; ii) using sacrificial particles (see Ostwald ripening). Moreover, a critical thickness of the shell for the emission enhancement may exist that serves as a design factor.
The molecular precursor of the shell material is mixed with the core particles in high-boiling solvents such as oleic acid and octadecene and the resultant mixture is heated to 300 °C to decompose the shell precursor. The shell tends to grow epitaxially on the core particles. Since the host matrix of the core and the shell are of similar chemical composition (to achieve uniform epitaxial growth), there is no contrast difference between the corresponding TEM images before and after the shell growth. Consequently, the possibility of the alloy instead of the core–shell formation cannot be easily excluded. However, it is possible to distinguish between the two scenarios using X-ray photoelectron spectroscopy (XPS).
Ligand exchange
As-synthesized UCNPs are usually capped with organic ligands that aid in size and shape control during preparation. These ligands make their surface hydrophobic, so the particles are not dispersible in aqueous solution, preventing biological applications. One simple method to increase solubility in aqueous solvents is direct ligand exchange, in which a more favored ligand replaces the initial ones. The hydrophobic native ligand capping the NP during synthesis (usually a long-chain molecule like oleic acid) is directly substituted with a more polar, hydrophilic one. The new ligands are usually multi-chelating (e.g. polyethylene glycol (PEG)-phosphate, polyacrylic acid) and hence provide better stabilization and binding, driving the exchange. A shortcoming of this method is the slow kinetics associated with the exchange. Generally the new ligand is also functionalized with a group like thiol that allows facile binding to the NP surface. The protocol for direct exchange is simple, generally involving mixing for an extended period of time, but the work-up can be tedious, conditions must be optimized for each system, and aggregation may occur. A two-step ligand exchange, involving removal of the original ligands followed by coating with hydrophilic ones, is a better method. The ligand removal step has been reported in various ways; a simple one is to wash the particles with ethanol under ultrasonic treatment. Reagents like nitrosonium tetrafluoroborate or acids can also be used to strip the native ligands off the NP surface so that favorable ones can be attached later. This method shows less tendency for NP aggregation than direct exchange and can be generalized to other types of nanoparticles.
Formation of bilayer
Another method involves coating the UCNP in long amphiphilic alkyl chains to create a pseudo-bilayer. The hydrophobic tails of the amphiphiles are inserted in between the oleate ligands on the surface of the NP, leaving the hydrophilic heads to face outwards. Phospholipids have been used for this purpose with great success, as they are readily engulfed by biological cells. Using this strategy, surface charge is easily controlled by choosing the second layer, and some functionalized molecules can be loaded onto the outer layer. Both surface charge and surface functional groups are important in the bioactivity of nanoparticles. A cheaper strategy for making a lipid bilayer coating is to use amphiphilic polymers instead of amphiphilic molecules.
Applications
Bioimaging
Bioimaging with UCNPs involves using a laser to excite the UCNPs within a sample and then detecting the emitted, frequency-doubled light. UCNPs are advantageous for imaging due to their narrow emission spectra, high chemical stability, low toxicity, weak autofluorescence background, long luminescence lifetime, and high resistance to photoquenching and photobleaching. In comparison to traditional biolabels, which use Stokes-shift processes and require high photon energies, UCNPs utilize an anti-Stokes mechanism that allows for the use of lower energy, less damaging and more deeply penetrating light.
Multimodal imaging agents combine multiple modes of signal reporting. UCNPs with Gd3+ or Fe2O3 can serve as luminescent probes and MRI contrast agents. UCNPs are also used in the configuration of photoluminescence and X-ray computed tomography (CT), and trimodal UCNPs combining photoluminescence, X-ray CT, and MRI have also been prepared. By taking advantage of the attractive interaction between fluoride and lanthanide ions, UCNPs can be used as imaging agents based on single-photon emission computed tomography (SPECT), helping to image lymph nodes and to assist in staging for cancer surgery. UCNPs as targeted fluorophores and conjugated with ligands form over-expressed receptors on malignant cells, serving as a photoluminescence label to selectively image cells. UCNPs have also been used in functional imaging, such as the targeting of lymph nodes and the vascular system to assist in cancer surgeries.
UCNPs enable multiplexed imaging by dopant modulation, shifting emission peaks to wavelengths that can be resolved. Single-band UCNPs conjugated to antibodies are used in detecting breast cancer cells, surpassing traditional fluorophore labeling of antibodies, which is not amenable to multiplexed analysis.
Biosensors and temperature sensors
One common sensing scheme utilizes a photoinduced electron transfer mechanism. UCNPs have been used as nanothermometers to detect intracellular temperature differences. (NaYF4: 20% Yb3+, 2% Er3+)@NaYF4 core–shell structured hexagonal nanoparticles can measure temperatures in the physiological range (25 °C to 45 °C) with a precision better than 0.5 °C in HeLa cells.
UCNPs can be made into much more versatile biosensors by combining them with recognition elements like enzymes or antibodies. Intracellular glutathione was detected using UCNPs modified with MnO2 nanosheets. MnO2 nanosheets quench UCNP luminescence, and glutathione was observed to selectively restore this luminescence through reduction of MnO2 to Mn2+. NaYF4: Yb3+/Tm3+ nanoparticles with SYBR Green I dye can probe Hg2+ in vitro with a detection limit of 0.06 nM. Hg2+ and other heavy metals have been measured in live cells. The tunable and multiplexed emissions allow for the simultaneous detection of different species.
Drug release and delivery
There are three ways to construct UCNP-based drug delivery systems. First, UCNPs can transport hydrophobic drugs, like doxorubicin, by encapsulating them in the hydrophobic pocket at the particle surface; the drug can then be released by a pH change. Second, mesoporous silica-coated UCNPs can be used, where drugs are stored in and released from the porous surface. Third, the drug can be encapsulated in and transferred within a hollow UCNP shell.
Light-activated processes that deliver or activate medicine are known as photodynamic therapy (PDT). Many photoactive compounds are triggered by UV light, which has a smaller penetration depth and causes more tissue damage than IR light. UCNPs can be used to locally trigger UV-activated compounds when irradiated with benign IR irradiation. For instance, UCNPs can absorb IR light and emit visible light to trigger a photosensitizer, which can produce highly reactive singlet oxygen to destroy tumor cells. This non-toxic and effective approach has been demonstrated both in vitro and in vivo. Similarly, UCNPs can be used in photothermal therapy, which destroys targets by heat. In UCNP-plasmonic nanoparticle composites (e.g. NaYF4:Yb Er@Fe3O4@Au17), the UCNPs target tumor cells and the plasmonic nanoparticles generate heat to kill cancer cells.
UCNPs have been integrated into solar panels to broaden the spectrum of sunlight that can be captured and converted into electricity. The maximum output of a solar cell is dictated in part by the fraction of incident photons captured to promote electrons. Solar cells can only absorb and convert photons with energy equal to or greater than the bandgap. Any incident photon with energy smaller than the bandgap is lost.
UCNPs can capture this wasted sunlight by combining multiple low energy IR photons into a single high energy photon. The emitted photon will have sufficient energy to promote charge carriers across the band gap.
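As an illustrative calculation (the wavelengths and cell type here are typical values, not figures taken from a specific study), the photon energy is

\[ E = \frac{hc}{\lambda} \approx \frac{1240\ \text{eV·nm}}{\lambda}. \]

A crystalline silicon cell with a band gap of about 1.1 eV therefore cannot use photons beyond roughly 1100 nm. Photons at 1520 nm, of the kind absorbed by Er3+-based upconverters, carry only ≈0.82 eV each, but two of them combined into one upconverted photon at 980 nm (≈1.27 eV) exceed the band gap and can be absorbed by the cell.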
UCNPs can be integrated into solar cell systems of a number of different classes and in multiple forms. For example, UCNPs can be laminated onto the back sides of semiconductors as a film, to collect low energy light and upconvert it. Such a treatment generated a 37% efficiency for upconverted light.
Another strategy is to disperse the nanoparticles throughout a highly porous material. In one device architecture, UCNPs are infiltrated into a titania micro-scaffold, and more titania is then added to embed the UCNPs. UCNPs have also been used in dye-sensitized solar cells.
Super-resolution imaging
Lanthanide-doped upconversion nanocrystals have emerged as promising alternatives to traditional super-resolution imaging probes such as organic dyes and quantum dots, primarily because of their high photostability and unique nonlinear optical processes. For example, upconversion nanocrystals have been used to achieve high-resolution imaging in STED microscopy. This technique involves exciting the fluorescent probe with an excitation laser, followed by de-excitation through stimulated emission using a depletion laser. By using a doughnut-shaped depletion laser, the point spread function (PSF) is effectively compressed, overcoming the diffraction barrier and enabling super-resolution imaging.
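In general terms (a standard STED relation, not specific to upconversion probes), the achievable resolution scales with the depletion intensity I as

\[ d \approx \frac{\lambda}{2\,\mathrm{NA}\,\sqrt{1 + I/I_{\mathrm{sat}}}} \]

where λ is the wavelength, NA is the numerical aperture of the objective and I_sat is the saturation intensity of the probe; a lower saturation intensity allows stronger PSF compression at a given depletion power, which is one reason long-lived lanthanide emitters are attractive for this technique.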
Additionally, the exploration of photon avalanche materials with ultrahigh nonlinearity for single-beam super-resolution imaging further highlights the potential of these advanced nanomaterials in pushing the boundaries of optical imaging techniques. The unique properties of upconversion nanocrystals allow for the realization of super-resolution imaging with sub-70-nm spatial resolution, achieved through simple scanning confocal microscopy without the need for complex computational analysis.
Upconversion lasing
In contrast to organic molecules and quantum dots, lanthanide ions exhibit complex excited states and significantly longer luminescence lifetimes. This characteristic makes it easier to achieve population inversion, a crucial requirement for lasing, when using lanthanide-activated gain materials. Miniaturized lasers are employed as a platform for producing coherent light for various sensing and imaging applications. Lanthanide-doped upconversion nanocrystals have been effectively utilized to achieve UV-to-NIR lasing within microcavities. Remarkably, this is achieved with a pumping threshold below 100 W cm−2 using a continuous wave (CW) pumping laser source at room temperature.
Upconversion optogenetics
Membrane ion channels play a crucial role in various biological systems by facilitating the propagation and integration of electrical signals. Upconversion nanocrystals have emerged as nano-illuminators capable of controlling the activity of specific membrane ion channels. This capability is particularly valuable for in vivo applications, where the low attenuation of near-infrared (NIR) light in biological tissues enables precise and minimally invasive control of ion channel activity. One application involves embedding upconversion nanocrystals with strong blue emission into polymeric scaffolds. This approach enables optogenetic control of neuron cell activities on the scaffold surface when excited with 980-nm NIR light. Moreover, the utilization of upconversion nanocrystal-mediated optogenetics has enabled the stimulation of deep-brain neurons in mouse brains. This technique has been proven effective in eliciting dopamine release from genetically modified neurons and inducing brain oscillations through the activation of inhibitory neurons. Furthermore, upconversion optogenetics has shown promise in suppressing seizures by inhibiting excitatory cells in the hippocampus and in eliciting memory recall.
Mid-infrared detection
When exposed to MIR radiation, the emission band intensity ratio of lanthanide-based nanotransducers can be modulated. This modulation converts MIR radiation into signals in the visible (VIS) and near-infrared (NIR) regions, allowing for real-time detection and imaging using silicon photodetectors.
Photoswitching
Photoswitching is the conversion from one chemical isomer to another triggered by light. Photoswitching finds use in optical data processing and storage and in photorelease. Photorelease is the use of light to induce a moiety attached to the nanoparticle surface to detach. UCNPs of lanthanide-doped NaYF4 have been applied as remote control photoswitches. UCNPs are useful photoswitches because they can be irradiated with low-cost NIR radiation and convert it into UV radiation extremely locally.
Photocatalytic systems can be enhanced with UCNPs by the same principle as solar cells. In titania coated with YF3:Yb/Tm UCNPs, degradation of pollutants was observed under NIR radiation. Normally low-energy NIR radiation cannot induce photocatalysis in titania, which has a band gap in the UV range. The excitation in titania results in a surface redox reaction which decomposes compounds near the surface. UCNPs enable cheap low-energy NIR photons to replace expensive UV photons.
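As an illustrative estimate (the numbers here are typical literature values rather than figures from the cited work): anatase titania has a band gap of roughly 3.2 eV, corresponding to wavelengths shorter than about 390 nm. A single 980 nm photon carries only ≈1.27 eV, but Yb3+/Tm3+ upconversion can combine four or five such photons into UV emission near 345–360 nm,

\[ E_{\text{UV}} \approx \frac{1240\ \text{eV·nm}}{345\text{–}360\ \text{nm}} \approx 3.4\text{–}3.6\ \text{eV}, \]

which is energetic enough to excite the titania and drive the surface redox chemistry.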
In biological contexts UV light is strongly absorbed and causes tissue damage, whereas NIR light is weakly absorbed and can still excite UCNPs in vivo. Core–shell UCNPs have been used to initiate the photocleavage of a ruthenium complex using an intensity of NIR light within safe limits for biomedical use.
UCNP-based systems can couple light-based and current-based techniques: optical stimulation of a semiconductor is combined with voltage-based stimulation in order to store information. Further advantages of UCNPs for flash memory are that all materials employed are photo- and thermally stable, and that imperfections in the UCNP film do not affect data storage. These advantages have yielded a high storage capacity, making UCNP films a promising material for optical data storage.

UCNPs can also be applied in niche display and printing applications. Anti-counterfeiting codes or prints can be fabricated by adding UCNPs to existing colloidal ink preparations, and flexible, transparent displays have been fabricated using UCNPs. Security inks incorporating lanthanide-doped upconverting nanoparticles are invisible until subjected to NIR light; red, green and blue upconverting inks have been achieved. Because the color produced by overlapping inks depends on the power density of the NIR excitation, additional security features can be incorporated.
The use of upconverting nanoparticles in fingerprinting is highly selective: the nanoparticles can bind to lysozyme in sweat deposited when a fingertip touches a surface. A cocaine-specific aptamer has also been developed to identify cocaine-laced fingerprints by the same method. Upconverting nanoparticles can also be used for barcoding. These micro-barcodes can be embedded in various objects; the barcodes are revealed under NIR illumination and can be imaged using an iPhone camera and a microscope objective.
References
Light
Fluorescence
Nanoparticles by physical property | Upconverting nanoparticles | [
"Physics",
"Chemistry"
] | 7,371 | [
"Physical phenomena",
"Luminescence",
"Fluorescence",
"Spectrum (physical sciences)",
"Electromagnetic spectrum",
"Waves",
"Light"
] |
52,502,945 | https://en.wikipedia.org/wiki/Calthemite | Calthemite is a secondary deposit, derived from concrete, lime, mortar or other calcareous material outside the cave environment. Calthemites grow on or under man-made structures and mimic the shapes and forms of cave speleothems, such as stalactites, stalagmites, flowstone etc. Calthemite is derived from the Latin calx (genitive calcis) "lime" + Latin < Greek théma, "deposit" meaning ‘something laid down’, (also Mediaeval Latin thema, "deposit") and the Latin –ita < Greek -itēs – used as a suffix indicating a mineral or rock. The term "speleothem", due to its definition (spēlaion "cave" + théma "deposit" in ancient Greek) can only be used to describe secondary deposits in caves and does not include secondary deposits outside the cave environment.
Origin and composition
Degrading concrete has been the focus of many studies, and the most obvious sign of degradation is calcium-rich leachate seeping from a concrete structure.
Calthemite stalactites can form on concrete structures and "artificial caves" lined with concrete (e.g. mines and tunnels) significantly faster than those in limestone, marble or dolomite caves. This is because the majority of calthemites are created by chemical reactions which are different from normal "speleothem" chemistry.
Calthemites are usually the result of hyperalkaline solution (pH 9–14) seeping through a calcareous man-made structure until it comes into contact with the atmosphere on the underside of the structure, where carbon dioxide (CO2) from the surrounding air facilitates the reactions to deposit calcium carbonate as a secondary deposit. CO2 is the reactant (diffuses into solution) as opposed to speleothem chemistry where CO2 is the product (degassed from solution).
It is most likely that the majority of the calcium carbonate (CaCO3) creating calthemites in shapes which mimic speleothems is precipitated from solution as calcite, as opposed to the other, less stable, polymorphs aragonite and vaterite.
Calthemites are generally composed of calcium carbonate (CaCO3), which is predominantly coloured white, but may be coloured red, orange or yellow by iron oxide (from rusting steel reinforcement) transported by the leachate and deposited along with the CaCO3. Copper oxide from copper pipes may cause calthemites to be coloured green or blue. Calthemites may also contain minerals such as gypsum.
The definition of calthemites also includes secondary deposits which may occur in manmade mines and tunnels with no concrete lining, where the secondary deposit is derived from limestone, dolomite or other calcareous natural rock into which the cavity has been excavated. In this instance the chemistry is the same as that which creates speleothems in natural limestone caves (equations 5 to 8 below). It has been suggested that the deposition of calthemite formations is one example of a natural process which did not occur prior to the human modification of the Earth's surface, and therefore represents a unique process of the Anthropocene.
Chemistry and pH
The way stalactites form on concrete is due to different chemistry than those that form naturally in limestone caves and is the result of the presence of calcium oxide (CaO) in cement. Concrete is made from aggregate, sand and cement. When water is added to the mix, the calcium oxide in the cement reacts with water to form calcium hydroxide (Ca(OH)2), which under the right conditions can further dissociate to form calcium (Ca2+) and hydroxide (OH−) ions []. All of the following chemical reactions are reversible and several may occur simultaneously at a specific location within a concrete structure, influenced by leachate solution pH.
The chemical formula is:
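Based on the surrounding description, this is presumably the hydration of calcium oxide followed by partial dissociation of the resulting calcium hydroxide:

\[ \mathrm{CaO + H_2O \rightarrow Ca(OH)_2 \rightleftharpoons Ca^{2+} + 2\,OH^{-}} \]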
Calcium hydroxide will readily react with any free CO2 to form calcium carbonate (CaCO3) []. The solution is typically pH 9–10.3; however, this will depend on what other chemical reactions are occurring at the same time within the concrete.
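Written out, the reaction described here would be:

\[ \mathrm{Ca(OH)_2 + CO_2 \rightarrow CaCO_3 + H_2O} \]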
This reaction occurs in newly poured concrete as it sets, to precipitate CaCO3 within the mix, until all available CO2 in the mix has been used up. Additional CO2 from the atmosphere will continue to react, typically penetrating just a few millimetres from the concrete surface. Because the atmospheric CO2 cannot penetrate very far into the concrete, there remains free Ca(OH)2 within the set (hard) concrete structure.
Any external water source (e.g. rain or seepage) which can penetrate the micro cracks and air voids in set concrete will readily carry the free Ca(OH)2 in solution to the underside of the structure. When the Ca(OH)2 solution comes in contact with the atmosphere, CO2 diffuses into the solution drops and over time the reaction [] deposits calcium carbonate to create straw shaped stalactites similar to those in caves.
This is where the chemistry becomes more complicated, due to the presence of soluble potassium and sodium hydroxides in new concrete, which support a higher solution alkalinity of about pH 13.2–13.4. At this pH the predominant carbon species is CO32− and the leachate becomes saturated with Ca2+. The following chemical formulae [Equations & ] will most likely be occurring, with [] responsible for the deposition of CaCO3 to create stalactites under concrete structures.
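From the description above, the reactions in question are presumably the conversion of dissolved CO2 to carbonate by hydroxide ions, followed by precipitation of calcium carbonate:

\[ \mathrm{CO_2 + 2\,OH^{-} \rightarrow CO_3^{2-} + H_2O} \]

\[ \mathrm{Ca^{2+} + CO_3^{2-} \rightarrow CaCO_3} \]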
As the soluble potassium and sodium hydroxides are leached out of the concrete along the seepage path, the solution pH will fall to pH ≤12.5. Below about pH 10.3, the more dominant chemical reaction will become []. The leachate solution pH influences which dominant carbonate species (ions) are present, so at any one time there may be one or more different chemical reactions occurring within a concrete structure.
In very old lime, mortar or concrete structures, possibly tens or hundreds of years old, the calcium hydroxide (Ca(OH)2) may have been leached from all the solution seepage paths and the pH could fall below pH 9. This could allow a similar process to that which creates speleothems in limestone caves [Equations to ] to occur. Hence, CO2-rich groundwater or rainwater would form carbonic acid (H2CO3) (≈pH 7.5–8.5) and leach Ca2+ from the structure as the solution seeps through the old cracks []. This is more likely to occur in thin-layered concrete such as that sprayed inside vehicle or railway tunnels to stabilise loose material. If [] is depositing the CaCO3 to create calthemites, their growth will be at a much slower rate than for [Equations and ], as the weakly alkaline leachate has a lower Ca2+ carrying capacity compared to hyperalkaline solution. CO2 is degassed from solution as CaCO3 is deposited to create the calthemite stalactites. An increased CO2 partial pressure (PCO2) and a lower temperature can increase the HCO3− concentration in solution and result in a higher Ca2+ carrying capacity of the leachate; however, the solution will still not attain the Ca2+ carrying capacity of [Equations to ].
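In outline, the speleothem-type chemistry described here presumably corresponds to the sequence of carbonic acid formation, dissolution of calcium carbonate, and its re-deposition as CO2 degasses:

\[ \mathrm{H_2O + CO_2 \rightleftharpoons H_2CO_3} \]

\[ \mathrm{CaCO_3 + H_2CO_3 \rightarrow Ca^{2+} + 2\,HCO_3^{-}} \]

\[ \mathrm{Ca^{2+} + 2\,HCO_3^{-} \rightarrow CaCO_3 + H_2O + CO_2} \]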
The reactions [Equations to ] could be simplified to that shown in [], although the presence of carbonic acid (H2CO3) and other species is then omitted. The chemical formula [] is usually quoted as creating "speleothems" in limestone caves; however, in this instance the weak carbonic acid is leaching calcium carbonate (CaCO3) previously precipitated (deposited) in the old concrete and degassing CO2 to create calthemites.
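The simplified overall equation referred to is presumably the familiar reversible reaction

\[ \mathrm{CaCO_3 + H_2O + CO_2 \rightleftharpoons Ca^{2+} + 2\,HCO_3^{-}} \]

with deposition of CaCO3 occurring as the equilibrium shifts back to the left when CO2 degasses from the solution.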
If the leachate finds a new path through micro cracks in old concrete, this could provide a new source of calcium hydroxide (Ca(OH)2), which can change the dominant reaction back to []. The chemistry of concrete degradation is quite complex, and only the chemistry relating to calcium carbonate deposition is considered in [Equations to ]. Calcium is also part of other hydration products in concrete, such as calcium aluminium hydrates and calcium aluminium iron hydrate. The chemical reactions [Equations to ] are responsible for creating the majority of calthemite stalactites, stalagmites, flowstone etc., found on manmade concrete structures.
Maekawa et al., (2009) p. 230, provides an excellent graph showing the relationship between equilibrium of carbonic acids (H2CO3, HCO3− and CO32−) and pH in solution. Carbonic acid includes both carbonates and bicarbonates. The graph provides a good visual aid to understanding how more than one chemical reaction may be occurring at the same time within concrete at a specific pH.
Leachate solutions creating calthemites can typically attain a pH between 10–14, which is considered a strong alkaline solution with the potential to cause chemical burns to eyes and skin – dependent on concentration and contact duration.
Unusual occurrences
There are a few unusual circumstances where speleothems have been created in caves as a result of hyperalkaline leachate, with the same chemistry as occurs in [Equations to ]. This chemistry can occur when there is a source of concrete, lime, mortar or other manmade calcareous material located above a cave system and the associated hyperalkaline leachate can penetrate into the cave below. An example can be found in the Peak District – Derbyshire, England where pollution from 19th century industrial lime production has leached into the cave system below (e.g. Poole's Cavern) and created speleothems, such as stalactites and stalagmites.
CaCO3 deposition and stalactite growth
The growth rates of calthemite stalactite straws, stalagmites, flowstone etc., are very much dependent on the supply rate and continuity of the saturated leachate solution at the location of CaCO3 deposition. The concentration of atmospheric CO2 in contact with the leachate also has a large influence on how quickly the CaCO3 can precipitate from the leachate. Evaporation of the leachate solution and the ambient atmospheric temperature appear to have minimal influence on the CaCO3 deposition rate.
Calthemite straw stalactites precipitated (deposited) from hyperalkaline leachate have the potential to grow up to ≈200 times faster than normal cave speleothems precipitated from near neutral pH solution. One calthemite soda straw has been recorded as growing 2 mm per day over several consecutive days, when the leachate drip rate was a constant 11 minutes between drips. When the drip rate is more frequent than one drop per minute, there is no discernible deposition of CaCO3 at the tip of the stalactite (hence no growth) and the leachate solution falls to the ground where the CaCO3 is deposited to create a calthemite stalagmite. If the leachate supply to the stalactite straw's tip reduces to a level where the drip rate is greater than approximately 25 to 30 minutes between drops, there is a chance that the straw tip will calcify over and block up. New straw stalactites can often form next to a previously active, but now dry (dormant) straw, because the leachate has simply found an easier path through the micro cracks and voids in the concrete structure.
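As an illustrative calculation from the figures quoted above: a constant drip interval of 11 minutes corresponds to roughly

\[ \frac{24 \times 60\ \text{min}}{11\ \text{min per drop}} \approx 131\ \text{drops per day}, \]

so a growth rate of 2 mm per day implies that each drop adds, on average, only about 15 μm of new straw length.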
Despite both being composed of calcium carbonate, calthemite straws are on average just 40% of the mass per unit length of speleothem straws of equivalent external diameter. This is due to the different chemistry involved in creating the straws. Calthemite straws have thinner walls and a less dense calcium carbonate structure than speleothem straws.
Calthemite straws can vary in outside diameter as they grow in length. Changes in diameter can take a matter of days or weeks and are due to changes in drip rate over time. A slow dripping calthemite straw tends to be slightly larger in diameter than a fast-dripping straw.
Calcite rafts on solution drops
Calcite rafts were first observed by Allison in 1923 on solution drops attached to concrete-derived straw stalactites, and later by Ver Steeg. When the drip rate is ≥5 minutes between drops, calcium carbonate precipitates on the surface of the solution drop (at the end of a stalactite) to form calcite rafts visible to the naked eye (up to 0.5 mm across). If the drip rate is greater than ≈12 minutes between drops, and there is very little air movement, these rafts may join up and become a latticework of calcite rafts covering the drop surface. Significant air movement will cause the rafts to become scattered and spin turbulently around the drop's surface. This turbulent movement can cause some rafts to break free of the drop's surface tension and be pushed onto the outside of the straw stalactite, thus increasing the outside diameter and creating minute irregularities.
Stalagmites
If the drip rate is quicker than one drop per minute, most of the CaCO3 will be carried to the ground, still in solution. The leachate solution then has a chance to absorb CO2 from the atmosphere (or degas CO2 depending on reaction) and deposit the CaCO3 on the ground as a stalagmite.
In most locations within manmade concrete structures, calthemite stalagmites only grow to a maximum of a few centimetres high, and look like low rounded lumps. This is because of the limited supply of CaCO3 from the leachate seepage path through the concrete and the amount which reaches the ground. Their location may also inhibit their growth due to abrasion from vehicle tyres and pedestrian traffic.
Rimstone or gours
Calthemite rimstone or gours can form beneath concrete structures on a floor with a gradually sloping surface or on the sides of rounded stalagmites. When the leachate drip rate is more frequent than 1 drop per minute, most of the calcium carbonate is carried by the leachate from the underside of the concrete structure to the ground, where stalagmites, flowstone and gours are created. The leachate that does reach the ground usually evaporates quickly due to air movement beneath the concrete structure, hence micro-gours are more common than larger gours. In locations where the deposition site is subject to abrasion by vehicle tyres or pedestrian traffic, the chance of micro-gours forming is greatly reduced.
Coralloids
Calthemite coralloids (also known as popcorn) can form on the underside of concrete structures and look very similar to those which occur in caves. Coralloids can form by a number of different mechanisms in caves; on concrete, however, the most common form is created when hyperalkaline solution seeps from fine cracks in concrete. Due to evaporation of the solution, deposition of calcium carbonate occurs before any drop can form. The resulting coralloids are small and chalky with a cauliflower appearance.
See also
References
External links
Smith, G.K. (2016), "Calcite straw stalactites growing from concrete structures", Cave and Karst Science 43(1), pp 4–10
Calcite rafts can be seen spinning around a solution drop surface (YouTube video)
Small rafts have joined up to form a latticework of rafts on a calthemite soda straw drop (YouTube video)
B. Schmidkonz, "Watch a dripstone grow", J. Chem. Educ., 94 (2017) 1492–1497
Calcium minerals
Carbonate minerals
Concrete
Inorganic chemistry
Speleothems
Corrosion | Calthemite | [
"Chemistry",
"Materials_science",
"Engineering"
] | 3,306 | [
"Structural engineering",
"Metallurgy",
"Corrosion",
"Electrochemistry",
"nan",
"Concrete",
"Materials degradation"
] |