Powder coating on glass

Powder coating on glass is a specialized procedure related to traditional powder coating, which is the technique of applying electrostatically charged, dry powdered particles of pigment and resin to a solid item's surface. It requires its own unique process, however, because glass is a poor electrical conductor in comparison to metal, the traditional powder coating substrate.
Markets for Glass Applications
Powder coating on glass is used in industries such as cosmetics, fragrances, wine and spirits, where the contents of glass containers require protection from ultraviolet (UV) rays, particularly UVA electromagnetic radiation, which is capable of penetrating glass. When applied with a dual-coat method, powder coating techniques on glass provide an opaque shield against the light's effects.
Cleaning Preparation
Powder coating on glass requires specialized equipment. The biggest challenge is getting the powder to adhere to the glass surface, since there is no natural electrostatic attraction as there is with metals.
A clean glass surface that will not interfere with the process is essential before beginning the powder coating procedure. Washing to remove oil, dirt and grease can be accomplished with solvents, wipes or a traditional wash system. Proper temperature control is critical from the very beginning, including during the preparation stage. Certain temperature ranges are recommended, but they currently remain proprietary to the companies that pioneered the technique.
The Coating Process
After cleaning, an opaque base coat of powder is applied to the glass substrate as the initial, most important layer of UV protection. Once the powder has been attracted to the surface, the product is heated to activate the process of gelling, which secures the adhesive bond. It is crucial to control the amount of powder that goes onto the surface. With too little, the coating becomes transparent and the protection is diminished. Too much can create a dripping effect or uneven dispersal, leaving one side of the glass container more heavily coated than the other. In the case of powder coating nail polish and other cosmetics bottles, experienced powder coaters typically use a highly chemical-resistant form of powder, which makes the coating impervious to the aggressive chemicals inherent in polish and primer.
As more heat is applied, the powder coater adds the top coat, which flows together with the base coat. Oven curing follows, and the two coats become one, locking themselves together and encapsulating the bottle or container as a singular protective casing. Not only should this process effectively block out UV rays, but the molecular structure of the powder should provide added chip resistance and scratch resistance to the bottle.
Generally speaking, the transfer of powdered paint to a glass substrate can be broken into four specific phases. Assuming the object is properly cleaned, this includes: 1) Attraction – achieving the electrostatic charge; 2) Gelling – transforming the powder from dry to wet; 3) Flowing – melding or cross-linking the coat applications together for a strong, hardened protective casing; and 4) Curing – heat drying the powder coated product to arrive at its finished form.
Coverage for Different Shapes and Dimensions
It is possible to powder coat a wide variety of glass forms and dimensions, including cylindrical, oval and square shapes, to name just a few. Care must be given toward achieving even coverage, which is accomplished through proper heat control and powder application.
Colors and Textures
Glass will accept an almost limitless number of powder coated colors. Different textures and even metallics can also be applied. Professionals in this field have been able to achieve satisfactory silk screen printing and pad printing on the powder coated glass substrate, including in the case of difficult cylindrical shapes. Glass items compatible with powder coating include bottles and containers, decorative pieces, dinnerware, picture frames and more.
Powder Coating as Green Technology
Powder coating is considered to be an environmentally friendly application. Unlike solvent-based wet paint systems, the process uses no volatile organic compounds (VOCs). In addition, there is no release of chemicals into the air through evaporation, and over-sprayed powder can be recovered and disposed of easily and safely.
Regional Ocean Modeling System

Regional Ocean Modeling System (ROMS) is a free-surface, terrain-following, primitive equations ocean model widely used by the scientific community for a diverse range of applications. The model is developed and supported by researchers at Rutgers University, the University of California, Los Angeles, and contributors worldwide.
ROMS is used to model how a given region of the ocean responds to physical forcings such as heating or wind. It can also be used to model how a given ocean system responds to inputs like sediment, freshwater, ice, or nutrients, requiring coupled models nested within the ROMS framework.
Framework
ROMS is a 4D modeling system. It is a 3-dimensional model (a 2D horizontal grid plus a vertical grid) that can be run over a given period of time, time being the 4th dimension. It is gridded into vertical levels that make up the water column and horizontal cells that make up the 2D Cartesian plane of the model region.
Kernel
Central to the ROMS framework are four models that form what is called the dynamical/numerical core or kernel:
Non-Linear Model kernel (NLM): NLROMS
Perturbation Tangent Linear Model kernel (TLM): TLROMS
Finite-amplitude tangent linear Representer Model kernel (RPM): RPROMS
Adjoint Model kernel (ADM): ADROMS
Vertical grid
The vertical grid is a hybrid stretched grid. It is hybrid in that its stretching intervals fall somewhere between the two extremes of 1) the evenly-spaced sigma grid used by the Princeton Ocean Model and 2) a true z-grid with a static depth interval. The vertical grid can be squeezed or stretched to increase or decrease the resolution for an area of interest, such as a thermocline or bottom boundary layer. Grid stretching in the vertical direction follows bottom topography, allowing for the idealized flow of water over features such as seamounts. The numbering of the vertical grid goes from the bottom waters upward to the air-water interface: the bottom water level is level 1 and the topmost surface water level is the highest number (such as level 20). With a coupled sediment module, the numbering of the sediment seabed levels goes from the sediment-water interface downward: the topmost seabed level is level 1 and the deepest seabed level is the highest number.
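The stretching idea and the bottom-up level numbering can be illustrated with a short Python sketch. The sinh-based stretching function below is a generic example chosen for illustration, not the actual ROMS S-coordinate formulation, and the 200 m water depth and 20 levels are arbitrary:

```python
import numpy as np

def stretched_depths(n_levels=20, depth=200.0, theta=4.0):
    """Return level depths (m, negative downward), numbered bottom-up as in
    ROMS: index 0 is level 1 (bottom), index -1 is the surface level.
    Larger theta concentrates resolution near the surface."""
    sigma = np.linspace(-1.0, 0.0, n_levels)          # -1 = bottom, 0 = surface
    stretched = np.sinh(theta * sigma) / np.sinh(theta)
    return depth * stretched

z = stretched_depths()
print("level 1 (bottom):", round(z[0], 1), "m")
print("level 20 (surface):", round(z[-1], 1), "m")
print("spacing near surface:", round(z[-1] - z[-2], 1), "m")
print("spacing near bottom:", round(z[1] - z[0], 1), "m")
```

With these settings the layer spacing shrinks from roughly 40 m near the bottom to under 2 m near the surface, the kind of refinement one might use to resolve a surface mixed layer.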
Horizontal grid
The horizontal grid is a structured grid, meaning that it has a rectangular 4-sided grid cell structure. The horizontal grid is also an orthogonal curvilinear grid, meaning that it maximizes ocean grid cells of interest and minimizes extra land grid cells. The horizontal grid is also a staggered grid or Arakawa-C grid, where the velocities in the north-south and east-west directions are calculated at the edges of each grid cell, while the values for scalar variables such as density are calculated at the center of each grid cell, known as "rho-points."
Physics
In both the vertical and horizontal directions, the default equations use centered, second-order finite difference schemes. Higher order schemes are available if desired, for example using parabolic spline reconstruction.
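As an illustration, for a field f sampled at uniform grid spacing h, the centered, second-order approximations referred to here take the standard forms (textbook finite-difference results, not specific to ROMS):

```latex
f'(x_i) \approx \frac{f_{i+1} - f_{i-1}}{2h}, \qquad
f''(x_i) \approx \frac{f_{i+1} - 2 f_i + f_{i-1}}{h^2}
```

both with truncation error of order h².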
In general, the physical schemes used by ROMS are based on three governing equations:
Continuity
Conservation of momentum (Navier-Stokes)
Transport equations of tracer variables (such as salinity and temperature)
Equations are coupled to solve for five unknowns at each location in the model grid using numerical solutions (a schematic form of the governing equations is sketched after this list):
East-west velocity (u)
North-south velocity (v)
Vertical velocity (w)
Salinity
Temperature
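A schematic hydrostatic, Boussinesq form of these equations, with forcing and diffusion lumped into generic terms F and D, is given below. This is a simplified sketch of the standard primitive equations, not the exact ROMS formulation:

```latex
\frac{\partial u}{\partial t} + \vec{v}\cdot\nabla u - fv = -\frac{1}{\rho_0}\frac{\partial p}{\partial x} + F_u + D_u \quad \text{(momentum, east-west)}

\frac{\partial v}{\partial t} + \vec{v}\cdot\nabla v + fu = -\frac{1}{\rho_0}\frac{\partial p}{\partial y} + F_v + D_v \quad \text{(momentum, north-south)}

\frac{\partial p}{\partial z} = -\rho g \quad \text{(hydrostatic balance)}

\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} + \frac{\partial w}{\partial z} = 0 \quad \text{(continuity)}

\frac{\partial C}{\partial t} + \vec{v}\cdot\nabla C = F_C + D_C \quad \text{(tracers } C = T, S\text{)}
```

Here f is the Coriolis parameter and ρ₀ a reference density; the vertical velocity w is diagnosed from the continuity equation rather than predicted by its own prognostic equation.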
Source code
ROMS uses an open-access source code: the code can be downloaded from the ROMS website after creating an account and filing a request with the developers. The model is written in Fortran, configured through C preprocessor (CPP) options, and was developed for shared- and distributed-memory parallel computing.
Input and output
Input
Boundaries such as coastlines can be specified for a given region using land- and sea-masking. The top vertical boundary, the air-sea interface, uses an interaction scheme developed by Fairall et al. (1996). The bottom vertical boundary, the sediment-water interface, uses a bottom stress or bottom-boundary-layer scheme developed by Styles and Glenn (2000).
Inputs that are needed for an implementer to run ROMS for a specific ocean region include:
Bathymetry and coastline
Freshwater input
Wind
Tides
Open boundary forcings (either idealized or derived from data, such as a reanalysis product)
Heat flux
Physical mixing (see above)
The programming framework of ROMS is split into three parts: Initialize, Run, and Finalize, which is standard for the Earth System Modeling Framework (ESMF). "Run" is the largest of these three parts, where the user chooses which options they want to use and assimilates data if desired. The model must be compiled, and the run initialized, before it is executed.
Output
The output format of model run files is netCDF. Model output is often visualized using independent tools such as MATLAB or Python. Simple visualization software such as NASA's Panoply Data Viewer can also be used to visualize model output for teaching or demonstration purposes.
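A minimal Python sketch of opening and quick-plotting such output is shown below. The file name is hypothetical, and while the variable name temp and the (ocean_time, s_rho, eta_rho, xi_rho) dimension layout follow common ROMS conventions, they should be checked against the actual output file:

```python
import xarray as xr
import matplotlib.pyplot as plt

# Open a (hypothetical) ROMS history file and list its contents
ds = xr.open_dataset("ocean_his.nc")
print(ds)

# Select the first time record and the topmost vertical level;
# s_rho=-1 is the surface because ROMS numbers levels bottom-up
sst = ds["temp"].isel(ocean_time=0, s_rho=-1)

sst.plot()  # quick-look pseudocolor map of surface temperature
plt.title("Surface temperature, first output record")
plt.show()
```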
User options
The general approach of ROMS gives model implementers a high level of freedom and responsibility. One approach cannot meet the needs of all the diverse applications the model is currently used for. Therefore, it is up to each model implementer (either an individual or a research group) to choose how they want to use each of the available options. Options include choices such as:
Mixing formulations in the horizontal and vertical directions
Vertical grid stretching
Processing mode (serial, parallel with MPI, or parallel with OpenMP)
Debugging turned on or off
When using ROMS, if an implementer runs into a problem or bug, they can report it to the ROMS forum.
Applications
The versatility of ROMS has been proven in its diverse applications to different systems and regions. It is best applied to mesoscale systems, that is, systems that can be resolved at grid spacings of roughly 1 km to 100 km.
Coupled model applications
Biogeochemical, bio-optical, sea ice, sediment, and other models can be embedded within the ROMS framework to study specific processes. These are usually developed for specific regions of the world's oceans but can be applied elsewhere. For example, the sea ice application of ROMS was originally developed for the Barents Sea Region.
ROMS modeling efforts are increasingly being coupled with observational platforms, such as buoys, satellites, and ship-mounted underway sampling systems, to provide more accurate forecasting of ocean conditions.
Regional applications
There is an ever-growing number of applications of ROMS to particular regions of the world's oceans. These integrated ocean modeling systems use ROMS for the circulation component, and add other variables and processes of interest. A few examples are:
Coupled Ocean-Atmosphere-Wave-Sediment Transport (COAWST)
Experimental System for Predicting Shelf and Slope Optics (ESPRESSO)
New York Harbor Observing and Prediction System (NYHOPS)
Chesapeake Bay Estuarine Carbon & Biogeochemistry (ChesROMS ECB)
Climatic indices in the Gulf of Alaska
LiveOcean daily forecast model of the NE Pacific and Salish Sea
The Western Mediterranean OPerational forecasting system (WMOP)
See also
General circulation model (GCM)
Ocean general circulation model (OGCM)
List of ocean circulation models
Climate model
Oceanography
Physical oceanography
Ecological forecasting
External links
ROMS website
ROMS Documentation Portal
DelPhi

DelPhi is a scientific application which calculates electrostatic potentials in and around macromolecules and the corresponding electrostatic energies. It incorporates the effects of ionic strength mediated screening by evaluating the Poisson-Boltzmann equation at a finite number of points within a three-dimensional grid box. DelPhi is commonly used in protein science to visualize variations in electrostatics along a protein or other macromolecular surface and to calculate the electrostatic components of various energies.
Development
One of the main problems in modeling the electrostatic potential of biological macromolecules is that they exist in water at a given ionic strength and that they have an irregular shape. Analytical solutions of the corresponding Poisson-Boltzmann Equation (PBE) are not available for such cases and the distribution of the potential can be found only numerically. DelPhi, developed in Professor Barry Honig's lab in 1986, was the first PBE solver used by many researchers. The widespread popularity of DelPhi is due to its speed, accuracy (calculation of the electrostatic free energy is only slightly dependent on the resolution of the grid) and the ability to handle extremely high grid dimensions.
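In one common dimensionless form, with the potential φ expressed in units of k_BT/e, the equation DelPhi solves on its grid can be written as follows (a standard textbook form; exact scaling conventions vary between references):

```latex
\nabla \cdot \left[ \epsilon(\mathbf{r}) \, \nabla \phi(\mathbf{r}) \right]
- \bar{\kappa}^{2}(\mathbf{r}) \sinh \phi(\mathbf{r})
= -\frac{4 \pi e}{k_B T} \, \rho^{f}(\mathbf{r})
```

where ε(r) is the position-dependent dielectric constant, κ̄(r) is a modified Debye-Hückel screening parameter (proportional to the square root of the ionic strength and zero inside the solute), and ρᶠ(r) is the fixed charge density of the macromolecule. For small potentials sinh φ ≈ φ, which yields the linearized PBE; DelPhi discretizes either form on its three-dimensional grid and solves the resulting finite-difference system iteratively.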
Features
Additional features such as assigning different dielectric constants to different regions of space, smooth Gaussian-based dielectric distribution function, modeling geometric objects and charge distributions, and treating systems containing mixed salt solutions also attracted many researchers. In addition to the typical potential map, DelPhi can generate and output the calculated distribution of either the dielectric constant or ion concentration, providing the biomedical community with extra tools for their research.
PDB files are typically used as input for DelPhi calculations. Other required inputs are an atomic radii file and a charge file.
Binary potential files output by DelPhi can be viewed in most molecular viewers, such as UCSF Chimera, Jmol, and VMD, and can either be mapped onto surfaces or visualized at a fixed cutoff.
Versions
The DelPhi distribution comes as both sequential and parallelized code, runs on Linux, Mac OS X and Microsoft Windows systems, and the source code is available in the Fortran 95 and C++ programming languages. DelPhi is also implemented as an accessible web server. DelPhi has also been used to build a web-accessible server that predicts the pKas of biological macromolecules such as proteins, RNAs and DNAs.
DelPhi v.7 is distributed in four versions:
IRIX version, compiled under the IRIX 6.5 operating system, 32-bit, using the f77 and cc compilers.
IRIX version, compiled under the IRIX 6.5 operating system, 64-bit, using the f77 and cc compilers.
LINUX version, compiled under the Red Hat 7.1 (kernel 2.4.2) operating system, using GNU gfortran compilers.
PC version, compiled under the Windows operating system, using Microsoft Developer Studio C++ and Fortran compilers.
Their way of working is very similar; however, unexpected differences may appear due to different numerical precision or to the porting of the software to different architectures. For example, the elapsed time in the PC version is not calculated at present.
Each distribution contains one executable (named delphi or delphi.exe), the source codes (with corresponding makefile when needed), and some worked examples.
See also
Anthony Nicholls (physicist)
External links
Barry Honig
DelPhi Development Team
Mode water

Mode water is defined as a particular type of water mass which is nearly vertically homogeneous. This vertical homogeneity is caused by deep vertical convection in winter. The first term used to describe this phenomenon was 18° water, coined by Valentine Worthington for the isothermal layer in the northern Sargasso Sea that cools to a temperature of about 18 °C each winter. Masuzawa then introduced the subtropical mode water concept to describe the thick layer of 16–18 °C water in the northwestern North Pacific subtropical gyre, on the southern side of the Kuroshio Extension. The terminology mode water was extended to the thick near-surface layer north of the Subantarctic Front by McCartney, who identified and mapped the properties of the Subantarctic mode water (SAMW). McCartney and Talley then applied the term subpolar mode water (SPMW) to the thick near-surface mixed layers in the North Atlantic's subpolar gyre.
Formation and erosion
Different mode waters have different formation and erosion mechanisms. Subtropical mode water (STMW) is formed mainly by subduction, SPMW is formed mainly by other processes, and SAMW results from a combination of subduction and other processes. The erosion of SPMW is driven by a combination of turbulent mixing and air-sea fluxes; the erosion of STMW is likely driven by air-sea fluxes; and for SAMW erosion, turbulent mixing may be the main factor.
Geographical distribution
Mode water formation areas are generally characterized by wintertime mixed layers that are relatively thick compared with other mixed layers in the same geographical region. The thickest mixed layers are found in the North Atlantic and in the southeastern Indian and Pacific sectors of the Southern Ocean, and they are associated with the North Atlantic's subpolar mode water and the Southern Ocean's subantarctic mode water. Relatively thick mixed layers are also found in the subtropical mode water areas near the separated western boundary currents.
Temporal variability
One prominent feature of mode waters is that they are stable in properties and locations, so researchers can use data sets from all decades to map their approximate core properties. The stability of properties is linked to the largest-spatial-scale, longest-time-scale wind and buoyancy forcing. This is not to say that there is no variation in the properties of mode water. Variations in these near-surface water masses, in temperature, salinity, density and thickness, are linked to changes in surface forcing, although in some cases the connection is not yet obvious. For example, Suga and Hanawa show that as the seasons progress, mode water moves away from its formation area and sometimes becomes permanently capped.
Detection
Mode water can be detected using the minimum value in the vertical gradient of potential density, or equivalently in the Brunt–Väisälä frequency. Since temperature profiles are more abundant, and both salinity and temperature are relatively homogeneous in mode water, vertical temperature gradients are sometimes used instead of potential vorticity or the vertical gradient of potential density to identify the core of the mode water. There are no specific values of these gradients that define the boundaries of a given mode water.
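The detection idea can be sketched numerically as follows. The density profile below is idealized and invented for illustration, with a weakly stratified mode-water layer lying between a seasonal pycnocline near 100 m and a permanent pycnocline near 500 m:

```python
import numpy as np

g, rho0 = 9.81, 1025.0                  # gravity (m s^-2), reference density (kg m^-3)
z = np.arange(0.0, -1000.0, -10.0)      # depth (m), negative downward

# Idealized potential-density profile: two pycnoclines with a weakly
# stratified (mode-water) layer in between
sigma = (26.0
         + 0.3 / (1.0 + np.exp((z + 100.0) / 20.0))
         + 0.5 / (1.0 + np.exp((z + 500.0) / 60.0)))
rho = 1000.0 + sigma

# Brunt-Vaisala frequency squared; small N^2 means weak stratification
n2 = -(g / rho0) * np.gradient(rho, z)

# Search below the mixed layer and above the deep ocean (~50-590 m here)
search = slice(5, 60)
core = 5 + int(np.argmin(n2[search]))
print(f"mode-water core near {abs(z[core]):.0f} m depth, N^2 = {n2[core]:.1e} s^-2")
```

For this made-up profile the stratification minimum falls near 230 m, in the middle of the weakly stratified layer, which is where the mode-water core would be identified.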
Importance
Mode waters have a significant impact on nutrient distribution, as they prevent deep-ocean nutrients from upwelling into the euphotic zone. Furthermore, they influence the biological pump, which plays an important role in carbon dioxide uptake. Dynamically, mode waters also control potential vorticity and baroclinicity in the subtropical North Atlantic.
Journal of Translational Medicine

The Journal of Translational Medicine is a peer-reviewed open-access medical journal published by BioMed Central since 2003. The editor-in-chief is Francesco Marincola. According to the Journal Citation Reports, the journal had a 2021 impact factor of 8.440.
Centre for Earthquake Studies

The Centre for Earthquake Studies (CES) is a federally funded research institute and national laboratory dedicated to advancing the understanding of natural vibration, seismology, and yield-based energy measurement of seismic waves.
The CES was established through federal funding as a direct response to the devastating 2005 Kashmir earthquake in order to understand earthquakes and provide scientific prediction of quakes to improve earthquake preparedness. The CES is the only national site in Pakistan working on earthquake precursors.
The national laboratory is headquartered in the campus area of the National Centre for Physics (NCP) and conducts mathematical research in earth sciences, in close coordination with the NCP.
History
The national site was founded by the Government of Pakistan on the advice of the science adviser Dr. Ishfaq Ahmad. Its establishment came in response to Pakistan's deadliest earthquake, the 2005 Kashmir earthquake of 8 October 2005. Initially created as the Earthquake Studies Department at the National Centre for Physics, it gained independence shortly after its establishment. The CES undertakes research aimed at developing expertise in anomalous geophysical phenomena prior to seismic activity. It primarily produces its research outcomes by using computer simulation and mathematical modelling to interpret seismic activity and issue earthquake predictions.
The CES's campus also includes the ATROPATENA station network, and the centre supports its research and development through close collaboration with the Global Network for the Forecasting of Earthquakes. Its first and founding director was Dr. Ahsan Mubarak, who is still designated as the CES's senior scientist. Dr. Muhammad Qaisar is the current administrator.
See also
2005 Pakistan earthquake
Official links
Official website
FNSS Samur

Samur or SYHK (short for Seyyar Yüzücü Hücum Köprüsü) is a Turkish amphibious armoured vehicle-launched bridge. Samur is the Turkish word for sable.
The equipment was developed and produced for the Turkish Armed Forces (TSK) by the Turkish company FNSS Defence Systems. After six years of development work, four units were delivered on September 14, 2011, in Ankara. The SYHK will improve the capability of the Turkish Army during river crossing operations.
Characteristics
Basic systems
Central tire inflation system (CTIS)
Traction control system (TC)
Recovery crane
CBRN and ballistic protected personnel cabin
Standard and emergency anchoring systems
Radio and intercom
Controller area network (CAN bus)
Integrated failure detection system
Automatic bilge water pumping (manually if needed)
Vehicle specifications
Power plant: diesel engine
Transmission: 6 speeds forward, 1 reverse (fully automatic)
Number of axles: 4 (All-wheel drive)
Suspension: Double wishbone independent air suspension
Electric power system:
Battery: 2 x 12 V, 120 Ah (C20)
Alternator: 2 x 140 A brushless
Brake system: Hydraulic brake and anti-lock braking system (ABS) (all wheels)
Parking pawl: Integrated into transmission, spring mechanic and hydraulic controlled
Tires: 16.00 R20 solid disc (Run-flat tire type)
Max. speed:
Land:
Water: by two pump-jets
Operational range:
Max. grade: 50%
Max. grade (side slope): 30%
Max. steep obstacle height:
Max. ditch width:
Min. turning radius:
Max. payload capacity:
Double transport unit: MLC 70 (tracked vehicles)
Triple transport unit: MLC 100 (wheeled vehicles)
Deployed bridge mode: MLC 70 and MLC 100
General information
Crew: 3
Weight:
Vehicle class: MLC 35
Length:
Width:
Height:
Ground clearance: (adjustable)
External links
FNSS - SAMUR promotional video.
Western Canadian Select

Western Canadian Select (WCS) is a heavy sour blend of crude oil that is one of North America's largest heavy crude oil streams and, historically, its cheapest. It was established in December 2004 as a new heavy oil stream by EnCana (now Cenovus), Canadian Natural Resources, Petro-Canada (now Suncor) and Talisman Energy (now Repsol Oil & Gas Canada). It is composed mostly of bitumen blended with sweet synthetic and condensate diluents and 21 existing streams of both conventional and unconventional Alberta heavy crude oils at the large Husky Midstream General Partnership terminal in Hardisty, Alberta. Western Canadian Select, the benchmark for heavy, acidic (TAN <1.1) crudes, is one of many petroleum products from the Western Canadian Sedimentary Basin oil sands. Calgary-based Husky Energy, now a subsidiary of Cenovus, had joined the initial four founders in 2015.
Western Canadian Select (WCS) is the benchmark price for western Canadian crude blends. The price of other Canadian crude blends produced locally are also based on the price of the benchmark.
During the COVID-19 pandemic many oil benchmarks around the world fell to record lows, with WCS dropping to US$3.81 per barrel on April 21, 2020. In June, Cenovus increased production at its Christina Lake oil sands project, reaching record volumes of 405,658 bbls/d, when the price of WCS increased "almost tenfold from April" to an average of US$33.97 (C$46.03) per barrel (bbl). During the 2022 Russian invasion of Ukraine the price of WCS rose to over US$100 a barrel with the United States considering placing a ban on Russian oil imports. In June, the Western Canadian Select (WCS) benchmark price averaged $64.35 per barrel, closely aligned with the year-to-date (YTD) average of $63.09.
In November 2024, the Canadian Association of Energy Contractors (CAOEC) forecasted that a total of 6,604 wells would be drilled in Western Canada in 2025, marking a 7.3% increase from 2023. This level of activity would be the highest in the Western Canadian oil sector since the commodity price downturn of 2014-2015, which resulted in a prolonged period of industry contraction.
Overview
Western Canadian Select is Canada's benchmark heavy crude and has historically been the cheapest heavy sour crude oil blend in North America. As of 2024, only three corporations (Suncor, CNRL, and Cenovus), all Canadian and all headquartered in Calgary, produce an estimated 77% of all Canadian oilsands production. Repsol, which was the fourth main WCS producer, was one of many foreign companies and investors that exited the oilsands in 2024.
Canada is the primary supplier of total petroleum to the United States. In 2024, Canada's oil exports to the United States increased substantially, partly because of the added capacity from the completion of the Trans Mountain expansion pipeline. In July and September 2024, Canada exported over 4.3 million barrels of oil per day (b/d) to the United States. In comparison, Canada exported 3.2 million b/d of crude oil to the United States in May 2020.
WCS's influence over the crude oil market extends beyond the production of these three corporate giants, as the price of other Canadian crude blends produced locally are also based on the price of the benchmark, WCS, according to NE2, a brokerage and exchange company that handles approximately 38 percent of western Canadian oil production.
The calculation of the price of WCS is complex. Because WCS is a lower quality heavy crude oil that is also farther from the major oil markets in the United States, its price is quoted as a discount to West Texas Intermediate (WTI), a sweeter, lighter oil produced in the heart of the U.S. oil market regions. WTI is the benchmark price of oil in North America. The price of WTI changes from day to day, but the actual commodities market for crude oil trades on contract prices, not a daily spot price. The WCS discount on a futures contract for a two-month period is based on the average price of all WTI contracts in the most recent month prior to the WCS contract agreement.
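As a rough illustration of that arithmetic, the sketch below prices a WCS contract from a prior-month WTI average and a negotiated differential. All numbers are invented; actual differentials are set by market trading, not by a fixed formula:

```python
# Hypothetical prior-month WTI contract prices, US$/bbl
wti_prior_month = [52.10, 51.75, 53.40, 52.90, 52.55]

# Hypothetical negotiated WCS differential (discount), US$/bbl
wcs_differential = -13.50

wti_average = sum(wti_prior_month) / len(wti_prior_month)
wcs_price = wti_average + wcs_differential

print(f"WTI prior-month average: US${wti_average:.2f}/bbl")
print(f"WCS contract price:      US${wcs_price:.2f}/bbl")
```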
Revenue
Husky Energy sold 65% of their Midstream business in 2016 and formed the Husky Midstream General Partnership (HMGP) with two additional partners. HMGP exclusively blends the crude super-stream to ensure a consistent high quality heavy crude product that is demanded by refineries. Since Husky joined the conglomerate, onstream WCS has been blended at the Husky Hardisty terminal (now owned by HMGP). In October 2020, Cenovus acquired the Calgary-based company established in the 1930s—Husky—for CA$3.8 billion.
Major producers
In 2004, Suncor Energy, Cenovus Energy, Canadian Natural Resources, and Talisman Energy (later Repsol) developed the Western Canadian Select (WCS) blend. According to Argus, in 2012 the WCS blend was still produced by only four companies because of the complex set of rules governing compensation for contributions to the blend. Cenovus and Husky completed a merger by January 2021, with the combined company operating under the Cenovus name. Through the merger Cenovus became the third-largest crude oil and natural gas company and the second-largest upgrader in Canada.
Major importers
The United States imports about 99% of Canada's oil exports. According to monthly data provided by the U.S. Energy Information Administration (EIA), Canada is the "largest exporter of total petroleum" to the United States, with crude oil exports to the US of 3,026,000 bpd in September 2014, 3,789,000 bpd in September 2015 and 3,401,000 bpd in October 2015.
Canadian oil is much cheaper than oil from other sources. US refineries have increased their use of Canadian crude oil since 2009, according to a March 20, 2020 report, while decreasing oil imports from Saudi Arabia, Mexico, and Venezuela. Crude oil from Canada accounts for 56% of total US crude oil imports, according to a 2019 EIA report.
Historical pricing
Crude prices are typically quoted at a particular location. Unless stated otherwise, the price of WCS is quoted at Hardisty and the price of West Texas Intermediate (WTI) is quoted at Cushing, Oklahoma.
Statista provides accurate current and historical records of the price of WCS.
By March 18, 2015, the price of benchmark crude oils had fallen, with WTI dropping to US$43.34 per barrel (bbl) from a high in June 2014, when WTI was priced above US$107/bbl and Brent above US$115/bbl. WCS, a bitumen-derived crude, is a heavy crude similar to Californian heavy crudes, Mexico's Maya crude or Venezuelan heavy crude oils. On March 15, 2015, the differential between WTI and WCS was US$13.8. Western Canadian Select was among the cheapest crude oils in the world, with a price of US$29.54/bbl on March 15, 2015, its lowest price since April 2009. By mid-April 2015 WCS had risen almost fifty percent to trade at US$44.94.
By June 2, 2015, the differential between WTI and WCS was US$7.8, the lowest it had ever been. By August 12, 2015, the WCS price dropped to $23.31, its lowest in nine years, and the WTI/WCS differential had risen to $19.75, after BP temporarily shut down the largest crude distillation unit at its Whiting, Indiana refinery, the sixth largest refinery in the United States, for two weeks of repairs. At the same time Enbridge was forced to shut down the Line 55 Spearhead pipeline and Line 59 Flanagan South pipeline in Missouri because of a crude oil leak. By September 9, 2015, the price of WCS was US$32.52.
By December 14, 2015, with the price of WTI at $35 a barrel, WCS had fallen 75 percent to $21.82, its lowest in seven years, while Mexico's comparable heavy sour crude, Maya, was down 73 percent in 18 months to $27.74; the Mexican government used an oil hedge to somewhat protect Maya's price. By December 2015 the price of WCS stood at US$23.46, the lowest since December 2008, and the WTI-WCS differential was US$13.65.
By February 2016 WTI had dropped to US$29.85 and WCS was US$14.10 with a differential of $15.75. By June 2016 WTI was priced at US$46.09, Brent at MYMEX was US$47.39 and WCS was US$33.94 with a differential of US$12.15. By June 2016 the price of WCS was US$33.94. By December 10, 2016, WTI had risen to US$51.46 and WCS was US$36.11 with a differential of $15.35.
On June 28, 2018, WTI spiked to US$74, a four-year high, then dropped by 30% by the end of November.
In November 2018, the price of WCS hit a record low of less than US$14 a barrel. From 2008 through 2018, WCS sold at an average discount of US$17 against WTI. In the fall of 2018, the differential increased to a record of around US$50. On December 2, Premier Rachel Notley announced a mandatory cut of 8.7% in Alberta's oil production, representing a cutback of 325,000 bpd in January 2019, dropping to 95,000 bpd by the end of 2019. According to a December 12, 2018 article in the Financial Post, after the mandatory cuts were announced, the price of WCS rose c. 70% to c. US$41 a barrel, with the discount to WTI narrowing to c. US$11; the price difference between WCS and WTI had been as wide as US$50 a barrel in October. As the international price of oil recovered from the December "sharp downturn", the price of WCS rose to US$28.60. According to CBC News, the lower global price of oil was related to declining economic growth as the China-U.S. trade war continued. The price rose as oil production was cut back by the Organization of Petroleum Exporting Countries (OPEC) and Saudi Arabia. According to a U.S. Energy Information Administration (EIA) report, oil production rose by 12% in the U.S., primarily because of shale oil. As a result, Goldman Sachs lowered its oil price forecast for 2019.
In March 2019, the differential of WTI over WCS decreased to US$9.94 as the price of WTI dropped to US$58.15 a barrel, 7.5% lower than in March 2018, while the price of WCS rose to an average of US$48.21 a barrel, 35.7% higher than in March 2018. By October 2019, WTI was averaging US$53.96 a barrel, 23.7% lower than in October 2018. In comparison, for the same period, WCS averaged US$41.96 a barrel, 2.0% higher than in October 2018, giving a differential of US$12.00 in October 2019.
By March 30, 2020, the price of WCS bitumen-blend crude was US$3.82 per barrel. In April 2020 the price briefly fell below zero, along with WTI, due to collapsing demand caused by the COVID-19 pandemic.
Curtailment
In the fall of 2018, the differential between WCS and WTI, which had averaged US$17 for the decade from 2008 to 2018, widened to a record of around US$50. By December 2018 the price of WCS had plummeted to US$5.90. In response, the NDP government under Premier Notley set temporary production limits of 3.56 million barrels per day (b/d) that came into effect on January 1, 2019. The curtailment was deemed necessary because of chronic pipeline bottlenecks out of Western Canada, which cost the "industry and governments millions of dollars a day in lost revenue". Following the December 2 announcement of mandatory oil production cutbacks in Alberta, the price of WCS rose to US$26.65 a barrel. The global price of oil dropped dramatically in December before recovering in January; the price of WCS increased to US$28.60 with WTI at US$48.69. In the fall of 2019, the UCP government under Premier Kenney "extended the curtailment program into 2020 and increased the base exemptions for companies before the quotas kick in, lowering the number of producers affected by curtailment to 16".
Curtailment "supported domestic oil prices" but also "limited growth and overall industry investment as companies have been unable to expand production above their mandated quotas".
Integrated producers, such as Imperial Oil and Husky Energy, oppose curtailment because when the price of WCS is low, their refineries in the United States benefit. Other oil producers in Alberta support curtailment as a way of preventing the collapse of WCS.
In the summer of 2019, Suncor Energy, Cenovus Energy and Canadian Natural Resources agreed to increase production with the mandatory use of oil-by-rail as a condition for the increase. The Canadian Association of Petroleum Producers (CAPP)'s Terry Abel said that, "The whole point of curtailment was to try and match takeaway capacity with produced capacity so that we don’t create downward pressure on prices...To the extent you add incremental (rail) capacity, you should be able to make some adjustments to curtailment to accommodate that."
Characteristics
"The extremely viscous oil contained in oil sands deposits is commonly referred to as bitumen." (CAS 8052-42-4) At the Husky Hardisty terminal, Western Canadian Select is blended from sweet synthetic and condensate diluents from 25 existing Canadian heavy conventional and unconventional bitumen crude oils.
Western Canadian Select's characteristics are described as follows: API gravity between 19° and 22°; density 930.1 kg/m3; microcarbon residue (MCR) 9.6 wt%; sulphur 2.8-3.5 wt%; total acid number (TAN) 0.93 mg KOH/g.
Refiners in North America consider a crude with a TAN value greater than 1.1 as "high-TAN". A refinery must be retrofitted in order to handle high TAN crudes. Thus, a high TAN crude is limited in terms of the refineries in North America that are able to process it. For this reason, the TAN value of WCS is consistently maintained under 1.1 through blending with light, sweet crudes and condensate. Certain other bitumen blends, such as Access Western Blend and Seal Heavy Blend, have higher TAN values and are considered high TAN.
"Oil sands crude oil does not flow naturally in pipelines because it is too dense. A diluent is normally blended with the oil sands bitumen to allow it to flow in pipelines. For the purpose of meeting pipeline viscosity and density specifications, oil sands bitumen is blended with either synthetic crude oil (synbit) and/or condensate (Dilbit)." WCS may be referred to as a syndilbit, since it may contain both synbit and dilbit.
In a study commissioned by the U.S. Department of State (DOS), regarding the Environmental Impact Statement (EIS) for the Keystone XL pipeline project, the DOS assumes "that the average crude oil flowing through the pipeline would consist of about 50% Western Canadian Select (dilbit) and 50% Suncor Synthetic A (SCO)".
The Canadian Society of Unconventional Resources (CSUR) identifies four types of oil: conventional oil, tight oil, oil shale, and heavy oil like WCS.
Volumes
By September 2014 Canada was exporting 3,026,000 bpd to the United States. This increased to its peak of 3,789,000 bpd in September 2015 and 3,401,000 bpd in October 2015, which represents 99% of Canadian petroleum exports. Threshold volumes of WCS in 2010 were only approximately 250,000 b/d.
On May 1, 2016, a devastating wildfire ignited and swept through Fort McMurray, resulting in the largest wildfire evacuation in Albertan history. As the fires progressed north of Fort McMurray, "oil sands production companies operating near Fort McMurray either shut down completely or operated at reduced rates". By June 8, 2016, the U.S. Department of Energy estimated that "disruptions to oil production averaged about 0.8 million b/d in May, with a daily peak of more than 1.1 million b/d. Although projects are slowly restarting as fires subside, it may take weeks for production to return to previous levels." The Fort McMurray fires did not significantly affect the price of WCS.
"According to EIA's February Short-Term Energy Outlook, production of petroleum and other liquids in Canada, which totaled 4.5 million barrels per day (b/d) in 2015, is expected to average 4.6 million b/d in 2016 and 4.8 million b/d in 2017. This increase is driven by growth in oil sands production of about 300,000 b/d by the end of 2017, which is partially offset by a decline in conventional oil production." The EIA claims that while oil sands projects may be operating at a loss, these projects are able to "withstand volatility in crude oil prices". It would cost more to shut a project down—from $500 million to $1 billion than to operate at a loss.
Comparative cost of production
In its May 2019 "cost of supply curve update", the Norway-based Rystad Energy, an "independent energy research and consultancy", ranked the "world's total recoverable liquid resources by their breakeven price". Rystad reported that the average breakeven price for oil from the oil sands was US$83 in 2019, making it the most expensive to produce compared with all other "significant oil producing regions" in the world. The International Energy Agency has made similar comparisons.
In 2016, the Wall Street Journal reported that the United Kingdom at US$44.33, Brazil at US$34.99, Nigeria at US$28.99, Venezuela at US$27.62, and Canada at US$26.64 had the highest production costs. Saudi Arabia at US$8.98, Iran at US$9.08, Iraq at US$10.57, had the cheapest.
An earlier 2014 comparison, based on the Scotiabank Equity Research and Scotiabank Economics report that was published November 28, 2014, compared the cost of cumulative crude oil production.
The analysis excluded "up-front" costs (initial land acquisition, seismic and infrastructure costs), treating them as sunk; a rough estimate of such up-front costs is US$5-10 per barrel, though wide regional differences exist. It included royalties, which are more advantageous in Alberta and Saskatchewan. The weighted average of US$60-61 includes existing integrated oil sands operations at C$53 per barrel.
Lowering production costs
WCS is very expensive to produce. There are exceptions, such as Cenovus Energy's Christina Lake facility which produces some of the lowest-cost barrels in the industry.
In June 2012 Fairfield, Connecticut-based General Electric (GE), with its focus on international markets, opened its Global Innovation Centre in downtown Calgary with "130 privately employed scientists and engineers", the "first of its kind in North America", and the second in the world. GE's first Global Innovation centre is in Chengdu, China, which also opened in June 2012. GE's Innovation Centre is "attempting to embed innovation directly into the architecture". James Cleland, general manager of the Heavy Oil Centre for Excellence, which makes up one-third of the Global Innovation Centre, said, "Some of the toughest challenges we have today are around environmental issues and cost escalations... The oil sands would be rebranded as eco-friendly oil or something like that; basically to have changed the game."
GE's thermal evaporation technology, developed in the 1980s for use in desalination plants and the power generation industry, was repurposed in 1999 to improve on the water-intensive steam-assisted gravity drainage (SAGD) method used to extract bitumen from the Athabasca Oil Sands. In 1999 and 2002, Petro-Canada's MacKay River facility became the first to install GE SAGD zero-liquid-discharge (ZLD) systems, using a combination of the new evaporative technology and a crystallizer system in which all the water was recycled and only solids were discharged off site. This new evaporative technology began to replace older water treatment techniques employed by SAGD facilities, which involved the use of warm lime softening to remove silica and magnesium and weak acid cation ion exchange to remove calcium.
Cleland describes how Suncor Energy is investigating the strategy of replication where engineers design an "ideal" small-capacity SAGD plant with a 400 to 600 b/d capacity that can be replicated through "successive phases of construction" with cost-saving "cookie cutter", "repeatable" elements.
Price of crude oil
The price of petroleum as quoted in news in North America, generally refers to the WTI Cushing Crude Oil Spot Price per barrel (159 liters) of either WTI/light crude as traded on the New York Mercantile Exchange (NYMEX) for delivery at Cushing, Oklahoma, or of Brent as traded on the Intercontinental Exchange (ICE, into which the International Petroleum Exchange has been incorporated) for delivery at Sullom Voe. West Texas Intermediate (WTI), also known as Texas Light Sweet, is a type of crude oil used as a benchmark in oil pricing and the underlying commodity of New York Mercantile Exchange's oil futures contracts. WTI is a light crude oil, lighter than Brent Crude oil. It contains approximately 0.24% sulphur, rating it a sweet crude, sweeter than Brent. Its properties and production site make it ideal for being refined in the United States, primarily in the Midwest and Gulf Coast (USGC) regions. WTI has an API gravity of around 39.6 (specific gravity approx. 0.827). Cushing, Oklahoma, a major oil supply hub connecting oil suppliers to the Gulf Coast, has become the most significant trading hub for crude oil in North America.
The National Bank of Canada's Tim Simard argued that WCS is the benchmark for those buying shares in Canadian oil sands companies, such as Canadian Natural Resources, Cenovus Energy, Northern Blizzard Resources, Pengrowth Energy, or Twin Butte Energy, where a "big part of their exposure will be to heavy crude".
The price of Western Canadian Select (WCS) crude oil (petroleum) per barrel suffers a differential against West Texas Intermediate (WTI) as traded on the New York Mercantile Exchange (NYMEX) as published by Bloomberg Media, which itself has a discount versus London-traded Brent oil. This is based on data on prices and differentials from Canadian Natural Resources (TSX:CNQ)(NYSE:CNQ).
"West Texas Intermediate Crude oil (WTI) is a benchmark crude oil for the North American market, and Edmonton Par and Western Canadian Select (WCS) are benchmarks crude oils for the Canadian market. Both Edmonton Par and WTI are high-quality low sulphur crude oils with API gravity levels of around 40°. In contrast, WCS is a heavy crude oil with an API gravity level of 20.5°."
"WCS prices at a discount to WTI because it is a lower quality crude (3.51Wt. percent sulphur and 20.5 API gravity) and because of a transportation differential. The price of WCS is currently set at the U.S. Gulf Coast. It costs approximately $10/bbl for a barrel of crude to be transported from Alberta to the U.S. Gulf Coast, accounting for at least $10/bbl of the WTI-WCS discount. Pipeline constraints can also cause the transportation differential to rise significantly.
By March 2015, with the price of ICE Brent at US$60.55, WTI stood at US$51.48, up US$1.10 from the previous day, and WCS also rose US$1.20 to US$37.23, for a WTI-WCS price differential of US$14.25. By June 2, 2015, Brent was at US$64.88/bbl, WTI at US$60.19/bbl and WCS at US$52.39/bbl.
According to the Financial Post, most Canadian investors continued to quote the price of WTI rather than WCS, even though many Canadian oil sands producers sell at WCS prices, because WCS "has always lacked the transparency and liquidity necessary to make it a household name with investors in the country". In 2014 Auspice created the Canadian Crude Excess Return Index to gauge WCS futures. Tim Simard, head of commodities at the National Bank of Canada, claims WCS has "some interesting different fundamental attributes than the conventional WTI barrel" and "better transparency and broader participation" than Maya. However, he explained that in 2015 "one of the only ways to take a position in oil is to use an ETF that is tied to WTI." Simard claims that when the global price of oil is lower, "the first barrels to be turned off in a low-price environment are heavy barrels", making WCS "closer to the floor" than WTI.
In order to address the transparency and liquidity issues facing WCS, Auspice created the Canadian Crude Index (CCI), which serves as a benchmark for oil produced in Canada. The CCI allows investors to track the price, risk and volatility of the Canadian commodity. The CCI can be used to identify opportunities to speculate outright on the price of Canadian crude oil or in conjunction with West Texas Intermediate (WTI) to put on a spread trade which could represent the differential between the two. The CCI provides a fixed price reference for Canadian crude oil by targeting an exposure that represents a three-month rolling position in crude oil. To create a price representative of Canadian crude the index uses two futures contracts: A fixed-price contract, which represents the price of crude oil at Cushing, Oklahoma, and a basis differential contract, which represents the difference in price between Cushing and Hardisty, Alberta. Both contracts are priced in U.S. dollars per barrel. Together, these create a fixed price for Canadian crude oil, and provide an accessible and transparent index to serve as a benchmark to build investable products upon, and could ultimately increase its demand to global markets.
In the spring of 2015, Jeffrey Jones, a veteran journalist specializing in energy and finance, described how WCS, with its price surging over 70% and outpacing West Texas Intermediate (WTI) and Brent, "quietly" became the "hottest commodity in North American energy". In April 2015, Enbridge filled a "new 570,000-barrel-a-day pipeline". A May 2015 TD Securities report attributed the WCS price gains to several factors: normal seasonal strength driven by demand for the thick crude to make asphalt for road paving; improved WCS access to various U.S. markets in spite of pipeline impediments; five-year-high production levels; and strong heavy oil demand in U.S. refineries, particularly in the US Midwest, a key market for WCS.
By September 9, 2015, the price of WCS was US$32.52 and the WTI-WCS differential was US$13.35. It plunged to US$14 a barrel, a record low, in November 2018 but rose to US$28 by December 24.
On March 30, 2020, the combination of the COVID-19 pandemic and the 2020 Russia–Saudi Arabia oil price war, caused the price of oil to drop to below $30 a barrel.
Crude oil differentials and Western Canadian Select (WCS)
By June 2015 the differential between WTI and WCS was US$7.8, the lowest it had ever been.
In a 2013 white paper for the Bank of Canada, authors Alquist and Guénette examined implications for high global oil prices for the North American market. They argued that North America was experiencing a crude oil inventory surplus. This surplus combined with the "segmentation of the North American crude oil market from the global market", contributed to "the divergence between continental benchmark crudes such as WTI and Western Canada Select (WCS) and seaborne benchmark crudes such as Brent".
Alberta's Minister of Finance argues that WCS "should be trading on par with Mayan crude at about $94 a barrel". Maya crudes are close to WCS quality levels; however, Maya was trading at US$108.73/bbl in February 2013, while WCS was US$69/bbl. In his presentation to the U.S. Energy Information Administration (EIA) in 2013, John Foran demonstrated that Maya had traded at only a slight premium to WCS in 2010. Since then, WCS price differentials have widened "with rising oil sands and tight oil production and insufficient pipeline capacity to access global markets". Mexico enjoys a location discount with its proximity to the heavy-oil-capable refineries on the Gulf Coast. As well, Mexico began to strategically and successfully seek out joint venture refinery partnerships in the 1990s to create a market for its heavy crude oil in the U.S. Gulf. In 1993, PEMEX (Petróleos Mexicanos, the state-owned Mexican oil company) and Shell Oil Company agreed on a joint US$1 billion refinery upgrading construction project, which led to the construction of a new coker, hydrotreating unit, sulphur recovery unit and other facilities in Deer Park, Texas, on the Houston Ship Channel, in order to process large volumes of PEMEX heavy Maya crude while fulfilling U.S. Clean Air Act requirements.
By July 2013, Western Canadian Select heavy oil prices had climbed from US$75 to more than US$90 per barrel, the highest level since mid-2008, when WTI oil prices were at a record US$147.90, just prior to the 2008-09 "Great Recession". WCS heavy oil prices were expected to remain near US$90, closer to the world price for heavy crude and WCS's "true, inherent value". The higher price of WCS relative to WTI was explained by new rail shipments alleviating some export pipeline constraints, and the return of WTI oil prices to international levels.
By January 2014 there was a proliferation of trains and pipelines carrying WCS along with an increased demand on the part of U.S. refineries. By early 2014 there were approximately 150,000 bpd of heavy oil being transported by rail.
According to the Government of Alberta's June 2014 Energy Prices report the price of WCS rose 15% from $68.87 in April 2013 to $79.56 in April 2014 but experienced a low of $58 and a high of $91. During the same time period the price of the benchmark West Texas Intermediate (WTI) rose 10.9% averaging $102.07 a barrel in April 2014.
In April 2020, the price of WTI was $16.55 and the price of WCS was $3.50 with a differential of -$13.05. In June the price of WTI was $38.31 and WCS $33.97, with a differential of -$4.34.
Transport
Pipelines
According to the Oil Sands Magazine, as of March 31, 2020, Western Canadian crude oil export pipelines—Trans Mountain Corporation, TC Energy, Enbridge, and Plains All American Canada—have a total estimated export capacity of 4,230,000 b/d.
Heavy discounts on Albertan crudes in 2012 were falsely attributed to crudes being "landlocked" in the U.S. Midwest. Since that time, several major pipelines have been constructed to release that glut, including Seaway, the southern leg of Keystone XL and Flanagan South.
However, significant obstacles persist in approvals on pipelines to export crude from Alberta. In April 2013, Calgary-based Canada West Foundation warned that Alberta is "running up against a [pipeline capacity] wall around 2016, when we will have barrels of oil we can't move". For the time being, rail shipments of crude oil have filled the gap and narrowed the price differential between Albertan and North American crudes. However, additional pipelines exporting crude from Alberta will be required to support ongoing expansion in crude production.
Trans Mountain Pipeline System
The Trans Mountain Pipeline System, which has transported liquid fuels since 1953, was purchased from the Canadian division of Kinder Morgan Energy Partners by the Canada Development Investment Corporation (CDIC)'s Trans Mountain Corporation. The Trans Mountain Pipeline is the only pipeline that carries Albertan crude and refined oil to the British Columbia Coast. The CDIC, which is accountable to the Parliament of Canada, is in charge of the pipeline system and the Trans Mountain Expansion Project (TMX).
Keystone Pipeline System
TC Energy's Keystone Pipeline System is an oil pipeline system in Canada and the United States that was commissioned in 2010. It runs from the Western Canadian Sedimentary Basin in Alberta to refineries in Illinois and Texas, and also to oil tank farms and an oil pipeline distribution center in Cushing, Oklahoma.
Frustrated by delays in getting approval for Keystone XL (to the U.S. Gulf Coast), the Northern Gateway Project (via Kitimat, BC) and the expansion of the existing Trans Mountain line to Vancouver, British Columbia, Alberta intensified exploration of two northern projects "to help the province get its oil to tidewater, making it available for export to overseas markets". Canadian Prime Minister Stephen Harper's government spent $9 million by May 2012 and $16.5 million by May 2013 to promote Keystone XL.
In the United States, Democrats were concerned that Keystone XL would simply facilitate getting Alberta oil sands products to tidewater for export to China and other countries via the U.S. Gulf Coast.
The project was rejected by the Obama administration on November 6, 2015, "over environmental concerns". It was revived by presidential executive order on January 24, 2017, by President Donald Trump; the pipeline "would transport more than 800,000 barrels per day of heavy crude" from Alberta to the Gulf Coast.
On March 31, 2020, TC Energy's CEO Russ Girling said that construction of the Keystone XL Pipeline would resume, following Alberta Premier Jason Kenney's announcement that the UCP government was taking an "equity stake" and providing a "loan guarantee", amounting to a "total financial commitment of just over $7 billion" to the Keystone XL project. On January 20, 2021, President Joe Biden revoked the permit for the pipeline on his first day in office, fulfilling a long-time promise.
Energy East pipeline
The Energy East pipeline was a proposed $12 billion, 4,400-kilometre (2,700-mile) pipeline project announced on August 1, 2013, by TransCanada CEO Russ Girling. A number of groups announced their intention to oppose the pipeline, and TransCanada canceled the project on October 5, 2017. In the long term, the pipeline would have allowed WCS to be shipped to Atlantic tidewater via deep water ports such as Quebec City and Saint John. Potential heavy oil overseas destinations included India, where super refineries capable of processing vast quantities of oil sands oil were already under construction. In the meantime, Energy East would have been used to send light sweet crude, such as Edmonton Par from Alberta, to eastern Canadian refineries in Montreal and Quebec City, for example. Eastern Canadian refineries, such as Imperial Oil's 88,000-barrel-a-day refinery in Dartmouth, N.S., at the time imported crude oil from North and West Africa and Latin America, according to Mark Routt, "a senior energy consultant at KBC in Houston", who had a number of clients interested in the project. The proposed Energy East Pipeline would have had the potential of carrying 1.1-million barrels of oil per day from Alberta and Saskatchewan to eastern Canada.
Patricia Mohr, a Bank of Nova Scotia senior economist and commodities analyst, in her report on the economic advantages of Energy East, argued that Western Canadian Select, the heavy oil marker in Alberta, "could have earned a much higher price in India than actually received" in the first half of 2013, based on the price of Saudi Arabian heavy crude delivered to India, if the pipeline had already been operational. In her report, Mohr predicted that initially Quebec refineries, such as those owned by Suncor Energy and Valero, could access light oil or upgraded synthetic crude from Alberta's oil sands via Energy East to displace "imports priced off more expensive Brent crude". In the long term, supertankers using the proposed Irving/TransCanada deep-sea Saint John terminal could ship huge quantities of Alberta's blended bitumen, such as WCS, to the super refineries in India. Mohr also predicted that the price of WCS would increase to US$90 per barrel in July 2013, up from US$75.41 in June.
Canada's largest refinery, capable of processing 300,000 barrels of oil per day, is owned and operated by Irving Oil in the deep-water port of Saint John, New Brunswick, on the east coast. A proposed $300-million deep water marine terminal, to be constructed and operated jointly by TransCanada and Irving Oil, would have been built near Irving Oil's import terminal, with construction to begin in 2015.
Maine-based Portland–Montreal Pipe Line Corporation, which consists of Portland Pipe Line Corporation (in the United States) and Montreal Pipe Line (in Canada), is considering ways to carry Canadian oil sands crude to Atlantic tidewater at Portland's deep-water port. The proposal would mean that crude oil from the oil sands would be piped via the Great Lakes, Ontario, Quebec and New England to Portland, Maine. The pipelines are owned by ExxonMobil and Suncor.
Enbridge Pipeline System
Enbridge, which operates in North America, has the longest crude oil transportation system on the continent.
Enbridge Northern Gateway Pipelines, first announced in 2006, would have transported heavy crude oil from Athabasca to Kitimat, British Columbia. Under Prime Minister of Canada Justin Trudeau, Bill C-48 was passed in 2019, imposing a ban on oil tanker traffic on the north coast of British Columbia. The ban made the project uneconomical.
Enbridge owns and operates the Alberta Clipper pipeline—Line 67—part of the Enbridge Pipeline System, which has been running from Hardisty, Alberta to Superior, Wisconsin, in the United States since 2010, connecting the oil sands production area with the existing network.
Enbridge reversed the flow direction of the Seaway pipeline to originate in Cushing, transporting WCS to Freeport, Texas, on May 17, 2012, which caused a price increase in WCS. With the opening by 2015 of the Seaway pipeline, the Southern leg of Keystone XL, and Flanagan South (Line 59) in Missouri, some of the "bottleneck" was relieved. In April 2015, Enbridge filled a "new 570,000-barrel-a-day pipeline".
By March 2020, Cenovus Energy had committed to 75,000 barrels a day in long-term contracts with Enbridge to ship via the Mainline and Flanagan South systems to Texas. As of March 30, 2020, the price oil producers paid to transport heavy oil to Texas through Enbridge pipelines was US$7 to US$9 a barrel; at that time, the price of WCS was US$3.82 per barrel.
Plains All American Pipeline
The 16.5 km long Milk River and the 0.75 km Rangeland pipelines are owned and operated by the Texas-headquartered Plains All American Pipeline. The Milk River pipeline transports 97,900 bbl/day.
Rail
By 2011, output from the Bakken Shale formation in North Dakota was increasing faster than pipelines could be built, and oil producers and pipeline companies turned to railroads for transportation solutions. Bakken oil competes with WCS for access to transportation by pipeline and by rail. By the end of 2010, Bakken oil production had outstripped the pipeline capacity to ship oil out of the Bakken, and by January 2011 Bloomberg News reported that Bakken crude oil producers were using railway cars to ship oil.
In 2013, there were new rail shipments of WCS. Since 2012, the amount of crude oil transported by rail in Canada had quadrupled and by 2014 it was expected to continue to surge.
In August 2013, then-U.S. Development Group's (now USD Partners) CEO, Dan Borgen, a Texas-based oil-by-rail pioneer, shifted his attention away from the U.S. shale oil plays towards the Canadian oil sands. Borgen "helped introduce the energy markets to specialized terminals that can quickly load mile-long oil tank trains heading to the same destination - facilities that .... revolutionized the U.S. oil market". Since 2007, Goldman Sachs has played a leading role in financing USD's "expansion of nearly a dozen specialized terminals that can quickly load and unload massive, mile-long trains carrying crude oil and ethanol across the United States". USD's pioneering projects included large-scale “storage in transit” (SIT) inspired by the European model for the petrochemicals industry. USD sold five of the specialized oil-by-rail US terminals to "Plains All American Pipeline for $500 million in late 2012, leaving the company cash-rich and asset light". According to Leff, concerns have been raised about the link between Goldman Sachs and USD.
The price of WCS rose in August 2014 as anticipated crude-by-rail capacity at Hardisty increased when the USD Group–Gibson Energy Hardisty terminal, a new state-of-the-art crude-by-rail origination and loading facility with pipeline connectivity, became operational in June 2014. The terminal can load up to two 120-railcar unit trains per day (about 120,000 bbl/d of heavy crude), "with 30 railcar loading positions on a fixed loading rack, a unit train staging area and loop tracks capable of holding five unit trains simultaneously". By 2015 there was "a newly-constructed pipeline connected to Gibson Energy's Hardisty storage terminal" with "over 5 million barrels of storage in Hardisty".
Before the 2019 provincial election, the previous NDP government had approved a plan that would cost $3.7 billion over a three-year period to transport up to 120,000 barrels per day out of Alberta by leasing 4,400 rail cars. While the NDP government said the leased cars "would generate $5.9 billion in increased royalties, taxes and commercial revenues", the UCP government under Premier Jason Kenney, who won the 2019 election, disagreed. The UCP's October 2019 budget included $1.5 billion to cancel the NDP crude-by-rail program; the government said that this would "mitigate further losses by $300 million", and it entered into negotiations to privatize the crude-by-rail agreements.
After months of discussions, Premier Kenney's UCP government announced in late October 2019, that petroleum producers could increase their "oil output levels above current provincial quotas", if they incrementally increased the amount of oil they ship by rail.
Canadian Pacific Railway
In 2014, Canadian Pacific Railway (CPR) COO Keith Creel said CPR was in a growth position thanks to increased Alberta crude oil (WCS) transport, which would account for one-third of CPR's new revenue gains through 2018, "aided by improvements at oil-loading terminals and track in western Canada". By 2014 CPR had been shaped by CEO Hunter Harrison and American activist shareholder Bill Ackman. Americans owned 73% of CPR shares, while Canadians and Americans each owned 50% of CN. In order to improve returns for their shareholders, the railways cut back on their workforce and downsized the number of locomotives.
Creel said in a 2014 interview that the transport of Alberta's heavy crude oil would account for about 60% of CP's oil revenues, and light crude from the Bakken Shale region in Saskatchewan and the U.S. state of North Dakota would account for 40%. Prior to the implementation of tougher regulations in both Canada and the United States following the Lac-Mégantic rail disaster and other oil-related rail incidents involving the highly volatile, sensitive light sweet Bakken crude, Bakken accounted for 60% of CPR's oil shipments. Creel said that "It [WCS] is safer, less volatile and more profitable to move and we're uniquely positioned to connect to the West Coast as well as the East Coast."
Railway officials claim that more Canadian oil-by-rail traffic is "made up of tough-to-ignite undiluted heavy crude and raw bitumen".
CPR's high capacity North Line, which runs from Edmonton to Winnipeg, is connected to "all the key refining markets in North America". Chief Executive Hunter Harrison told the Wall Street Journal in 2014 that Canadian Pacific would improve tracks along its North Line as part of a plan to ship Alberta oil east.
Waterborne
On September 21, 2014, Suncor Energy loaded its first tanker of heavy crude, about 700,000 barrels of WCS, onto the tanker Minerva Gloria at the port of Sorel near Montreal, Quebec. Minerva Gloria is a double-hulled Aframax crude oil tanker with a deadweight tonnage (DWT) of 115,873 tons. Her destination was Sarroch, on the Italian island of Sardinia.
From October 2013 to October 2014, Koch held a one-year charter on the 116,000-dwt Stealth Skyros, fixed for 12 months at $19,500 per day.
Repsol and WCS
The Spanish oil company Repsol obtained a license from the U.S. Department of Commerce to export 600,000 barrels of WCS from the United States. The WCS was shipped from Freeport, Texas, on the U.S. Gulf Coast (USGC) to the port of Bilbao on the Suezmax oil tanker Aleksey Kosygin. It is considered to be "the first re-export of Canadian crude from the USGC to a non-US port", as the "US government tightly controls any crude exports, including of non-US grades." The European Union's European Environment Agency (EEA) monitored the trade. WCS, with its API of 20.6 and sulphur content of 3.37%, has been controversial.
In December 2014, Repsol agreed to buy Talisman Energy (TLM.TO), Canada's fifth-largest independent oil producer, for US$8.3 billion, estimated to be about 50 percent of Talisman's value in June 2014. By December 2014, the price of WCS had dropped to US$40.38 from $79.56 in April 2014: global demand for oil had decreased, production had increased, and the price of oil plunged starting in June, continuing to drop through December.
Other oil sands crude oil products
Derivatives markets
Most Western Canadian Select (WCS) is piped to Illinois for refinement and then to Cushing, Oklahoma, for sale. WCS futures contracts are available on the Chicago Mercantile Exchange (CME), while bilateral over-the-counter WCS swaps can be cleared on CME ClearPort or by NGX.
Refineries
WCS is transported from Alberta to refineries with the capacity to process heavy oil from the oil sands. Refineries in Petroleum Administration for Defense District II (PADD II), in the US Midwest, have experience running the WCS blend. Most WCS goes to refineries in the Midwestern United States, which "are configured to process a large percentage of heavy, high-sulphur crude and to produce large quantities of transportation fuels, and low amounts of heavy fuel oil". While US refiners "invested in more complex refinery configurations with higher processing capability" that use "cheaper feedstocks" like WCS and Maya, Canada did not. While Canadian refining capacity has increased through scale and efficiency, there are only 19 refineries in Canada compared to 148 in the United States.
WCS crude oil, with its "very low API (American Petroleum Institute) gravity and high sulfur content and levels of residual metals", requires specialized refining that few Canadian refineries have. It can only be processed in refineries modified with new metallurgy capable of running high total acid number (TAN) crudes.
"The transportation costs associated with moving crude oil from the oil fields in Western Canada to the consuming regions in the east and the greater choice of crude qualities make it more economic for some refineries to use imported crude oil. Therefore, Canada’s oil economy is now a dual market. Refineries in Western Canada run domestically produced crude oil, refineries in Quebec and the eastern provinces run primarily imported crude oil, while refineries in Ontario run a mix of both imported and domestically produced crude oil. In more recent years, eastern refineries have begun running Canadian crude from east coast offshore production."
US refineries import large quantities of crude oil from Canada, Mexico, Colombia and Venezuela, and in the 1990s they began to build coker and sulphur capacity enhancements to accommodate the growth of these medium and heavy sour crude oils while meeting environmental requirements and consumer demand for transportation fuels. "While US refineries have made significant investments in complex refining hardware, which supports processing heavier, sourer crude into gasoline and distillates, similar investment outside the US has been pursued less aggressively." Medium and heavy crude oil make up 50% of US crude oil inputs, and the US continues to expand its capacity to process heavy crude.
Large integrated oil companies that produce WCS in Canada have also started to invest in upgrading refineries in order to process WCS.
BP Whiting, Indiana refinery
The BP Plc refinery in Whiting, Indiana, is the sixth-largest refinery in the US, with a capacity of 413,500 b/d. In 2012 BP began investing in a multi-billion-dollar modernization project at the Whiting refinery in order to process WCS. This $4 billion refit was completed in 2014 and was one of the factors contributing to the increase in the price of WCS. The centerpiece of the upgrade was Pipestill 12, the refinery's largest crude distillation unit, which came online in July 2013. Distillation units provide feedstock for all the other units of the refinery by distilling the crude as it enters the refinery. The Whiting refinery is situated close to the border between Indiana and Illinois. It is the major buyer of WCS and WTI from Cushing, Oklahoma, the delivery point of the US benchmark oil contract.
On August 8, 2015, a malfunction of piping inside Pipestill 12 caused heavy damage, and the unit was offline until August 25. This was one of the major factors contributing to the drop in the price of WCS, which fell to its lowest level in nine years.
Toledo refinery, Ohio
The Toledo refinery in northwestern Ohio, in which BP has invested around $500 million on improvements since 2010, is a joint venture with Husky Energy and processes approximately 160,000 barrels of crude oil per day. Since the early 2000s, BP has been focusing its refining business on processing crude from oil sands and shales.
Sarnia-Lambton $10-billion oil sands bitumen upgrading project
Since September 2013, WCS has been processed at Imperial Oil's Sarnia, Ontario, refinery and at ExxonMobil Corporation's (XOM) Joliet, Illinois, and Baton Rouge, Louisiana, plants.
By April 2013, Imperial Oil's Sarnia, Ontario refinery was the only plugged-in coking facility in eastern Canada that could process raw bitumen.
In July 2014 the Canadian Academy of Engineering identified the Sarnia-Lambton $10-billion oil sands bitumen upgrading project to produce refinery ready crudes, as a high priority national scale project.
Co-op Refinery Complex
Lloydminster heavy oil, a component in the Western Canadian Select (WCS) heavy oil blend, is processed at the CCRL Refinery Complex heavy oil upgrader, which had a fire in the coker of the upgrader section of the plant on February 11, 2013. It was the third major incident at the Regina plant in 16 months. The price of Western Canadian Select weakened against the U.S. benchmark West Texas Intermediate (WTI) oil.
Pine Bend Refinery
The Pine Bend Refinery, the largest oil refinery in Minnesota, located in the Twin Cities area, receives 80% of its incoming heavy crude from the Athabasca oil sands. The crude oil is piped to the facility from the northwest through the Lakehead and Minnesota pipelines, which are also owned by Koch Industries. Most petroleum enters and exits the plant through a Koch-owned, 537-mile pipeline system that stretches across Minnesota and Wisconsin. The U.S. Energy Information Administration (EIA) ranked it 14th in the country by production as of 2013, by which time its nameplate capacity had increased.
Repsol
Repsol responded to the January 2009 enforcement of the European Union's reduction of sulphur content in automotive petrol and diesel from 50 to 10 parts per million with heavy investment in upgrading its refineries. It upgraded three of its five refineries in Spain (the five being Cartagena, A Coruña, Bilbao, Puertollano and Tarragona) with cokers that have the capacity to refine Western Canadian Select heavy oil. Many other European refineries closed as margins decreased. Repsol tested the first batches of WCS at its Spanish refineries in May 2014.
Cartagena refinery
In 2012 Repsol completed its €3.15-billion upgrade and expansion of its Cartagena refinery in Murcia, Spain, which included a new coking unit capable of refining heavy crude like WCS.
Petronor
Repsol's upgrades, completed in 2013, included a new coker unit and a highly efficient cogeneration unit at its Petronor refinery at Muskiz near Bilbao; they cost over 1 billion euros and represent "the largest industrial investment in the history of the Basque Country". The new coker unit will produce "higher-demand products such as propane, butane, gasoline and diesel" and "eliminate the production of fuel oil". The cogeneration unit will reduce CO2 emissions and help achieve Spain's Kyoto Protocol targets. The refinery is self-sufficient in electricity and capable of distributing power to the grid.
Blenders: ANS, WCS, Bakken Oil
In their 2013 article published in Oil & Gas Journal, John Auers and John Mayes suggest that "recent pricing disconnects have created opportunities for astute crude oil blenders and refiners to create their own substitutes for waterborne grades (like Alaska North Slope (ANS)) at highly discounted prices. A 'pseudo' Alaskan North Slope substitute, for example, could be created with a blend of 55% Bakken and 45% Western Canadian Select at a cost potentially far less than the ANS market price." They argue that there are financial opportunities for refineries capable of blending, delivering, and refining "stranded" cheaper crude blends, like Western Canadian Select (WCS). In contrast to the light, sweet oil produced "from emerging shale plays in North Dakota (Bakken) and Texas (Eagle Ford) as well as a resurgence of drilling in older, existing fields, such as the Permian basin", the oil sands crude of Alberta is "overwhelmingly heavy".
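As a rough check on the 55/45 "pseudo-ANS" recipe, crude blends can be approximated by mixing specific gravities (not API values) by volume. A minimal sketch in Python, using illustrative API gravities for Bakken (~42) and ANS (~32), which are assumptions, together with the WCS API of 20.6 quoted elsewhere in this article:

```python
# API gravity does not blend linearly, but specific gravity (SG) does to a
# good approximation, so blend API is computed via SG.
def api_to_sg(api: float) -> float:
    return 141.5 / (api + 131.5)

def blend_api(fractions_apis):
    """fractions_apis: iterable of (volume_fraction, api) pairs."""
    sg = sum(f * api_to_sg(api) for f, api in fractions_apis)
    return 141.5 / sg - 131.5

# "Pseudo-ANS": 55% Bakken (API ~42, assumed) + 45% WCS (API 20.6)
print(round(blend_api([(0.55, 42.0), (0.45, 20.6)]), 1))  # ~31.7, near ANS (~32)
```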
Impact of Bakken tight oil on WCS
The CIBC reported that the oil industry continued to produce massive amounts of oil in spite of a stagnant crude oil market. Oil production from the Bakken formation alone was forecast in 2012 to grow by 600,000 barrels every year through 2016. By 2012, Canadian tight oil and oil sands production was also surging.
By the end of 2014, as growth in global oil consumption continued to decline, the remarkably rapid growth of "light, tight" oil production in the North Dakota Bakken and in the Permian and Eagle Ford basins in Texas, while rejuvenating economic growth in U.S. refining, petrochemical and associated transportation industries (rail and pipelines), also "destabilized international oil markets".
Since 2000, the wider use of oil extraction technologies such as hydraulic fracturing and horizontal drilling has caused a production boom in the Bakken formation, which lies beneath the northwestern part of North Dakota. By the end of 2010, oil production had outstripped the pipeline capacity to ship oil out of the Bakken, and this oil competes with WCS for access to transportation by pipeline and rail. Bakken production has also increased in Canada, although to a lesser degree than in the US, since the 2004 discovery of the Viewfield Oil Field in Saskatchewan.
Royalties
Royalty rates in Alberta are based on the price of WTI. That royalty rate is applied to a project's net revenue if the project has reached payout or gross revenue if the project has not yet reached payout. A project's revenue is a direct function of the price it is able to sell its crude for. Since WCS is a benchmark for oil sands crudes, revenues in the oil sands are discounted when the price of WCS is discounted. Those price discounts flow through to the royalty payments.
The Province of Alberta receives a portion of benefits from the development of energy resources in the form of royalties that fund in part programs like health, education and infrastructure.
In 2006–07, the oil sands royalty revenue was $2.411 billion. In 2007–08, it rose to $2.913 billion and it continued to rise in 2008–09 to $2.973 billion. Following the revised Alberta Royalty Regime, it fell in 2009–10 to $1.008 billion. In that year, Alberta's total resource revenue "fell below $7 billion...when the world economy was in the grip of recession".
In February 2012, the Province of Alberta "expected $13.4 billion in revenue from non-renewable resources in 2013-14". By January 2013, the province was anticipating only $7.4 billion. About 30 percent of Alberta's approximately $40-billion budget is funded through oil and gas revenues; bitumen royalties represent about half of that total.
In order to accelerate the development of the oil sands, the federal and provincial governments more closely aligned taxation of the oil sands with that of other surface mining, "charging one percent of a project's gross revenues until the project's investment costs are paid in full, at which point rates increased to 25 percent of net revenue". These policy changes and higher oil prices after 2003 had the desired effect of accelerating the development of the oil sands industry. A revised Alberta Royalty Regime was implemented on January 1, 2009, through which each oil sands project pays a gross revenue royalty rate starting at 1% (Oil and Gas Fiscal Regimes 2011:30). Oil and Gas Fiscal Regimes 2011 summarizes the petroleum fiscal regimes for the western provinces and territories, and describes how royalty payments were calculated:
The pre-payout gross revenue royalty rate is 1% when the price of oil is at or below $55/bbl, indexed to the Canadian dollar price of West Texas Intermediate (WTI), and rises linearly to a maximum of 9% when the indexed WTI price reaches $120/bbl or more. After payout, the royalty is the greater of the gross revenue royalty and a net revenue royalty that rises over the same $55–$120 price range from 25% to a maximum of 40% (Oil and Gas Fiscal Regimes 2011:30).
Payout refers to "the first time when the developer has recovered all the allowed costs of the project, including a return allowance on those costs equal to the Government of Canada long-term bond rate ['LTBR']".
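A minimal sketch of this sliding scale in Python (an interpretation of the formula described above, not an official calculation):

```python
# Linear interpolation of the royalty rate between the C$55 and C$120
# WTI price points described above.
def sliding_rate(wti_cad, low_rate, high_rate, low_price=55.0, high_price=120.0):
    if wti_cad <= low_price:
        return low_rate
    if wti_cad >= high_price:
        return high_rate
    frac = (wti_cad - low_price) / (high_price - low_price)
    return low_rate + frac * (high_rate - low_rate)

def pre_payout_gross_rate(wti_cad):
    return sliding_rate(wti_cad, 0.01, 0.09)

def post_payout_net_rate(wti_cad):
    return sliding_rate(wti_cad, 0.25, 0.40)

print(pre_payout_gross_rate(55))    # 0.01
print(pre_payout_gross_rate(120))   # 0.09
print(post_payout_net_rate(87.5))   # 0.325 (midpoint of the range)
```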
In order to encourage growth and prosperity, and due to the extremely high cost of exploration, research and development, oil sands and mining operations may pay no federal or provincial corporate taxes or government royalties (beyond the minimum gross royalty and personal income taxes) for many years, as companies often remain in a loss position for tax and royalty purposes. Defining a loss position becomes increasingly complex when vertically integrated multinational energy companies are involved. Suncor claims its realized losses were legitimate and that the Canada Revenue Agency (CRA) is unfairly claiming "$1.2-billion" in taxes, which it says is jeopardizing its operations.
From 2009 to 2015, oil sands royalties were the largest contributor to the province's royalty revenues and contributed about 10% of all Alberta revenues. In 2014–2015, oil sands revenue was over $5 billion and represented over 10% of Alberta's $48.5-billion operational expenses. As of December 2015, the only sources of revenue that contributed more were personal income tax at 23%, federal transfers at 13%, and corporate income tax at 11%.
In 2023, 1.8 billion barrels of oil were extracted from the Alberta oil sands.
Oil Sands Royalty Rates
"Bitumen Valuation Methodology (BVM) is a method to determine for royalty purposes a value for bitumen produced in oil sands projects and either upgraded on-site or sold or transferred to affiliates. The BVM ensures that Alberta receives market value for its bitumen production, taken in cash or bitumen royalty-in-kind, through the royalty formula. Western Canadian Select (WCS), a grade or blend of Alberta bitumens, diluents (a product such as naphtha or condensate which is added to increase the ability of the oil to flow through a pipeline) and conventional heavy oils, developed by Alberta producers and stored and valued at Hardisty, AB was determined to be the best reference crude price in the development of a BVM."
Bitumen Bubble
In January 2013, the then-Premier of Alberta, Alison Redford, used the term "bitumen bubble" to explain how a dramatic and unanticipated drop in taxes and revenue from the oil sands, linked to the deep discount of Western Canadian Select against WTI and Maya crude oil, would result in deep cuts in the 2013 provincial budget. In 2012 oil prices rose and fell all year. Premier Redford described the "bitumen bubble" as the differential or "spread between the different prices and the lower price for Alberta's Western Canadian Select (WCS)". In 2013 alone, the "bitumen bubble" effect resulted in a loss of about six billion dollars in provincial revenue.
See also
Petroleum classification
List of crude oil products
Canadian Centre for Energy Information
History of the petroleum industry in Canada (oil sands and heavy oil)
Syncrude
Suncor
CNRL
Notes
Citations
References
This summarizes the petroleum fiscal regimes for the western provinces and territories.
External links
Benchmark crude oils
Oil and gas markets
Proposed energy projects
Bituminous sands
Petroleum geology
Petroleum industry
Unconventional oil
Economy of Canada
Proposed energy infrastructure in Canada | Western Canadian Select | [
"Chemistry"
] | 13,821 | [
"Bituminous sands",
"Unconventional oil",
"Petroleum industry",
"Petroleum",
"Asphalt",
"Chemical process engineering",
"Petroleum geology"
] |
39,659,607 | https://en.wikipedia.org/wiki/Angular%20momentum%20diagrams%20%28quantum%20mechanics%29 | In quantum mechanics and its applications to quantum many-particle systems, notably quantum chemistry, angular momentum diagrams, or more accurately from a mathematical viewpoint angular momentum graphs, are a diagrammatic method for representing angular momentum quantum states of a quantum system allowing calculations to be done symbolically. More specifically, the arrows encode angular momentum states in bra–ket notation and include the abstract nature of the state, such as tensor products and transformation rules.
The notation parallels the idea of Penrose graphical notation and Feynman diagrams. The diagrams consist of arrows and vertices with quantum numbers as labels, hence the alternative term "graphs". The sense of each arrow is related to Hermitian conjugation, which roughly corresponds to time reversal of the angular momentum states (c.f. Schrödinger equation). The diagrammatic notation is a considerably large topic in its own right with a number of specialized features – this article introduces the very basics.
They were developed primarily by Adolfas Jucys (sometimes translated as Yutsis) in the twentieth century.
Equivalence between Dirac notation and Jucys diagrams
Angular momentum states
The quantum state vector of a single particle with total angular momentum quantum number j and total magnetic quantum number m = j, j − 1, ..., −j + 1, −j, is denoted as a ket |j, m⟩. As a diagram, this is a single-headed arrow.
Symmetrically, the corresponding bra is ⟨j, m|. In diagram form this is a double-headed arrow, pointing in the opposite direction to the ket.
In each case:
the quantum numbers j, m are often labelled next to the arrows to refer to a specific angular momentum state,
arrowheads are almost always placed at the middle of the line, rather than at the tip,
equals signs "=" are placed between equivalent diagrams, exactly like for multiple algebraic expressions equal to each other.
The most basic diagrams are for kets and bras:
Arrows are directed to or from vertices, a state transforming according to:
a standard representation is designated by an oriented line leaving a vertex,
a contrastandard representation is depicted as a line entering a vertex.
As a general rule, the arrows follow each other in the same sense. In the contrastandard representation, the time reversal operator, denoted here by T, is used. It is unitary, which means the Hermitian conjugate T† equals the inverse operator T−1, that is T† = T−1. Its action on the position operator leaves it invariant: T x T−1 = x,
but the linear momentum operator becomes negative: T p T−1 = −p,
and the spin operator becomes negative: T S T−1 = −S.
Since the orbital angular momentum operator is L = x × p, this must also become negative: T L T−1 = −L,
and therefore the total angular momentum operator J = L + S becomes negative: T J T−1 = −J.
Acting on an eigenstate of angular momentum |j, m⟩, it can be shown that: T|j, m⟩ = (−1)^(j − m) |j, −m⟩.
The time-reversed diagrams for kets and bras are:
It is important to position the vertex correctly, as forward-time and reversed-time operators would become mixed up.
Inner product
The inner product of two states |j1, m1⟩ and |j2, m2⟩ is: ⟨j1, m1|j2, m2⟩ = δ(j1, j2) δ(m1, m2),
and the diagrams are:
For summations over the inner product, also known in this context as a contraction (c.f. tensor contraction): Σ_m ⟨j, m|j, m⟩ = 2j + 1,
it is conventional to denote the result as a closed circle labelled only by j, not m:
Outer products
The outer product of two states |j1, m1⟩ and |j2, m2⟩ is an operator: |j1, m1⟩⟨j2, m2|,
and the diagrams are:
For summations over the outer product, also known in this context as a contraction (c.f. tensor contraction): Σ_m |j, m⟩⟨j, m| = Σ_m T|j, m⟩⟨j, m|T−1 (the identity on the spin-j subspace),
where the result for T was used (the phases (−1)^(j − m) from each factor multiply to (−1)^(2(j − m)) = 1), together with the fact that m takes the set of values given above. There is no difference between the forward-time and reversed-time states for the outer product contraction, so here they share the same diagram, represented as one line without direction, again labelled by j only and not m:
Tensor products
The tensor product ⊗ of n states |j1, m1⟩, |j2, m2⟩, ..., |jn, mn⟩ is written |j1, m1⟩ ⊗ |j2, m2⟩ ⊗ ⋯ ⊗ |jn, mn⟩,
and in diagram form, each separate state leaves or enters a common vertex creating a "fan" of arrows - n lines attached to a single vertex.
Vertices in tensor products have signs (sometimes called "node signs"), to indicate the ordering of the tensor-multiplied states:
a minus sign (−) indicates the ordering is clockwise, with the states |j1, m1⟩, |j2, m2⟩, ..., |jn, mn⟩ read in clockwise order around the vertex, and
a plus sign (+) indicates the states are read in anticlockwise order.
Signs are of course not required for just one state, diagrammatically one arrow at a vertex. Sometimes curved arrows with the signs are included to show explicitly the sense of tensor multiplication, but usually just the sign is shown with the arrows left out.
For the inner product of two tensor product states: (⟨j′1, m′1| ⊗ ⋯ ⊗ ⟨j′n, m′n|)(|j1, m1⟩ ⊗ ⋯ ⊗ |jn, mn⟩) = ⟨j′1, m′1|j1, m1⟩ ⋯ ⟨j′n, m′n|jn, mn⟩,
there are n lots of inner product arrows:
Examples and applications
The diagrams are well-suited for Clebsch–Gordan coefficients (see the sketch after this list).
Calculations with real quantum systems, such as multielectron atoms and molecular systems.
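A minimal sketch in Python (using SymPy's Clebsch–Gordan support, which is not part of the diagrammatic method itself) of evaluating a coefficient ⟨j1 m1; j2 m2|j3 m3⟩ that these diagrams encode graphically:

```python
# Evaluate a Clebsch-Gordan coefficient symbolically with SymPy.
from sympy import S
from sympy.physics.quantum.cg import CG

# Coupling two spin-1/2 states into the singlet (j3 = 0):
cg = CG(S(1)/2, S(1)/2, S(1)/2, -S(1)/2, 0, 0)
print(cg.doit())  # sqrt(2)/2, i.e. 1/sqrt(2)
```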
See also
Ladder operator
Fock space
Feynman diagrams
References
Wormer and Paldus (2006) provides an in-depth tutorial in angular momentum diagrams.
Further reading
Notes
Angular momentum
Quantum mechanics | Angular momentum diagrams (quantum mechanics) | [
"Physics",
"Mathematics"
] | 1,022 | [
"Physical quantities",
"Quantity",
"Theoretical physics",
"Quantum mechanics",
"Angular momentum",
"Momentum",
"Moment (physics)"
] |
39,660,108 | https://en.wikipedia.org/wiki/Mass%20concrete | Mass concrete is defined by American Concrete Institute Committee 207 as "any volume of concrete with dimensions large enough to require that measures be taken to cope with the generation of heat from the hydration of cement and attendant volume change to minimize cracking."
As the interior temperature of mass concrete rises due to the process of cement hydration, the outer concrete may be cooling and contracting. If the temperature differs too much within the structure, the material can crack.
The main factors influencing temperature variation in a mass concrete structure are: the size of the structure, the ambient temperature, the initial temperature of the concrete at the time of placement, the curing program, the cement type, and the cement content of the mix.
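The scale of the internal heating can be estimated to first order by dividing the heat released by cement hydration by the heat capacity of the concrete. A minimal sketch in Python, with illustrative property values (assumptions, not an ACI 207 procedure):

```python
# Rough adiabatic temperature rise: hydration heat / heat capacity of the mass.
def adiabatic_rise(cement_kg_per_m3, hydration_heat_kj_per_kg,
                   density_kg_per_m3=2400.0, specific_heat_kj_per_kg_k=1.0):
    return (cement_kg_per_m3 * hydration_heat_kj_per_kg) / (
        density_kg_per_m3 * specific_heat_kj_per_kg_k)

# e.g. 350 kg/m3 of cement releasing ~400 kJ/kg (assumed values):
print(round(adiabatic_rise(350, 400), 1))  # ~58.3 K temperature rise
```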
Mass concrete structures include massive mat foundations, dams, and other concrete structures with a width or depth exceeding three feet or one meter.
History
Historically, in Britain, mass concrete designated early concrete with no reinforcement cast in situ using shuttering. It was used mainly between 1850 and 1900 on a variety of buildings, mainly as a walling material or where mass was required for gravity such as in dams, reservoirs, retaining walls and maritime structures. In those days, the term was not officially defined and did not contain any connotation to large dimensions generating heat from hydration of cement, as these occurrences were not yet understood.
References
Concrete
Building materials | Mass concrete | [
"Physics",
"Engineering"
] | 270 | [
"Structural engineering",
"Building engineering",
"Construction",
"Materials",
"Building materials",
"Concrete",
"Matter",
"Architecture"
] |
39,664,301 | https://en.wikipedia.org/wiki/Ferrate | Ferrate loosely refers to a material that can be viewed as containing anionic iron complexes. Examples include tetrachloroferrate ([FeCl4]2−), oxyanions (), tetracarbonylferrate ([Fe(CO)4]2−), the organoferrates. The term ferrate derives . Some ferrates are called super-iron by some and have uses in battery applications and as an oxidizer. It can be used to clean water safely from a wide range of pollutants, including viruses, microbes, arsenic, sulfur-containing compounds, cyanides and other nitrogen-containing contaminants, many organic compounds, and algae.
References
Iron compounds
Anions
Ferrates | Ferrate | [
"Physics",
"Chemistry"
] | 160 | [
"Matter",
"Anions",
"Salts",
"Ferrates",
"Ions"
] |
39,665,579 | https://en.wikipedia.org/wiki/Lanthanum%20trifluoride | Lanthanum trifluoride is a refractory ionic compound of lanthanum and fluorine. The chemical formula is .
The LaF3 structure
Bonding is ionic with lanthanum highly coordinated. The cation sits at the center of a trigonal prism. Nine fluorine atoms are close: three at the bottom corners of the trigonal prism, three in the faces of the trigonal prism, and three at top corners of the trigonal prism. There are also two fluorides a little further away above and below the prism. The cation can be considered 9-coordinate or 11-coordinate. At 300 K, the structure allows the formation of Schottky defects with an activation energy of 0.07 eV, and free flow of fluoride ions with an activation energy of 0.45 eV, making the crystal unusually electrically conductive.
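The temperature sensitivity implied by these activation energies can be illustrated with the standard Boltzmann factor exp(−Ea/kBT). A minimal sketch in Python (a textbook estimate, not from the source):

```python
# Thermally activated ion hopping scales as exp(-Ea / (kB * T)).
import math

KB_EV = 8.617e-5  # Boltzmann constant in eV/K

def boltzmann_factor(ea_ev: float, temperature_k: float) -> float:
    return math.exp(-ea_ev / (KB_EV * temperature_k))

# Fluoride-ion flow with Ea = 0.45 eV, comparing 300 K to 400 K:
ratio = boltzmann_factor(0.45, 400.0) / boltzmann_factor(0.45, 300.0)
print(f"hop-rate increase from 300 K to 400 K: ~{ratio:.0f}x")
```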
The larger rare earth elements (lanthanides), those with smaller atomic numbers, also form trifluorides with the LaF3 structure. Some actinides do as well.
Applications
This white salt is sometimes used as the "high-index" component in multilayer optical elements such as ultraviolet dichroic and narrowband mirrors. Fluorides are among the most commonly used compounds for UV optical coatings due to their relative inertness and transparency in the far ultraviolet (FUV). Multilayer reflectors and antireflection coatings are typically composed of pairs of transparent materials, one with a low index of refraction, the other with a high index. LaF3 is one of very few high-index materials in the far UV. The material is also a component of multimetal fluoride glasses such as ZBLAN. It is also doped with europium(II) fluoride for use in fluoride selective electrodes.
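Why the high index matters can be seen from the textbook quarter-wave stack formula: for N high/low index pairs on a substrate at normal incidence, the admittance seen by the incident medium is Y = (nH/nL)^(2N) ns, and reflectance grows with the index contrast nH/nL. A minimal sketch in Python, with illustrative refractive indices (all index values are assumptions):

```python
# Reflectance of an ideal quarter-wave stack of N high/low index pairs.
def quarter_wave_reflectance(n_high, n_low, n_substrate, pairs, n_incident=1.0):
    y = (n_high / n_low) ** (2 * pairs) * n_substrate
    return ((n_incident - y) / (n_incident + y)) ** 2

# e.g. LaF3 (~1.6, assumed) over MgF2 (~1.38, assumed) on fused silica (~1.5):
for n in (4, 8, 12):
    print(n, round(quarter_wave_reflectance(1.6, 1.38, 1.5, n), 3))
```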
Natural occurrence
LaF3 occurs in nature as the extremely rare mineral fluocerite-(La). The suffix in the name is known as the Levinson modifier and, by indicating the dominant element at a particular site in the structure, is used to differentiate it from similar minerals (here: fluocerite-(Ce)).
References
Lanthanum compounds
Fluorides
Lanthanide halides
Crystal structure types | Lanthanum trifluoride | [
"Chemistry",
"Materials_science"
] | 468 | [
"Crystallography",
"Fluorides",
"Crystal structure types",
"Salts"
] |
39,668,177 | https://en.wikipedia.org/wiki/Combinatorial%20Chemistry%20%26%20High%20Throughput%20Screening | Combinatorial Chemistry & High Throughput Screening is a peer-reviewed scientific journal that covers combinatorial chemistry. It was established in 1998 and is published by Bentham Science Publishers. The editor-in-chief is Gerald H. Lushington (LiS Consulting, Lawrence, KS, USA). The journal has 5 sections: Combinatorial/ Medicinal Chemistry, Chemo/Bio Informatics, High Throughput Screening, Pharmacognosy, and Laboratory Automation.
Abstracting and indexing
The journal is abstracted and indexed in bibliographic databases including the Science Citation Index Expanded.
According to the Journal Citation Reports, the journal has a 2014 impact factor of 1.222, ranking it 40th out of 70 journals in the category "Chemistry, Applied".
References
External links
Biochemistry journals
Bentham Science Publishers academic journals
Academic journals established in 1998
English-language journals
Combinatorial chemistry | Combinatorial Chemistry & High Throughput Screening | [
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 171 | [
"Combinatorial chemistry",
"Biochemistry journals",
"Biochemistry journal stubs",
"Materials science",
"Combinatorics",
"Biochemistry stubs",
"Biochemistry literature"
] |
47,025,674 | https://en.wikipedia.org/wiki/Hollow%20fiber%20membrane | Hollow fiber membranes (HFMs) are a class of artificial membranes containing a semi-permeable barrier in the form of a hollow fiber. Originally developed in the 1960s for reverse osmosis applications, hollow fiber membranes have since become prevalent in water treatment, desalination, cell culture, medicine, and tissue engineering. Most commercial hollow fiber membranes are packed into cartridges which can be used for a variety of liquid and gaseous separations.
Manufacturing
HFMs are commonly produced using artificial polymers. The specific production methods involved are heavily dependent on the type of polymer used as well as its molecular weight. HFM production, commonly referred to as "spinning", can be divided into four general types:
Melt Spinning, in which a thermoplastic polymer is melted and extruded through a spinneret into air and subsequently cooled.
Dry Spinning, in which a polymer is dissolved in an appropriate solvent and extruded through a spinneret into air.
Dry-Jet Wet Spinning, in which a polymer is dissolved in an appropriate solvent and extruded into air and a subsequent coagulant (usually water).
Wet spinning, in which a polymer is dissolved and extruded directly into a coagulant (usually water).
Common to each of these methods is the use of a spinneret, a device containing a needle through which solvent is extruded and an annulus through which a polymer solution is extruded. As the polymer is extruded through the annulus of the spinneret, it retains a hollow cylindrical shape. As the polymer exits the spinneret, it solidifies into a membrane through a process known as phase inversion. The properties of the membrane -such as average pore diameter and membrane thickness- can be finely tuned by changing the dimensions of the spinneret, temperature and composition of "dope" (polymer) and "bore" (solvent) solutions, length of air gap (for dry-jet wet spinning), temperature and composition of the coagulant, as well as the speed at which produced fiber is collected by a motorized spool. Extrusion of the polymer and solvent through the spinneret can be accomplished either through the use of gas-extrusion or a metered pump. Some of the polymers most commonly used for fabricating HFMs include cellulose acetate, polysulfone, polyethersulfone, and polyvinylidene fluoride.
After fibers are created, they are typically assembled together in a membrane module, with many fibers in parallel. The fibers are fixed together in a resin or epoxy at both ends, which may be cut clean through to more readily expose their entrances and exits. Typically, the bundle is placed inside a cylinder, which has inlets and outlets on opposite ends for the bore (lumen) side, and side ports allowing flow over the membranes on the shell side. Typically, the higher-pressure feed is on the bore side, to avoid fiber collapse.
Characterization
The properties of HFMs can be characterized using the same techniques commonly used for other types of membranes. The primary properties of interest for HFMs are average pore diameter and pore distribution, measurable via a technique known as porosimetry, a feature of several laboratory instruments used for measuring pore size. Pore diameter can also be measured via a technique known as evapoporometry, in which evaporation of 2-propanol through the pores of a membrane is related to pore size via the Kelvin equation. Depending on the diameters of pores in an HFM, scanning electron microscopy or transmission electron microscopy can be used to yield a qualitative perspective of pore size.
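A minimal sketch in Python of the Kelvin-equation relation used in evapoporometry, with assumed 2-propanol property values (the surface tension and molar volume figures are illustrative assumptions):

```python
# For a cylindrical pore with a fully wetting liquid, the Kelvin equation gives
# ln(p/p0) = -4*gamma*Vm / (d*R*T), so the pore diameter d follows from the
# measured relative vapor pressure p/p0 during evaporation.
import math

R = 8.314  # gas constant, J/(mol K)

def kelvin_pore_diameter(rel_pressure, gamma=0.021, molar_volume=76.5e-6, T=298.0):
    """Pore diameter in metres, using 2-propanol-like properties (assumed)."""
    return 4 * gamma * molar_volume / (R * T * math.log(1.0 / rel_pressure))

print(kelvin_pore_diameter(0.90) * 1e9)  # ~25 nm
```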
Applications
Hollow fiber membranes are ubiquitously used in industrial separations, especially the filtration of drinking water.
Industrial water filters are mainly equipped with ultrafiltration hollow fiber membranes, while domestic water filtration systems use microfiltration hollow fiber membranes. In microfiltration, a membrane pore diameter of 0.1 micrometers cuts off microorganisms like germs and bacteria, Giardia cysts and other intestinal parasites, and also removes sediments. Ultrafiltration membranes are capable of removing not only bacteria, but also viruses.
Hollow fibers are commonly used substrates for specialized bioreactor systems, with the ability of some hollow fiber cartridges to culture billions of anchorage-dependent cells within a relatively low (<100 mL) bioreactor volume.
Hollow fibers can be used for drug efficacy testing in cancer research, as an alternative to the traditional, but more expensive, xenograft model.
Hollow fiber membranes are used in Membrane oxygenators in extracorporeal membrane oxygenation which oxygenates blood, replacing lungs in critically ill patients.
See also
Membrane
List of synthetic polymers
Reverse osmosis
Nanofiltration
Ultrafiltration
Microfiltration
References
Polymer chemistry
Membrane technology | Hollow fiber membrane | [
"Chemistry",
"Materials_science",
"Engineering"
] | 995 | [
"Membrane technology",
"Materials science",
"Polymer chemistry",
"Separation processes"
] |
47,027,045 | https://en.wikipedia.org/wiki/Guest%20Host%20Displays | Guest Host Displays, Dichroic Displays, Polymer Dispersed Displays
Guest host displays are similar to more common liquid crystal displays, but also include polymers, inorganic particles, or dichroic dye within the liquid crystal matrix.
In dichroic dye displays, as the birefringence of the host liquid crystals change from planar to perpendicular orientation, the guest dyes also change orientation, from absorbing / planar orientation, to non-absorbing / perpendicular orientation.
Unlike common TN (Twisted Nematic) or STN (Super Twisted Nematic) liquid crystal displays, guest host displays are typically driven directly, and are not usually multiplex driven.
In addition, guest host displays usually require higher operating voltages than TN or STN displays. For example, the polymer dispersed liquid crystal display (also called a P.D.L.C. display) is usually operated at voltages from 4.5 V to 24 V, and as high as 100 V. Similarly, dichroic dye containing guest host displays require voltages from 4.5 V to 10 V and higher.
However, the P.D.L.C. display and many dichroic dye containing guest host displays, such as the White-Taylor Phase Change display, do not require polarizers, which is a significant advantage over TN or STN displays. Lacking polarizers, these displays commonly have lower contrast than TN or STN displays, but are often sunlight readable, and usually have no backlight, and hence no backlight glare.
Polarizer free displays enable low cost devices, since the polarizer is one of the more expensive components comprising the common liquid crystal display.
Lacking polarizers, the guest host display substrates can be manufactured from low cost birefringent plastic films, and the plastic film substrates enable additional economies such as continuous R2R manufacturing (Roll to Roll manufacturing) of the displays, with its inherent economies over batch manufacturing processes.
Continuous manufacturing of displays is described in U.S. Patents 4,228,574, 4,924,243, 4,094,058, and patents pending.
In some cases, the R2R manufacturing of the guest host displays can be integrated with other roll to roll manufacturing processes. For example, automated pick and place machines, such as a rotary circuit board placement machine from M.G.S., or a VonWeise linear actuator with bulk tube feeders from M.M.T.F., U.I.C.T.F., or T.F., can automate the placement of driver circuit boards and other components.
Advances in A.C.A. and A.C.F. conductive adhesives further enable the automated assembly of displays.
Recent advances in transparent conductive polythiophene coated substrates make display electrodes which resist cracking and breaking, unlike common oxide based transparent conductors.
Advances in Nanoimprint Lithography (N.I.L.) enable precise micro scale and nano scale embossing of display spacers, gaskets, and edge seals R2R. Processes similar to N.I.L. are described in U.S. Patents 5,544,582, 5,365,356, 5,268,782, 5,539,545, 4,720,173, 5,559,621, and patents pending.
Dr. Ernest Lueder teaches that "...(SiOx and Ormocer coated plastic films have O2 and H2O permeations) sufficiently low for maintaining a proper operation of the most sensitive FLCD cells."
Flexible substrates also enable greater design flexibility for the product designer, allowing flexible, conformal, die cut displays which complement the overall product design.
R2R manufacturing of displays is promoted by the non-profit FlexTech Alliance and the non-profit Organic Electronics Association, and R2R intelligent packaging is promoted by the A.I.P.I.A. Academic research and development is currently being done at the Glenn H. Brown Liquid Crystal Institute at Kent State University, at the Arizona State University Flexible Display Center, at the University of Central Florida's CREOL College of Optics and Photonics, at the VTT Technical Research Centre, at Tohoku University, at the Liquid Crystal Group of the University of Hamburg, and at the University of Stuttgart Institute for Large Area Microelectronics.
Guest host displays consume electrical current much more slowly than l.e.d.s (light emitting diodes), giving battery operated guest host displays operating life spans of several months, versus the short life spans of battery operated l.e.d.s.
P.D.L.C. displays are commonly used as privacy glass in homes, offices, and vehicles. Dichroic displays had been extensively researched as robust avionics for aircraft. Both P.D.L.C. Displays and dichroic displays can function as colorful animated skins for consumer products such as mylar balloons and greeting cards.
Guest host displays commonly comprise liquid crystals, polymer or inorganic additives, twist agent, and optionally, dichroic dyes. Liquid crystals are distributed by Merck (DE), Yangcheng Smiling (CN), and Phentex Corporation (US, CN). Dichroic dyes are distributed by Yamamoto Chemicals. Displays are manufactured by Polytronix (US, TW, CN), DreamGlass Group (ES), Shenzhen Santech (CN), P.P.I. (US), Vitswell (CN), Transicoil (US), and by many others.
Further reading:
Printing Processes for the Vacuum Free Manufacture of Liquid Crystal Cells with Plastic Substrates M. Randler, E. Lueder, V. Frey, J. Brill, M. Muecke, University of Stuttgart, Labor fuer Bildschirmtechnik, published by the Society for Information Display, Digest of Technical Proceedings.
Liquid Crystal Dispersions
Liquid Crystals Series, V0L 1
Volume 1 of Series on Advances in Mathematics for Applied Sciences
Series on liquid crystals
Editor Paul S. Drzaic
Publisher World Scientific, 1995
ISBN 9789810217457
Liquid Crystal Displays: Addressing Schemes and Electro-Optical Effects, by Ernst Lueder, Wiley, 2010, Chapter 21 and Chapter 22 (Printing of Layers for LC Cells); also at Google Books.
Flexible Flat Panel Displays, edited by Gregory Crawford, Wiley, 2005 and Google Books. See the chapter Barrier Layer Technology for Flexible Displays and the chapter Roll-to-Roll Manufacturing of Flexible Displays
Liquid Crystals: Applications and Uses, Volumes 1-3, edited by Birenda Bahadur, World Scientific, 1992. Chapter 11 Dichroic Liquid Crystal Displays and Google Books.
Reflective Liquid Crystal Displays, by Shin-Tson Wu, Deng-Ke Yang, Wiley, 2001, Chapter 6 and Google Books.
Liquid Crystals In Complex Geometries: Formed by Polymer And Porous Networks, edited by G P Crawford, S Zumer, CRC Press, 1996.
Conducting polymer substrates for plastic liquid crystal displays
References
Liquid crystals
Liquid crystal displays
Display technology
Flexible electronics
Articles containing video clips | Guest Host Displays | [
"Engineering"
] | 1,489 | [
"Electronic engineering",
"Flexible electronics",
"Display technology"
] |
47,031,941 | https://en.wikipedia.org/wiki/Chia-Kun%20Chu | Chia-Kun (John) Chu (; August 14, 1927 – January 2, 2023) was a Chinese-American applied mathematician who was the Fu Foundation Professor Emeritus of Applied Mathematics at Columbia University. He had been on Columbia faculty since 1965 and served as the department chairman of applied physics and nuclear engineering three times (1982–1983, 1985–1988, 1995–1997).
Biography
Chu received a bachelor's in mechanical engineering from Chiao-Tung University in 1948, a master's from Cornell University in 1950, and a Ph.D. from Courant Institute, New York University in 1959.
Chu was an internationally recognized applied mathematician and one of the pioneers of computational mathematics in fluid dynamics, magnetohydrodynamics, and shock waves. He developed approximations to the differential equations of fluid dynamics and coined the term "computational fluid dynamics".
Chu received numerous honors. He was a recipient of a Guggenheim Fellowship and was elected a fellow of the American Physical Society and of the Japan Society for the Promotion of Science. He was awarded an honorary Doctor of Science degree from Columbia University in 2006.
Chia-Kun Chu was the son of bank chairman Ju Tang Chu. Chia-Kun Chu was also the brother-in-law of Z.Y. Fu, a Columbia donor who gave his name for the Fu Foundation School of Engineering and Applied Science.
Chia-Kun Chu died on January 2, 2023, at the age of 95.
See also
Chinese people in New York City
References
Notes
1927 births
2023 deaths
21st-century American physicists
20th-century American mathematicians
National Chiao Tung University (Shanghai) alumni
Cornell University College of Engineering alumni
Courant Institute of Mathematical Sciences alumni
Fluid dynamicists
American academics of Chinese descent
Fellows of the American Physical Society
Columbia University faculty
Scientists from Shanghai | Chia-Kun Chu | [
"Chemistry"
] | 365 | [
"Fluid dynamicists",
"Fluid dynamics"
] |
47,032,561 | https://en.wikipedia.org/wiki/Castellated%20beam | A castellated beam is a beam style where an I-beam is subjected to a longitudinal cut along its web following a specific pattern.
The purpose is to divide and reassemble the beam with a deeper web by taking advantage of the cutting pattern.
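A minimal sketch in Python of why the deeper section helps, using an idealized flange-only bending model (an illustration, not a design check):

```python
# Idealized I-section with bending carried by the flanges only:
# I ~ 2 * A_f * (d/2)^2, so the elastic section modulus S = I / (d/2) = A_f * d
# grows in direct proportion to the section depth d.
def flange_only_section_modulus(flange_area_mm2: float, depth_mm: float) -> float:
    return flange_area_mm2 * depth_mm  # mm^3

original = flange_only_section_modulus(3000, 300)
castellated = flange_only_section_modulus(3000, 450)  # ~1.5x deeper (assumed)
print(castellated / original)  # ~1.5, with no added flange material
```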
References
See also
Cellular beam
Open web steel joist
Structural engineering | Castellated beam | [
"Engineering"
] | 65 | [
"Structural engineering",
"Civil engineering",
"Civil engineering stubs",
"Construction"
] |
52,576,970 | https://en.wikipedia.org/wiki/KLM%20protocol | The KLM scheme or KLM protocol is an implementation of linear optical quantum computing (LOQC) developed in 2000 by Emanuel Knill, Raymond Laflamme and Gerard J. Milburn. This protocol allows for the creation of universal quantum computers using solely linear optical tools. The KLM protocol uses linear optical elements, single-photon sources and photon detectors as resources to construct a quantum computation scheme involving only ancilla resources, quantum teleportations and error corrections.
Overview
The KLM scheme induces an effective interaction between photons by making projective measurements with photodetectors, which falls into the category of non-deterministic quantum computation. It is based on a non-linear sign shift between two qubits that uses two ancilla photons and post-selection. It is also based on the demonstrations that the probability of success of the quantum gates can be made close to one by using entangled states prepared non-deterministically and quantum teleportation with single-qubit operations. Without a high enough success rate of a single quantum gate unit, it may require an exponential amount of computing resources. The KLM scheme is based on the fact that proper quantum coding can reduce the resources for obtaining accurately encoded qubits efficiently with respect to the accuracy achieved, and can make LOQC fault-tolerant for photon loss, detector inefficiency and phase decoherence. LOQC can be robustly implemented through the KLM scheme with a low enough resource requirement to suggest practical scalability, making it as promising a technology for quantum information processing as other known implementations.
Elements of LOQC in the KLM scheme
Qubits and modes
To avoid losing generality, the discussion below does not limit itself to a particular instance of mode representation. A state written as |0,1⟩_{VH} means a state with zero photons in mode V (could be the "vertical" polarization channel) and one photon in mode H (could be the "horizontal" polarization channel).
In the KLM protocol, each of the photons is usually in one of two modes, and the modes are different between the photons (the possibility that a mode is occupied by more than one photon is zero). The only exception is during implementations of controlled quantum gates such as CNOT. When the state of the system is as described, the photons can be distinguished, since they are in different modes, and therefore a qubit state can be represented using a single photon in two modes, vertical (V) and horizontal (H): for example, |0⟩ := |0,1⟩_{VH} and |1⟩ := |1,0⟩_{VH}. It is common to refer to the states defined via occupation of modes as Fock states.
Such notations are useful in quantum computing, quantum communication and quantum cryptography. For example, it is very easy to consider a loss of a single photon using these notations, simply by adding the vacuum state containing zero photons in those two modes. As another example, when having two photons in two separated modes (e.g. two time bins or two arms of an interferometer), it is easy to describe an entangled state of the two photons. The singlet state (two linked photons with overall spin quantum number s = 0) can be described as follows: if |0⟩ and |1⟩ describe the basis states of the two separated modes, then the singlet state is

|ψ⟩ = (1/√2) (|0⟩_a |1⟩_b − |1⟩_a |0⟩_b).
State measurement/readout
In the KLM protocol, a quantum state can be read out or measured using photon detectors along selected modes. If a photodetector detects a photon signal in a given mode, it means the corresponding mode state is a 1-photon state before measuring. As discussed in KLM's proposal, photon loss and detection efficiency dramatically influence the reliability of the measurement results. The corresponding failure issue and error correction methods will be described later.
A left-pointed triangle will be used in circuit diagrams to represent the state readout operator in this article.
Implementations of elementary quantum gates
Ignoring error correction and other issues, the basic principle in implementations of elementary quantum gates using only mirrors, beam splitters and phase shifters is that by using these linear optical elements, one can construct any arbitrary 1-qubit unitary operation; in other words, those linear optical elements support a complete set of operators on any single qubit.
The unitary matrix associated with a beam splitter B(θ, φ) is:

U(B(θ, φ)) = [[cos θ, −e^{iφ} sin θ], [e^{−iφ} sin θ, cos θ]],

where θ and φ are determined by the reflection amplitude r and the transmission amplitude t (the relationship will be given later for a simpler case). For a symmetric beam splitter, which has a phase shift φ = π/2 under the unitary transformation condition, one can show that

U(B(θ, π/2)) = [[cos θ, i sin θ], [i sin θ, cos θ]] = e^{iθσ_x},

which is a rotation of the single qubit state about the x-axis by 2θ in the Bloch sphere.
A mirror is a special case where the reflecting rate is 1, so that the corresponding unitary operator is a rotation matrix given by

R(θ) = [[cos θ, −sin θ], [sin θ, cos θ]].
For most cases of beam splitters used in QIP, the incident angle is θ = 45°.
Similarly, a phase shifter operator associates with a unitary operator described by P(φ) = e^{iφ}, or, if written in a 2-mode format

P(φ) = [[1, 0], [0, e^{iφ}]],

which is equivalent (up to a global phase) to a rotation of φ about the z-axis.
Since any two rotations along orthogonal rotating axes can generate arbitrary rotations in the Bloch sphere, one can use a set of symmetric beam splitters and mirrors to realize arbitrary single-qubit operators for QIP. The figures below are examples of implementing a Hadamard gate and a Pauli-X gate (NOT gate) by using beam splitters (illustrated as rectangles connecting two sets of crossing lines with parameters θ and φ) and mirrors (illustrated as rectangles connecting two sets of crossing lines with parameter θ).
In the above figures, a qubit is encoded using two mode channels (horizontal lines): |0⟩ represents a photon in the top mode, and |1⟩ represents a photon in the bottom mode.
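As a minimal numerical sketch of this claim (assuming the symmetric beam-splitter and phase-shifter matrices given above; phase conventions vary between authors, so the decomposition below is one of several possibilities):

import numpy as np

def beam_splitter(theta):
    # Symmetric beam splitter (phi = pi/2): U = exp(i * theta * sigma_x)
    return np.array([[np.cos(theta), 1j * np.sin(theta)],
                     [1j * np.sin(theta), np.cos(theta)]])

def phase_shifter(phi):
    # Phase shift applied to the second mode only
    return np.array([[1.0, 0.0],
                     [0.0, np.exp(1j * phi)]])

# A 50:50 symmetric beam splitter sandwiched between two -pi/2 phase
# shifters reproduces the Hadamard gate exactly:
H = phase_shifter(-np.pi / 2) @ beam_splitter(np.pi / 4) @ phase_shifter(-np.pi / 2)
assert np.allclose(H, np.array([[1, 1], [1, -1]]) / np.sqrt(2))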
In the KLM scheme, qubit manipulations are realized via a series of non-deterministic operations with increasing probability of success. The first improvement to this implementation that will be discussed is the nondeterministic conditional sign flip gate.
Implementation of nondeterministic conditional sign flip gate
An important element of the KLM scheme is the conditional sign flip or nonlinear sign flip gate (NS-gate) as shown in the figure below on the right. It gives a nonlinear phase shift on one mode conditioned on two ancilla modes.
In the picture on the right, the labels on the left of the bottom box indicate the modes. The output is accepted only if there is one photon in mode 2 and zero photons in mode 3 detected, where the ancilla modes 2 and 3 are prepared as the |1, 0⟩ state. The subscript of the NS gate is the phase shift of the output, and is determined by the parameters of the inner optical elements chosen; the two commonly used phase shifts correspond to two different prescribed sets of beam-splitter and phase-shifter parameters. Similarly, by changing the parameters of beam splitters and phase shifters, or by combining multiple NS gates, one can create various quantum gates. By sharing two ancilla modes, Knill invented the following controlled-Z gate (see the figure on the right) with success rate of 2/27.
The advantage of using NS gates is that the output can be guaranteed conditionally processed with some success rate, which can be improved to nearly 1. Using the configuration as shown in the figure above on the right, the success rate of an NS gate is 1/4. To further improve the success rate and solve the scalability problem, one needs to use gate teleportation, described next.
Gates teleportation and near-deterministic gates
Given the use of non-deterministic quantum gates for KLM, there may be only a very small probability that a circuit with N gates, each with a single-gate success probability p, will work perfectly by running the circuit once. Therefore, the operations must on average be repeated on the order of (1/p)^N times, or that many systems must be run in parallel. Either way, the required time or circuit resources scale exponentially. In 1999, Gottesman and Chuang pointed out that one can prepare the probabilistic gates offline from the quantum circuit by using quantum teleportation. The basic idea is that each probabilistic gate is prepared offline, and the successful event signal is teleported back to the quantum circuit. An illustration of quantum teleportation is given in the figure on the right. As can be seen, the quantum state in mode 1 is teleported to mode 3 through a Bell measurement and an entangled resource Bell state, where the state in mode 1 may be regarded as prepared offline.
The resource Bell state can be generated from the single-photon state |1, 0⟩ by use of a mirror with parameter θ = π/4.
By using teleportation, many probabilistic gates may be prepared in parallel with n-photon entangled states, sending a control signal to the output mode. Through using probabilistic gates in parallel offline, a success rate of n/(n + 1) can be obtained, which is close to 1 as n becomes large. The number of gates needed to realize a certain accuracy scales polynomially rather than exponentially. In this sense, the KLM protocol is resource-efficient. One experiment using the KLM originally proposed controlled-NOT gate with four-photon input was demonstrated in 2011, and gave a high average gate fidelity.
Error detection and correction
As discussed above, the success probability of teleportation gates can be made arbitrarily close to 1 by preparing larger entangled states. However, the asymptotic approach to the probability of 1 is quite slow with respect to the photon number n. A more efficient approach is to encode against gate failure (error) based on the well-defined failure mode of the teleporters. In the KLM protocol, the teleporter's failure can be diagnosed if zero or two photons are detected. If the computing device can be encoded against accidental measurements of some certain number of photons, then it will be possible to correct gate failures and the probability of eventually successfully applying the gate will increase.
Many experimental trials using this idea have been carried out. However, a large number of operations are still needed to achieve a success probability very close to 1. In order to promote the KLM protocol as a viable technology, more efficient quantum gates are needed. This is the subject of the next part.
Improvements
This section discusses the improvements of the KLM protocol that have been studied after the initial proposal.
There are many ways to improve the KLM protocol for LOQC and to make LOQC more promising. Below are some proposals from review articles and other subsequent work:
Using cluster states in optical quantum computing.
Circuit-based optical quantum computing revisited.
Using one-step deterministic multipartite entanglement purification with linear optics to generate entangled photon states.
There are several protocols for using cluster states to improve the KLM protocol, the computation model with those protocols is an LOQC implementation of the one-way quantum computer:
The Yoran-Reznik protocol - this protocol makes use of cluster-chains in order to increase the success probability of teleportation.
The Nielsen protocol - this protocol improves the Yoran-Reznik protocol by first using teleportation to add qubits to cluster-chains and then uses the enlarged cluster chains to further increase the success probability of teleportation.
The Browne-Rudolph protocol - this protocol improves the Nielsen protocol by using teleportation not only to add qubits to cluster-chains but also to fuse them.
See also
Boson sampling
References
Quantum information science
Quantum optics
Quantum gates | KLM protocol | [
"Physics"
] | 2,369 | [
"Quantum optics",
"Quantum mechanics"
] |
52,577,070 | https://en.wikipedia.org/wiki/Levich%20constant | A Levich constant (B) is often used in order to simplify the Levich equation. Furthermore, B is readily extracted from rotating disk electrode experimental data.
B can be defined as:

B = 0.620 n F A D^(2/3) v^(-1/6) C

so that the Levich (limiting) current is i_L = B ω^(1/2), with ω the angular rotation rate of the electrode in rad/s, where
n is the number of moles of electrons transferred in the half reaction (number)
F is the Faraday constant (C/mol)
A is the electrode area (cm2)
D is the diffusion coefficient (see Fick's law of diffusion) (cm2/s)
v is the kinematic viscosity (cm2/s)
C is the analyte concentration (mol/cm3)
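As a quick numerical sketch (the values below are hypothetical but typical in magnitude, and B is assumed to be defined so that the limiting current is i_L = B ω^(1/2)):

import numpy as np

n = 1          # electrons transferred
F = 96485.0    # Faraday constant, C/mol
A = 0.196      # electrode area, cm^2 (5 mm diameter disk)
D = 7.6e-6     # diffusion coefficient, cm^2/s
v = 0.01       # kinematic viscosity, cm^2/s (water-like)
C = 1.0e-6     # analyte concentration, mol/cm^3 (1 mM)

# Levich constant
B = 0.620 * n * F * A * D**(2/3) * v**(-1/6) * C
omega = 2 * np.pi * 1600 / 60   # 1600 rpm expressed in rad/s
i_L = B * np.sqrt(omega)        # Levich (limiting) current, A
print(f"B = {B:.3e} A s^(1/2), i_L = {i_L:.3e} A")   # ~1.3e-4 A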
References
Electrochemical equations
Electrochemistry | Levich constant | [
"Chemistry",
"Mathematics"
] | 137 | [
"Mathematical objects",
"Equations",
"Electrochemistry",
"Electrochemistry stubs",
"Physical chemistry stubs",
"Electrochemical equations"
] |
52,577,096 | https://en.wikipedia.org/wiki/Water%20farming | Water farming is a practice in Florida where farmers are paid to keep stormwater on their properties and receive water from other areas to store on their properties. This practice is also known as Dispersed Water Management by the South Florida Water Management District, and as Water Farcing by critics.
History
In 2005, the South Florida Water Management District created a new program for water farming called the "Dispersed Water Management Program" with eight farmers. The program uses shallow-water storage on existing lands to hold water.
Lobbying
In 2014, Alico dispatched lobbyists to the Florida legislature to try to get funding for the project. When asked about this effort, Alico spokeswoman Sarah Bascom is reported to have said that Alico was just helping a state agency in need by lobbying for the SFWMD, which was not permitted by law to lobby for itself. Prior to this effort, funding was provided by the South Florida Water Management District. Though $31.8 million was appropriated for the project in the 2015 legislative session, Gov. Rick Scott vetoed the funding. The largest contract entered into by the South Florida Water Management District – for 11 years and $122 million – went to Alico, which would use approximately 35,192 acres of ranchland for water farming. The return on the $122 million invested by Florida taxpayers is about a 1.5-inch reduction in the Lake Okeechobee water level.
Controversy
Much controversy surrounds the use of water farms due to their cost and hard-to-quantify benefit. A state audit found that the cost of a water farm on public land would be approximately $25 per million gallons, while the cost using privately owned land can be as high as $418 per million gallons. As such, the audit makes clear that the program costs tens of millions of dollars more than it should.
Contract Termination
Another concern about the contracts by the South Florida Water Management District with the water farmers is that the District is paying for capital improvements to land owned by the water farmers, yet the water farmers can terminate the contract at will. This could make the investment by the District a very poor one if the water farmers decide to return the land to planting as solutions to citrus greening emerge.
References
Environmental mitigation | Water farming | [
"Chemistry",
"Engineering"
] | 446 | [
"Environmental mitigation",
"Environmental engineering"
] |
52,580,097 | https://en.wikipedia.org/wiki/Metre%20sea%20water | The metre (or meter) sea water (msw) is a metric unit of pressure used in underwater diving. It is defined as one tenth of a bar.
The unit used in the US is the foot sea water (fsw), based on standard gravity and a sea-water density of 64 lb/ft3. According to the US Navy Diving Manual, one fsw equals 0.30643 msw, 0.030643 bar, or 0.44444 psi, though elsewhere it states that 33 fsw is 14.7 psi (one atmosphere), which gives one fsw equal to about 0.445 psi.
The msw and fsw are the conventional units for measurement of diver pressure exposure used in decompression tables and the unit of calibration for pneumofathometers and hyperbaric chamber pressure gauges.
Feet of sea water
One atmosphere is approximately equal to 33 feet of sea water or 14.7 psi, which gives 4.9/11 or about 0.445 psi per foot. Atmospheric pressure may be considered constant at sea level, and minor fluctuations caused by the weather are usually ignored. Pressures measured in fsw and msw are gauge pressure, relative to the surface pressure of 1 atm absolute, except when a pressure difference is measured between the locks of a hyperbaric chamber, which is also generally measured in fsw and msw.
The pressure of seawater at a depth of 33 feet equals one atmosphere. The absolute pressure at 33 feet depth in sea water is the sum of atmospheric and hydrostatic pressure for that depth, and is 66 fsw, or two atmospheres absolute. For every additional 33 feet of depth, another atmosphere of pressure accumulates. Therefore at the surface the gauge pressure of 0 fsw is equivalent to an absolute pressure of 1 ata, and the gauge pressure in fsw at any depth is incremented by 1 ata to provide absolute pressure. (Pressure in ata = Depth in feet/33 + 1)
Usage
In diving the absolute pressure is used in most computations, particularly for decompression and breathing gas consumption but depth is measured by way of hydrostatic pressure. In metric units the ambient pressure is usually measured in metres sea water (msw), and converted to bar for calculations. In US customary units ambient pressure is normally measured in feet of sea water (fsw), and converted to atmospheres absolute or pounds per square inch absolute (psia) for decompression computation. Feet and metres sea water are convenient measures which approximate closely to depth and are intuitively simple to grasp for the diver, compared to the options of more conventional units of pressure which give no direct indication of depth. The distinction between gauge and absolute pressure is important for calculation of gas properties and pressure must be identified as either gauge or absolute. Gauge pressure in msw or fsw is converted to absolute pressure in bar or atm for decompression and gas consumption calculation, but decompression tables are usually provided ready for use directly with the gauge pressure in msw and fsw. Depth gauges and dive computers with readouts calibrated in feet and metres are actually displaying a pressure measurement, usually in feet or metres sea water, as most diving is done in the sea. If ambient pressure in fresh water and hyperbaric chambers is measured in feet and metres sea water, the same decompression algorithms and tables can be used, which eliminates the need to use calibration factors when diving in these environments.
Conversions
In the metric system, a pressure of 10 msw is defined as 1 bar. Pressure conversion between msw and fsw is slightly different from length conversion between metres and feet; 10 msw = 32.6336 fsw and 10 m = 32.8083 ft.
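A minimal sketch of these conversions in code (function names are illustrative):

# Unit relations used below: 10 msw = 1 bar (by definition),
# 10 msw = 32.6336 fsw, and the surface pressure is 1 atm = 1.01325 bar.
MSW_PER_BAR = 10.0
FSW_PER_MSW = 3.26336

def depth_msw_to_absolute_bar(depth_msw, surface_bar=1.01325):
    """Gauge pressure in msw -> absolute pressure in bar."""
    return depth_msw / MSW_PER_BAR + surface_bar

def msw_to_fsw(depth_msw):
    return depth_msw * FSW_PER_MSW

# A 30 msw dive:
print(depth_msw_to_absolute_bar(30.0))  # ~4.01 bar absolute
print(msw_to_fsw(30.0))                 # ~97.9 fsw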
The US Navy Diving Manual gives conversion factors for "fw" (feet water) based on a fresh water density of 62.4 lb/ft3 and for fsw based on a sea water density of 64.0 lb/ft3.
One standard metre sea water equals:
0.101972 at
0.1 bar, by definition
10 kPa, in SI units
100,000 dyn/cm2, in cgs units
One standard metre sea water is also approximately equal to:
3.2634 fsw
One standard foot sea water is approximately equal to:
3.0643 kPa, in SI units
30,643 dyn/cm2, in cgs units
Similar units
Feet fresh water (ffw) or Feet water (fw), equivalent to 1/34 atm.
References
Sources
Units of pressure
Underwater diving physics | Metre sea water | [
"Physics",
"Mathematics"
] | 892 | [
"Applied and interdisciplinary physics",
"Underwater diving physics",
"Quantity",
"Units of pressure",
"Units of measurement"
] |
52,580,209 | https://en.wikipedia.org/wiki/Protein%20film%20voltammetry | In electrochemistry, protein film voltammetry (or protein film electrochemistry, or direct electrochemistry of proteins) is a technique for examining the behavior of proteins immobilized (either adsorbed or covalently attached) on an electrode. The technique is applicable to proteins and enzymes that engage in electron transfer reactions and it is part of the methods available to study enzyme kinetics.
Provided that it makes suitable contact with the electrode surface (electron transfer between the electrode and the protein is direct) and provided that it is not denatured, the protein can be fruitfully interrogated by monitoring current as a function of electrode potential and other experimental parameters.
Various electrode materials can be used. Special electrode designs are required to address membrane-bound proteins.
Experiments with redox proteins
Small redox proteins such as cytochromes and ferredoxins can be investigated on condition that their electroactive coverage (the amount of protein undergoing direct electron transfer) is large enough (in practice, greater than a fraction of a pmol/cm2).
Electrochemical data obtained with small proteins can be used to measure the redox potentials of the protein's redox sites, the rate of electron transfer between the protein and the electrode, or the rates of chemical reactions (such as protonations) that are coupled to electron transfer.
Interpretation of the peak current and peak area
In a cyclic voltammetry experiment carried out with an adsorbed redox protein, the oxidation and reduction of each redox site shows as a pair of positive and negative peaks. Since all the sample is oxidised or reduced during the potential sweep, the peak current and peak area should be proportional to scan rate (observing that the peak current is proportional to scan rate proves that the redox species that gives the peak is actually immobilised). The same is true for experiments performed with non-biological redox molecules adsorbed onto electrodes. The theory was mainly developed by the French electrochemist Etienne Laviron in the 1980s.
Since both this faradaic current (which results from the oxidation/reduction of the adsorbed molecule) and the capacitive current (which results from electrode charging) increase in proportion to scan rate, the peaks should remain visible when the scan rate is increased. In contrast, when the redox analyte is in solution and diffuses to/from the electrode, the peak current is proportional to the square root of the scan rate (see: Randles–Sevcik equation).
Peak area
Irrespective of scan rate, the area under the peak (in units of AV), divided by the scan rate ν, equals the charge nFAΓ, where n is the number of electrons exchanged in the oxidation/reduction of the center, A is the electrode surface, and Γ is the electroactive coverage (in units of mol/cm2). The latter can therefore be deduced from the area under the peak after subtraction of the capacitive current.
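A minimal sketch of this coverage calculation (the numbers are hypothetical but of realistic magnitude):

F = 96485.0  # Faraday constant, C/mol

def electroactive_coverage(peak_area_AV, scan_rate, n, area_cm2):
    # Baseline-subtracted peak area (A*V) divided by scan rate (V/s)
    # gives the charge Q = n*F*A*Gamma (coulombs).
    charge = peak_area_AV / scan_rate
    return charge / (n * F * area_cm2)  # Gamma, in mol/cm^2

# Hypothetical example: a 1e-8 A*V peak at 0.1 V/s on a 0.07 cm^2 electrode
print(electroactive_coverage(1e-8, 0.1, n=1, area_cm2=0.07))  # ~1.5e-11 mol/cm^2, i.e. ~15 pmol/cm^2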
Peak shape
Slow scan rate
At slow scan rates there should be no separation between the oxidative and reductive peaks.
A one-electron site (e.g. a heme or FeS cluster) gives a broad peak (fig 1A). The equation that gives the shape and intensity of the peak is:
i = (n^2 F^2 / (RT)) ν A Γ · exp[(nF/RT)(E − E0)] / (1 + exp[(nF/RT)(E − E0)])^2

Ideally, the peak position is E = E0 in both directions. The peak current is i_p = n^2 F^2 ν A Γ / (4RT) (it is proportional to scan rate, ν, and to the amount of redox sites on the electrode, Γ). The ideal half width at half height (HWHH) equates 89/n mV at 20 °C. Non-ideal behaviour may result in the peak being broader than the ideal limit.
The peak shape for a two-electron redox site (e.g. a flavin) depends on the stability of the half-reduced state (fig 1B). If the half-reduced state is stable over a large range of electrode potential, the signal is the sum of two one-electron peaks (purple line in fig 1B). If the half-reduced state is unstable, the signal is a single peak (red line in fig 1B), which may have up to four times the height and half the width of a one-electron peak.
A protein that contains multiple redox centers should give multiple peaks which all have the same area (scaled by n).
Fast scan rates
If the reaction is a simple electron transfer reaction, the peaks should remain symmetrical at fast scan rates. A peak separation is observed when the scan rate exceeds about k0 RT/F, where k0 is the exchange electron transfer rate constant in Butler–Volmer theory. The Laviron equation predicts that at fast scan rates, the peaks separate in proportion to ln(ν/k0): the larger ν or the smaller k0, the larger the peak separation. The peak potentials are approximately E_p = E0 ± (RT/αF) ln(ν/k0), as shown by lines in fig 2B (α is the charge transfer coefficient). Examining the experimental change in peak position against scan rate therefore informs on the rate of interfacial electron transfer, k0.
Effect of coupled chemical reactions
Coupled reactions are reactions whose rate or equilibrium constant is not the same for the oxidized and reduced forms of the species that is being investigated. For example, reduction should favour protonation (pKa,red > pKa,ox): the protonation reaction is coupled to the reduction at pH < pKa,red. The binding of a small molecule (other than the proton) may also be coupled to a redox reaction.
Two cases must be considered depending on whether the coupled reaction is slow or fast (meaning that the time scale of the coupled reaction is larger or smaller than the voltammetric time scale RT/(Fν)).
Fast chemical reactions that are coupled to electron transfer (such as protonation) only affect the apparent values of E0 and k0, but the peaks remain symmetrical. The dependence of the apparent E0 on ligand concentration (e.g. the dependence of E0 on pH plotted in a Pourbaix diagram) can be interpreted to obtain the dissociation constants (e.g. acidity constants) of the oxidized or reduced forms of the redox species.
Asymmetry may result from slow chemical reactions that are coupled to (and gate) the electron transfer. From fast scan voltammetry, information can be gained about the rates of the reactions that are coupled to electron transfer. The case of reversible surface electrochemical reactions followed by irreversible chemical reactions was addressed by Laviron, but the data are usually interpreted using the numerical solution of the appropriate differential equations.
Experiments with redox enzymes
In studies of enzymes, the current results from the catalytic oxidation or reduction of the enzyme's substrate.
The electroactive coverage of large redox enzymes (such as laccase, hydrogenase etc.) is often too low to detect any signal in the absence of substrate, but the electrochemical signal is amplified by catalysis: indeed, the catalytic current is proportional to turnover rate times electroactive coverage. The effect of varying the electrode potential, the pH or the concentration of substrates and inhibitors etc. can be examined to learn about various steps in the catalytic mechanism.
Interpretation of the value of the catalytic current
For an enzyme immobilised on an electrode, the value of the current at a certain potential equates i = nFAΓ × TOF, where n is the number of electrons exchanged in the catalytic reaction, A is the electrode surface, Γ is the electroactive coverage, and TOF is the turnover frequency (or "turnover number"), that is, the number of substrate molecules transformed per second and per molecule of adsorbed enzyme. The latter can be deduced from the absolute value of the current only on condition that Γ is known, which is rarely the case. However, information is obtained by analysing the relative change in current that results from changing the experimental conditions.
The factors that may influence the TOF are (i) the mass transport of substrate towards the electrode where the enzyme is immobilised (diffusion and convection), (ii) the rate of electron transfer between the electrode and the enzyme (interfacial electron transfer), and (iii) the "intrinsic" activity of the enzyme, all of which may depend on electrode potential.
The enzyme is often immobilized on a rotating disk working electrode (RDE) that is spun quickly to prevent the depletion of the substrate near the electrode. In that case, mass transport of substrate towards the electrode where the enzyme is adsorbed may not be influential.
Steady-state voltammetric response
Under very oxidising or very reducing conditions, the steady-state catalytic current sometimes tends to a limiting value (a plateau) which (still provided there is no mass transport limitation) relates to the activity of the fully oxidised or fully reduced enzyme, respectively. If interfacial electron transfer is slow and if there is a distribution of electron transfer rates (resulting from a distribution of orientations of the enzymes molecules on the electrode), the current keeps increasing linearly with potential instead of reaching a plateau; in that case the limiting slope is proportional to the turnover rate of the fully oxidised or fully reduced enzyme.
The change in steady-state current against potential is often complex (e.g. not merely sigmoidal).
Departure from steady-state
Another level of complexity comes from the existence of slow redox-driven reactions that may change the activity of the enzyme and make the response depart from steady-state. Here, slow means that the time scale of the (in)activation is similar to the voltammetric time scale RT/(Fν). If a RDE is used, these slow (in)activations are detected by a hysteresis in the catalytic voltammogram that is not due to mass-transport. The hysteresis may disappear at very fast scan rates (if the inactivation has no time to proceed) or at very slow scan rates (if the (in)activation reaction reaches a steady-state).
Combining protein film voltammetry and spectroscopy
Conventional voltammetry offers a limited picture of the enzyme-electrode interface and on the structure of the species involved in the reaction. Complementing standard electrochemistry with other methods can provide a more complete picture of catalysis.
References
Electroanalytical methods
Enzyme kinetics
Chemical kinetics
Catalysis
Bioelectrochemistry | Protein film voltammetry | [
"Chemistry"
] | 2,068 | [
"Catalysis",
"Chemical reaction engineering",
"Electroanalytical chemistry",
"Enzyme kinetics",
"Bioelectrochemistry",
"Electrochemistry",
"Electroanalytical methods",
"Chemical kinetics"
] |
42,443,353 | https://en.wikipedia.org/wiki/Corrosion%20monitoring | Corrosion monitoring is the use of a corrator (corrosion meter) or set of methods and equipment to provide offline or online information about corrosion rate expressed in mpy (mill per year). - for better care and to take or improve preventive measures to combat and protect against corrosion.
Corrosion protection or corrosion monitoring
Various methods are used to prevent and protect against corrosion, such as cathodic protection, selection and injection of chemicals such as corrosion inhibitors, or other means of preventing corrosion. However, in order to see how effective these measures are, corrosion monitoring should be carried out and, if necessary, the corrosion protection methods should be modified or optimized based on the results. In many industries preventive measures are used to protect against corrosion, but without knowing their results, protection is left to chance: either full protection is not achieved, or over-protection wastes capital and resources.
The difference between corrosion inspection and corrosion monitoring
The terms corrosion inspection and corrosion monitoring are sometimes confused. Inspection means checking at regular intervals for changes or deviations from predicted results, with the purpose of estimating the time to failure so that corroded parts can be replaced or repaired. Corrosion monitoring, in contrast, is a continuous check whose purpose is to detect change early, act quickly against it, and improve the means of prevention.
Online corrosion monitoring methods
Online corrosion monitoring is mostly done using the following methods:
Use of weight loss coupons
Corrosion coupons are made in different shapes and sizes. These coupons are often made from the same material as the pipe, tank or vessel to be monitored for corrosion. The most commonly used coupons are as follows:
strip coupon
ladder strip coupon
flush disc coupon
multi-disc coupon
scale coupon
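With weight-loss coupons, the corrosion rate is computed from the coupon's mass loss over the exposure period. A minimal sketch (the coupon values are hypothetical; the constant follows the standard weight-loss relation used in ASTM G31-style calculations):

def corrosion_rate_mpy(mass_loss_g, area_cm2, hours, density_g_cm3):
    # CR = K * W / (A * T * D), with K = 3.45e6 giving the rate in
    # mils per year (mpy) for W in g, A in cm^2, T in hours, D in g/cm^3.
    K = 3.45e6
    return K * mass_loss_g / (area_cm2 * hours * density_g_cm3)

# Hypothetical carbon-steel coupon: 0.0500 g lost over 30 days, 20 cm^2 exposed
print(corrosion_rate_mpy(0.05, 20.0, 30 * 24, 7.86))  # ~1.5 mpy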
Use of electrical resistance (ER) corrosion probes
Electrical resistance corrosion probes are used in different types for different online corrosion monitoring applications. The corrosion rate of these probes can be read locally or transferred to the control system using handheld or fixed corrosion data loggers, or corrosion transmitters.
The general types of ER probe elements are as follows:
flush type
cylindrical
spiral loop
wire loop
tube loop
ER probes can be provided with an adapter in order to connect to a data logger or transmitter. The length of the probe depends on the mounting and monitoring position.
Use of linear polar resistance probes
This method is mostly used for corrosion monitoring in the water industry. These probes are suitable for monitoring fluctuations that may occur in a fluid inside the system. They are mostly used with conductive fluids such as water or similar liquids.
Use of online ultrasonic thickness sensors
Online non-intrusive ultrasonic thickness sensors are a popular choice for corrosion monitoring in various industries, including oil and gas, chemical processing, and power generation. These sensors can provide accurate and reliable thickness measurements of metal structures without requiring physical access or disruption to the equipment. The sensors can be installed permanently and remotely connected to a monitoring system, allowing for continuous data collection and analysis. With the ability to detect corrosion early on, online ultrasonic thickness sensors can help prevent equipment failure, reduce downtime, and improve overall safety and efficiency.
Use of hydrogen probes
Hydrogen probes are used to monitor the penetration of hydrogen into steels, which can cause brittleness, porosity or decarburization.
Using a sand probe
These probes are mostly used to monitor metal loss caused by erosion or wear. Erosion generally occurs in gas pipelines where high fluid velocity wears away the pipe wall; there, erosion is more important than corrosion.
Use a biological probe or bio-probe
Biological probes (bio-probes) are used to collect samples for microbiological analysis. Microorganisms can accelerate the corrosion process, so monitoring the corrosion caused by them enables timely notification and preventive measures.
Main equipment of corrosion monitoring system
In addition to probes, the items below are the main ones used in corrosion monitoring:
Access fittings
The access fittings – such as flanged, flare-weld, etc. – are used for connection and access to coupons and probes. The general size of an access fitting is 2", which is called the 2" system, but for small pipe sizes a 1" system can be used.
Coupon holder
Coupon holders are used to fix corrosion coupons inside a pipe or tank and attach to a solid or hollow plug. They are usually made from stainless steel 316/316L, Monel, Inconel or other corrosion-resistant materials.
Service valve
The service valve is a ball valve used to block the fluid while corrosion coupons or probes are replaced under flowing conditions. The service valve is attached to the solid or hollow plug after removal of the access fitting cover; it blocks the fluid while the coupon or probe is retrieved by the retriever and passed through it.
Retriever for retrieval without process interruption
The retriever is used to insert and retrieve coupons and probes without interrupting the process. Retrievers are available in hydraulic and mechanical types.
References
Corrosion
Measuring instruments
Tools
Technology systems
Corrosion prevention
"Chemistry",
"Materials_science",
"Technology",
"Engineering"
] | 1,056 | [
"Systems engineering",
"Corrosion prevention",
"Technology systems",
"Metallurgy",
"Corrosion",
"Measuring instruments",
"Electrochemistry",
"nan",
"Materials degradation"
] |
42,443,651 | https://en.wikipedia.org/wiki/Bioretrosynthesis | Bioretrosynthesis is a technique for synthesizing organic chemicals from inexpensive precursors and evolved enzymes. The technique builds on the retro-evolution hypothesis proposed in 1945 by geneticist Norman Horowitz.
Technique
The technique works backwards from the target to identify a precursor molecule and an enzyme that converts it into the target, and then a second precursor that can produce the first and so on until a simple, inexpensive molecule becomes the beginning of the series. For each precursor, the enzyme is evolved using induced mutations and natural selection to produce a more productive version. The evolutionary process can be repeated over multiple generations until acceptable productivity is achieved. The process does not require high temperature, high pressure, the use of exotic catalysts or other elements that can increase costs. The enzyme "optimizations" that increase the production of one precursor from another are cumulative in that the same precursor productivity improvements can potentially be leveraged across multiple target molecules.
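A schematic sketch of this backward-stepping loop (all function names, molecules, enzymes and yields here are hypothetical placeholders used purely for illustration):

import random

def find_precursor(target):
    """Pick a cheaper molecule and a wild-type enzyme that converts it to target."""
    precursor = {"didanosine": "nucleoside", "nucleoside": "dideoxyribose"}.get(target)
    return precursor, f"enzyme_{precursor}_to_{target}"

def evolve(enzyme, generations=5):
    """Directed evolution: mutate, screen for yield, keep the best variant."""
    best, best_yield = enzyme, 1.0
    for _ in range(generations):
        variants = [(f"{best}+mut{random.randint(0, 999)}",
                     best_yield * random.uniform(0.5, 3.0)) for _ in range(20)]
        best, best_yield = max(variants + [(best, best_yield)], key=lambda v: v[1])
    return best, best_yield

pathway, target = [], "didanosine"
while target != "dideoxyribose":            # stop at the cheap starting material
    precursor, enzyme = find_precursor(target)
    evolved, gain = evolve(enzyme)
    pathway.append((precursor, evolved, target))
    target = precursor
for step in reversed(pathway):              # print the forward synthetic route
    print("{} --[{}]--> {}".format(*step))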
Didanosine
In 2014 the technique was used to produce the HIV drug didanosine: a simpler molecule was identified that can be converted into didanosine when subjected to a specific chemical transformation in the presence of a specific enzyme. The gene that creates the enzyme was then "copied", adding random mutations to each copy using ribokinase engineering. The mutant genes were inserted into Escherichia coli bacteria and used to produce (now-mutant) enzymes. The enzymes were then mixed with the precursor, and the mutant enzymes that produced the greatest amount of didanosine were retained and replicated. One mutant stimulated a 50x increase in didanosine production. The first step was repeated, using the first precursor in place of didanosine, finding a yet simpler precursor and an enzyme to produce it. One mutated enzyme produced a 9,500x increase in nucleoside production. A third retrogression allowed the researchers to start with the simple and inexpensive sugar dideoxyribose and produce didanosine in a three-step sequence.
References
External links
Chemical synthesis
Genetic engineering
Organic chemistry | Bioretrosynthesis | [
"Chemistry",
"Engineering",
"Biology"
] | 419 | [
"Biological engineering",
"Genetic engineering",
"nan",
"Chemical synthesis",
"Molecular biology"
] |
42,446,604 | https://en.wikipedia.org/wiki/Chromatography%20column | A chromatography column is a device used in chromatography for the separation of chemical compounds. A chromatography column contains the stationary phase, allowing the mobile phase to pass through it.
Materials
Chromatography columns of different types are used in both gas and liquid chromatography:
Liquid chromatography: Traditional chromatography columns were made of glass. Modern columns are mostly made of borosilicate glass, acrylic glass or stainless steel. To prevent the stationary phase from leaking out of the column interior a polymer, stainless steel or ceramic net is usually applied. Depending on the application material- and size-requirements may change.
Gas chromatography (GC): Older columns were made of glass or metal packed with particles of a solid stationary phase. More recently, narrower diameter (capillary) columns have been made using fused silica coated on the inside with a film of the stationary phase material. GC columns are typically very long to take advantage of their low resistance to the flow of carrier gas. The materials of the column and the stationary phase must be suitable for GC operating temperatures, which may range as high as 300°C or more.
Sizes
Small-scale columns have inner diameters of as little as 0.5 cm and withstand pressures of up to 130 MPa, while industrial large-scale columns reach diameters of up to 2 m and operate at considerably lower pressures (below 1 MPa). Although it is favorable to be able to view the packed bed of a column, large-scale columns are manufactured from steel due to its superior resilience.
Chromatography columns can be used as stand-alone devices or in combination with manual or automated chromatography systems. Medium to large columns are almost exclusively operated together with automated systems to decrease the risk of process failure and loss of product.
Different columns for different scales
Small scale
Transitions between scales are always gradual. There is no sharp cut that defines the end of small and the beginning of medium/pilot scale. However, chromatography columns with an inner diameter (ID) of up to 5 cm are generally considered small-scale or laboratory-scale columns.
Small-scale chromatography columns are mostly intended for design of experiments (DoE), proof of concept, validation (drug manufacture), or research and development experiments. Columns of this scale category are distinguished by their small dimensions in comparison to chromatography columns intended for larger scales, as well as relatively high pressure tolerance and careful selection of materials in contact with the liquid phase. This is especially important for applications in the biopharmaceutical industry, which are subject to close scrutiny by regulatory agencies (U.S. Food and Drug Administration; European Medicines Agency).
See also
Separation process
References
External links
Learn More About Chromatography Columns
Chromatography | Chromatography column | [
"Chemistry"
] | 571 | [
"Chromatography",
"Separation processes"
] |
42,447,237 | https://en.wikipedia.org/wiki/Reverse%20Transcription%20Loop-mediated%20Isothermal%20Amplification | Reverse transcription loop-mediated isothermal amplification (RT-LAMP) is a one step nucleic acid amplification method to multiply specific sequences of RNA. It is used to diagnose infectious disease caused by RNA viruses.
It combines LAMP DNA-detection with reverse transcription, making cDNA from RNA before running the reaction. RT-LAMP does not require thermal cycles (unlike PCR) and is performed at a constant temperature between 60 and 65 °C.
RT-LAMP is used in the detection of RNA viruses (groups II, IV, and V on the Baltimore Virus Classification system), such as the SARS-CoV-2 virus and the Ebola virus.
Applications
RT-LAMP is used to test for the presence of specific RNA-samples of viruses for the specific sequence of the virus, made possible by comparing the sequences against a large external database of references.
Detection of the SARS-CoV2-Virus
The RT-LAMP technique is being promoted as a cheaper and easier alternative to RT-PCR for the early diagnosis of people who are infectious with COVID-19. There are open access test designs (including the recombinant proteins), which make it legally possible for anyone to produce a test. In contrast to classic rapid tests by lateral flow, RT-LAMP allows the early diagnosis of the disease by testing for the viral RNA.
The tests can be done without previous RNA-isolation, detecting the viruses directly from swabs or from saliva.
Detection of non-human viruses
One example use case of RT-LAMP was an experiment to detect a new duck Tembusu-like virus, the BYD virus, named after the region, Baiyangdian, where it was first isolated. Another application of this method was in a 2013 experiment to detect an Akabane virus using RT-LAMP. The experiment, done in China, isolated the virus from aborted calf fetuses.
Detection of body fluids
RT-LAMP is also being used in forensic serology to identify body fluids. Researchers have run experiments showing that this method can effectively identify certain body fluids. Acknowledging the method's limitations, Su et al. came to the conclusion that RT-LAMP was only able to identify blood.
Methodology
Reverse transcription
A specific sequence of the cDNA is detected by four LAMP primers. Two of them are inner primers (FIP and BIP), which serve as the base for the Bst enzyme to copy the template into new DNA. The outer primers (F3 and B3) anneal to the template strand and help the reaction to proceed.
As in the case of RT-PCR, the RT-LAMP procedure starts by making DNA from the sample RNA. This conversion is made by a reverse transcriptase, an enzyme derived from retroviruses capable of making such a conversion. This DNA derived from RNA is called cDNA, or complementary DNA. The FIP primer is used by the reverse transcriptase to build a single-strand of copy DNA. The F3 primer binds to this side of the template strand as well, and displaces the previously made copy.
Amplification
This displaced, single-stranded copy contains the target sequence flanked by primer-derived sequence. The primers are designed so that part of the new strand is complementary to the strand itself, so the end binds back onto it, forming a loop.
The BIP primer binds to the other end of this single strand and is used by the Bst DNA polymerase to build a complementary strand, making double-strand DNA. The F3 primer binds to this end and displaces, once again, this newly generated single-stranded DNA molecule.
This new single strand that has been released will act as the starting point for the LAMP cycling amplification. This single-stranded DNA has a dumbbell-like structure as the ends fold and self-bind, forming two loops.
The DNA polymerase and the FIP or BIP primers keep amplifying this strand and the LAMP-reaction product is extended. This cycle can be started from either the forward or backward side of the strand using the appropriate primer. Once this cycle has begun, the strand undergoes self-primed DNA synthesis during the elongation stage of the amplification process. This amplification takes place in less than an hour, under isothermal conditions between 60 and 65 °C.
Read out
The read out of RT-LAMP tests is frequently colorimetric. Two of the common ways are based on measuring either pH or magnesium ions. The amplification reaction causes pH to lower and Mg2+ levels to drop. This can be perceived by indicators, such as Phenol red, for pH, and hydroxynaphthol blue (HNB), for magnesium. Another option is to use SYBR Green I, a DNA intercalating coloring agent.
Advantages and disadvantages
This method is specifically advantageous because it can all be done quickly in one step. The sample is mixed with the primers, reverse transcriptase and DNA polymerase and the reaction takes place under a constant temperature. The required temperature can be achieved using a simple hot water bath.
PCR requires thermocycling; RT-LAMP does not, making it more time efficient and very cost effective. This inexpensive and streamlined method can be more readily used in developing countries that do not have access to high tech laboratories.
A disadvantage of this method is generating the sequence-specific primers. For each LAMP assay, primers must be specifically designed to be compatible with the target DNA. This can be difficult, which discourages researchers from using the LAMP method in their work. There is, however, a free software tool called Primer Explorer, developed by Fujitsu in Japan, which can aid in the selection of these primers.
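As a toy illustration of one small part of primer design, the Wallace rule gives a rough melting-temperature estimate from base composition (real LAMP design tools such as Primer Explorer use far more sophisticated thermodynamic models; the sequence below is made up):

def wallace_tm(primer: str) -> int:
    # Wallace rule: Tm (degrees C) = 2*(A + T) + 4*(G + C);
    # only a quick plausibility check, not a substitute for proper design.
    p = primer.upper()
    return 2 * (p.count("A") + p.count("T")) + 4 * (p.count("G") + p.count("C"))

f3 = "TGCTTCCGGCCTAGTATCAG"   # hypothetical F3 primer, for illustration only
print(wallace_tm(f3))        # 62 degrees C, near the 60-65 degree reaction window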
See also
Loop-mediated isothermal amplification
References
External links
LAMP Primer Explorer
MorphoCatcher, a tool for design of species-specific primers
Scholia page for RT-LAMP
Open access protocols for RT-LAMP to detect SARS-CoV-2
Molecular biology techniques
RNA | Reverse Transcription Loop-mediated Isothermal Amplification | [
"Chemistry",
"Biology"
] | 1,234 | [
"Molecular biology techniques",
"Molecular biology"
] |
57,284,293 | https://en.wikipedia.org/wiki/Ratcheting | In continuum mechanics, ratcheting, or ratchetting, also known as cyclic creep, is a behavior in which plastic deformation accumulates due to cyclic mechanical or thermal stress.
In an article written by J. Bree in 1967, the phenomenon of ratcheting is described as "Unsymmetric cycles of stress between prescribed limits will cause progressive 'creep' or 'ratchet(t)ing' in the direction of the mean stress". Ratcheting is a progressive, incremental inelastic deformation characterized by a shift of the stress-strain hysteresis loop along the strain axis. When the amplitude of the cyclic stress exceeds the elastic limit, the plastic deformation that occurs keeps accumulating, paving the way for a catastrophic failure of the structure. Nonlinear kinematic hardening, which occurs when the stress state reaches the yield surface, is considered the main mechanism behind ratcheting. Several factors influence the extent of ratcheting, including the load condition, mean stress, stress amplitude, stress ratio, load history, plastic slip, dislocation movement, and cell deformation.
The effect of structural ratcheting can sometimes be represented in terms of the Bree diagram.
Alternative material models have been proposed to simulate ratcheting, such as Chaboche, Ohno-Wang, Armstrong–Frederick, etc.
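As an illustration of how such models produce ratcheting, the Armstrong–Frederick rule (stated here in its usual tensorial form; sign conventions vary between texts) lets the backstress α evolve as

dα/dt = (2/3) C dε_p/dt − γ α dp/dt

where C and γ are material constants, ε_p is the plastic strain and p the accumulated plastic strain. The dynamic recovery term −γα(dp/dt) prevents the hysteresis loop from closing under cycles with non-zero mean stress, so a net strain increment accumulates every cycle; the Chaboche model superposes several such backstress terms, and Ohno–Wang modifies the recovery term to better reproduce measured ratcheting rates.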
Ratcheting is a significant effect to be considered to check permanent deformation in systems which undergoes a cyclic loading. Common examples of such repetitive stresses include sea waves, road traffic, and earthquakes. Initially it was studied to inspect the permanent deformation of thin, nuclear fuel cans with an internal pressure and temperature gradient while undergoing repetitive non-zero mean stresses.
References
Continuum mechanics
Solid mechanics
Deformation (mechanics) | Ratcheting | [
"Physics",
"Materials_science",
"Engineering"
] | 349 | [
"Solid mechanics",
"Continuum mechanics",
"Deformation (mechanics)",
"Classical mechanics",
"Plasticity (physics)",
"Materials science",
"Mechanics"
] |
46,177,880 | https://en.wikipedia.org/wiki/Oxyselenide | Oxyselenides are a group of chemical compounds that contain oxygen and selenium atoms (Figure 1). Oxyselenides can form a wide range of structures in compounds containing various transition metals, and thus can exhibit a wide range of properties. Most importantly, oxyselenides have a wide range of thermal conductivity, which can be controlled with changes in temperature in order to adjust their thermoelectric performance. Current research on oxyselenides indicates their potential for significant application in electronic materials.
Synthesis
The first oxyselenide to be crystallized was manganese oxyselenide in 1900. In 1910, oxyselenides containing phosphate were created by treating P2Se5 with metal hydroxides. Uranium oxyselenide was formed next by treating H2Se with uranium dioxides at 1000 °C. This technique was also utilized in synthesizing oxyselenides of rare-earth elements in the mid-1900s. Synthesis of oxyselenide compounds currently involves treating oxides with aluminum powder and selenium at high temperatures.
Recent discoveries in iron oxyarsenides and their superconductivity have highlighted the importance of mixed anion systems. Mixed copper oxychalcogenides came about when the electronic properties of both chalcogenides and oxides were taken into account. Chemists began pursuing the synthesis of a compound with metallic and charge density wave properties as well as high temperature superconductivity. Upon synthesizing the copper oxyselenide Na1.9Cu2Se2·Cu2O by reacting Na2Se3.6 with Cu2O, they concluded that a new type of oxychalcogenides could be synthesized by reacting metal oxides with polychalcogenide fluxes.
Derivatives
New oxyselenides of the formula Sr2AO2M2Se2 (A=Co, Mn; M=Cu, Ag) have been synthesized. They crystallize into structures consisting of alternating perovskite-like (metal oxide) and antifluorite (metal selenide) layers (Figure 2). The optical band gap of each oxyselenide is very narrow, indicating semiconductivity.
Another derivative that reveals oxyselenide properties is β-La2O2MSe2 (M= Fe, Mn). This molecule possesses an orthorhombic structure (Figure 3), opening up the possibilities for different packing arrangements of oxyselenides. They are ferromagnetic at low temperatures (~27 K) and show high resistivity at room temperature. The Mn analogue, diluted in NaCl solution, suggests an optical band gap of 1.6 eV at room temperature, making it an insulator. Meanwhile, the band gap for the Fe analogue is approximately 0.7 eV between 150 K and 300 K, making it a semiconductor. In contrast, cobalt oxyselenide La2Co2O3Se2 is antiferromagnetically ordered, suggesting that although the different transition metals are responsible for the changes in an oxyselenide's magnetic property, the molecule's overall lattice structure may also influence its conductivity.
The magnetic and conducting properties of different metal compounds coordinated with oxyselenide are not only affected by the transition metal used, but also by the synthesis conditions. For example, the percentage of aluminium used during the synthesis of Ce2O2ZnSe2 as an oxygen retriever affected the band gaps, indicated by the varying product colours. Various structures allow for many potential configurations. For example, as observed before in La2Co2O3Se2, Sr2F2Mn2Se2O exhibits a frustrated magnetic correlation in the structure resulting in an antiferromagnetic lattice.
In 2010, p-type polycrystalline BiCuSeO oxyselenides were reported as possible thermoelectric materials.
The weak bonds between the [Cu2Se2]−2 conducting and [Bi2O2]+2 insulating layers, as well as the anharmonic crystal lattice structure, may account for the substance's low thermal conductivity and high thermoelectric performance. Recently, BiCuSeO's ZT value, a dimensionless figure of merit indicating thermoelectric performance, has been increased from 0.5 to 1.4. Experiments have shown that Ca doping can improve electrical conductivity, thereby increasing the ZT value. Additionally, replacing 15% of the Bi3+ ions with group 2 metal ions, Ca2+, Sr2+, or Ba2+ (Figure 4), also optimizes the charge carrier concentration.
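For reference, the thermoelectric figure of merit mentioned above is conventionally defined as

ZT = S^2 σ T / κ

where S is the Seebeck coefficient, σ the electrical conductivity, T the absolute temperature and κ the total thermal conductivity. The formula makes the design trade-off explicit: doping BiCuSeO with Ca, Sr or Ba raises σ (and thus ZT) so long as it does not proportionally raise κ or depress S.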
References
Mixed anion compounds
Materials science
Oxygen compounds
Selenium(−II) compounds | Oxyselenide | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 978 | [
"Matter",
"Applied and interdisciplinary physics",
"Mixed anion compounds",
"Materials science",
"nan",
"Ions"
] |
46,180,649 | https://en.wikipedia.org/wiki/Curb%20box | A curb box (also known as a valve box, buffalo box, b-box, or in British English stopcock chamber) is a vertical cast iron sleeve, accessible from the public way, housing the shut-off valve (curb cock or curb stop) for a property's water service line. It is typically located between a building and the district's water main lines and usually consists of a metal tube with a removable or sliding lid, allowing access to the turn-key within. It typically serves as the point denoting the separation of utility-maintained and privately maintained water facilities.
The name buffalo box, the first word often capitalized, is applied to curb boxes because they originated in Buffalo, New York.
References
Plumbing valves
Plumbing | Curb box | [
"Engineering"
] | 152 | [
"Construction",
"Plumbing"
] |
46,181,065 | https://en.wikipedia.org/wiki/Vibrational%20bond | A vibrational bond is a chemical bond that happens between two very large atoms, like bromine, and a very small atom, like hydrogen, at very high energy states. Vibrational bonds only exist for a few milliseconds. This bond is detectable through modern analytic chemistry and is significant because it affects the rate at which other reactions can occur.
History
Vibrational bonds were mathematically predicted almost thirty years before they were experimentally observed. The original theoretical calculations had been carried out by D.C. Clary and J.N.L. Connor during the early 1980s. Together they hypothesized that with very large atoms and small atoms at high energy states, the elements would stabilize and create temporary bonds for very short periods of time. The vibrational bond would be weaker than any currently known bond, like the commonly known ionic or covalent bonds.
One year after the theoretical discovery of vibrational bonds, J. Manz and his team confirmed the calculations that had previously been made, and elaborated on them by showing that vibrational bonds were most likely to occur during symmetric reactions, while stating that they may also be possible in asymmetric reactions. Their team explained that although the vibrational bonding theories proved to be correct, they found some inconsistencies with the 'classic model': symmetric reactions show resonance, but only in certain transition states. Nevertheless, the classic model remains viable for predicting vibrational bonds.
In 1989, Donald Fleming noticed that a reaction between bromine and muonium slowed down as temperature increased. This phenomenon, attributed to a "vibrational bond", would capture the attention of Donald Fleming again in 2014. In 1989 the technology did not exist to collect sufficient data on the reaction, and Donald Fleming and his team moved away from the research.
Discovery
Donald Fleming and his team later returned to their investigation of vibrational bonds and, as they had expected from the results of their experiments in 1989, the BrMuBr reaction slowed at high temperatures. Using modern instrumental analysis by photodetachment electron spectroscopy, the vibrational bond was detected, but it lasted only a few milliseconds. The vibrational bond acted differently than species bound by van der Waals forces because the energy was balanced differently.
Bond
In chemistry, increased temperature normally increases the rate of a reaction. Vibrational bonds, however, are not formed like covalent bonds, where electrons are shared between the two bonding atoms; they are created at high energy, where the muonium bounces back and forth between bromine atoms "like a ping pong ball bouncing between two bowling balls," according to Donald Fleming. This bouncing action lowers the potential energy of the BrMuBr molecule and therefore slows the rate of the reaction.
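The temperature anomaly can be framed with the standard Arrhenius rate law; this textbook relation is added here for context and is not given in the article, and the negative-effective-activation-energy reading is an illustrative assumption.

```latex
% Arrhenius rate law: rate constant k versus temperature T,
% with pre-exponential factor A and activation energy E_a.
\[
  k(T) = A \, e^{-E_a / (k_B T)}
\]
% For an ordinary reaction E_a > 0, so k grows with T.
% A transiently bound intermediate such as BrMuBr lowers the potential
% energy along the reaction path; if the effective activation energy
% becomes negative, k(T) decreases as T rises -- the slowing Fleming observed.
```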
This type of bond has been confirmed in BrMuBr molecules; with the heavier isotopes of hydrogen (protium, deuterium, and tritium), vibrational bonding can occur only once the van der Waals forces are overcome and the vibrational bond forms.
Relevance
This discovery changes the understanding of chemical bonds: together with van der Waals interactions, the recently discovered vibrational bonding shows that different bonds involve different mechanisms and energies, and its experimental discovery has the potential to encourage more research into isotopic interactions.
References
Chemical bonding | Vibrational bond | [
"Physics",
"Chemistry",
"Materials_science"
] | 677 | [
"Chemical bonding",
"Condensed matter physics",
"nan"
] |
51,081,492 | https://en.wikipedia.org/wiki/Pi-stacking | In chemistry, pi stacking (also called π–π stacking) refers to the presumptively attractive, noncovalent pi interactions between the pi bonds of aromatic rings, because of orbital overlap. According to some authors direct stacking of aromatic rings (the "sandwich interaction") is electrostatically repulsive.
What is more commonly observed is either a staggered stacking (parallel displaced) or a pi-teeing (perpendicular T-shaped) interaction, both of which are electrostatically attractive. For example, the most commonly observed interaction between aromatic rings of amino acid residues in proteins is staggered stacking, followed by a perpendicular orientation. Sandwiched orientations are relatively rare.
Pi stacking is repulsive because it places carbon atoms with partial negative charges from one ring on top of other partially negatively charged carbon atoms from the second ring, and hydrogen atoms with partial positive charges on top of other hydrogen atoms that likewise carry partial positive charges. In staggered stacking, one of the two aromatic rings is offset sideways so that the carbon atoms with partial negative charge in the first ring sit above hydrogen atoms with partial positive charge in the second ring, making the electrostatic interactions attractive. Likewise, pi-teeing interactions, in which the two rings are oriented perpendicular to each other, are electrostatically attractive, as they place partially positively charged hydrogen atoms in close proximity to partially negatively charged carbon atoms. An alternative explanation for the preference for staggered stacking is the balance between van der Waals interactions (attractive dispersion plus Pauli repulsion).
These staggered stacking and π-teeing interactions between aromatic rings are important in nucleobase stacking within DNA and RNA molecules, protein folding, template-directed synthesis, materials science, and molecular recognition. Despite the wide use of the term pi stacking in the scientific literature, there is no theoretical justification for its use.
Evidence against pi stacking
The benzene dimer is the prototypical system for the study of pi stacking, and is experimentally bound by 8–12 kJ/mol (2–3 kcal/mol) in the gas phase with a separation of 4.96 Å between the centers of mass for the T-shaped dimer. The small binding energy makes the benzene dimer difficult to study experimentally, and the dimer itself is only stable at low temperatures and is prone to cluster.
Other evidence against pi stacking comes from X-ray crystallography. Perpendicular and offset parallel configurations can be observed in the crystal structures of many simple aromatic compounds. Similar offset parallel or perpendicular geometries were observed in a survey of high-resolution x-ray protein crystal structures in the Protein Data Bank. Analysis of the aromatic amino acids phenylalanine, tyrosine, histidine, and tryptophan indicates that dimers of these side chains have many possible stabilizing interactions at distances larger than the average van der Waals radii.
Benzene dimers and related species
The preferred geometries of the benzene dimer have been modeled at a high level of theory with MP2-R12/A computations and very large counterpoise-corrected aug-cc-pVTZ basis sets. The two most stable conformations are the parallel displaced and T-shaped, which are essentially isoenergetic. In contrast, the sandwich configuration maximizes overlap of the pi system, which destabilizes the interaction. The sandwich configuration represents an energetic saddle point, which is consistent with the relative rarity of this configuration in x-ray crystal data.
The relative binding energies of these three geometric configurations of the benzene dimer can be explained by a balance of quadrupole/quadrupole and London dispersion forces. While benzene does not have a dipole moment, it has a strong quadrupole moment. The local C–H dipoles mean that there is positive charge on the atoms in the plane of the ring and a corresponding negative charge in the electron cloud above and below the ring. The quadrupole moment is reversed for hexafluorobenzene due to the electronegativity of fluorine. The benzene dimer in the sandwich configuration is stabilized by London dispersion forces but destabilized by repulsive quadrupole/quadrupole interactions. By offsetting one of the benzene rings, the parallel displaced configuration reduces these repulsive interactions and is stabilized. The large polarizability of aromatic rings leads to dispersive interactions as a major contribution to stacking effects; these play a major role in the interactions of nucleobases, e.g. in DNA. The T-shaped configuration enjoys favorable quadrupole/quadrupole interactions, as the positive quadrupole of one benzene ring interacts with the negative quadrupole of the other. The benzene rings are furthest apart in this configuration, so the favorable quadrupole/quadrupole interactions evidently compensate for the diminished dispersion forces.
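For reference, the competing terms have different distance dependences; these scalings come from standard intermolecular perturbation theory and are not given in the article (Θ denotes a molecular quadrupole moment, r the intermolecular separation).

```latex
% Quadrupole-quadrupole interaction between two axial quadrupoles
% Theta_1 and Theta_2 falls off as r^{-5} (sign depends on orientation):
\[
  E_{QQ} \;\propto\; \frac{\Theta_1 \Theta_2}{r^{5}}
\]
% London dispersion is always attractive and falls off as r^{-6}:
\[
  E_{\mathrm{disp}} \;\propto\; -\frac{C_6}{r^{6}}
\]
% The preferred dimer geometry is set by the balance of these two
% terms plus short-range Pauli repulsion.
```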
Substituent effects
The ability to fine-tune pi stacking interactions would be useful in numerous synthetic efforts. One example would be to increase the binding affinity of a small-molecule inhibitor to an enzyme pocket containing aromatic residues. The effects of heteroatoms and substituents on pi stacking interactions are difficult to model and remain a matter of debate.
Electrostatic model
An early model for the role of substituents in pi stacking interactions was proposed by Hunter and Sanders. They used a simple mathematical model based on sigma and pi atomic charges, relative orientations, and van der Waals interactions to qualitatively determine that electrostatics are dominant in substituent effects. According to their model, electron-withdrawing groups reduce the negative quadrupole of the aromatic ring and thereby favor parallel displaced and sandwich conformations. In contrast, electron-donating groups increase the negative quadrupole, which may increase the interaction strength in a T-shaped configuration with the proper geometry. Based on this model, the authors proposed a set of rules governing pi stacking interactions which prevailed until more sophisticated computations were applied.
Experimental evidence for the Hunter–Sanders model was provided by Siegel et al. using a series of substituted syn- and anti-1,8-diarylnaphthalenes. In these compounds the aryl groups "face-off" in a stacked geometry due to steric crowding, and the barrier to epimerization was measured by nuclear magnetic resonance spectroscopy. The authors reported that aryl rings with electron-withdrawing substituents had higher barriers to rotation. The interpretation of this result was that these groups reduced the electron density of the aromatic rings, allowing more favorable sandwich pi stacking interactions and thus a higher barrier. In other words, the electron-withdrawing groups resulted in "less unfavorable" electrostatic interactions in the ground state.
Hunter et al. applied a more sophisticated chemical double mutant cycle with a hydrogen-bonded "zipper" to the issue of substituent effects in pi stacking interactions. This technique has been used to study a multitude of noncovalent interactions. The single mutation, in this case changing a substituent on an aromatic ring, results in secondary effects such as a change in hydrogen bond strength. The double mutation quantifies these secondary interactions, such that even a weak interaction of interest can be dissected from the array. Their results indicate that more electron-withdrawing substituents have less repulsive pi stacking interactions. Conversely, this trend was exactly inverted for interactions with pentafluorophenylbenzene, which has a quadrupole moment equal in magnitude but opposite in sign to that of benzene. The findings provide direct evidence for the Hunter–Sanders model. However, the stacking interactions measured using the double mutant method were surprisingly small, and the authors note that the values may not be transferable to other systems.
In a follow-up study, Hunter et al. verified to a first approximation that the interaction energies of the interacting aromatic rings in a double mutant cycle are dominated by electrostatic effects. However, the authors note that direct interactions with the ring substituents, discussed below, also make important contributions. Indeed, the interplay of these two factors may result in the complicated substituent- and geometry-dependent behavior of pi stacking interactions.
Direct interaction model
The Hunter–Sanders model has been criticized by numerous research groups offering contradictory experimental and computational evidence of pi stacking interactions that are not governed primarily by electrostatic effects.
The clearest experimental evidence against electrostatic substituent effects was reported by Rashkin and Waters. They used meta- and para-substituted N-benzyl-2-(2-fluorophenyl)-pyridinium bromides, which stack in a parallel displaced conformation, as a model system for pi stacking interactions. In their system, a methylene linker prohibits favorable T-shaped interactions. As in previous models, the relative strength of pi stacking interactions was measured by NMR as the rate of rotation about the biaryl bond, since pi stacking interactions are disrupted in the transition state. Para-substituted rings had small rotational barriers which increased with increasingly electron-withdrawing groups, consistent with prior findings. However, meta-substituted rings had much larger barriers to rotation despite having nearly identical electron densities in the aromatic ring. The authors explain this discrepancy as a direct interaction of the edge hydrogen atoms of one ring with the electronegative substituents on the other ring. This claim is supported by chemical shift data for the proton in question.
Much of the detailed analysis of the relative contributions of factors in pi stacking has been borne out by computation. Sherrill and Sinnokrot reported a surprising finding, using high-level theory, that all substituted benzene dimers have more favorable binding interactions than the benzene dimer in the sandwich configuration. Later computational work from the Sherrill group revealed that the substituent effects for the sandwich configuration are additive, which points to a strong influence of dispersion forces and direct interactions between substituents. It was noted that interactions between substituted benzenes in the T-shaped configuration were more complex. Finally, Sherrill and Sinnokrot argue in their review article that any semblance of a trend based on electron-donating or electron-withdrawing substituents can be explained by exchange-repulsion and dispersion terms.
Houk and Wheeler also provide compelling computational evidence for the importance of direct interaction in pi stacking. In their analysis of substituted benzene dimers in a sandwich conformation, they were able to recapitulate their findings using an exceedingly simple model where the substituted benzene, Ph–X, was replaced by H–X. Remarkably, this crude model resulted in the same trend in relative interaction energies, and correlated strongly with the values calculated for Ph–X. This finding suggests that substituent effects in the benzene dimer are due to direct interaction of the substituent with the aromatic ring, and that the pi system of the substituted benzene is not involved. This latter point is expanded upon below.
In summary, it would seem that the relative contributions of electrostatics, dispersion, and direct interactions to the substituent effects seen in pi stacking interactions are highly dependent on geometry and experimental design. The lack of consensus on the matter may simply reflect the complexity of the issue.
Requirement of aromaticity
The conventional understanding of pi stacking involves quadrupole interactions between delocalized electrons in p-orbitals. In other words, aromaticity should be required for this interaction to occur. However, several groups have provided contrary evidence, calling into question whether pi stacking is a unique phenomenon or whether it extends to other neutral, closed-shell molecules.
In an experiment not dissimilar from others mentioned above, Paliwal and coauthors constructed a molecular torsion balance from an aryl ester with two conformational states. The folded state had a well-defined pi stacking interaction with a T-shaped geometry, whereas the unfolded state had no aryl–aryl interactions. The NMR chemical shifts of the two conformations were distinct and could be used to determine the ratio of the two states, which was interpreted as a measure of intramolecular forces. The authors report that a preference for the folded state is not unique to aryl esters. For example, the cyclohexyl ester favored the folded state more so than the phenyl ester, and the tert-butyl ester favored the folded state by a preference greater than that shown by any aryl ester. This suggests that aromaticity is not a strict requirement for favorable interaction with an aromatic ring.
Other evidence bearing on non-aromatic pi stacking interactions comes from critical studies in theoretical chemistry that explain the underlying mechanisms of the empirical observations. Grimme reported that the interaction energies of smaller dimers consisting of one or two rings are very similar for both aromatic and saturated compounds. This finding is of particular relevance to biology, and suggests that the contribution of pi systems to phenomena such as stacked nucleobases may be overestimated. However, it was shown that an increased stabilizing interaction is seen for large aromatic dimers. As previously noted, this interaction energy is highly dependent on geometry. Indeed, large aromatic dimers are only stabilized relative to their saturated counterparts in a sandwich geometry, while their energies are similar in a T-shaped interaction.
A more direct approach to modeling the role of aromaticity was taken by Bloom and Wheeler. The authors compared the interactions between benzene and either 2-methylnaphthalene or its non-aromatic isomer, 2-methylene-2,3-dihydronaphthalene. The latter compound provides a means of conserving the number of p-electrons while removing the effects of delocalization. Surprisingly, the interaction energies with benzene are higher for the non-aromatic compound, suggesting that pi-bond localization is favorable in pi stacking interactions. The authors also considered a homodesmotic dissection of benzene into ethylene and 1,3-butadiene and compared these interactions in a sandwich with benzene. Their calculation indicates that the interaction energy between benzene and homodesmotic benzene is higher than that of a benzene dimer in both sandwich and parallel displaced conformations, again highlighting the favorability of localized pi-bond interactions. These results strongly suggest that aromaticity is not required for pi stacking interactions in this model.
Even in light of this evidence, Grimme concludes that pi stacking does indeed exist. However, he cautions that smaller rings, particularly those in T-shaped conformations, do not behave significantly differently from their saturated counterparts, and that the term should be specified for larger rings in stacked conformations which do seem to exhibit a cooperative pi electron effect.
Examples
One demonstration of stacking is found in the buckycatcher. This molecular tweezer is based on two concave buckybowls with a perfect fit for one convex fullerene molecule. Complexation takes place simply by evaporating a toluene solution containing both compounds. In solution an association constant of 8600 M⁻¹ is measured based on changes in NMR chemical shifts.
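The quoted association constant can be converted to a binding free energy with the standard thermodynamic relation; this worked check is added here for illustration and assumes T = 298 K, which is not stated in the article.

```latex
\[
  \Delta G^{\circ} = -RT \ln K_a
  = -(8.314\ \mathrm{J\,mol^{-1}\,K^{-1}})(298\ \mathrm{K})\,\ln(8600)
  \approx -22\ \mathrm{kJ\,mol^{-1}}
\]
% i.e. roughly -5.4 kcal/mol of stabilization for the
% buckycatcher-fullerene complex in solution.
```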
Pi stacking is prevalent in protein crystal structures, and also contributes to the interactions between small molecules and proteins. As a result, pi–pi and cation–pi interactions are important factors in rational drug design. One example is the FDA-approved acetylcholinesterase (AChE) inhibitor tacrine which is used in the treatment of Alzheimer's disease. Tacrine is proposed to have a pi stacking interaction with the indolic ring of Trp84, and this interaction has been exploited in the rational design of novel AChE inhibitors.
Supramolecular assembly
π systems are building blocks in supramolecular assembly because they often engage in noncovalent interactions. An example of π–π interactions in supramolecular assembly is the synthesis of catenanes. The major challenge in catenane synthesis is interlocking the molecules in a controlled fashion. Stoddart and co-workers developed a series of systems utilizing the strong π–π interactions between electron-rich benzene derivatives and electron-poor pyridinium rings. A [2]catenane was synthesized by reacting a bis(pyridinium) (A), bisparaphenylene-34-crown-10 (B), and 1,4-bis(bromomethyl)benzene (C). The π–π interaction between A and B directed the formation of an interlocked template intermediate that was further cyclized by a substitution reaction with compound C to generate the [2]catenane product.
Charge transfer salts
A combination of tetracyanoquinodimethane (TCNQ) and tetrathiafulvalene (TTF) forms a strong charge-transfer complex referred to as TTF-TCNQ. The solid shows almost metallic electrical conductance. In a TTF-TCNQ crystal, TTF and TCNQ molecules are arranged independently in separate parallel-aligned stacks, and an electron transfer occurs from donor (TTF) to acceptor (TCNQ) stacks.
Graphite
Graphite consists of stacked sheets of covalently bonded carbon. The individual layers are called graphene. In each layer, each carbon atom is bonded to three other atoms forming a continuous layer of sp2 bonded carbon hexagons, like a honeycomb lattice with a bond length of 0.142 nm, and the distance between planes is 0.335 nm.
See also
Noncovalent interaction
Dispersion (chemistry)
Cation–pi interaction
Intercalation (biochemistry)
Intercalation (chemistry)
References
External links
Larry Wolf (2011): π-π (π-Stacking) interactions: origin and modulation
Organic chemistry
Chemical bonding
Supramolecular chemistry | Pi-stacking | [
"Physics",
"Chemistry",
"Materials_science"
] | 3,697 | [
"Condensed matter physics",
"nan",
"Nanotechnology",
"Chemical bonding",
"Supramolecular chemistry"
] |
51,086,269 | https://en.wikipedia.org/wiki/Marine%20thruster | A marine thruster is a device for producing directed hydrodynamic thrust mounted on a marine vehicle, primarily for maneuvering or propulsion. There are a variety of different types of marine thrusters and each of them plays a role in the maritime industry. Marine thrusters come in many different shapes and sizes, for example screw propellers, Voith-Schneider propellers, waterjets, ducted propellers, tunnel bow thrusters, and stern thrusters, azimuth thrusters, rim-driven thrusters, ROV and submersible drive units. A marine thruster consists of a propeller or impeller which may be encased in some kind of tunnel or ducting that directs the flow of water to produce a resultant force intended to obtain movement in the desired direction or resist forces which would cause unwanted movement. The two subcategories of marine thrusters are for propulsion and maneuvering, the maneuvering thruster typically in the form of bow or stern thrusters and propulsion thrusters ranging from Azimuth thrusters to Rim Drive thrusters.
Positioning Thrusters
Positioning thrusters come in two applications: bow thrusters at the forward end of the vessel, and stern thrusters mounted aft. Their purpose is to maneuver or position the boat with greater precision than the propulsion device can accomplish. Their positioning along the length of the vessel allows for directed lateral thrust ahead and astern of the centre of lateral resistance, so that the vessel may be maneuvered away from obstructions in its path, or towards a desired position, especially when coming to or away from a dock. These positioning thrusters are usually significantly smaller than the main propulsion thrusters because they only make small adjustments rather than moving the whole vessel at speed. Both bow and stern thrusters may be housed in through-hull tunnels. Depending on the size of the motors driving these propellers, they may draw anywhere from an insignificant amount of power to a large amount that requires much caution to operate. Another smaller subset of positioning thrusters is those used for maneuvering unmanned aquatic vehicles, such as the Guanay II AUV tested by scientists from Spain (Masmitja, 2018).
Propulsion Thrusters
Propulsion thrusters are those thrusters which provide longitudinal motion for vessels as an alternative to traditional propellers. There are a variety of types of propulsion thrusters, but the most common form is the azimuth thruster, which can rotate 360 degrees about a vertical axis to direct its thrust for maneuvering (Lindborg, 1997). The amount of thrust produced is controllable. There are variants of azimuth thrusters, such as CRP thrusters, which have two contra-rotating azimuth propellers, and swing-up azimuth thrusters, which can be retracted when not in use to reduce drag on the vessel (Wartsila Encyclopedia). Other propulsion thrusters include outboard thrusters, which can be easily put in and out of service; rim-drive thrusters, which are driven via the external ring, with the blades mounted on the inner face of the ring and their tips towards the center; and tilted thrusters, pointed away from the hull to minimize interaction with the ship and increase thruster efficiency. The choice between using thrusters or traditional propellers to propel marine vessels is a compromise between versatility and efficiency. Propellers are designed to work in line with a propulsion plant and produce one-directional thrust, while thrusters are more customizable and have more versatile applications. They offer this versatility at the cost of complexity and lower efficiency: they are not as robust as propellers and typically have applications on smaller vessels that do not require as much power.
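To make the maneuvering-versus-propulsion distinction concrete, here is a minimal rigid-body sketch of how a vectored (azimuth-style) thruster's output resolves into ship-frame force components and a yaw moment. The function name, sign convention, and sample numbers are illustrative assumptions, not from the sources cited above.

```python
import math

def thrust_components(thrust_n: float, azimuth_deg: float, lever_arm_m: float):
    """Resolve an azimuth thruster's output into ship-frame components.

    Illustrative sketch only: azimuth 0 deg means pure ahead thrust, and
    lever_arm_m is the thruster's longitudinal distance from the vessel's
    centre of rotation.
    """
    a = math.radians(azimuth_deg)
    surge = thrust_n * math.cos(a)     # longitudinal (propulsion) component
    sway = thrust_n * math.sin(a)      # lateral (maneuvering) component
    yaw_moment = sway * lever_arm_m    # turning moment from the lateral component
    return surge, sway, yaw_moment

# 100 kN of thrust vectored 30 degrees off the centreline, 40 m from the pivot:
surge, sway, moment = thrust_components(100e3, 30.0, 40.0)
print(f"{surge/1e3:.0f} kN ahead, {sway/1e3:.0f} kN lateral, {moment/1e6:.1f} MN*m yaw")
# -> 87 kN ahead, 50 kN lateral, 2.0 MN*m yaw
```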
Reference List
Propulsion
Marine engineering | Marine thruster | [
"Engineering"
] | 741 | [
"Marine engineering"
] |
51,087,129 | https://en.wikipedia.org/wiki/FUEL%20Project | FUEL Project aims at solving the problem of inconsistency and lack of standardization in Software Translation across the platform. FUEL Project develops content especially for language users. It works at making technology friendly to user by working as an interpreter. An interpreter who has localized the language of technology and made a glossary of different keywords which is accessible by the user. Currently FUEL Project is working in 40 different languages worldwide. It helps the user to translate and understand different computer terminologies in his own native language. Additionally, the content and glossaries are provided as open source. They are completely free to access by user.
History
The FUEL Project was launched in 2008, initiated by Red Hat. It started as an effort to create a desktop for Hindi-speaking users, inspired by the goal of enabling a Hindi user to understand the language of technology. A glossary of terms and commands, a style guide in that language, and other content were produced, standardizing the translation process. This effort led other language communities to work with the FUEL Project as well. The FUEL Project has organized several language community workshops with the help of local communities, language academies, university language departments, and important organizations and bodies such as Red Hat, C-DAC, and the Wikimedia Foundation.
Need
Many users are unable to understand computer terminology, so for them those terms need to be standardized: technology should be made to talk in the language they want it to. With each passing day, society drifts further towards a digital society in which things are done online: government services, online exams, filling in forms for different purposes, and more. The digital platform is considered the easiest method, but for some, understanding it is not easy. Helping such users understand what each term or command means, in a language they understand, gave birth to the FUEL Project. It is very necessary for a user to become familiar with constantly evolving technology; to do so, the user needs to understand the language of technology and then translate it in order to complete a task. The FUEL Project works by making the language of technology friendly to that user.
Services by FUEL Project
Terminology
Translation Style and Convention Guides
Unicode Text Rendering Reference System
Guides for Translation Quality Assessment
Knowledge Base contents
Translators' Training
References
http://fuelproject.org/
Software projects | FUEL Project | [
"Technology",
"Engineering"
] | 458 | [
"Software projects",
"Information technology projects"
] |
51,087,444 | https://en.wikipedia.org/wiki/Shot%20timer | A shot timer is a shot activated timer used in shooting sports, which starts the competitor by an audible signal and also records the competitor's time electronically by detecting the sound of each shot, together with the time from the start signal. When the competitor is finished, the timer will show the time from the start signal until the last shot. The time is usually recorded to hundredths of a second (centisecond), which is required by competitions in the International Practical Shooting Confederation.
History
In 1924 Ed McGivern set a world record by firing six shots from a double-action revolver in four-fifths of a second. This time was measured with a complicated timing contraption attached to the revolver. This timing system was too complicated to be adapted in everyday shooting or shooting sports.
Shooters began using stopwatches to time shooting, often paired with a "stop plate" (a steel target engaged at the end of a shooting stage) to try to measure the total shooting time.
In 1981, popular shooter and holster maker Bill Rodgers invented a new timing device that provided a start signal and an adjustable preset "par" end time, the goal being to make all the required shots between the start and end signals of the timer.
Soon after, Ron Bailey of Competition Electronics built a timer with an included microphone that was able to mark the time of each shot fired.
In 1982, Ronin Colman created the PACT Championship Timer. Throughout the next two decades, shooters would transition from the stopwatches to shot timers and a new market was created.
Outside of competitive shooting, the adoption and use of shot timers remain relatively low. A study conducted in 2019 showed that 82% of shooters surveyed do not use a shot timer at all, and 14.5% did not know what a shot timer was.
Types of sensors
Microphone
Since the introduction of shot timers with microphones in the 1980s, this has remained the most common method of detecting shots. The timer usually has one or more microphones and is usually carried by a referee or official, or attached to the shooter's belt. Many timers have the option to adjust their audio pickup sensitivity. Loud firearms, such as large-caliber rifles, usually trigger the timer anyway, while less powerful firearms, such as small-caliber 5.6×15mmR (.22 LR) rifles, often require the microphone to be adjusted to a more sensitive setting. If the pickup is set too sensitively, there is a risk of detecting shots from shooters other than the one intended to be measured, or even speech.
Accelerometer
There are also shot timers that attach to the firearm and detect shots using accelerometers and gyroscopes instead of using a separate hand-held device with microphones. These can also be used for data analysis of general movement of the firearm. Mantis was one of the first major manufacturers of such systems when they launched the MantisX system in 2015. Before that, Double Alpha Academy had launched the shot-timer watch ShotMaxx in 2013 after two years of development, which took advantage of data fusion using a built-in microphone and accelerometer to detect shots. These types of shot timers can be particularly suitable for indoor ranges where there are a lot of gun sound reflections, or on quieter firearms such as small caliber rifles.
Common features and functions
Some timers come with additional functions:
Raw Time: The overall time recorded up to the last shot.
Total Time: Raw time plus any additional penalty time from missed targets.
Par times: The timer first gives a start, then a stop signal after a pre-programmed time.
Split Time: The interval between consecutive shots.
Instant or delayed start signal: The start signal can either come instantly by the push of the start button, or by a delay of a programmed number of seconds or within a range of seconds.
Big board display: The possibility for direct connection to a big board display for instant audience feedback.
Integration with an application on a mobile device such as RangeTech, PractiScore, PractiScore Log or Drills via Bluetooth.
The possibility for scoring points on target in addition to the time, with automatic hit factor calculation, making for a complete scoring solution (a minimal sketch of this arithmetic follows this list). The results can be loaded to a computer, e.g. by Bluetooth or RF.
Memory storage of previous runs
Auxiliary jack for external loud horn, visual starts or target turning
Adjustable shot detector sensitivity
Audible preparatory commands: An audible command to prep the shooter for the start signal. Often the words "Shooter Ready, Standby."
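As a concrete illustration of the raw-time, split-time, and hit-factor arithmetic above, here is a minimal sketch; the function and the sample shot log are invented for illustration and do not reflect any particular timer's firmware.

```python
def score_run(shot_times, points, penalty_seconds=0.0):
    """Score a string of fire from a shot-timer log (illustrative only).

    shot_times: seconds of each detected shot since the start signal.
    points: total points scored on the targets.
    Returns raw time, split times, and an IPSC-style hit factor.
    """
    raw_time = shot_times[-1]                         # time of the last shot
    total_time = raw_time + penalty_seconds           # raw time plus penalties
    splits = [round(b - a, 2) for a, b in zip(shot_times, shot_times[1:])]
    hit_factor = points / total_time                  # points per second
    return raw_time, splits, round(hit_factor, 4)

# Six shots with roughly quarter-second splits (made-up sample data):
times = [1.32, 1.61, 1.87, 2.15, 2.40, 2.68]
print(score_run(times, points=28))
# -> (2.68, [0.29, 0.26, 0.28, 0.25, 0.28], 10.4478)
```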
Mobile app shot timers
A number of shot timer apps can be downloaded for smartphones, although they are generally less reliable than dedicated standalone units. Typical mobile phones and tablets lack the type and quality of microphone needed to pick up gunshots, especially at a gun range where there are multiple shooters, or at an indoor range where the sound is louder and echoes can create false positives on the timer.
However, many shooters use a mobile app-based shot timer only for its par function when conducting dry fire practice in which there is no need to pick up the sound of gunfire.
See also
Gun chronograph
References
Shooting sports
Firearm terminology
Time measurement systems | Shot timer | [
"Physics"
] | 1,053 | [
"Spacetime",
"Time measurement systems",
"Physical quantities",
"Time"
] |
50,140,110 | https://en.wikipedia.org/wiki/International%20Society%20for%20Biocuration | The International Society for Biocuration (ISB) is a non-profit organisation that promotes the field of biocuration and was founded in early 2009. It provides a forum for information exchange through meetings and workshops. The society's conference, the International Biocuration Conference, has been held in Pacific Grove, California (2005), San José, CA (2007), Berlin (2009), Tokyo, Japan (2010), Washington, DC (2012), Cambridge, UK (2013), Toronto, Canada (2014), Beijing, China (2015) and Geneva, Switzerland (2016). The meeting in 2017 will be held in Stanford, California.
Database is the official journal of the society, and it has published the proceedings of the society's conferences since 2009.
Aims of the society
The aims of the society include:
promoting interactions among biocurators
fostering the professional development of biocurators
promoting best practices
ensuring interoperability
creating and maintaining standards
promoting relationships with journal publishers.
Executive Committee (EC)
The Executive Committee (EC) is composed of nine elected members, each serving a three-year term. EC members can serve a maximum of two terms. Within the EC, there are positions for Chair, Secretary and Treasurer, who are in charge of leading the EC and, by extension, the membership. Elections for the EC are held on an annual basis. The EC promotes the ISB's activity to members and non-members, and contributes to the decisions that are taken on behalf of the biocuration community. Additional activities include reviewing microgrant submissions, assisting with organization of the annual Biocuration conference, preparing materials for the ISB election, and maintaining the website.
Biocuration Career Awards
The Biocuration Career Award is an award given by the International Society for Biocuration for outstanding contributions for the field of biocuration.
Annual Career Award winners
Biannual ISB Exceptional Contribution to Biocuration Award
See also
Biocuration
Digital curation
Metadata
Ontology
References
External references
Computational biology
Biology societies
Bioinformatics organizations | International Society for Biocuration | [
"Biology"
] | 418 | [
"Bioinformatics",
"Computational biology",
"Bioinformatics organizations"
] |
50,142,687 | https://en.wikipedia.org/wiki/Universal%20flu%20vaccine | A universal flu vaccine would be a flu vaccine effective against all human-adapted strains of influenza A and influenza B regardless of the virus sub type, or any antigenic drift or antigenic shift. Hence it should not require modification from year to year in order to keep up with changes in the influenza virus. As of 2024 no universal flu vaccine had been successfully developed, however several candidate vaccines were in development, with some undergoing early stage clinical trial.
Medical uses
New vaccines against currently circulating influenza variants are required every year due to the diversity of flu viruses and the variable efficacy of vaccines against them. A universal vaccine would eliminate the need to create a vaccine for each year's variants. The efficacy of a vaccine here refers to the protection it provides against a broad variety of influenza strains. Events such as antigenic shift have created pandemic strains, such as the one responsible for the H1N1 outbreak in 2009. The research required every year to isolate a likely prevalent viral strain and create a vaccine against it is a six-month-long process; during that time the virus can mutate, making the vaccine less effective.
If a universal vaccine can be developed which is both effective and safe, it could be manufactured in quantity and eliminate availability and supply issues of current vaccines.
Influenza virus
Human influenza is principally caused by the influenza A and influenza B viruses. Both have a similar structure, being enveloped RNA viruses. Their protein membrane contains the glycoproteins hemagglutinin (HA) and neuraminidase (NA), which are used by the virus to enter a host cell and, subsequently, to release newly manufactured virions from the host cell. Each strain of the influenza virus has a different pattern of glycoproteins, and the glycoproteins themselves show variability as well.
History
In 2008, Acambis announced work on a universal flu vaccine (ACAM-FLU-A) based on the less variable M2 protein component of the flu virus shell (see also H5N1 vaccines).
In 2009, the Wistar Institute in Pennsylvania received a patent for using "a variety of peptides" in a flu vaccine, and announced it was seeking a corporate partner.
In 2010, the National Institute of Allergy and Infectious Diseases (NIAID) of the U.S. NIH announced a breakthrough; the effort targets the stem, which mutates less often than the head of the viral HA.
By 2010 some universal flu vaccines had started clinical trials.
BiondVax identified 9 conserved epitopes of the influenza virus and combined them into a recombinant protein called Multimeric-001. All seven of BiondVax's completed phase 2 human trials demonstrated safety and significant levels of immunogenicity; however, in October 2020, results of the phase 3 study were published, indicating no apparent efficacy.
ITS's fp01 includes six peptide antigens targeting highly conserved segments of the PA, PB1, PB2, NP and M1 proteins, and has started phase I trials.
DNA vaccines, such as VGX-3400X (aimed at multiple H5N1 strains), contain DNA fragments (plasmids). Inovio's SynCon DNA vaccines include H5N1 and H1N1 subtypes.
Other companies pursuing the vaccine as of 2009 and 2010 include Theraclone, VaxInnate, Crucell NV, Inovio Pharmaceuticals, Immune Targeting Systems (ITS) and iQur.
In 2019, Distributed Bio completed pre-clinical trials of a vaccine that consists of computationally selected distant evolutionary variants of hemagglutinin epitopes and is expected to begin human trials in 2021.
In recent years, research has focused on using the stem of the flu hemagglutinin (HA) as an antigen.
Based on the results of animal studies, a universal flu vaccine may use a two-step vaccination strategy: priming with a DNA-based HA vaccine, followed by a second dose with an inactivated, attenuated, or adenovirus-vector-based vaccine.
Some people given a 2009 H1N1 flu vaccine have developed broadly protective antibodies, raising hopes for a universal flu vaccine.
A vaccine based on the hemagglutinin (HA) stem was the first to induce "broadly neutralizing" antibodies to both HA-group 1 and HA-group 2 influenza in mice.
In July 2011, researchers created an antibody that targets haemagglutinin, a protein found on the surface of all influenza A viruses.
FI6 is the only known antibody that binds to all 16 subtypes of the influenza A virus hemagglutinin (though its neutralizing activity is controversial) and might be the lynchpin for a universal influenza vaccine. The subdomain of the hemagglutinin targeted by FI6, namely the stalk domain, was actually successfully used earlier as a universal influenza virus vaccine by Peter Palese's research group at Mount Sinai School of Medicine.
Other vaccines are polypeptide based.
Research
A study from the Albert Einstein College of Medicine, in which researchers deleted gD-2 (the glycoprotein responsible for HSV entering and exiting cells) from the herpes virus, showed as of May 1, 2018 that the same vaccine can be used in a modified way to contain hemagglutinin and invoke a special ADCC immune response.
The Washington University School of Medicine in St. Louis and the Icahn School of Medicine at Mount Sinai in New York are using the glycoprotein neuraminidase as a targeted antigen in their research. Three monoclonal antibodies (mAbs) were sampled from a patient infected with an influenza A H3N2 virus. The antibodies were able to bind to the neuraminidase active site, neutralizing the virus across multiple strains. The site remains the same, with minimal variability, across most flu strains. In trials using mice, all three antibodies were effective across multiple strains; one antibody was able to protect the mice from all 12 strains tested, including human and non-human flu viruses. All mice used in the experiments survived, even if the antibody was not administered until 72 hours after the time of infection.
Simultaneously, the NIAID is working on a peptide vaccine that is starting human clinical trials in the 2019 flu season. The study will include 10,000 participants who will be monitored for two flu seasons. The vaccine will show efficacy if it is able to reduce the number of influenza cases across all strains.
There have been some clinical trials of the M-001 and H1ssF_3928 universal influenza vaccine candidates. As of August 2020, all seven M-001 trials were completed; each concluded that M-001 is safe, tolerable, and immunogenic. The pivotal Phase III study with 12,400 participants was completed, and results of the data analysis, published in October 2020, indicated that the vaccine did not show any statistical difference from the placebo group in reduction of flu illness and severity.
In 2019–2020, a vaccine candidate from Peter Palese's group at Mount Sinai Hospital emerged from a phase 1 clinical trial with positive results. By vaccinating twice with hemagglutinins that have different "heads" but the same membrane-proximal "stalk", the immune system is directed to focus its attention on the conserved stalk.
See also
Universal coronavirus vaccine
References
Further reading
Influenza vaccines
Drug discovery | Universal flu vaccine | [
"Chemistry",
"Biology"
] | 1,523 | [
"Life sciences industry",
"Medicinal chemistry",
"Drug discovery"
] |
50,143,738 | https://en.wikipedia.org/wiki/WR%2021a | WR 21a is an eclipsing binary star in the constellation Carina. It includes one of the most massive known stars and is one of the most massive binaries.
WR 21a lies near the Westerlund 2 open cluster and is likely an ejected member.
The distance of WR 21a was not definitively known until the Gaia mission. Estimates had ranged from 2.85 kpc to around 8 kpc, with consequent uncertainties in the system's luminosity. The larger distance was preferred because of its consistency with the derived orbital parameters.
Every 31 days and 16 hours the two stars in this system revolve around each other. The inclination of the orbit means that only very shallow eclipses are observed and the brightness dips by only about 0.05 magnitudes. There are also even smaller brightness variations attributed to the heartbeat effect where the closest passage of the stars in their eccentric orbits creates brightness changes as the two stars illuminate each other. There may also be tidally-excited oscillations producing further small variations.
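The quoted 0.05-magnitude eclipse depth can be translated into a fractional light loss with the standard Pogson relation; this worked check is added for illustration and is not from the article.

```python
def eclipse_flux_drop(delta_mag: float) -> float:
    """Fractional flux lost for an eclipse depth of delta_mag magnitudes,
    via the standard Pogson relation F/F0 = 10**(-0.4 * delta_mag)."""
    return 1.0 - 10 ** (-0.4 * delta_mag)

print(f"{eclipse_flux_drop(0.05):.1%}")  # -> 4.5% of the combined light blocked
```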
The colliding winds of the two stars produce extremely high temperatures and luminous x-ray emission. The system is also bright at radio wavelengths.
References
Wolf–Rayet stars
Spectroscopic binaries
Carina (constellation)
J10255650-5748435
O-type main-sequence stars | WR 21a | [
"Astronomy"
] | 278 | [
"Carina (constellation)",
"Constellations"
] |
50,147,897 | https://en.wikipedia.org/wiki/Janne%20Blichert-Toft | Janne Blichert-Toft is a geochemist, specializing in the use of isotopes with applications in understanding planetary mantle-crust evolution, as well as the chemical composition of matter in the universe. To further this research, Blichert-Toft has developed techniques for high-precision Isotope-ratio mass spectrometry measurements.
Biography
1988 to 1991: Visiting scientist at the Lamont–Doherty Earth Observatory at Columbia University, as part of her degrees
1996: Visiting scientist at University of California at Berkeley
2000: Visiting professor at Harvard University
2003: Associate professor at Caltech
Subsequently, Blichert-Toft was at the Australian National University in 2004, at Cambridge University in 2005, at Tokyo University in 2006, and at the University of Chicago in 2011.
From 2008 to 2015, she was also adjunct faculty and Distinguished Wiess Visiting Scholar at Rice University.
Education
1990: M.Sc., University of Copenhagen
1993: Ph.D., Earth Sciences, University of Copenhagen
1996: École normale supérieure de Lyon, Marie-Curie Post-Doctoral Fellow
2000: Habilitation à Diriger des Recherches (HDR), Université Claude Bernard, Lyon I.
Work
After her Marie-Curie Post-Doctorate, Blichert-Toft joined the Centre national de la recherche scientifique (CNRS) in 1997 and became Director of Research in 2002, working at the École normale supérieure de Lyon.
She pioneered the application of hafnium isotopes to the evolution of the Earth and the early solar system.
Publications
Blichert-Toft is currently on the Editorial Board of at least the following three publications:
G-Cubed (Geochemistry, Geophysics, Geosystems) published by the American Geophysical Union
Geochemical Perspectives published by the European Association of Geochemistry
Geochimica et Cosmochimica Acta published by the Geochemical Society
From 2022 to 2024 she was the geochemistry principal editor of the scientific magazine "Elements", and previously served as Associate Editor for the Geochemical Society's newsletter "Geochemical News". The magazine "Elements" is jointly published by the Mineralogical Society of America, the Mineralogical Society of Great Britain and Ireland, the Mineralogical Association of Canada, the Geochemical Society, The Clay Minerals Society, the European Association of Geochemistry, the International Association of GeoChemistry, the Société Française de Minéralogie et de Cristallographie, the Association of Applied Geochemists, the Deutsche Mineralogische Gesellschaft, the Società Italiana di Mineralogia e Petrologia, the International Association of Geoanalysts, the Polskie Towarzystwo Mineralogiczne (Mineralogical Society of Poland), the Sociedad Española de Mineralogía (Spanish Mineralogical Society), the Swiss Geological Society, the Meteoritical Society, the Japan Association of Mineralogical Sciences and the International Association on the Genesis of Ore Deposits.
Awards
2001: Bronze Medal, Centre national de la recherche scientifique (CNRS)
2005: Prix Etienne Roth du Commissariat à l'énergie atomique (CEA), Académie des Sciences
2010: Geochemistry Fellow of the Geochemical Society and the European Association of Geochemistry
2010: The Medal of the Ecole Normale Supérieure de Lyon
2012: Fellow of the American Geophysical Union
2012: Silver Medal, Centre national de la recherche scientifique (CNRS)
2015: The Danish Geological Society's Steno Medal
2015: Invited Plenary Speaker at the Goldschmidt Conference, Prague
2016: Member of the Royal Danish Academy of Sciences and Letters
2018: The Murchison Medal of the Geological Society of London
2022: The American Geophysical Union's Harry Hess Medal
2022: The BRGM Dolomieu Prize
References
Mass spectrometrists
French volcanologists
French geophysicists
Women geophysicists
French geochemists
Fellows of the American Geophysical Union
Living people
University of Copenhagen alumni
20th-century American women scientists
21st-century American women scientists
Year of birth missing (living people)
Academic staff of the École Normale Supérieure
American women academics
Research directors of the French National Centre for Scientific Research
Murchison Medal winners
Lamont–Doherty Earth Observatory people | Janne Blichert-Toft | [
"Physics",
"Chemistry"
] | 932 | [
"Geochemists",
"Spectrum (physical sciences)",
"Mass spectrometrists",
"French geochemists",
"Mass spectrometry",
"Biochemists"
] |
50,148,165 | https://en.wikipedia.org/wiki/NGC%20136 | NGC 136 is an open cluster in the constellation Cassiopeia. It was discovered by William Herschel on November 26, 1788.
References
Open clusters
Astronomical objects discovered in 1788
0136
Cassiopeia (constellation) | NGC 136 | [
"Astronomy"
] | 46 | [
"Cassiopeia (constellation)",
"Constellations"
] |
50,150,913 | https://en.wikipedia.org/wiki/Nolanea%20claviformis | Nolanea claviformis, or Entoloma claviforme, is a mushroom in the family Entolomataceae. Described as new to science in 2014, it is found in Guyana, where it fruits on humus under Dicymbe corymbosa. The type was collected in the Potaro-Siparuni region, in the Pakaraima Mountains, at an elevation of . The specific epithet claviformis/claviforme (club-shaped) refers to the shape of its stipe.
Although it was originally described under the genus name Nolanea, this is nowadays generally regarded as a sub-genus of Entoloma, and so the current name is Entoloma claviforme.
References
External links
Entolomataceae
Fungi described in 2014
Fungi of Guyana
Fungus species | Nolanea claviformis | [
"Biology"
] | 171 | [
"Fungi",
"Fungus species"
] |
36,814,362 | https://en.wikipedia.org/wiki/Guinier%E2%80%93Preston%20zone | A Guinier–Preston zone, or GP-zone, is a fine-scale metallurgical phenomenon, involving early stage precipitation.
GP-zones are associated with the phenomenon of age hardening, whereby room-temperature reactions continue to occur within a material through time, resulting in changing physical properties. In particular, this occurs in several aluminium series, such as the 6000 and 7000 series alloys.
Physically, GP zones are extremely fine-scale (on the order of 3–10 nm in size) solute-enriched regions of the material, which present physical obstructions to the motion of dislocations beyond the solid solution strengthening provided by the solute components. In 7075 aluminium, for example, Zn–Mg clusters precede the formation of equilibrium MgZn2 precipitates.
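The strengthening effect of such finely spaced obstacles is often estimated with the standard Orowan relation; this textbook expression is added for context and is not taken from the article.

```latex
% Orowan estimate: shear stress needed to bow a dislocation line
% (shear modulus G, Burgers vector b) between obstacles spaced L apart.
\[
  \tau \approx \frac{G b}{L}
\]
% Nanometre-scale GP-zone spacings give a small L and hence a large
% obstacle stress, which appears macroscopically as age hardening.
```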
The zone is named after André Guinier and George Dawson Preston, who independently identified the zones in 1938.
References
Further reading
G.D Preston, Structure of age-hardening aluminium–copper alloys, Nature 142 (1938) 570, September 24
Surface science
Metal heat treatments
Strengthening mechanisms of materials | Guinier–Preston zone | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 223 | [
"Metallurgical processes",
"Materials science",
"Surface science",
"Condensed matter physics",
"Strengthening mechanisms of materials",
"Metal heat treatments"
] |
32,630,150 | https://en.wikipedia.org/wiki/List%20of%20ancient%20oceans | This is a list of former oceans that disappeared due to tectonic movements and other geographical and climatic changes. In alphabetic order:
List
Bridge River Ocean, the ocean between the ancient Insular Islands (that is, Stikinia) and North America
Cache Creek Ocean, a Paleozoic ocean between the Wrangellia Superterrane and Yukon-Tanana Terrane
Iapetus Ocean, the Southern hemisphere ocean between Baltica and Avalonia
Kahiltna-Nutotzin Ocean, Mesozoic
Khanty Ocean, the Precambrian to Silurian ocean between Baltica and the Siberian continent
Medicine Hat Ocean, a small Proterozoic ocean basin
Mezcalera Ocean, the ocean between the Guerrero Terrane and Laurentia
Mirovia, the ocean that surrounded the Rodinia supercontinent
Mongol-Okhotsk Ocean, the early Mesozoic ocean between the North China and Siberia cratons
Oimyakon Ocean, the northernmost part of the Mesozoic Panthalassa Ocean
Paleo-Tethys Ocean, the ocean between Gondwana and the Hunic terranes
Pan-African Ocean, the ocean that surrounded the Pannotia supercontinent
Panthalassa, the vast world ocean that surrounded the Pangaea supercontinent, also referred to as the Paleo-Pacific Ocean
Pharusian Ocean, Neoproterozoic
Poseidon Ocean, Mesoproterozoic
Pontus Ocean, the western part of the early Mesozoic Panthalassa Ocean
Proto-Tethys Ocean, Neoproterozoic
Rheic Ocean, the Paleozoic ocean between Gondwana and Laurussia
Slide Mountain Ocean, the Mesozoic ocean between the ancient Intermontane Islands (that is, Wrangellia) and North America
South Anuyi Ocean, Mesozoic ocean related to the formation of the Arctic Ocean
Tethys Ocean, the ocean between the ancient continents of Gondwana and Laurasia
Thalassa Ocean, the eastern part of the early Mesozoic Panthalassa Ocean
Ural Ocean, the Paleozoic ocean between Siberia and Baltica
See also
Superocean, an ocean that surrounds a global supercontinent
Historical oceans
Mesozoic paleogeography
Paleozoic paleogeography
Proterozoic paleogeography | List of ancient oceans | [
"Physics",
"Environmental_science"
] | 493 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics"
] |
32,631,220 | https://en.wikipedia.org/wiki/Ground%20vibrations | Ground vibrations is a technical term that is being used to describe mostly man-made vibrations of the ground, in contrast to natural vibrations of the Earth studied by seismology. For example, vibrations caused by explosions, construction works, railway and road transport, etc. - all belong to ground vibrations.
General information
Ground vibrations are associated with different types of elastic waves propagating through the ground. These are surface waves, mostly Rayleigh waves, and bulk longitudinal and transverse (shear) waves propagating into the ground depth. The typical frequency range for environmental ground vibrations is 1–200 Hz. Waves of lower frequencies (below 1 Hz) are usually called microseisms, and they are normally associated with natural phenomena, e.g. water waves in the oceans. Environmental ground vibrations generated by rail and road traffic may cause annoyance to residents of nearby buildings, both directly and via generated structure-borne interior noise. Very strong ground vibrations, e.g. those generated by heavy lorries on bumpy roads, may even cause structural damage to very close buildings. Magnitudes of ground vibrations are usually described in terms of particle vibration velocity (in mm/s or m/s). Sometimes they are also described in decibels (relative to the reference particle velocity of 10⁻⁹ m/s). Typical values of ground vibration particle velocity associated with vehicles passing over traffic-calming road humps are in the range of 0.1–2 mm/s. Magnitudes of ground vibrations considered able to cause structural damage to buildings are above 10–20 mm/s.
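The decibel convention mentioned above is a simple logarithmic ratio; a minimal sketch, assuming the stated reference velocity of 10⁻⁹ m/s:

```python
import math

V_REF = 1e-9  # reference particle velocity in m/s, as stated above

def velocity_level_db(v_m_per_s: float) -> float:
    """Particle-velocity level in dB relative to V_REF."""
    return 20.0 * math.log10(v_m_per_s / V_REF)

# Typical traffic-hump magnitudes and the structural-damage range:
for v_mm_per_s in (0.1, 2.0, 10.0):
    print(f"{v_mm_per_s:>5} mm/s -> {velocity_level_db(v_mm_per_s * 1e-3):.0f} dB")
# 0.1 mm/s -> 100 dB, 2.0 mm/s -> 126 dB, 10.0 mm/s -> 140 dB
```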
Ground vibrations from railways
The main sources of ground vibrations generated by railway trains are dynamic forces transmitted from tracks to the ground. These forces are associated with complex processes of interaction of moving train axles with railway tracks supported by the elastic ground. The magnitudes of these forces generally increase with the increase of train speeds. Therefore, the levels of generated ground vibrations may be substantial in the case of high-speed trains. If a train speed becomes larger than Rayleigh wave velocity in the ground, an additional very large increase in generated ground vibrations takes place. This phenomenon is termed ground vibration boom, and it is similar to sonic boom generated by supersonic aircraft.
Ground vibrations from road traffic
The main mechanism responsible for the generation of ground vibrations by moving cars and lorries is the dynamic forces associated with vehicle passage over road irregularities, such as bumps and potholes. These forces, and hence the generated ground vibrations, can be reduced by keeping road surfaces in good condition.
Ground vibrations at construction
The main sources of ground vibrations at construction are pile driving, dynamic compaction, blasting, and operation of heavy construction equipment. These vibrations may harmfully affect surrounding buildings, and their effect ranges from disturbance of residents to visible structural damage.
See also
Love wave
Shear wave
References
Skipp, B.O. (ed), Ground Dynamics and Man-made Processes, The Institution of Civil Engineers, London, 1998.
Krylov, V.V. (ed), Noise and Vibration from High Speed Trains, Thomas Telford Publishing, London, 2001.
Santos, J.A. (ed), Application of Stress-Wave Theory to Piles: Science, Technology and Practice, IOS Press BV, Amsterdam, 2008.
Bull, J.W. (ed), Linear and Non-linear Numerical Analysis of Foundations, Taylor & Francis, New York, Abingdon, 2009.
External links
Ground vibrations at construction
Ground vibrations caused by blasting
Mechanical vibrations
Waves
Seismology | Ground vibrations | [
"Physics",
"Engineering"
] | 713 | [
"Structural engineering",
"Physical phenomena",
"Waves",
"Motion (physics)",
"Mechanics",
"Mechanical vibrations"
] |
32,633,549 | https://en.wikipedia.org/wiki/PTC%20Creo | Creo is a family of Computer-aided design (CAD) apps supporting product design for discrete manufacturers developed by PTC.
Creo runs on Microsoft Windows and provides software for 3D CAD parametric feature solid modeling, 3D direct modeling, 2D orthographic views, finite element analysis and simulation, schematic design, technical illustrations, and viewing and visualization. Creo can also be paired with the machining software Mastercam.
History
PTC began developing Creo in 2009, and announced it using the code name Project Lightning at PlanetPTC Live, in Las Vegas, in June 2010.
In October 2010, PTC unveiled the product name Creo. PTC released Creo 1.0 in June 2011.
Software and features
Creo Parametric and Creo Elements compete directly with CATIA, Siemens NX/Solid Edge, and SolidWorks. The Creo suite of apps replaces and supersedes PTC’s products formerly known as Pro/ENGINEER, CoCreate, and ProductView.
See also
Autodesk Inventor
Creo Elements/Pro
Creo Elements/View
I-DEAS
Mastercam
Parametric Technology Corporation
Siemens NX
Solid Edge
SolidWorks
References
External links
Product design
Computer-aided design software for Windows | PTC Creo | [
"Engineering"
] | 252 | [
"Product design",
"Design"
] |
32,634,940 | https://en.wikipedia.org/wiki/Polbase | Polbase (DNA Polymerase Database) is an open repository of DNA polymerase information. Polbase captures information from published research on polymerase activity, and presents it in context with related work. Polbase indexes over 5,000 references from the 1950s to the present and includes hundreds of polymerases and their related mutants. Polbase's collaborative model allows polymerase investigators to complete, correct and validate Polbase's representation of their work.
Content
Polbase features a listing of known polymerases categorized by organism, polymerase family, and selected properties. Each indexed polymerase has its own snapshot page containing links to all its information in the database. All results in Polbase are stored with the relevant experimental details to put them into context. If structure information is available, Polbase links to the polymerase's Protein Data Bank (PDB) entry. All information gathered in Polbase is linked to the original publication where it was reported.
Features
Polymerases by family, organism and properties
Search by author, organism, polymerase name, property, etc.
Browsing by reference
Browsing by author
Browsing by organism
Information sources
Polbase draws information from a variety of sources including PubMed, PDB, and directly from polymerase investigators.
Interconnections
Polbase is connected with various other databases. These include:
The Protein Data Bank
European Bioinformatics Institute
ExPASy Bioinformatics Resource Portal
UniProt
BRENDA
PubMed
Various Scientific Journals
History
Polbase began in March 2009 with a grant from the NIH's SBIR program and was first presented to the public at MIT's DNA and Mutagenesis Meeting.
In March 2010 Polbase was presented to a larger audience at the Evolving Polymerases 2010 Conference.
Polbase was also presented in more technical detail at the Rocky 2010 ISMB Conference.
Polbase is described in more detail in the 2012 Nucleic Acids Research Database Issue.
Polbase was built at New England Biolabs by Brad Langhorst and Nicole Nichols with the help of founding collaborators Linda Reha-Krantz, Bill Jack, Cathy Joyce, Stu Linn, Stefan Sarafianos, Sam Wilson, and Roger Woodgate.
References
External links
The DNA Polymerase Database (Polbase)
Enzyme databases
DNA replication | Polbase | [
"Chemistry",
"Biology"
] | 463 | [
"Genetics techniques",
"Enzyme databases",
"Biochemistry databases",
"Protein classification",
"DNA replication",
"Molecular biology techniques",
"Molecular genetics"
] |
55,481,097 | https://en.wikipedia.org/wiki/RNA%20origami | RNA origami is the nanoscale folding of RNA, enabling the RNA to create particular shapes to organize these molecules. It is a new method that was developed by researchers from Aarhus University and California Institute of Technology. RNA origami is synthesized by enzymes that fold RNA into particular shapes. The folding of the RNA occurs in living cells under natural conditions. RNA origami is represented as a DNA gene, which within cells can be transcribed into RNA by RNA polymerase. Many computer algorithms are present to help with RNA folding, but none can fully predict the folding of RNA of a singular sequence.
Overview
In nucleic acids nanotechnology, artificial nucleic acids are designed to form molecular components that can self-assemble into stable structures for use ranging from targeted drug delivery to programmable biomaterials. DNA nanotechnology uses DNA motifs to build target shapes and arrangements. It has been used in a variety of situations, including nanorobotics, algorithmic arrays, and sensor applications. The future of DNA nanotechnology is filled with possibilities for applications.
The success of DNA nanotechnology has allowed designers to develop RNA nanotechnology as a growing discipline. RNA nanotechnology combines the simplistic design and manipulation characteristic of DNA with the additional flexibility in structure and diversity in function similar to that of proteins. RNA's versatility in structure and function, favorable in vivo attributes, and bottom-up self-assembly make it an ideal avenue for developing biomaterials and nanoparticle drug delivery. Several techniques were developed to construct these RNA nanoparticles, including RNA cubic scaffolds, templated and non-templated assembly, and RNA origami.
The first work in RNA origami appeared in Science, published by Ebbe S. Andersen of Aarhus University. Researchers at Aarhus University used various 3D models and computer software to design individual RNA origami. Once encoded as a synthetic DNA gene, adding RNA polymerase resulted in the formation of RNA origami. Observation of RNA was primarily done through atomic force microscopy, a technique that allows researchers to look at molecules a thousand times closer than would normally be possible with a conventional light microscope. They were able to form honeycomb shapes, but determined other shapes are also possible.
Cody Geary, a scholar in the field of RNA origami, described the uniqueness of the method of RNA origami. He stated that its folding recipe is encoded in the molecule itself, determined by its sequence. The sequence gives the RNA origami both its final shape and the movements of the structure as it folds. The primary challenge associated with RNA origami stems from the fact that RNA folds on its own and can thus easily tangle itself.
Computer-aided design
Computer-aided design of the RNA origami structure requires three main processes: creating the 3D model, writing the 2D structure, and designing the sequence. First, a 3D model is constructed using tertiary motifs from existing databases. This is necessary to ensure the created structure has feasible geometry and strain. The next process is creating the 2D structure describing the strand path and base pairs from the 3D model. This 2D blueprint introduces sequence constraints, creating primary, secondary, and tertiary motifs. The final step is designing sequences compatible with the designed structure. Design algorithms can be used to create sequences that can fold into various structures.
The double crossover (DX)
To produce a desired shape, the RNA origami method uses double crossovers (DX) to arrange the RNA helices in parallel to each other to form a building block. While DNA origami requires the construction of DNA molecules from multiple strands, researchers were able to devise a method of making DX molecules from only one RNA strand. This was done by adding hairpin motifs to the edges and kissing-loop complexes on internal helices. The stacking of more DX molecules on top of one another creates a junction known as the dovetail seam. This dovetail seam has base pairs that cross between adjacent junctions; thus, the structural seam along the junction becomes sequence-specific. An important aspect of these interactions is the order in which they fold; the order in which interactions form can potentially create a situation in which one interaction blocks another, creating a knot. Because the kissing-loop interactions and dovetail interactions are a half-turn or shorter, they do not create these topological issues.
Comparison with DNA origami
RNA and DNA nanostructures are used for the organization and coordination of important molecular processes. However, there exist several distinct differences in fundamental structure and applications between the two. Although inspired by the DNA origami techniques established by Paul Rothemund, the process for RNA origami is vastly different. RNA origami is a much newer process than DNA origami; DNA origami has been studied for approximately a decade now, while the study of RNA origami has only recently begun.
In contrast to DNA origami, which involves chemically synthesizing the DNA strands and arranging the strands to form any shape desired with the aid of "staple strands", RNA origami is made by enzymes and subsequently folds into pre-rendered shapes. RNA is able to fold into unique ways in complex structures due to a number of secondary structural motifs, such as conserved motifs and short structural elements. A major determinant for RNA topology is the secondary-structure interaction, which include motifs such as pseudoknots and kissing loops, adjacent helices stacking on one another, hairpin loops with bulge content, and coaxial stacks. This is largely a result of four different nucleotides: adenine (A), cytosine (C), guanine (G) and uracil (U), and ability to form non-canonical base pairs.
There also exist more complex and longer-range RNA tertiary interactions. DNA is unable to form these tertiary motifs and thereby cannot match the functional capacity of RNA in performing more versatile tasks. RNA molecules that are correctly folded can serve as enzymes, due to positioning metal ions at their active sites; this gives the molecules a diverse array of catalytic abilities. Because of this relationship to enzymes, RNA structures can potentially be grown within living cells and used to organize cellular enzymes into distinct groups.
Additionally, DNA origami's molecular makeup is not easily incorporated into the genetic material of an organism. However, RNA origami is capable of being written directly as a DNA gene and transcribed using RNA polymerase. Therefore, while DNA origami requires expensive culturing outside of a cell, RNA origami can be produced cheaply and in large quantities directly within cells, just by growing bacteria. The feasibility and cost-effectiveness of manufacturing RNA in living cells, combined with the extra functionality of RNA structure, is promising for the development of RNA origami.
Applications
RNA origami is a new concept and has great potential for applications in nanomedicine and synthetic biology. The method was developed to allow new creations of large RNA nanostructures that create defined scaffolds for combining RNA based functionalities. Because of the infancy of RNA origami, many of its potential applications are still in the process of discovery. Its structures are able to provide a stable basis to allow functionality for RNA components. These structures include riboswitches, ribozymes, interaction sites, and aptamers. Aptamer structures allow the binding of small molecules which gives possibilities for construction of future RNA based nanodevices. RNA origami is further useful in areas such as cell recognition and binding for diagnosis. Additionally, targeted delivery and blood-brain barrier passing have been studied. Perhaps the most important future application for RNA origami is building scaffolds to arrange other microscopic proteins and allow them to work with one another.
References
RNA
Nanotechnology | RNA origami | [
"Materials_science",
"Engineering"
] | 1,591 | [
"Nanotechnology",
"Materials science"
] |
55,485,184 | https://en.wikipedia.org/wiki/ATX-II | ATX-II, also known as neurotoxin 2, Av2, Anemonia viridis toxin 2 or δ-AITX-Avd1c, is a neurotoxin derived from the venom of the sea anemone Anemonia sulcata. ATX-II slows down the inactivation of different voltage-gated sodium channels, including Nav1.1 and Nav1.2, thus prolonging action potentials.
Sources
ATX-II is the main component of the venom of Mediterranean snakelocks sea anemone, Anemonia sulcata. ATX-II is produced by the nematocysts in the sea anemone's tentacles and the anemone uses this venom to paralyze its prey.
Etymology
"ATX-II" is an acronym for "anemone toxin".
Chemistry
Structure
ATX-II is a protein comprising 47 amino acids crosslinked by three disulfide bridges. The molecular mass of the protein is 4.94 kDa (calculated with ProtParam ExPASy).
Family and homology
ATX-II belongs to the sea anemone neurotoxin family. Purification studies of ATX-II and the two other sea anemone neurotoxins, I and III, have revealed the polypeptide nature of these toxins. Toxins I and II are very potent paralyzing toxins that act on crustaceans, fish and mammals and have cardiotoxic and neurotoxic effects. Toxin III has been shown to cause muscular contraction with subsequent paralysis in the crab Carcinus maenas. All three toxins are highly homologous and block neuromuscular transmission in crabs.
Four other sea anemone toxins purified from Condylactis aurantiaca show close sequence similarities with toxins I, II and III of Anemonia sulcata. The effect of these different toxins on Carcinus maenas is visually indistinguishable, namely cramp followed by paralysis and death. However, their mode of action differs. Toxin IV of Condylactis aurantiaca causes a repetitive firing of the excitatory axon for several minutes resulting in muscle contraction without causing a detectable change in the amplitude of the excitatory junction potentials (EJPs). In contrast, the application of Toxin II from Anemonia sulcata results in the increase of the EJPs up to 40 mV causing large action potentials at the muscle fibers. Other toxins with a similar mode of action to ATX-II are α-scorpion toxins. Although both sea anemone and α-scorpion toxins bind to common overlapping elements on the extracellular surface of sodium channels, they belong to distinct families and share no sequence homology. The toxins AFT-II (from Anthopleura fuscoviridis) and ATX-II differ by only one amino acid, L36A, and the protein sequence of BcIII (from Bunodosoma caissarum) is 70% similar to ATX-II.
Target
ATX-II is highly potent at voltage-gated sodium channel subtypes 1.1 and 1.2 (Nav1.1 and Nav1.2), with an EC50 of approximately 7 nM when tested in human embryonic kidney 293 cell lines. Moreover, studies suggest that ATX-II interacts with glutamic acid residues (Glu-1613 and Glu-1616 in Nav1.2) on the loop between the third and fourth transmembrane segments (S3–S4) of domain IV of the alpha-subunit of the neuronal channel Nav1.2 in rats.
The KD of type IIa Na+ channels for ATX II is 76 ± 6 nM. In small and large dorsal root ganglion cells mainly Nav1.1, Nav1.2 and Nav1.6 are sensitive to ATX-II. The binding of the toxin can only occur when the sodium channel is open.
Mode of action
The major action of ATX-II is to delay sodium channel inactivation. Studies using giant crayfish axons and myelinated fibers from frogs indicate that ATX-II acts at low doses, without changing the opening mechanism or steady-state potassium conductance. This mode of action is caused by binding of ATX-II across the extracellular loop. ATX-II slows conformational changes or translocation that are necessary for closing the sodium channel. When applied externally in high concentrations (100 μM range), ATX-II reduces potassium conductance, yet without modifying the kinetic properties of the potassium channel.
ATX-II prolongs the duration of the cardiac action potential, as demonstrated in cultured embryonic chicken cardiac muscle cells. ATX-II also selectively activates A-fibers of peripheral nerves projecting to the sensory neuron of the dorsal root ganglia (DRG) by enhancing resurging currents in DRGs. This mechanism can thereby induce itch-like sensations and pain.
Toxicity
People who came into contact with Anemonia sulcata reported symptoms such as pain and itching. The same symptoms were found in human research subjects after injection of ATX-II into their skin.
In cardiac muscle tissue of various mammals, ATX-II has been shown to produce large and potentially lethal increases in heart rate. The lethal dose of ATX-II for the crab Carcinus maenas is 2 μg/kg.
References
Neurotoxins
Ion channel toxins
Sea anemone toxins | ATX-II | [
"Chemistry"
] | 1,178 | [
"Neurochemistry",
"Neurotoxins"
] |
55,486,277 | https://en.wikipedia.org/wiki/NGC%202032 | NGC 2032 (also known as ESO 56-EN160 and the Seagull Nebula) is an emission nebula in the Dorado constellation and near the supershell LMC-4 and it consists of NGC 2029, NGC 2035 and NGC 2040. It was first discovered by James Dunlop on 27 September 1826, and John Herschel rerecorded it on 2 November 1834. NGC 2032 is located in the Large Magellanic Cloud.
References
ESO objects
2032
Emission nebulae
Supernova remnants
Astronomical objects discovered in 1826
Large Magellanic Cloud
Dorado | NGC 2032 | [
"Astronomy"
] | 121 | [
"Nebula stubs",
"Dorado",
"Astronomy stubs",
"Constellations"
] |
55,486,640 | https://en.wikipedia.org/wiki/Methanosarcina%20sRNAs | sRNA162, sRNA154, sRNA41 are small non-coding RNA (sRNA) identified together with 248 other sRNA candidates by RNA sequencing in methanogenic archaeon Methanosarcina mazei Gö1. These sRNAs were further characterised. It was shown that sRNA162 can interact with both, a cis- and a trans-encoded mRNAs using two distinct domains. The sRNA overlaps the 5′UTR of the MM2442 mRNA and acts as a cis-encoded antisense RNA, and it also regulates MM2441 expression as a trans-encoded sRNA. It exhibits a regulatory role in the metabolic switch between methanol and trimethylamine as carbon and energy source. sRNA154, exclusively expressed under nitrogen deficiency, has a central regulatory role in nitrogen metabolism affecting nitrogenase and glutamine synthetase by masking the ribosome binding site or positively affecting transcript stability. sRNA41, highly expressed during nitrogen sufficiency, is capable to bind several ribosome binding sites independently within a polycistronic mRNA. It was proposed to inhibits translation initiation of all ACDS (acetyl-CoA decarbonylase/synthase complex) genes in N-dependent manner.
See also
Bacterial sRNA involved in nitrogen metabolism: NsiR4
Other archaeal sRNAs:
Pyrobaculum asR3 small RNA
Archaeal H/ACA sRNA
References
Non-coding RNA | Methanosarcina sRNAs | [
"Chemistry"
] | 311 | [
"Biochemistry stubs",
"Molecular and cellular biology stubs"
] |
55,486,999 | https://en.wikipedia.org/wiki/LysM%20domain | In molecular biology the LysM domain is a protein domain found in a wide variety of extracellular proteins and receptors. The LysM domain is named after the Lysin Motif which was the original name given to the sequence motif identified in bacterial proteins. The region was originally identified as a C-terminal repeat found in the Enterococcus hirae muramidase. The LysM domain is found in a wide range of microbial extracellular proteins, where the LysM domain is thought to provide an anchoring to extracellular polysaccharides such as peptidoglycan and chitin. LysM domains are also found in plant receptors, including NFP, the receptor for Nod factor which is necessary for the root nodule symbiosis between legumes and symbiotic bacteria. The LysM domain is typically between 44 and 65 amino acid residues in length. The structure of the LysM domain showed that it is composed of a pair of antiparallel beta strands separated by a pair of short alpha helices.
See also
Nod factor
References
Protein domains | LysM domain | [
"Biology"
] | 228 | [
"Protein domains",
"Protein classification"
] |
55,487,424 | https://en.wikipedia.org/wiki/Stochastic%20Signal%20Density%20Modulation | Stochastic Signal Density Modulation (SSDM) is a novel power modulation technique primarily used for LED power control. The information is encoded - or the power level is set - using pulses that have pseudo-random widths. The pulses are produced so that, on average, the produced signal will have the desired ratio between high and low states. The main benefit of using SSDM over, for example, Pulse-width modulation (PWM), which is usually the preferred method for controlling LED power, is reduced electromagnetic interference. Figure 1 illustrates a SSDM signal and demonstrates how the average signal density approaches desired value. The pseudo-random pulses in the signal are visible.
SSDM can be seen as a special case of Pulse-density modulation (PDM) or Random Pulse Width Modulation (RPWM).
Principle and signal creation
Producing an SSDM signal requires a linear-feedback shift register (LFSR) or similar source providing a sequence of pseudo-random numbers, a signal density register, and a comparator. On each clock cycle, the LFSR provides one pseudo-random number. This is compared against the signal density register. The output is set high if the generated pseudo-random number is at or below the signal density register value; if the pseudo-random number is above the signal density register value, the output is low. This is illustrated in figure 2.
When an LFSR is used to provide pseudo-random numbers with the maximum possible period, the sequence will contain each of the numbers in its range once (except zero) and then repeat. For a 3-bit LFSR the sequence is [1 2 5 3 7 6 4]. If the signal density register were set to 3, the corresponding SSDM signal would repeat the [1 1 0 1 0 0 0] pattern.
Figure 3 demonstrates the signal generation process. The generated sequence of pseudo-random numbers is shown in the first sub-plot together with a desired 30% threshold. The lower plot shows the produced SSDM signal. It can be seen that the output is high only when the generated pseudo-random numbers are at or below the desired threshold. The SSDM signal will average at 30% of the maximum. The repetitive nature of the LFSR is also visible in the pseudo-random number sequence and the resulting SSDM signal.
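A minimal Python sketch of this scheme follows; the 3-bit Fibonacci LFSR with feedback polynomial x³ + x² + 1 is an assumption chosen because it reproduces the [1 2 5 3 7 6 4] sequence above, and real implementations would typically use longer registers for finer resolution.

def lfsr3(seed=1):
    # 3-bit maximal-length Fibonacci LFSR; taps at bit positions 3 and 2.
    state = seed
    while True:
        yield state
        bit = ((state >> 2) ^ (state >> 1)) & 1
        state = ((state << 1) | bit) & 0b111

def ssdm(density, n_cycles):
    # Output is high whenever the pseudo-random number is at or below
    # the value in the signal density register.
    prng = lfsr3()
    return [1 if next(prng) <= density else 0 for _ in range(n_cycles)]

print(ssdm(3, 7))   # [1, 1, 0, 1, 0, 0, 0], averaging 3/7 of full scale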
Comparison against other modulation methods
SSDM has significant advantages over PWM when used for power control. In the PWM signal, the frequency at which the output is turned high is constant. Only the width of the pulses is varied. This results in relatively high electromagnetic interference amplitude. In the case of SSDM, as the pulse widths and intervals are not constant, the resulting interference is spread across a wider spectrum, reducing the overall interference amplitude.
The fundamentally different operational principles of SSDM and PWM are illustrated in figure 4. Both signals are shown with two different signal density or pulse-width ratios: 10% and 70%.
Figure 5 illustrates the simulated spectrum of an SSDM signal compared to PWM. Both are set to 30% signal density or pulse width and operate at the same frequency. As can be seen, the amplitude of the lowest frequency component in SSDM is attenuated by more than 20 dB. This also demonstrates that for SSDM most of the signal energy is shifted to higher frequencies. Some sources cite attenuation of 30 dB.
However, due to the higher frequencies present in the signal, SSDM may demand more from the load-driving circuitry than PWM. If any filtering is applied to the SSDM signal to reduce the higher frequency components, this may also affect the signal density. It is also worth noting that the increased number of individual transitions may increase the switching losses in the MOSFETs used.
Like PWM, Delta-sigma modulation (DSM) produces far higher amplitude on the fundamental frequency and for the four following harmonics than SSDM.
Practical implementation
Hardware support
The SSDM technology is patented by Cypress Semiconductor. Hence, the microcontrollers supporting SSDM signal generation using dedicated hardware modules are PSoC devices from Cypress Semiconductor. An SSDM module is available in PSoC 1 series devices. With PSoC series 3, 4 and 5, Precision Illumination Signal Modulation (PrISM) modules can be used. PrISM is interchangeable with SSDM.
While it is recognized in US Patent 8129924 that one practical option for signal generation is the use of CPLD or FPGA devices, as of October 2017, no such implementations have been publicly disclosed.
Considerations
When creating an SSDM signal, regardless of the method (HW, FPGA, Software), some consideration should be put on the frequency at which the signal is created and the other circuitry:
The lowest frequency component of the created signal is determined by the frequency at which the LFSR creates random numbers and the desired resolution, as follows: f_lowest = f_LFSR / (2^N − 1), where N is the resolution in bits (the LFSR sequence repeats with a period of 2^N − 1 clock cycles).
When SSDM is used for LED power control, the driving frequency should be higher than the flicker fusion threshold. For PrISM technology, a minimum output frequency of 120 Hz should be sufficient to avoid visible flicker, and 300 Hz should guarantee flicker-free operation.
When the frequency of the SSDM signal increases, so does the number of high-frequency components. This may affect the output-driving circuitry, and in some cases, filtering may occur. Filtering may affect the resulting pulse density.
Some implementations do demonstrate the use of a low-pass filter that is set to filter the higher part of the frequencies in the SSDM signal.
The use of SSDM may increase switching losses.
The impact of the higher frequency components in the SSDM signal on other systems should be analyzed.
See also
Pulse-width modulation
Pulse-density modulation
Electromagnetic interference
References
External links
SSDM spectral analysis scripts for Matlab
Cypress Semiconductor AN49262: Modulation Techniques for LED Dimming
Electrical components | Stochastic Signal Density Modulation | [
"Technology",
"Engineering"
] | 1,197 | [
"Electrical engineering",
"Electrical components",
"Components"
] |
55,488,219 | https://en.wikipedia.org/wiki/TypeDB | TypeDB is an open-source, distributed database management system that relies on a user-defined type system to model, manage, and query data.
Overview
The data model of TypeDB is based on primitives from conceptual data modeling, which are implemented in a type system (see § Data and query model). The type system can be extended with user-defined types, type dependencies, and subtyping, which together act as a database schema. The model has been mathematically defined under the name polymorphic entity-relation-attribute model.
To specify schemas and to create, modify, and extract data from the TypeDB database, programmers use the query language TypeQL. The language is noteworthy for its intended resemblance to natural language, following a subject-verb-object statement structure for a fixed set of “key verbs” (see § Examples).
History
TypeDB has roots in the knowledge representation system Grakn (a portmanteau of the words "graph" and "knowledge"), which was initially developed at the University of Cambridge Computer Science Department. Grakn was commercialized in 2017, and development was taken over by Grakn Labs Ltd. Later that year, Grakn was awarded the "Product of the Year" award by the University of Cambridge Computer Science Department.
In 2021, the first version of TypeDB was built from Grakn with the intention of creating a general-purpose database. The query language of Grakn, Graql, was incorporated into TypeDB's query language, TypeQL, at the same time.
TypeDB Cloud, the database-as-a-service edition of TypeDB, was first launched at the end of 2023.
Grakn version history
The initial version of Grakn, version 0.1.1, was released on September 15, 2016.
Grakn 1.0.0 was released on December 14, 2017.
Grakn 2.0.0 was released on April 1, 2021.
TypeDB version history
TypeDB 2.1.0, the first public version of TypeDB, was released on May 20, 2021.
Features
TypeDB is offered in two editions: an open-source edition, called TypeDB Core, and a proprietary edition, called TypeDB Cloud, which provides additional cloud-based management features.
TypeDB features a NoSQL data and querying model, which aims to introduce ideas from type systems and functional programming to database management.
Database architecture
General database features include the following.
Data and query model
TypeDB's data and query model differs from traditional relational database management systems in the following points.
Limitations
By relying on a non-standard data and query model, TypeDB (at present) has no support for the integration of established relational or column-oriented database standards, file formats (such as CSV, Parquet), or the query language SQL. Moreover, TypeDB has no direct facility for working with unstructured data or vector data.
Query language
TypeQL, the query language of TypeDB, acts both as data definition and data manipulation language.
The query language builds on well-known ideas from conceptual modeling, referring to independent types holding objects as entity types, dependent types holding objects as relation types, and types holding values as attribute types. The language is composed of query clauses comprising statements. Statements, especially for data manipulation, usually follow a subject-verb-object structure.
The formal specification of the query language was presented at ACM PODS 2024, where it received the "Best Newcomer" Award.
Examples
The following (incomplete) query creates a type schema using a query clause.
define
person sub entity,
owns name,
plays booking:passenger;
booking sub relation,
relates passenger,
relates flight,
owns booking_date;
name sub attribute,
value string;
...
The following query retrieves objects and values from the database that match the pattern given in the clause.
match
$j isa person, has name $n;
$n contains "Jane";
$b isa booking,
links (passenger: $j, flight: $f),
has booking_date >= 2024-01-01;
$f has flight_time < 120;
$f links (destination: $c);
$c has name "Santiago de Chile";
Licensing
The open-source edition of TypeDB is published under the Mozilla Public License.
References
Bibliography
Graph databases
Free database management systems
Free software programmed in Java (programming language)
2016 software | TypeDB | [
"Mathematics"
] | 920 | [
"Graph databases",
"Mathematical relations",
"Graph theory"
] |
55,494,541 | https://en.wikipedia.org/wiki/Assistance%20for%20airline%20passengers%20with%20disabilities | There are no worldwide uniform standards regulating the provision of assistance for airline passengers with disabilities. American regulations place the responsibility on the airlines, the European Union's rules make the airport responsible for providing the assistance, and in South America there are no regulations at all. The International Air Transport Association (IATA) is concerned about the difficulties caused by inconsistent regulations.
European Union
According to EU regulation 1107/2006, persons with reduced mobility have the right to assistance during airline travel. The assistance is mandated for flights on any airline departing from an airport in the EU, or flights to an airport in the EU on an aircraft registered in any EU country. The EU has specific regulations regarding airline passengers with reduced mobility. No passenger may be turned away due to their disability, except for reasons based on safety. Assistance should be provided to these passengers, either through the airport or a third party hired by the airport, and the EU provides guidance in training airport employees to assist these passengers. The EU recommends that the extra cost of these services be covered by every airline at the airport in proportion to the number of passengers each one carries. Passengers should be compensated for damaged items such as wheelchairs and assistive devices "in accordance with rules of international, Community and national law". Unfortunately, the Montreal Convention restricts compensation to 1,131 SDRs (around $1,500), significantly less than the value of many wheelchairs.
United States
The Air Carrier Access Act of 1986 prohibits commercial airlines from discriminating against passengers with disabilities. The act was passed by the U.S. Congress in direct response to a narrow interpretation of Section 504 of the Rehabilitation Act of 1973 by the U.S. Supreme Court in U.S. Department of Transportation (DOT) v. Paralyzed Veterans of America (PVA) (1986). In this case, the Supreme Court held that private, commercial air carriers are not liable under Section 504 because they are not "direct recipients" of federal funding to airports.
Airlines are required to provide passengers with disabilities any assistance they may need in order to travel like all other passengers. This includes assisting them in boarding with a wheelchair or other guided assistance, helping them disembark from a plane upon landing, or connecting these individuals to another flight. Airlines are also required to provide seating accommodations that meet the disability-related needs of these individuals.
The U.S. Department of Transportation no longer includes emotional support animals in the Air Carrier Access Act (ACAA), the act that allows service animals to fly on airplanes if they meet requirements. Before December 2020, it included emotional support animals in its definition of service animals (US Department of Transportation, 2020).
In 2022, it was announced that the U.S. Department of Transportation (DOT) had published the Airline Passengers with Disabilities Bill of Rights. It (as stated by the DOT) "describes the fundamental rights of air travelers with disabilities under the Air Carrier Access Act and its implementing regulation, 14 Code of Federal Regulations (CFR) Part 382."
IATA ticket codes
Specific IATA codes are used on the flight ticket to indicate the kind of assistance the person needs, such as wheelchair assistance inside the terminal, between the terminal and the plane, climbing up/down to/from the plane, and moving within the plane.
References
Aviation law
Accessible transportation | Assistance for airline passengers with disabilities | [
"Physics"
] | 670 | [
"Physical systems",
"Transport",
"Accessible transportation"
] |
43,850,575 | https://en.wikipedia.org/wiki/Ets%20variant%202 | Ets variant 2 is a protein that in humans is encoded by the ETV2 gene. It is a transcription factor and also plays a role in vascular endothelial cell development.
References
Transcription factors | Ets variant 2 | [
"Chemistry",
"Biology"
] | 42 | [
"Protein stubs",
"Gene expression",
"Signal transduction",
"Biochemistry stubs",
"Induced stem cells",
"Transcription factors"
] |
43,851,055 | https://en.wikipedia.org/wiki/Custom%20peptide%20synthesis | Custom peptide synthesis is the commercial production of peptides for use in biochemistry, biology, biotechnology, pharmacology and molecular medicine. Custom peptide synthesis provides synthetic peptides as valuable tools to biomedical laboratories. Synthetic oligopeptides are used extensively in research for structure-function analysis (for example to study protein-protein interfaces), for the development of binding assays, the study of receptor agonist/antagonists or as immunogens for the production of specific antibodies. Generally, peptides are synthesized by coupling the carboxyl group or C-terminus of one amino acid to the amino group or N-terminus of another using automated solid phase peptide synthesis chemistries. However, liquid phase synthesis may also be used for specific needs.
Automated solid phase polypeptide synthesis
Large scale custom peptide synthesis can be carried out either in a liquid solution or in solid phase. In general, peptides shorter than 8 amino acids are prepared more economically by solution chemistry. Peptides larger than 8 residues are generally assembled by solid phase chemistry. Solid phase peptide synthesis (SPPS) can be carried out either manually or in a fully automated fashion. Manual synthesis for short peptides is advantageous as it allows for more flexibility when scaling up and it permits troubleshooting of unexpected problems with more ease. For example, an operator can wash away piperidine during Fmoc deprotection, in the event of a power failure or instrument failure. Furthermore, thermodynamic mixing can be better controlled with a manual approach. On the other hand, large scale fully automated peptide synthesis instruments have the obvious advantage of unattended operation and extensive documentation of the synthesis run. Therefore, automated peptide synthesis is usually selected as the best choice for the synthesis of longer peptides in the mid-scale range.
Commercial peptide synthesis
Peptide synthesis providers are measured by the quality level and the maximum length of the synthesized peptides, since it is more difficult to synthesize longer peptides at high quality. The synthesized peptides must undergo a QC procedure by analytical HPLC and mass spectrometry. Often, amino acid analysis and sequencing are also required.
Applications of commercially synthesized peptides
Biologically active peptides have been integrated into a growing number of active pharmaceutical ingredients (APIs) as well as standalone products such as vasopressin, gonadorelin, leuprolide, and goserelin. Completion of the Human Genome Project resulted in the identification of approximately 30,000 proteins encoded in the human genome and provided many more new target molecules for biomedical researchers to explore. To investigate the possibility of increasing the native potency of a given peptide or protein using a rational design approach, small and large amounts of peptides are needed, some at the milligram scale. Once a desired activity or potency is identified, larger scale synthesis is needed. For this, gram to multi-gram scale may be needed in order to initiate small animal studies. Often, after successful validation, an even larger scale of synthesis may be desired. These can range from hundreds of grams to multi-kilogram amounts.
References
Chemical synthesis
Peptides | Custom peptide synthesis | [
"Chemistry"
] | 637 | [
"Biomolecules by chemical classification",
"Chemical synthesis",
"nan",
"Peptides",
"Molecular biology"
] |
43,854,082 | https://en.wikipedia.org/wiki/Direct%20sum%20of%20topological%20groups | In mathematics, a topological group is called the topological direct sum of two subgroups and if the map
is a topological isomorphism, meaning that it is a homeomorphism and a group isomorphism.
Definition
More generally, $G$ is called the direct sum of a finite set of subgroups $H_1, \ldots, H_n$ if the map
$$\prod_{i=1}^{n} H_i \to G, \qquad (h_1, \ldots, h_n) \mapsto h_1 h_2 \cdots h_n$$
is a topological isomorphism.
If a topological group $G$ is the topological direct sum of the family of subgroups $H_1, \ldots, H_n$, then in particular, as an abstract group (without topology) it is also the direct sum (in the usual way) of the family $H_i$.
Topological direct summands
Given a topological group $G$, we say that a subgroup $M$ is a topological direct summand of $G$ (or that $M$ splits topologically from $G$) if and only if there exists another subgroup $N \leq G$ such that $G$ is the direct sum of the subgroups $M$ and $N$.
The subgroup $M$ is a topological direct summand if and only if the extension of topological groups
$$0 \to M \xrightarrow{\;\iota\;} G \xrightarrow{\;\pi\;} G/M \to 0$$
splits, where $\iota$ is the natural inclusion and $\pi$ is the natural projection.
Examples
Suppose that $G$ is a locally compact abelian group that contains the unit circle $\mathbb{T}$ as a subgroup. Then $\mathbb{T}$ is a topological direct summand of $G$. The same assertion is true for the real numbers $\mathbb{R}$.
See also
References
Topological groups
Topology | Direct sum of topological groups | [
"Physics",
"Mathematics"
] | 234 | [
"Space (mathematics)",
"Topological spaces",
"Topology",
"Space",
"Geometry",
"Topological groups",
"Spacetime"
] |
43,864,185 | https://en.wikipedia.org/wiki/Aviation%20Security%20in%20Airport%20Development | Aviation Security in Airport Development (ASIAD) is an anti-terrorism program implemented by the Department for Transport in the United Kingdom to incorporate design elements into airports that will impart resistance to bomb blasts. Components such as heat-strengthened laminated glass are used for windows, security barriers, and terminal facades.
Designs employed
Bespoke structural bonding of frame to glass.
Increasing the strength of components for track and door running systems
Maintaining flexibility and ductility of door frame components
Restriction of projectile components when high forces of an explosive event occur
Increasing robustness of drive motors, running gears, and operating systems
Incorporating combinations of multi-laminated glass at varying thicknesses and with anti-shard glass properties
Built-in sensors to identify forced opening, etc.
Blast-resistant anti-jump runner systems
Toughened sensor controls
Post-blast retained structural barriers to stop physical attacks, unauthorized or forced entries, or escapes
References
Aviation security
Airport infrastructure | Aviation Security in Airport Development | [
"Engineering"
] | 187 | [
"Airport infrastructure",
"Aerospace engineering"
] |
56,875,216 | https://en.wikipedia.org/wiki/Ulrike%20Diebold | Ulrike Diebold (born 12 December 1961, in Kapfenberg, Austria) is an Austrian physicist and materials scientist who is a professor of surface science at TU Vienna. She is known for her groundbreaking research on the atomic scale geometry and electronic structure of metal-oxide surfaces.
Early life and education
Diebold was born on 12 December 1961 in Kapfenberg, Austria. She spent much of her high school years reading, skiing, and agonizing over what to major in at the university. She ultimately settled on engineering physics, an area with good job prospects that was also general enough to accommodate a variety of future directions. After completing her diploma in engineering physics (TU Vienna, 1986), she became increasingly enthusiastic about experimental physics while working on her master's thesis, and ultimately completed a Doctor of Technology (Dr. techn.) in this area with Prof. Peter Varga (TU Vienna, 1990).
Career
Diebold's first appointment after graduation was as a post-doctoral research associate in the group of Theodore E. Madey in the department of physics at Rutgers University (1990–1993). It was there that she was first introduced to oxide surfaces, an area that she would later come to refer to as "the love of her scientific life". Her first faculty appointment followed, at Tulane University, New Orleans, USA, where she was an assistant professor (1993–1999), associate professor (1999–2001), and professor of physics (2001–2009), and also an adjunct professor of chemistry (1993–2009). During this time period, she also completed her habilitation in experimental physics (TU Vienna, 1998), held the Yahoo! Founder Chair in Science and Engineering (2006–2009), and was the associate department chair (2002–2009).
In 2005, Diebold and her group were forced to temporarily evacuate from New Orleans, which experienced massive flooding and power outages from the impact of Hurricane Katrina. They were hosted by the group of Theodore E. Madey at Rutgers University during this challenging period.
In 2010 she moved to the Institute of Applied Physics at TU Wien, where she is currently a professor of surface science and deputy department head, and retains the title of research professor at Tulane University. Since 2022 she has also served as Vice President of the Austrian Academy of Sciences.
Research
Ulrike Diebold is well known for her influential work in the fields of surface science, materials and physical chemistry, and condensed matter physics. In particular, she has contributed greatly to the understanding of atomic-scale surface structure and electronic surface structure of metal oxides. For her work, she mainly employs Ultra-high vacuum technology and Scanning Tunneling Microscopy.
Awards and honors
In 2013, Diebold was the sole recipient of Austria's highest research award across all disciplines, the Wittgenstein Award. The award, which comes with substantial unrestricted research funds, is bestowed in support of the notion that scientists should be guaranteed the greatest possible freedom and flexibility in the performance of their research. It enabled Diebold's research activities to flourish without restriction. Other honors include:
2004 Fellow, American Physical Society, "For groundbreaking research on the role of defects in the interplay between bulk and surface properties of transition-metal oxides and on STM imaging of their surface structure."
2005 Fellow of the American Vacuum Society
2007 Fellow, American Association for the Advancement of Science
2011, 2019 Advanced Grants by the European Research Council, for work on "Microscopic Processes and Phenomena at Oxide Surfaces and Interfaces" (2011), and "Water at Oxide Surfaces: a Fundamental Approach" (2019).
2013 Arthur W. Adamson Award of the American Chemical Society, for Distinguished Service in the Advancement of Surface Chemistry.
2013 Wittgenstein Award
2014 European Academy of Sciences
2014 Elected as a Full Member of the Austrian Academy of Sciences.
2015 Blaise Pascal medal in Materials Sciences by the European Academy of Sciences, for "Surfaces of Metal Oxides, Studied at the Atomic Scale".
2015 Debye Lecturer at Utrecht University, The Netherlands, entitled "Surface Science Studies of an Iron Oxide Model Catalyst".
2015 21st Annual Schrödinger Lecturer at Trinity College Dublin, Ireland, with the title "An Atomic-Scale View at Oxide Surfaces".
2015 R. Brdička memorial lecturer at the J. Heyrovský Institute of Physical Chemistry, Prague, entitled "Surface Science of Metal Oxides".
2015 Elected to the Leopoldina, the national academy of sciences in Germany.
2019 Science award of the city of Vienna.
2020 Gerhard Ertl Lecture Award
2021 International Honorary Member of the American Academy of Arts and Sciences
2022 Fellow of the Royal Society of Chemistry
Editorial activities
Diebold has served in a number of editorial roles and on a number of advisory boards for scientific journals. These include:
2003–present Surface Science Reports advisory editorial board
2006 – 2007 Journal of Physics: Condensed Matter Surface, Interface and Atomic-Scale Science editorial board
2006 – 2007 Chemical Physics, guest editor of special issue "Doping and Functionalization of Photoactive Semiconducting Metal Oxides" with C. Di Valentin and A. Selloni
2007 – 2010 Open Journal of Physical Chemistry, advisory editorial board
2009 - 2009 Journal of Physics: Condensed Matter "guest editor of special issue on Non-thermal Processes on Surfaces, dedicated to the memory of Theodore E Madey and perspectives on surface science" with Thomas M. Orlando
2016–present npj Quantum Materials, advisory editorial board member
2017 – 2019 ACS Energy Letters, editorial advisory board
2019 – 2021 Physical Review Research, editorial board
2020 – 2021 Science, board of reviewing editors
Personal life
Diebold holds dual citizenship of both Austria and the US. She is married to Gerhard Piringer with whom she has two sons, Thomas (born 1996) and Niklas (born 1999).
References
1961 births
Living people
20th-century Austrian physicists
Austrian women physicists
Austrian physical chemists
Academic staff of TU Wien
Surface science
20th-century American physicists
20th-century American women scientists
21st-century American physicists
21st-century American women scientists
People from Kapfenberg
Fellows of the American Physical Society | Ulrike Diebold | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,240 | [
"Condensed matter physics",
"Surface science"
] |
56,875,243 | https://en.wikipedia.org/wiki/Batchelor%E2%80%93Chandrasekhar%20equation | The Batchelor–Chandrasekhar equation is the evolution equation for the scalar functions, defining the two-point velocity correlation tensor of a homogeneous axisymmetric turbulence, named after George Batchelor and Subrahmanyan Chandrasekhar. They developed the theory of homogeneous axisymmetric turbulence based on Howard P. Robertson's work on isotropic turbulence using an invariant principle. This equation is an extension of Kármán–Howarth equation from isotropic to axisymmetric turbulence.
Mathematical description
The theory is based on the principle that the statistical properties are invariant for rotations about a particular direction (say), and reflections in planes containing and perpendicular to . This type of axisymmetry is sometimes referred to as strong axisymmetry or axisymmetry in the strong sense, opposed to weak axisymmetry, where reflections in planes perpendicular to or planes containing are not allowed.
Let the two-point correlation for homogeneous turbulence be
$$R_{ij}(\mathbf{r}, t) = \overline{u_i(\mathbf{x}, t)\, u_j(\mathbf{x} + \mathbf{r}, t)}.$$
A single scalar describes this correlation tensor in isotropic turbulence, whereas it turns out that for axisymmetric turbulence, two scalar functions are enough to uniquely specify the correlation tensor. In fact, Batchelor was unable to express the correlation tensor in terms of two scalar functions and ended up with four scalar functions; nevertheless, Chandrasekhar showed that it could be expressed with only two scalar functions by expressing the solenoidal axisymmetric tensor as the curl of a general axisymmetric skew tensor (reflectionally non-invariant tensor).
Let $\boldsymbol\lambda$ be the unit vector which defines the axis of symmetry of the flow; then we have two scalar variables, $r = |\mathbf{r}|$ and $\mu = \boldsymbol\lambda \cdot \mathbf{r}/r$. Since $|\mu| \leq 1$, it is clear that $\mu$ represents the cosine of the angle between $\boldsymbol\lambda$ and $\mathbf{r}$. Let $Q_1$ and $Q_2$ be the two scalar functions that describe the correlation tensor; then the most general axisymmetric tensor which is solenoidal (incompressible) is given by,
where
The differential operators appearing in the above expressions are defined as
Then the evolution equations (the equivalent of the Kármán–Howarth equation) for the two scalar functions are given by
where $\nu$ is the kinematic viscosity and
The scalar functions $S_1$ and $S_2$ are related to the triple correlation tensor in exactly the same way that $Q_1$ and $Q_2$ are related to the two-point correlation tensor. The triple correlation tensor is
Here $\rho$ is the density of the fluid.
Properties
The trace of the correlation tensor reduces to
The homogeneity condition implies that both $Q_1$ and $Q_2$ are even functions of $r$ and $\mu$.
Decay of the turbulence
During decay, if we neglect the triple correlation scalars, then the equations reduce to axially symmetric five-dimensional heat equations,
Solutions to these five-dimensional heat equations were obtained by Chandrasekhar. The initial conditions can be expressed in terms of Gegenbauer polynomials (without loss of generality),
where $C_n^{3/2}$ are Gegenbauer polynomials. The required solutions are
where $J$ denotes a Bessel function of the first kind.
As $t \to \infty$, the solutions become independent of $\mu$.
See also
Kármán–Howarth equation
Kármán–Howarth–Monin equation
References
Equations of fluid dynamics
Fluid dynamics
Turbulence | Batchelor–Chandrasekhar equation | [
"Physics",
"Chemistry",
"Engineering"
] | 621 | [
"Equations of fluid dynamics",
"Turbulence",
"Equations of physics",
"Chemical engineering",
"Piping",
"Fluid dynamics"
] |
56,877,042 | https://en.wikipedia.org/wiki/Prandtl%E2%80%93Batchelor%20theorem | In fluid dynamics, Prandtl–Batchelor theorem states that if in a two-dimensional laminar flow at high Reynolds number closed streamlines occur, then the vorticity in the closed streamline region must be a constant. A similar statement holds true for axisymmetric flows. The theorem is named after Ludwig Prandtl and George Batchelor. Prandtl in his celebrated 1904 paper stated this theorem in arguments, George Batchelor unaware of this work proved the theorem in 1956. The problem was also studied in the same year by Richard Feynman and Paco Lagerstrom and by W.W. Wood in 1957.
Mathematical proof
At high Reynolds numbers, the two-dimensional problem governed by the two-dimensional Euler equations reduces to solving a problem for the stream function $\psi$, which satisfies
$$\nabla^2 \psi = -\omega(\psi),$$
where $\omega$ is the only non-zero component of the vorticity vector, in the $z$-direction. As it stands, the problem is ill-posed since the vorticity distribution can have an infinite number of possibilities, all of which satisfy the equation and the boundary condition. This is not true if no streamline is closed, in which case every streamline can be traced back to the boundary, where $\psi$ and therefore its corresponding vorticity are prescribed. The difficulty arises only when there are some closed streamlines inside the domain that do not connect to the boundary, and one may suppose that at high Reynolds numbers, $\omega$ is not uniquely defined in regions where closed streamlines occur. The Prandtl–Batchelor theorem, however, asserts that this is not the case: $\omega$ is uniquely defined in such cases as well, as shown through a proper examination of the limiting process.
The steady, non-dimensional vorticity equation in our case reduces to
$$\mathbf{u} \cdot \nabla \omega = \frac{1}{Re} \nabla^2 \omega.$$
Integrate the equation over a surface $S$ lying entirely in the region where we have closed streamlines, bounded by a closed contour $C$:
$$\int_S \mathbf{u} \cdot \nabla \omega \, dS = \frac{1}{Re} \int_S \nabla^2 \omega \, dS.$$
The integrand in the left-hand side term can be written as $\nabla \cdot (\omega \mathbf{u})$ since $\nabla \cdot \mathbf{u} = 0$. By the divergence theorem, one obtains
$$\oint_C \omega\, \mathbf{u} \cdot \hat{\mathbf{n}} \, dl = \frac{1}{Re} \oint_C \nabla \omega \cdot \hat{\mathbf{n}} \, dl,$$
where $\hat{\mathbf{n}}$ is the outward unit vector normal to the contour line element $dl$. The left-hand side integrand can be made zero if the contour is taken to be one of the closed streamlines, since then the velocity vector projected normal to the contour will be zero, that is to say $\mathbf{u} \cdot \hat{\mathbf{n}} = 0$. Thus one obtains
$$\oint_C \nabla \omega \cdot \hat{\mathbf{n}} \, dl = 0.$$
This expression is true for finite but large Reynolds number since we did not neglect the viscous term before.
Unlike two-dimensional inviscid flows, where $\omega = \omega(\psi)$ since $\mathbf{u} \cdot \nabla \omega = 0$, with no restrictions on the functional form of $\omega(\psi)$, in viscous flows $\omega \neq \omega(\psi)$ in general. But for large but finite $Re$, we can write $\omega = \omega(\psi) + O(Re^{-1})$, and these small corrections become smaller and smaller as we increase the Reynolds number. Thus, in the limit $Re \to \infty$, in the first approximation (neglecting the small corrections), we have
$$\oint_C \nabla \omega(\psi) \cdot \hat{\mathbf{n}} \, dl = 0.$$
Since $d\omega/d\psi$ is constant for a given streamline (note that $\nabla \omega(\psi) = \frac{d\omega}{d\psi} \nabla \psi$), we can take that term outside the integral:
$$\frac{d\omega}{d\psi} \oint_C \nabla \psi \cdot \hat{\mathbf{n}} \, dl = 0.$$
One may notice that the integral is the negative of the circulation, since
$$\Gamma = \oint_C \mathbf{u} \cdot d\mathbf{l} = -\oint_C \nabla \psi \cdot \hat{\mathbf{n}} \, dl,$$
where we used the Stokes theorem for circulation and the relation $\mathbf{u} = \nabla \psi \times \hat{\mathbf{z}}$.
where we used the Stokes theorem for circulation and . Thus, we have
The circulation around those closed streamlines is not zero (unless the velocity at each point of the streamline is zero with a possible discontinuous vorticity jump across the streamline). The only way the above equation can be satisfied is if
$$\frac{d\omega}{d\psi} = 0,$$
i.e., the vorticity is not changing across these closed streamlines, thus proving the theorem. Of course, the theorem is not valid inside the boundary layer regime. This theorem cannot be derived from the Euler equations.
References
Fluid dynamics | Prandtl–Batchelor theorem | [
"Chemistry",
"Engineering"
] | 711 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
60,310,734 | https://en.wikipedia.org/wiki/Algorithmic%20technique | In mathematics and computer science, an algorithmic technique is a general approach for implementing a process or computation.
General techniques
There are several broadly recognized algorithmic techniques that offer a proven method or process for designing and constructing algorithms. Different techniques may be used depending on the objective, which may include searching, sorting, mathematical optimization, constraint satisfaction, categorization, analysis, and prediction.
Brute force
Brute force is a simple, exhaustive technique that evaluates every possible outcome to find a solution.
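As a minimal illustration (the pair-sum task and the sample values are hypothetical), a brute-force search in Python simply enumerates every candidate until one satisfies the goal:

from itertools import combinations

def pair_with_sum(values, target):
    # Exhaustively test every possible pair of values.
    for a, b in combinations(values, 2):
        if a + b == target:
            return (a, b)
    return None

print(pair_with_sum([8, 3, 11, 7, 2], 10))   # (8, 2)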
Divide and conquer
The divide and conquer technique decomposes complex problems recursively into smaller sub-problems. Each sub-problem is then solved and these partial solutions are recombined to determine the overall solution. This technique is often used for searching and sorting.
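Merge sort is a classic instance; the illustrative Python sketch below splits the input, recursively sorts each half, and recombines the partial solutions:

def merge_sort(xs):
    if len(xs) <= 1:                  # base case: already sorted
        return xs
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge the two halves
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 5, 6]))   # [1, 2, 5, 5, 6, 9]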
Dynamic
Dynamic programming is a systematic technique in which a complex problem is decomposed recursively into smaller, overlapping subproblems for solution. Dynamic programming stores the results of the overlapping sub-problems locally using an optimization technique called memoization.
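A small illustrative example is the Fibonacci sequence, whose recursive definition produces overlapping subproblems; memoization stores each result so it is computed only once:

from functools import lru_cache

@lru_cache(maxsize=None)    # memoization: cache results of subproblems
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(40))   # 102334155, in linear rather than exponential time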
Evolutionary
An evolutionary approach develops candidate solutions and then, in a manner similar to biological evolution, performs a series of random alterations or combinations of these solutions and evaluates the new results against a fitness function. The most fit or promising results are selected for additional iterations, to achieve an overall optimal solution.
Graph traversal
Graph traversal is a technique for finding solutions to problems that can be represented as graphs. This approach is broad, and includes depth-first search, breadth-first search, tree traversal, and many specific variations that may include local optimizations and excluding search spaces that can be determined to be non-optimum or not possible. These techniques may be used to solve a variety of problems including shortest path and constraint satisfaction problems.
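For example, a breadth-first search explores a graph level by level, so the first path that reaches the goal uses the fewest edges; the graph below is a made-up example:

from collections import deque

def bfs_shortest_path(graph, start, goal):
    # Expand paths in order of length; 'seen' avoids revisiting nodes.
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

g = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(bfs_shortest_path(g, "A", "E"))   # ['A', 'B', 'D', 'E']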
Greedy
A greedy approach begins by evaluating one possible outcome from the set of possible outcomes, and then searches locally for an improvement on that outcome. When a local improvement is found, it will repeat the process and again search locally for additional improvements near this local optimum. A greedy technique is generally simple to implement, and this series of decisions can be used to find local optima depending on where the search began. However, greedy techniques may not identify the global optimum across the entire set of possible outcomes.
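Coin change illustrates both the strength and the caveat: greedily taking the largest coin that fits is optimal for canonical coin systems but can miss the global optimum for others (the coin sets below are illustrative):

def greedy_change(amount, coins):
    # Repeatedly take the largest coin that still fits.
    result = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            amount -= coin
            result.append(coin)
    return result

print(greedy_change(63, [25, 10, 5, 1]))   # [25, 25, 10, 1, 1, 1], optimal
print(greedy_change(6, [4, 3, 1]))         # [4, 1, 1], but [3, 3] is better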
Heuristic
A heuristic approach employs a practical method to reach an immediate solution not guaranteed to be optimal.
Learning
Learning techniques employ statistical methods to perform categorization and analysis without explicit programming. Supervised learning, unsupervised learning, reinforcement learning, and deep learning techniques are included in this category.
Mathematical optimization
Mathematical optimization is a technique that can be used to calculate a mathematical optimum by minimizing or maximizing a function.
Modeling
Modeling is a general technique for abstracting a real-world problem into a framework or paradigm that assists with solution.
Recursion
Recursion is a general technique for designing an algorithm that calls itself with a progressively simpler part of the task down to one or more base cases with defined results.
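A minimal sketch (illustrative, not from the source article): the factorial function calls itself on a progressively smaller argument until it reaches the base case:

```python
def factorial(n):
    """Recursion: n! = n * (n-1)!, reduced down to a defined base case."""
    if n <= 1:                       # base case with a defined result
        return 1
    return n * factorial(n - 1)      # progressively simpler sub-task

print(factorial(5))  # 120
```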
Window sliding
The sliding window technique is used to replace nested loops with a single loop, thereby reducing the time complexity.
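A minimal sketch (not from the source article; the function name and data are assumptions): finding the largest sum of k consecutive elements. A nested-loop version re-sums every window in O(n·k); sliding a single window updates the running sum in O(n):

```python
def max_window_sum(values, k):
    """Slide a window of size k: add the entering element and drop the
    leaving one, instead of re-summing each window with a nested loop."""
    window = sum(values[:k])
    best = window
    for i in range(k, len(values)):
        window += values[i] - values[i - k]   # O(1) update per position
        best = max(best, window)
    return best

print(max_window_sum([2, 1, 5, 1, 3, 2], 3))  # 9, from the window [5, 1, 3]
```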
See also
Algorithm engineering
Algorithm characterizations
Theory of computation
Notes
External links
Algorithmic Design and Techniques - edX
Algorithmic Techniques and Analysis – Carnegie Mellon
Algorithmic Techniques for Massive Data – MIT
Mathematical logic
Theoretical computer science | Algorithmic technique | [
"Mathematics"
] | 677 | [
"Theoretical computer science",
"Applied mathematics",
"Mathematical logic"
] |
60,312,595 | https://en.wikipedia.org/wiki/Cosmological%20lithium%20problem | In astronomy, the lithium problem or lithium discrepancy refers to the discrepancy between the primordial abundance of lithium as inferred from observations of metal-poor (Population II) halo stars in our galaxy and the amount that should theoretically exist according to Big Bang nucleosynthesis combined with the cosmic baryon density derived from WMAP measurements of the CMB. Namely, the most widely accepted models of the Big Bang suggest that three times as much primordial lithium, in particular lithium-7, should exist. This contrasts with the observed abundance of isotopes of hydrogen (1H and 2H) and helium (3He and 4He) that are consistent with predictions. The discrepancy is highlighted in a so-called "Schramm plot", named in honor of astrophysicist David Schramm, which depicts these primordial abundances as a function of cosmic baryon content from standard BBN predictions.
Origin of lithium
Minutes after the Big Bang, the universe was made almost entirely of hydrogen and helium, with trace amounts of lithium and beryllium, and negligibly small abundances of all heavier elements.
Lithium synthesis in the Big Bang
Big Bang nucleosynthesis produced both lithium-7 and beryllium-7, and indeed the latter dominates the primordial synthesis of mass 7 nuclides. On the other hand, the Big Bang produced lithium-6 at levels more than 1000 times smaller.
The beryllium-7 later decayed via electron capture (half-life 53.22 days) into lithium-7, so that the observable primordial lithium abundance essentially sums primordial 7Li and radiogenic lithium from the decay of 7Be.
These isotopes are produced by the reactions
3H + 4He → 7Li + γ
3He + 4He → 7Be + γ
and destroyed by
7Li + 1H → 4He + 4He
7Be + n → 7Li + 1H
The amount of lithium generated in the Big Bang can be calculated. Hydrogen-1 is the most abundant nuclide, comprising roughly 92% of the atoms in the Universe, with helium-4 second at 8%. Other isotopes including 2H, 3H, 3He, 6Li, 7Li, and 7Be are much rarer; the estimated abundance of primordial lithium is 10⁻¹⁰ relative to hydrogen. The calculated abundance and ratio of 1H and 4He is in agreement with data from observations of young stars.
The P-P II branch
In stars, lithium-7 is made in a proton-proton chain reaction.
3He + 4He → 7Be + γ
7Be + e− → 7Li + νe + 0.861 MeV / 0.383 MeV
7Li + 1H → 2 4He
The P-P II branch is dominant at temperatures of 14 to 23 million kelvin.
Observed abundance of lithium
Despite the low theoretical abundance of lithium, the actual observable amount is less than the calculated amount by a factor of 3–4. This contrasts with the observed abundance of isotopes of hydrogen (1H and 2H) and helium (3He and 4He) that are consistent with predictions.
Older stars seem to have less lithium than they should, and some younger stars have much more. One proposed model is that lithium produced during a star's youth sinks beneath the star's atmosphere (where it is obscured from direct observation) due to effects the authors describe as "turbulent mixing" and "diffusion," which are suggested to increase or accumulate as the star ages. Spectroscopic observations of stars in NGC 6397, a metal-poor globular cluster, are consistent with an inverse relation between lithium abundance and age, but a theoretical mechanism for diffusion has not been formalized. Though it transmutes into two atoms of helium due to collision with a proton at temperatures above 2.4 million degrees Celsius (most stars easily attain this temperature in their interiors), lithium is more abundant than current computations would predict in later-generation stars.
Lithium is also found in brown dwarf substellar objects and certain anomalous metal-poor stars. Because lithium is present in cooler, less massive brown dwarfs, but is destroyed in hotter red dwarf stars, its presence in the stars' spectra can be used in the "lithium test" to differentiate the two, as both are smaller than the Sun.
Less lithium in Sun-like stars with planets
In a sample of 500 stars, Sun-like stars without planets were found to have 10 times as much lithium as Sun-like stars with planets. The Sun's surface layers have less than 1% of the lithium of the original protosolar gas cloud, despite the surface convective zone not being quite hot enough to burn lithium. It is suspected that the gravitational pull of planets might enhance the churning of the star's surface, driving the lithium to hotter cores where lithium burning occurs. The absence of lithium could also be a way to find new planetary systems. However, this claimed relationship has become a point of contention in the planetary astrophysics community, with the effect frequently denied but also supported.
Higher than expected lithium in metal-poor stars
Certain metal-poor stars also contain an abnormally high concentration of lithium. These stars tended to orbit massive objects—neutron stars or black holes—whose gravity evidently pulls heavier lithium to the surface of a hydrogen-helium star, causing more lithium to be observed.
Proposed solutions
Possible solutions fall into three broad classes.
Astrophysical solutions
If the BBN predictions are sound, the measured value of the primordial lithium abundance should be in error, and astrophysical solutions offer revisions to it. For example, systematic errors, including ionization corrections and inaccurate determination of stellar temperatures, could affect Li/H ratios in stars. Furthermore, more observations on lithium depletion remain important, since present lithium levels might not reflect the initial abundance in the star. In summary, accurate measurement of the primordial lithium abundance is the current focus of progress, and it is possible that the final answer does not lie in astrophysical solutions.
Some astronomers suggest that the velocities of nucleons do not follow a Maxwell-Boltzmann distribution. They test the framework of Tsallis non-extensive statistics. Their results suggest that this is a possible new solution to the cosmological lithium problem.
Nuclear physics solutions
When one considers the possibility that the measured primordial lithium abundance is correct and based on the Standard Model of particle physics and the standard cosmology, the lithium problem implies errors in the BBN light element predictions. Although standard BBN rests on well-determined physics, the weak and strong interactions are complicated for BBN and therefore might be the weak point in standard BBN calculation.
Firstly, incorrect or missing reactions could give rise to the lithium problem. For incorrect reactions, recent studies have focused on revising cross-section errors and standard thermonuclear rates.
Second, starting from Fred Hoyle's discovery of a resonance in carbon-12 (an important factor in the triple-alpha process), resonance reactions, some of which might have evaded experimental detection or whose effects have been underestimated, have become possible solutions to the lithium problem.
BBC Science Focus wrote in 2023 that "recent research seems to completely discount" such theories; the magazine held that mainstream lithium nucleosynthesis calculations are probably correct.
Solutions beyond the Standard Model
Under the assumptions of all correct calculation, solutions beyond the existing Standard Model or standard cosmology might be needed.
Dark matter decay and supersymmetry provide one possibility, in which decaying dark matter scenarios introduce a rich array of novel processes that can alter light elements during and after BBN, and find the well-motivated origin in supersymmetric cosmologies. With the fully operational Large Hadron Collider (LHC), much of minimal supersymmetry lies within reach, which would revolutionize particle physics and cosmology if discovered; however, results from the ATLAS experiment in 2020 have excluded many supersymmetric models.
Changing fundamental constants can be one possible solution, and it implies that first, atomic transitions in metals residing in high-redshift regions might behave differently from our own. Additionally, Standard Model couplings and particle masses might vary, and variation in nuclear physics parameters would be needed.
Nonstandard cosmologies indicate variation of the baryon to photon ratio in different regions. One proposal is a result of large-scale inhomogeneities in cosmic density, different from homogeneity defined in the cosmological principle. However, this possibility requires a large amount of observations to test it.
See also
Big Bang
Halo nucleus
Isotopes of lithium
List of unsolved problems in physics
Lithium burning
References
Lithium
Big Bang
Nucleosynthesis | Cosmological lithium problem | [
"Physics",
"Chemistry",
"Astronomy"
] | 1,950 | [
"Nuclear fission",
"Cosmogony",
"Big Bang",
"Astrophysics",
"Nucleosynthesis",
"Nuclear physics",
"Nuclear fusion"
] |
57,292,984 | https://en.wikipedia.org/wiki/Epidemiology%20in%20Country%20Practice | Epidemiology in Country Practice is a book by William Pickles (1885–1969), a rural general practitioner (GP) physician in Wensleydale, North Yorkshire, England, first published in 1939. The book reports on how careful observations can lead to correlations between transmission of infective disease between families, farms and villages.
It contains the detailed observational studies of a 1928 epidemic of catarrhal jaundice and a 1929 epidemic of Bornholm disease which were published in the British Medical Journal (BMJ) in 1930 and 1933 respectively.
Background
William Pickles first realised the possibilities for epidemiological studies for a GP after he read James Mackenzie's The Principles of Diagnosis and Treatment in Heart Affections in the 1920s.
With the assistance of his wife Gertie, who kept the charts, Pickles recorded his observations on a 1928 epidemic of catarrhal jaundice and a 1929 epidemic of Bornholm Disease in his district. His findings were published in the British Medical Journal in 1930 and 1933 respectively and in 1935 he presented them at the Royal Society of Medicine. The work was praised by Major Greenwood who wrote that Pickles's work would mark a "new era in epidemiology". His observations led to new understandings of the transmission of infective disease within families, farms and villages.
Research and content
Epidemiology in Country Practice contains Pickles's observational studies and a number of articles previously published in medical journals, including the detailed observational studies of the 1928 epidemic of jaundice and the 1929 epidemic of Bornholm disease. The book has been described as more of an essay on epidemiology than a book filled with masses of data.
Most of the research was done between 1929 and 1939. From 1937, in order to work on the book, Pickles kept evening surgeries "to a minimum and often there were no patients at all". He methodically reported patterns of prevalent diseases in his area; however, his data collection and publications lacked the consent processes now considered necessary to avoid identification of individuals afflicted by epidemics, particularly in small communities where recognition of persons is deemed easier. Pickles was well acquainted with the eight villages he looked after, and once, as he looked down upon Wensleydale from the top of a hill, realised that he knew everyone in the village, most on a first-name basis.
The book begins with a personal appeal by Pickles for GPs to consider the importance of observations, followed by eight chapters that cover cases such as "influenza, measles, scarlet fever, whooping-cough, and mumps", as well as jaundice and myalgia. One story is that of a "gypsy" who accompanied her sick husband into the village in a caravan. Her husband was suffering from typhoid and Pickles was able to trace the source of the disease to a faulty water pump that the wife used to wash her laundry. In the book, he compares the case to the work of one of his heroes, William Budd, who carried out similar observations.
The book also describes an epidemic of catarrhal jaundice that resulted in 250 cases out of a population of almost 6,000, many of which were children. The exact incubation period was not known and ranged from 3 to 40 days. After two years of keeping records and with assistance from the Ministry of Health, Pickles was able to show that the incubation period was 26–35 days. He cross-referenced the evidence with smaller studies in neighbouring villages and in one case was able to trace 13 infections to a single country maid who was determined to attend a fete despite Pickles attending to her in her sickbed the same morning. One of the cases was another young woman and her male friend who, according to his (the man's) sister, often went "in the back door in the evenings, and helps her wash up", causing Pickles to observe that "studies in epidemiology sometimes reveal romances."
Publication history
The book was first published by John Wright & Sons of Bristol in 1939. It had a preface by Major Greenwood, professor of epidemiology and vital statistics at the University of London. In April 1941, during the Second World War, the entire stock of the book, unbound sheets and the type were completely destroyed by enemy action but such was the demand for the book that in 1949 it was reissued in virtually identical form.
In 1970, a limited edition was published with profits going to the Royal College of General Practitioners (RCGP) appeal. The book was subsequently reprinted by Devonshire Press of Torquay in 1972 and in later editions by the RCGP (1984).
Reception and legacy
The book was described by John Horder in 1969 as a "classic", that "makes it all sound too easy and one wonders why no one had thought of it all before". Pickles's obituary in the British Medical Journal in 1969 declared that it had "received excellent notices" and in 2004, J.A. Reid portrayed it as "a seminal book that has been read and assessed during past decades by many public health students and practitioners". Later, RCGP president Denis Pereira Gray described it as "a masterpiece recognised throughout the world" and that practice-based research should be modelled on Pickles's thorough and accurate recording.
The book facilitated the link between research and primary care, resulting in the modern expansion of surveillance practices for improvements in public health. In addition, it revealed that general practitioners could carry out "world class research" in the community.
References
Further reading
Pemberton, John. (1970) Will Pickles of Wensleydale: The life of a country doctor. London: Bles.
External links
Bookseller's site including several images of the book
Medical books
1939 non-fiction books
Epidemiology | Epidemiology in Country Practice | [
"Environmental_science"
] | 1,203 | [
"Epidemiology",
"Environmental social science"
] |
58,816,065 | https://en.wikipedia.org/wiki/Rodent%20Research%20Hardware%20System | NASA's Rodent Research Hardware System provides a research platform aboard the International Space Station for long-duration experiments on rodents in space. Such experiments will examine how microgravity affects the rodents, providing information relevant to human spaceflight, discoveries in basic biology, and knowledge that can help treat human disease on Earth.
Background
The system was based on recommendations from the National Academies of Sciences, Engineering, and Medicine report Recapturing a Future for Space Exploration: Life and Physical Sciences Research for a New Era (2011), which recommended that NASA establish a rodent research facility aboard the International Space Station designated as a national laboratory "as soon as possible" to enable high-priority, long-duration rodent studies. The goal was to conduct studies of durations up to six months. As mice and rats have life spans of at most five years, "studies on these rodents in space have the potential to extrapolate important implications for humans living in space well beyond six months."
The Rodent Research Hardware System was developed by scientists and engineers at NASA's Ames Research Center in Moffett Field, California. In the past, short-term rodent experiments were transported to space on various vehicles, including the Space Shuttle. This is the first "permanent" laboratory for rodent research. The system was developed based on what was learned from the Animal Enclosure Module that flew aboard 27 Space Shuttle missions between 1983 and 2011. The first Rodent Research Hardware System was delivered to the ISS by SpaceX CRS-4.
Design
The system has four major components. The Transporter, also referred to as the Animal Enclosure Module-Transporter (AEM-T), is used to safely house the rodents while they are transported from Earth to the space station. As the trip from Earth can take up to 10 days, an Environmental Control and Life Support System (ECLSS) is required; this is provided by the Animal Enclosure Module-ECLSS (AEM-E). The Animal Access Unit provides containment while rodents are transferred between the Transporter and the Habitat, and the Habitat provides long-term housing for rodents aboard the station. The Habitat component operates in an EXPRESS Rack facility aboard the station. Crew members use the access module to examine the rodents closely during a study and to transfer them between habitats as needed. Each habitat module provides as many as 10 mice or six rats with all of the basics they need to live comfortably aboard the station, including water, food, lighting and fresh air. Rodents can easily move around the living space by grasping grids that line the floor and walls. The modules include data downlink capability that enables monitoring of environmental conditions such as temperature. A visible light and infrared video system allows the crew in space and scientists and veterinarians on the ground to monitor the behavior and overall health of the rodents on a daily basis.
Missions
Rodent Research-1 (RR1)
Delivered on 21 September 2014 to the ISS by SpaceX CRS-4. The mission was a validation of the operational capabilities of the hardware to support rodent research, providing valuable information applicable to future long-term space missions. Rodent Research-1 was a joint operation between NASA and CASIS. The experiments involved 20 mice; 10 NASA mice and 10 CASIS mice. This was the first time rodents were transported to the ISS aboard an uncrewed commercial vehicle. Lasting 37 days, Rodent Research-1 was the longest-duration spaceflight rodent study to date conducted in a NASA facility. The Bone Densitometer was also delivered on this mission to be used in later missions.
Rodent Research-2 (RR2)
Delivered on 14 April 2015 to the ISS by SpaceX CRS-6. The research was sponsored by the Center for the Advancement of Science in Space (CASIS) and Novartis Institute for Biomedical Research. The primary objective of the research was to monitor the effects of the space environment on the musculoskeletal and neurological systems of mice as model organisms of human health and disease. In addition to the primary research focus other organ systems, including whole blood, brain, heart, lungs, kidney/adrenal glands, liver, spleen, and small intestines, were also studied for molecular and morphological changes as a function of duration of spaceflight exposure. The study included 40 mice, 20 that were flown to the ISS and 20 as controls that remained on Earth. The study lasted 37 days. The Bone Densitometer Validation experiment was used in support of RR-2.
Rodent Research-3 (RR3)
Delivered on 8 April 2016 to the ISS by SpaceX CRS-8. The research was sponsored by the International Space Station U.S. National Laboratory in partnership with Eli Lilly and Company. The primary objective was to test a countermeasure against muscle atrophy. The study assessed myostatin inhibition to prevent skeletal muscle atrophy and weakness in mice. Twenty mice were flown for this experiment and the study lasted 33 days. As part of the study, astronauts successfully completed a functional assessment of grip strength in mice on the orbiting laboratory. This was the first time a grip strength meter has been used for rodent research on orbit, and the data gathered will be used to assess the efficacy of the anti-myostatin treatments in preventing muscle loss in space.
Rodent Research-4 (RR4)
Delivered on 19 February 2017 to the ISS by SpaceX CRS-10. The research was sponsored by the United States Department of Defense (DoD) Space Test Program and the Center for the Advancement of Science in Space (CASIS), manager of the ISS National Laboratory. The primary objective of the study was to better understand bone healing and bone tissue regeneration and to study the impacts of microgravity on these processes. The study also intended to gauge certain agents capable of inducing bone healing and regeneration in spaceflight. The study lasted 28 days. NASA studies in space involving mice require housing mice at densities higher than recommended in the Guide for the Care and Use of Laboratory Animals. For this reason, all previous NASA missions in which mice were co-housed involved female mice. For this spaceflight study examining bone healing, male mice were required for optimal experimentation. To ensure valid results from this first NASA study involving male mice, an additional study on housing density was done. The study included 80 mice, 40 that were flown to the ISS and 40 as controls that remained on Earth. Some of the results of this study have been published in the journal Life Sciences in Space Research, focusing on the impact of launch into space on bone fracture healing.
Rodent Research-5 (RR5)
Delivered on 3 June 2017 to the ISS by SpaceX CRS-11. The research was sponsored by the Center for the Advancement of Science in Space (CASIS) in partnership with the University of California at Los Angeles. The primary objective of the study was to evaluate a new strategy to mitigate one of the negative effects of living in space (bone degradation). All the mice were periodically injected with either a control treatment or an experimental treatment that contains NELL1, a protein that when expressed can help regulate bone-remodeling. The study is based on research on NELL1 done by a group led by Dr. Chia Soo, a UCLA professor of plastic and reconstructive surgery and orthopedic surgery. The experiments involved 40 mice that were flown to the ISS. On 3 July 2017 twenty of the mice were returned to Earth live. This was the first time the Transporter unit was used to carry mice from the ISS back to Earth alive. The entire study lasted 30 days.
Rodent Research-9 (RR9)
Delivered on 14 August 2017 to the ISS by SpaceX CRS-12. The research was sponsored by the National Aeronautics and Space Administration's Space Life and Physical Sciences program. This is the first Rodent Research mission that is dedicated to NASA-sponsored science experiments. Previous missions on the ISS involved commercial and other government agency experiments selected by the Center for Advancement of Science in Space (CASIS). The mission consisted of three separate experiments led by principal investigators Michael Delp, Xiao Wen Mao, and Jeffrey Willey. Delp's investigation was designed to study the effects of long duration spaceflight on fluid shifts and increased fluid pressures in the head, Mao's was to examine the impact of spaceflight on the vessels that supply blood to the eyes, and Willey's was designed to study the extent of knee and hip joint degradation caused by prolonged exposure to weightlessness. The flight lasted 33 days.
Rodent Research-6 (RR6)
Delivered on 15 December 2017 to the ISS by SpaceX CRS-13. The research was sponsored by the Center for the Advancement of Science in Space (CASIS) in partnership with Novartis and NanoMedical Systems. The primary objective of the study was to evaluate a novel therapeutic drug delivery chip in microgravity. The nanochannel drug delivery chip delivered the drug formoterol, used in the management of asthma and other medical conditions, to achieve a constant and reliable dosage. The experiments involved 40 mice that were flown to the ISS. On 13 January 2018, twenty of the mice were returned to Earth alive. The remaining 20 mice were studied for an additional 30 days. The study lasted 60 days.
Rodent Research-7 (RR7)
Delivered on 29 June 2018 to the ISS by SpaceX CRS-15. The research was the second mission sponsored by the National Aeronautics and Space Administration's Space Life and Physical Sciences program. The primary objective was to study the impact of the space environment on the gut microbiota of mice. The study is important because disruption of the normal microbiota communities in the digestive tract has been linked to multiple health problems affecting intestinal, immune, mental, and metabolic health. The experiments involved 20 mice that were flown to the ISS. On 3 August 2018, ten of the mice were returned to Earth alive. The entire study lasted 77 days.
Rodent Research-8 (RR8)
Delivered on 8 December 2018 to the ISS by SpaceX CRS-16. Unusually, it did not appear on the list of science payloads for the mission. The experiment was blamed for delaying the launch after mold was discovered on the food for the mice. The research is sponsored by the ISS National Laboratory in partnership with the Center for the Advancement of Science in Space (CASIS) and Taconic Biosciences. The primary objective is to examine the physiology of aging and the effect of age on disease progression using groups of young and old mice. The study will consist of two groups of 20 mice each. Half of each group will be 10–16 weeks old (the young group); the other half will be 30–52 weeks old (the old group). Half of each group will be returned to Earth alive after about 30 days. The remaining mice will be euthanized and cryogenically preserved for study back on Earth. This mission has also been designated Rodent Research Reference Mission-1 (RRR-1); the samples gathered will be made available to other researchers through proposals submitted to CASIS.
Rodent Research-10 (RR10)
This mission is scheduled to fly to the ISS on SpaceX CRS-17. The research is sponsored by NASA Research Office - Space Life and Physical Sciences. The primary objective of the study is to examine the CDKN1a/p21 pathway and its role in arresting bone regeneration in microgravity. The study consists of 20 mice, 10 of which are transgenic CDKN1a/p21-null mice. The study is expected to last up to 35 days.
Rodent Research-11 (RR11)
This mission is scheduled to fly to the ISS on SpaceX CRS-17. The research is sponsored by NASA Research Office - Space Life and Physical Sciences. The primary objective of the study is to examine how microRNA relates to vascular health in microgravity. The study consists of 20 mice to be flown to the ISS and 20 mice that remain on the ground as controls. After approximately 30 days, the 20 mice on the ISS will be returned alive.
References
Space-flown life
Animals in space
Space medicine
Spaceflight health effects | Rodent Research Hardware System | [
"Chemistry",
"Biology"
] | 2,495 | [
"Animal testing",
"Space-flown life",
"Animals in space"
] |
58,816,925 | https://en.wikipedia.org/wiki/Oncogenesis%20%28journal%29 | Oncogenesis is a peer-reviewed open access medical journal covering the molecular biology of cancer. It was established in 2012 by Douglas R. Green as a sister journal to Oncogene, of which Green was then editor-in-chief. New articles are published exclusively online by Springer Nature on a weekly basis. The editor-in-chief is Jan Paul Medema (University of Amsterdam). According to the Journal Citation Reports, the journal has a 2020 impact factor of 7.485, ranking it 40th out of 242 journals in the category "Oncology".
References
External links
Nature Research academic journals
Oncology journals
Molecular and cellular biology journals
Online-only journals
Weekly journals
English-language journals
Academic journals established in 2012
Open access journals | Oncogenesis (journal) | [
"Chemistry"
] | 151 | [
"Molecular and cellular biology journals",
"Molecular biology"
] |
58,817,607 | https://en.wikipedia.org/wiki/Hydrothermal%20vent%20microbial%20communities | The hydrothermal vent microbial community includes all unicellular organisms that live and reproduce in a chemically distinct area around hydrothermal vents. These include organisms in the microbial mat, free floating cells, or bacteria in an endosymbiotic relationship with animals. Chemolithoautotrophic bacteria derive nutrients and energy from the geological activity at hydrothermal vents to fix carbon into organic forms. Viruses are also a part of the hydrothermal vent microbial community and their influence on the microbial ecology in these ecosystems is a burgeoning field of research.
Hydrothermal vents are located where tectonic plates are moving apart and spreading. This allows water from the ocean to enter the crust of the earth, where it is heated by the magma. The increasing pressure and temperature force the water back out of these openings; on the way out, the water accumulates dissolved minerals and chemicals from the rocks that it encounters. There are generally three kinds of vents, each characterized by its temperature and chemical composition. Diffuse vents release clear water, typically up to 30 °C. White smoker vents emit a milky-coloured water between 200-330 °C, and black smoker vents generally release water hotter than the other vents, between 300-400 °C. The waters from black smokers are darkened by the sulfide precipitates that accumulate. Due to the absence of sunlight at these ocean depths, energy is provided by chemosynthesis, where symbiotic bacteria and archaea form the bottom of the food chain and are able to support a variety of organisms such as Riftia pachyptila and Alvinella pompejana. These organisms use this symbiotic relationship in order to obtain the chemical energy that is released at these hydrothermal vent areas.
Environmental properties
Although there is a large variation in temperatures at the surface of the water with the seasonal changes in depths of the thermocline, the temperatures underneath the thermocline and the waters near the deep sea are relatively constant. No changes are caused by seasonal effects or annual changes. These temperatures stay in the range of 0–3 °C with the exception of the waters immediately surrounding the hydrothermal vents, which can get as high as 407 °C. These waters are prevented from boiling due to the high pressure at those depths.
With increasing depth, the high pressure begins to take effect. The pressure increases by about 10 megapascals (MPa) for every kilometre of vertical distance. This means that hydrostatic pressure can reach up to 110 MPa at the depths of the trenches.
Salinity remains relatively constant within the deep seas around the world, at 35 parts per thousand.
Although there is very little light in the hydrothermal vent environment, photosynthetic organisms have been found. However, the energy that the majority of organisms use comes from chemosynthesis. The organisms use the minerals and chemicals that come out of the vents.
Adaptations
Extreme conditions in the hydrothermal vent environment mean that microbial communities that inhabit these areas need to adapt to them. Microbes that live here are known as hyperthermophiles, microorganisms that grow at temperatures above 90 °C. These organisms are found where the fluids from the vents are expelled and mixed with the surrounding water. These hyperthermophilic microbes are thought to contain proteins that have extended stability at higher temperatures due to intramolecular interactions, but the exact mechanisms are not yet clear. The stabilization mechanisms for DNA are better understood: denaturation of DNA is thought to be minimized through high salt concentrations, more specifically of Mg, K, and PO4, which are highly concentrated in hyperthermophiles. Along with this, many of the microbes have proteins similar to histones that are bound to the DNA and can offer protection against the high temperatures. Microbes are also found in symbiotic relationships with other organisms in the hydrothermal vent environment due to their detoxification mechanisms, which allow them to metabolize the sulfide-rich waters that would otherwise be toxic to the organisms and the microbes.
Microbial biogeochemistry
Introduction
Microbial communities at hydrothermal vents mediate the transformation of energy and minerals produced by geological activity into organic material. Organic matter produced by autotrophic bacteria is then used to support the upper trophic levels. The hydrothermal vent fluid and the surrounding ocean water is rich in elements such as iron, manganese and various species of sulfur including sulfide, sulfite, sulfate, elemental sulfur from which they can derive energy or nutrients. Microbes derive energy by oxidizing or reducing elements. Different microbial species use different chemical species of an element in their metabolic processes. For example, some microbe species oxidize sulfide to sulfate and another species will reduce sulfate to elemental sulfur. As a result, a web of chemical pathways mediated by different microbial species transform elements such as carbon, sulfur, nitrogen, and hydrogen, from one species to another. Their activity alters the original chemical composition produced by geological activity of the hydrothermal vent environment.
Carbon cycle
Geological activity at hydrothermal vents produces an abundance of carbon compounds. Hydrothermal vent plumes contain high concentrations of methane and carbon monoxide, with methane concentrations reaching 10⁷ times that of the surrounding ocean water. Deep ocean water is also a large reservoir of carbon, with the concentration of carbon dioxide species such as dissolved CO2 and HCO3− around 2.2 mM. The plentiful carbon and electron acceptors produced by geological activity support an oasis of chemoautotrophic microbial communities that fix inorganic carbon, such as CO2, using energy from sources such as oxidation of sulfur, iron, manganese, hydrogen and methane. These bacteria supply a large portion of the organic carbon that supports heterotrophic life at hydrothermal vents.
Carbon fixation
Carbon fixation is the incorporation of inorganic carbon into organic matter. Unlike the surface of the planet where light is a major source of energy for carbon fixation, hydrothermal vent chemolithotrophic bacteria rely on chemical oxidation to obtain the energy required. Fixation of CO2 is observed in members of Gammaproteobacteria, Campylobacterota, Alphaproteobacteria, and members of Archaea domain at hydrothermal vents. Four major metabolic pathways for carbon fixation found in microbial vent communities include the Calvin–Benson–Bassham (CBB) cycle, reductive tricarboxylic acid (rTCA) cycle, 3-hydroxypropionate (3-HP) cycle and reductive acetyl coenzyme A (acetyl-CoA) pathway.
Carbon fixation metabolic pathways
Calvin–Benson–Bassham cycle (CBB)
The Calvin-Benson-Bassham (CBB) cycle is the most common CO2 fixation pathway found among autotrophs. The key enzyme is ribulose-1,5-bisphosphate carboxylase/oxygenase (RuBisCO). RuBisCO has been identified in members of the microbial community such as Thiomicrospira, Beggiatoa, zetaproteobacterium, and gammaproteobacterial endosymbionts of tubeworms, bivalves, and gastropods.
Reductive carboxylic acid cycle (rTCA)
The Reductive Carboxylic Acid Cycle (rTCA) is the second most commonly found carbon fixation pathway at hydrothermal vents. The rTCA cycle is essentially a reversed TCA or Krebs cycle, which heterotrophs use to oxidize organic matter. Organisms that use the rTCA cycle prefer to inhabit anoxic zones in the hydrothermal vent system because some enzymes in the rTCA cycle are sensitive to the presence of O2. It is found in sulfate-reducing deltaproteobacteria such as some members of Desulfobacter, in Aquificales such as Aquifex, and in Thermoproteales.
3-HP and 3-HP/4-HB cycles
The key enzymes of 3-HP and 3-HP/4-HB cycles are acetyl-CoA/propionyl-CoA carboxylase, malonyl-CoA reductase and propionyl-CoA synthase. Most of the organisms that use this pathway are mixotrophs with the ability to use organic carbon in addition to carbon fixation.
Reductive acetyl CoA pathway
The Reductive Acetyl CoA pathway has only been found in chemoautotrophs. This pathway does not require ATP as the pathway is directly coupled to the reduction of H2. Organisms that have been found with this pathway prefer H2-rich areas. Species include deltaproteobacteria such as Desulfobacterium autotrophicum, acetogens and methanogenic Archaea.
Methane metabolism
Hydrothermal vents produce high quantities of methane, which can originate from both geological and biological processes. Methane concentrations in hydrothermal vent plumes can exceed 300 μM depending on the vent. In comparison, the vent fluid contains 10⁶–10⁷ times more methane than the surrounding deep ocean water, in which methane ranges between 0.2-0.3 nM in concentration. Microbial communities use the high concentrations of methane as an energy source and a source of carbon. Methanotrophy, where a species uses methane both as an energy and carbon source, has been observed with the presence of gammaproteobacteria in the Methylococcaceae lineages. Methanotrophs convert methane into carbon dioxide and organic carbon. They are typically characterized by the presence of intercellular membranes, and microbes with intercellular membranes were observed to make up 20% of the microbial mat at hydrothermal vents.
Methane oxidation
Energy generation via methane oxidation yields the next best source of energy after sulfur oxidation. It has been suggested that microbial oxidation facilitates rapid turnover at hydrothermal vents, so that much of the methane is oxidized within a short distance of the vent. In hydrothermal vent communities, aerobic oxidation of methane is commonly found in endosymbiotic microbes of vent animals. Anaerobic oxidation of methane (AOM) is typically coupled to the reduction of sulfate or of Fe and Mn as terminal electron acceptors, as these are most plentiful at hydrothermal vents. AOM is found to be prevalent in marine sediments at hydrothermal vents and may be responsible for consuming 75% of the methane produced by the vent. Species that perform AOM include Archaea of the phylum Thermoproteota (formerly Crenarchaeota) and Thermococcus.
Methanogenesis
Production of methane through methanogenesis can proceed from the degradation of hydrocarbons, or from the reaction of carbon dioxide or other compounds like formate. Evidence of methanogenesis can be found alongside AOM in sediments. Thermophilic methanogens are found to grow in hydrothermal vent plumes at temperatures between 55 °C and 80 °C. However, autotrophic methanogenesis performed by many thermophilic species requires H2 as an electron donor, so microbial growth is limited by H2 availability. Genera of thermophilic methanogens found at hydrothermal vents include Methanocaldococcus, Methanothermococcus, and Methanococcus.
Sulfur cycle
Microbial communities at hydrothermal vents convert sulfur such as H2S produced by geological activity into other forms such as sulfite, sulfate, and elemental sulfur for energy or assimilation into organic molecules. Sulfide is plentiful at hydrothermal vents, with concentrations from one to tens of mM, whereas the surrounding ocean water usually contains only a few nanomolar.
Sulfur oxidation
Reduced sulfur compounds such as H2S produced by the hydrothermal vents are a major source of energy for sulfur metabolism in microbes. Oxidation of reduced sulfur compounds into forms such as sulfite, thiosulfate, and elemental sulfur is used to produce energy for microbe metabolism, such as the synthesis of organic compounds from inorganic carbon. The major metabolic pathways used for sulfur oxidation include the SOX pathway and dissimilatory oxidation. The Sox pathway is a multi-enzyme pathway capable of oxidizing sulfide, sulfite, elemental sulfur, and thiosulfate to sulfate. Dissimilatory oxidation converts sulfite to elemental sulfur. Sulfur-oxidizing species include the genera Thiomicrospira, Halothiobacillus, Beggiatoa, Persephonella, and Sulfurimonas. Symbiotic species of the class Gammaproteobacteria and the phylum Campylobacterota can also oxidize sulfur.
Sulfur reduction
Sulfur reduction uses sulfate as an electron acceptor for the assimilation of sulfur. Microbes that perform sulfate reduction typically use hydrogen, methane or organic matter as an electron donor. Anaerobic oxidation of methane (AOM) often uses sulfate as the electron acceptor. This method is favoured by organisms living in highly anoxic areas of the hydrothermal vent, and it is thus one of the predominant processes that occur within the sediments. Species that reduce sulfate have been identified in Archaea and in members of Deltaproteobacteria such as Desulfovibrio, Desulfobulbus, Desulfobacteria, and Desulfuromonas at hydrothermal vents.
Nitrogen cycle
Deep ocean water contains the largest reservoir of nitrogen available to hydrothermal vents, with around 0.59 mM of dissolved nitrogen gas. Ammonium is the dominant species of dissolved inorganic nitrogen, and can be produced by water mass mixing below hydrothermal vents and discharged in vent fluids. Quantities of available ammonium vary between vents depending on the geological activity and microbial composition. Nitrate and nitrite concentrations are depleted in hydrothermal vents compared to the surrounding seawater.
The study of the nitrogen cycle in hydrothermal vent microbial communities still requires more comprehensive research. However, isotope data suggest that microorganisms influence dissolved inorganic nitrogen quantities and compositions, and all pathways of the nitrogen cycle are likely to be found at hydrothermal vents. Biological nitrogen fixation is important for providing some of the biologically available nitrogen to the nitrogen cycle, especially at unsedimented hydrothermal vents. Nitrogen is fixed by many different microbes, including methanogens in the orders Methanomicrobiales, Methanococcales, and Methanobacteriales. Thermophilic microbes have been found to be able to fix nitrogen at temperatures as high as 92 °C. Nitrogen fixation may be especially prevalent in microbial mats and particulate material where biologically available levels of nitrogen are low, due to the high microbe density and the anaerobic environment, which allows the function of nitrogenase, a nitrogen-fixing enzyme. Evidence has also been detected of assimilation, nitrification, denitrification, anammox, mineralization and dissimilatory nitrate reduction to ammonium. For example, sulfur-oxidizing bacteria like Beggiatoa species perform denitrification and reduce nitrate to oxidize H2S. Nitrate assimilation is performed by symbiotic species of the Riftia pachyptila tubeworm.
Bacterial diversity
The most abundant bacteria in hydrothermal vents are chemolithotrophs. These bacteria use reduced chemical species, most often sulfur, as sources of energy to reduce carbon dioxide to organic carbon. The chemolithotrophic abundance in a hydrothermal vent environment is determined by the available energy sources; different temperature vents have different concentrations of nutrients, suggesting large variation between vents. In general, large microbial populations are found in warm vent water plumes (25 °C), the surfaces exposed to warm vent plumes and in symbiotic tissues within certain vent invertebrates in the vicinity of the vent.
Sulfur-oxidizing
These bacteria use various forms of available sulfur (S²⁻, S⁰, S₂O₃²⁻) in the presence of oxygen. They are the predominant population in the majority of hydrothermal vents because their source of energy is widely available, and chemosynthesis rates increase in aerobic conditions. The bacteria at hydrothermal vents are similar to the types of sulfur bacteria found in other H2S-rich environments, except that Thiomicrospira has replaced Thiobacillus. Other common species are Thiothrix and Beggiatoa, which is of particular importance because of its ability to fix nitrogen.
Methane-oxidizing
Methane is a substantial source of energy in certain hydrothermal vents, but not others: methane is more abundant in warm vents (25 °C) than hydrogen. Many types of methanotrophic bacteria exist, which require oxygen and fix CH4, CH3NH2, and other C1 compounds, including CO2 and CO, if present in vent water. These types of bacteria are also found in the Riftia trophosome, indicating a symbiotic relationship. Here, methane-oxidizing bacteria refers to methanotrophs, which are not the same as methanogens: Methanococcus and Methanocaldococcus jannaschii are examples of methanogens, which are found in hydrothermal vents; whereas Methylocystaceae are methanotrophs, which have been discovered in hydrothermal vent communities as well.
Hydrogen-oxidizing
Little is known about microbes that use hydrogen as a source of energy; however, studies have shown that they are aerobic, and also symbiotic with Riftia (see below). These bacteria are important in the primary production of organic carbon because the geothermally produced H2 is taken up for this process. Hydrogen-oxidizing and denitrifying bacteria may be abundant in vents where NO3−-containing bottom seawater mixes with hydrothermal fluid. Desulfonauticus submarinus is a hydrogenotroph that reduces sulfur compounds in warm vents and has been found in the tube worms R. pachyptila and Alvinella pompejana.
Iron- and manganese-oxidizing
These bacteria are commonly found in iron and manganese deposits on surfaces exposed intermittently to plumes of hydrothermal and bottom seawater. However, due to the rapid oxidation of Fe2+ in neutral and alkaline waters (i.e. freshwater and seawater), bacteria responsible for the oxidative deposition of iron would be more commonly found in acidic waters. Manganese-oxidizing bacteria would be more abundant in freshwater and seawater compared to iron-oxidizing bacteria due to the higher concentration of available metal.
Ecology
Symbiotic relationships
Symbiotic chemosynthesis is an important process for hydrothermal vent communities. At warm vents, common symbiont hosts are deep-sea clams, Calyptogena magnifica, mussels such as Bathymodiolus thermophilus, and the pogonophoran tube worms Riftia pachyptila and Alvinella pompejana. The trophosomes of these animals are specialized organs for symbionts that contain valuable molecules for chemosynthesis. These organisms have become so reliant on their symbionts that they have lost all morphological features relating to ingestion and digestion; in turn, the bacteria are provided with H2S and free O2. Additionally, methane-oxidizing bacteria have been isolated from C. magnifica and R. pachyptila, which indicates that methane assimilation may take place within the trophosomes of these organisms.
Phyla and genera
To illustrate the incredible diversity of hydrothermal vents, the list below is a cumulative representation of bacterial phyla and genera, in alphabetical order. As shown, Proteobacteria appears to be the most dominant phylum present in deep-sea vents.
Actinomycetota
Aquificota
Hydrogenobacter and Aquifex
Chloroflexota
Chlorobiota
Chlorobium
Deferribacterota
Gemmatimonadota
Nitrospirota
Nitrospinota
Leptospirillum ferriphilum
Bacillota
Acetogen: Clostridium
Pseudomonadota
Acidithiobacillia
Alphaproteobacteria
Paracoccus
Betaproteobacteria
Thiobacillus
Sideroxydans lithotrophicus
Gammaproteobacteria - major symbionts
Allochromatium
Thiomicrospira
Thioalkalivibrio
Methylococcaceae
Beggiatoa
Thioploca
Zetaproteobacteria
Mariprofundus ferrooxydans
Campylobacterota
Sulfurovum lithotrophicum
Sulfurimonas paralvinellae
Nitratifactor salsuginis
Hydrogenimonas thermophila
Thiovulum
Thermodesulfobacteriota - sulfate-reducing, make up more than 25% of the bacterial community
Desulfovibrio
Desulfobulbus
Desulfuromonas
DNA repair
Microbial communities inhabiting deep-sea hydrothermal vent chimneys appear to be highly enriched in genes that encode enzymes employed in DNA mismatch repair and homologous recombination. This finding suggests that these microbial communities have evolved extensive DNA repair capabilities to cope with the extreme DNA damaging conditions in which they exist.
Viruses and deep-sea hydrothermal vents
Viruses are the most abundant life in the ocean, harboring the greatest reservoir of genetic diversity. As their infections are often fatal, they constitute a significant source of mortality and thus have widespread influence on biological oceanographic processes, evolution and biogeochemical cycling within the ocean. Evidence has been found, however, to indicate that viruses found in vent habitats have adopted a more mutualistic than parasitic evolutionary strategy in order to survive the extreme and volatile environment in which they exist.
Deep-sea hydrothermal vents were found to have large numbers of viruses, indicating high viral production. Samples from the Endeavour Hydrothermal Vents off the southwest coast of British Columbia showed that active venting black smokers had viral abundances from 1.45×10⁵ to 9.90×10⁷ per mL, with a drop-off in abundance found in the hydrothermal-vent plume (3.5×10⁶ per mL) and outside the venting system (2.94×10⁶ per mL). The high density of viruses and therefore of viral production (in comparison to surrounding deep-sea waters) implies that viruses are a significant source of microbial mortality at the vents. As in other marine environments, deep-sea hydrothermal viruses affect the abundance and diversity of prokaryotes and therefore impact microbial biogeochemical cycling by lysing their hosts to replicate.
However, in contrast to their role as a source of mortality and population control, viruses have also been postulated to enhance survival of prokaryotes in extreme environments, acting as reservoirs of genetic information. The interactions of the virosphere with microorganisms under environmental stresses are therefore thought to aid microorganism survival through the dispersal of host genes by horizontal gene transfer. As Curtis Suttle put it, each second "there's roughly Avogadro's number of infections going on in the ocean, and every one of those interactions can result in the transfer of genetic information between virus and host." Temperate phages (those not causing immediate lysis) can sometimes confer phenotypes that improve fitness in prokaryotes. The lysogenic life-cycle can persist stably for thousands of generations of infected bacteria, and the viruses can alter the host's phenotype by enabling genes (a process known as lysogenic conversion), which can therefore allow hosts to cope with different environments. Benefits to the host population can also be conferred by expression of phage-encoded fitness-enhancing phenotypes.
A review of viral work at hydrothermal vents published in 2015 stated that vents harbour a significant proportion of lysogenic hosts and that a large proportion of viruses are temperate, indicating that the vent environments may provide an advantage to the prophage.
One study of virus-host interactions in diffuse-flow hydrothermal vent environments found that the high incidence of lysogenic hosts and large populations of temperate viruses was unique in its magnitude and that these viruses are likely critical to the ecology of prokaryotes in these systems. The same study's genetic analysis found that 51% of the viral metagenome sequences were unknown (lacking homology to sequenced data), with high diversity across vent environments but lower diversity for specific vent sites, which indicates high specificity for viral targets.
A metagenomic analysis of deep-sea hydrothermal vent viromes showed that viral genes manipulated bacterial metabolism, participating in metabolic pathways as well as forming branched pathways in microbial metabolism which facilitated adaptation to the extreme environment.
An example of this was associated with the sulfur-consuming bacterium SUP05. A study found that 15 of 18 viral genomes sequenced from samples of vent plumes contained genes closely related to an enzyme that the SUP05 chemolithoautotrophs use to extract energy from sulfur compounds. The authors concluded that such phage genes (auxiliary metabolic genes) that are able to enhance the sulfur oxidation metabolism in their hosts could provide selective advantages to viruses (continued infection and replication). The similarity in viral and SUP05 genes for the sulfur metabolism implies an exchange of genes in the past and could implicate the viruses as agents of evolution.
Another metagenomic study found that viral genes had relatively high proportions of metabolism, vitamins and cofactor genes, indicating that viral genomes encode auxiliary metabolic genes. Coupled with the observations of a high proportion of lysogenic viruses, this indicates that viruses are selected to be integrated pro-viruses rather than free floating viruses and that the auxiliary genes can be expressed to benefit both the host and the integrated virus. The viruses enhance fitness by boosting metabolism or offering greater metabolic flexibility to their hosts. The evidence suggests that deep-sea hydrothermal vent viral evolutionary strategies promote prolonged host integration, favoring a form of mutualism rather than classic parasitism.
As hydrothermal vents are outlets for sub-seafloor material, there is also likely a connection between vent viruses and those in the crust.
See also
Marine microorganism
Movile Cave
Guaymas Basin
Hydrogen sulfide chemosynthesis - system of generating energy used in hydrothermal vents
References
Organisms living on hydrothermal vents
Microbiology | Hydrothermal vent microbial communities | [
"Chemistry",
"Biology"
] | 5,473 | [
"Microbiology",
"Organisms by adaptation",
"Organisms by habitat",
"Microscopy",
"Organisms living on hydrothermal vents"
] |
58,826,759 | https://en.wikipedia.org/wiki/Belnacasan | Belnacasan (VX-765) is a drug developed by Vertex Pharmaceuticals which acts as a potent and selective inhibitor of the enzyme caspase 1. This enzyme is involved in inflammation and cell death, and consequently blocking its action may be useful for various medical applications, including treatment of epilepsy, arthritis, aiding recovery from heart attack and slowing the progression of Alzheimer's disease. Belnacasan is an orally active prodrug, being converted in the body to the active drug VRT-043198 (O-desethyl-belnacasan). However while belnacasan has proved well tolerated in human clinical trials, it has not shown sufficient efficacy to be approved for use for any of the applications suggested to date, though research continues into possible future uses of this or similar drugs.
References
Enzyme inhibitors
Lactones
Carboxamides
Pyrrolidines
Chloroarenes
Anilines
Tert-butyl compounds | Belnacasan | [
"Chemistry"
] | 201 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
53,941,569 | https://en.wikipedia.org/wiki/Self-cleaning%20surfaces | Self-cleaning surfaces are a class of materials with the inherent ability to remove any debris or bacteria from their surfaces in a variety of ways. The self-cleaning functionality of these surfaces is commonly inspired by natural phenomena observed in lotus leaves, gecko feet, and water striders, to name a few. The majority of self-cleaning surfaces can be placed into three categories:
superhydrophobic
superhydrophilic
photocatalytic.
History
The first instance of a self-cleaning surface was created in 1995. Paz et al. created a transparent titanium dioxide (TiO2) film that was used to coat glass and provide the ability for the glass to self-clean. The first commercial application of this self-cleaning surface, Pilkington Activ, was developed by Pilkington glass in 2001. This product implements a two-stage cleaning process. The first stage consists of photocatalysis of any fouling matter on the glass. This stage is followed by the glass becoming superhydrophilic and allowing water to wash away the catalyzed debris on the surface of the glass. Since the creation of self-cleaning glass, titanium dioxide has also been used to create self-cleaning nanoparticles that can be incorporated into other material surfaces to allow them to self-clean.
Surface characteristics
The ability of a surface to self-clean commonly depends on the hydrophobicity or hydrophilicity of the surface. Whether cleaning aqueous or organic matter from a surface, water plays an important role in the self-cleaning process. Specifically, the contact angle of water on the surface is an important characteristic that helps determine the ability of a surface to self-clean. This angle is affected by the roughness of the surface and the following models have been developed to describe the "stickiness" or wettability of a self-cleaning surface.
Young's model
Young's model of wetting relates the contact angle of a water droplet on a flat surface to the surface energies of the water, the surface, and the surrounding air. The model is an idealization, describing a water droplet on a perfectly flat surface, and it has been expanded upon to consider surface roughness as a factor in predicting the water contact angle on a surface. Young's model is described by the following equation:
$\gamma_{SA} = \gamma_{SL} + \gamma_{LA}\cos\theta$
$\theta$ = Contact angle of water on the surface
$\gamma_{SA}$ = Surface energy of the surface-air interface
$\gamma_{SL}$ = Surface energy of the surface-liquid interface
$\gamma_{LA}$ = Surface energy of the liquid-air interface
Wenzel's model
When a water droplet is on a surface that is not flat, and the surface's topographical features lead to a surface area larger than that of a perfectly flat version of the same surface, the Wenzel model is a more accurate predictor of the wettability of the surface. Wenzel's model is described by the following equation:
$\cos\theta_W = r\cos\theta$
$\theta_W$ = Contact angle of water predicted by Wenzel's model
$r$ = Ratio of the surface area of the rough surface to the surface area of a flat projection of the same surface
Cassie-Baxter's model
For more complex systems that are representative of water-surface interactions in nature, the Cassie-Baxter model is used. This model takes into consideration the fact that a water droplet may trap air between itself and the surface that it is on. For a droplet resting partly on solid and partly on trapped air, the Cassie-Baxter model can be written as:
$\cos\theta_{CB} = (1 - f_{LA})\cos\theta - f_{LA}$
$\theta_{CB}$ = Contact angle of water predicted by the Cassie-Baxter model
$f_{LA}$ = Liquid-air fraction, the fraction of the liquid droplet that is in contact with air
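The three wetting models above are straightforward to evaluate numerically. The sketch below is a minimal illustration; the contact angle, roughness ratio, and air-fraction values are assumptions chosen for demonstration, not measurements from any cited study.

```python
import math

def young_angle(gamma_sa, gamma_sl, gamma_la):
    """Young's model: contact angle (degrees) on an ideally flat surface."""
    return math.degrees(math.acos((gamma_sa - gamma_sl) / gamma_la))

def wenzel_angle(theta_deg, r):
    """Wenzel's model: cos(theta_W) = r*cos(theta), roughness ratio r >= 1."""
    c = max(-1.0, min(1.0, r * math.cos(math.radians(theta_deg))))
    return math.degrees(math.acos(c))

def cassie_baxter_angle(theta_deg, f_la):
    """Cassie-Baxter model: cos(theta_CB) = (1 - f_LA)*cos(theta) - f_LA,
    with f_LA the fraction of the droplet base resting on trapped air."""
    c = (1.0 - f_la) * math.cos(math.radians(theta_deg)) - f_la
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

# Hypothetical inputs: an intrinsically hydrophobic flat surface
# (theta = 110 degrees), roughness ratio r = 1.8, 80% air fraction.
theta = 110.0
print(wenzel_angle(theta, r=1.8))            # ~128 deg: roughness amplifies
print(cassie_baxter_angle(theta, f_la=0.8))  # ~150 deg: trapped air dominates
```

For the same intrinsic angle, both corrections push a hydrophobic surface toward superhydrophobicity, with the trapped-air (Cassie-Baxter) state approaching the contact angles observed on self-cleaning surfaces.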
Mechanisms
Use of water
Control over surface wettability is a critical aspect of self-cleaning surfaces. Both superhydrophobic and superhydrophilic surfaces have been used as self-cleaning materials.
Superhydrophobic
Superhydrophobic surfaces can be created in a number of different ways, including plasma or ion etching, crystal growth on a material surface, and nanolithography, to name a few. All of these processes create nano-topographical features which imbue a surface with superhydrophobicity. The ultimate goal in developing superhydrophobic surfaces is to recreate the self-cleaning properties of the lotus leaf, which repels nearly all water that contacts it. The basis for superhydrophobic self-cleaning is the ability of these surfaces to prevent water from spreading out when in contact with the surface. This is reflected in a water contact angle nearing 180 degrees. Superhydrophobic self-cleaning surfaces also have low sliding angles, which allows water collected on the surface to be removed easily, commonly by gravity. While superhydrophobic surfaces are effective at removing water-based debris, they likely will not be able to clean away other types of fouling matter such as oil.
Superhydrophilic
Superhydrophilicity allows for surfaces to clean away a wide variety of dirt or debris. This mechanism is very different than the aforementioned superhydrophobic surfaces. For superhydrophilic self-cleaning surfaces, cleaning occurs because water on the surface is able to spread out to a great degree (extremely low water contact angle) to get between any fouling debris and the surface to wash away the debris.
Photocatalysis
One of the most commonly used self-cleaning products, titanium dioxide, utilizes a unique self-cleaning mechanism that combines an initial photocatalytic step and subsequent superhydrophilicity. A titanium dioxide coating, typically on glass windows, when exposed to UV light, will generate free electrons that will interact with oxygen and water in the air to create free radicals. These free radicals will in turn breakdown any fouling organic matter deposited on the surface of the glass. Titanium dioxide also changes the normally hydrophobic glass to a superhydrophilic surface. Thus, when rainfall occurs, instead of water beading up on the window surface and instantly falling down the glass, rain drops will rapidly spread out on the hydrophilic surface. The water will then move down the surface of the window, as a film rather than a droplet, essentially acting like a squeegee to remove surface debris.
Joule Heating
Heating a surface by passing current through a conductive transparent film has been shown to repel and remove contamination. It has been used in inkjet printers to reduce ink contamination on sensor windows.
Electric curtain
Cleaning surfaces in environments without water has been a challenge. Electric curtain devices were designed to remove particles by creating electric fields on the surface and carrying away particles due to their charged nature. It has been used in solar panels as well as 3D printers.
In nature
Plants
Lotus leaf
The lotus flower has been known as a symbol of purity in some Asian cultures. The leaves of the lotus (Nelumbo nucifera) are water-repellent and poorly adhesive, which keeps them free from contamination or pollution even when immersed in dirty water. This ability, called self-cleaning, shields the plant from dirt and pathogens and plays a vital role in providing resistance towards invading microbes. Indeed, numerous spores and conidia of pathogenic life forms, mainly fungi, need water for germination and infect leaves at the first sign of water. How the lotus could remain clean even in muddy water remained a curiosity until the German botanists Barthlott and Neinhuis revealed the unique dual structure of the leaves with the help of scanning electron microscopy (SEM). Papillose epidermal cells carpet the exterior of the plant, particularly the leaf. These cells generate papillae, or microasperities, which make the surface very rough. On top of this microscale roughness, the papillae are superimposed with nanoscale asperities consisting of three-dimensional (3-D) hydrophobic hydrocarbons: epicuticular waxes. Basically, the plant cuticle is a composite material composed of a network of cutin and low-surface-energy waxes, organized at different hierarchical levels. The hierarchical surface of lotus leaves is thus made of convex, bump-like cells overlaid with a much smaller layer of waxy tubules. Water beads on the leaves rest on the apex of the nanofeatures, since air is enclosed in the valleys between the convex cells, which minimizes the contact area of the water droplet. Hence, lotus leaves exhibit remarkable superhydrophobicity. The static contact angle and contact angle hysteresis of the lotus leaf have been determined as around 164° and 3°, respectively. At small tilting angles, water droplets on the leaf roll off and take any dirt or contaminant along, leading to self-cleaning. The ability of drops to form and roll off depends not only on hydrophobicity but also on contact angle hysteresis.
In the plant world, the lotus leaf is not the only example of a natural superhydrophobic surface. For instance, taro (Colocasia esculenta) leaves have been found to exhibit self-cleaning behavior as well. They have a binary roughness built up from elliptical protrusions averaging 10 μm in diameter and nano-sized pins. Indian canna (Canna generalis Bailey) leaves and rice leaves (regardless of variety) also exhibit superhydrophobicity, arising from their hierarchical surface morphology.
Nepenthes pitcher plants
The Nepenthes carnivorous pitcher plant, widespread in countries such as India, Indonesia, Malaysia and Australia, possesses a superhydrophilic surface on which the wetting angle approaches zero, creating a uniform water film. This increases the slipperiness of the surface, and prey slides off its rim (peristome). The surface topography of the Nepenthes rim exhibits radial ridges at multiple scales. The second-order ridges are quite small and are generated by straight rows of overlapping epidermal cells. The surfaces of the epidermal cells are smooth and wax-free. The absence of wax crystals and microscopic roughness enhances the hydrophilicity and capillary forces, so that water can swiftly wet the surface of the rim.
Animals
Butterfly wings
Butterfly wings possess not only an ultra-hydrophobic character but also directional adhesive characteristics. If a water bead moves along the radial outward (RO) direction from the body's central axis, it rolls off and cleans the dirt away, leading to self-cleaning. If droplets move against that direction, on the other hand, they are pinned to the surface, leading to adhesion and securing the flight stability of the butterfly by preventing deposition of dirt on the wings near the center of the body. SEM micrographs of the wings exhibit hierarchy along the RO direction, arising from aligned microgrooves covered by fine lamella-stacking nanostripes.
Water striders (Gerris remigis)
Water striders (Gerris remigis), commonly called Jesus bugs, have an extraordinary ability that lets them walk on water. In a fashion similar to superhydrophobic plants, their legs are highly water-repellent due to their hierarchical morphology: they are covered with hydrophobic waxy microhairs (microsetae), and each hair is covered with nanogrooves. As a result, air is entrapped between the micro- and nanohairs, which repels water. Feng et al. measured how deep a leg can dip into water and the contact angle of the leg; they found a contact angle of at least 168° and a maximum reported depth of 4.38 ± 0.02 mm.
Gecko feet
Gecko feet provide the most famous reversible adhesion mechanism in nature. The anti-fouling ability of the feet allows geckos to run on dusty ceilings and corners without accumulating dirt on their feet. In 2000, Autumn et al. revealed the origin of the gecko's strong adhesion by investigating the surface features of the toes under an electron microscope. They observed a hierarchical morphology in which each foot is covered by millions of small hairs called setae. Each seta further splits into smaller hairs, each tipped with a flat spatula, and these spatulae bond to surfaces through van der Waals forces. This surface feature, regardless of the surface type (hydrophobic, hydrophilic, dry, wet, rough, etc.), enables geckos to stick to the surface. In addition to strong adhesion, the gecko foot has a unique self-cleaning property which, unlike the lotus leaf, does not require water.
Shark skin
Shark skin is another example of an antifouling, self-cleaning and low-adhesion surface. This hydrophobic surface allows sharks to maneuver quickly in water. Shark skin is composed of periodically arranged, diamond-shaped dermal denticles superimposed with triangular riblets.
Fabrication and characterization
To fabricate synthetic self-cleaning surfaces, there are a variety of methods used to obtain the desired nanotopography and then characterize surface nanostructure and wettability.
Templating strategies
Templating utilizes a mold to add nanostructure to a polymer. Molds can come from a variety of sources including natural sources, such as the lotus leaf, due to their self-cleaning properties.
Nanocasting
Nanocasting is a method based on soft lithography which uses elastomeric molds to make nano-structured surfaces. For example, polydimethylsiloxane (PDMS) was cast over the lotus leaf and used to make a negative PDMS template. PDMS was then coated with an anti-stick monolayer of trimethylchlorosilane and used to make a positive PDMS template from the first. As the natural lotus leaf structure enables pronounced self-cleaning ability, this templating technique was able to replicate the nanostructure, resulting in a surface wettability similar to the lotus leaf. Further, the ease of this methodology enables translation to mass replication of nano-structured surfaces.
Imprint nanolithography
Imprint nanolithography also utilizes templates, pressing a hard mold into a polymer above the polymer glass transition temperature (Tg). Thus, the driving forces for this type of fabrication are heat and high pressure. Porous templates consisting of aluminum with anodized aluminum oxide (a hard mold) were used to imprint polystyrene. To achieve this, the polystyrene was heated well above its Tg to 130 degrees Celsius and pressed against the template. The template was then removed by dissolving the aluminum and producing either nanoemboss or nanofiber surfaces. Increasing the aspect ratio of the nanofibers disrupted the uniform hexagonal pattern and caused the fibers to form bundles. Ultimately, the longest nanofibers resulted in the greatest surface roughness, which significantly decreased surface wettability.
Capillary nanolithography
Similar to imprint nanolithography, capillary nanolithography employs a patterned elastomeric mold. However, instead of utilizing high pressure, when the temperature is raised above the Tg, capillary forces enable the polymer to fill the voids within the mold. Suh and Jon used molds made from polyurethane acrylate (PUA). These were placed on spin-coated, water-soluble polyethylene glycol (PEG), which was heated above PEG's Tg. This study found that the addition of nanotopography increased the contact angle, and that this increase depended on the height of the nanotopography. Often, this technique produces a meniscus on the tip of the protruding nanostructures, characteristic of capillary action. The mold can later be dissolved away. Combinatorial lithography approaches are also used. One study used capillarity to fill PDMS molds with PUA, first partially curing the polymer resin with UV light. After microstructures were formed, pressure was applied to fabricate nanostructures, and UV curing was used again. This study is a good example of the use of hierarchical structures to increase surface hydrophobicity.
Photolithography or X-ray lithography
Photolithography and X-ray lithography have been used to etch substrates, often silicon. A resist, or photosensitive material, is coated onto a substrate. A mask is applied above the resist that often consists of gold or other compounds that absorb X-rays. The region exposed to light either becomes soluble in a photoresist developer (e.g. radical species) or insoluble in a photoresist developer (e.g. crosslinked species), ultimately resulting in a patterned surface. X-ray sources are beneficial over UV-visible light sources as the shorter wavelengths enable production of smaller features.
Other fabrication strategies
Plasma treatment
Plasma treatment of surfaces is essentially a dry etching of the surface. This is achieved by filling a chamber with gas, such as oxygen, fluorine, or chlorine, and accelerating ions species from an ion source through plasma. The ion acceleration towards the surface forms deep grooves within the surface. In addition to the topography, plasma treatment can also provide surface functionalization by using different gases to deposit different elements on surfaces. Surface roughness is dependent on the duration of plasma etching.
Chemical deposition
Generally, chemical deposition uses liquid or vapor phases to deposit inorganic materials or halides onto surfaces as thin films. Reagents are supplied in the appropriate stoichiometric amounts to react on the surface. Types of chemical deposition include chemical vapor deposition, chemical bath deposition, and electrochemical deposition. These methodologies produce thin crystalline nanostructures. For example, brucite-type cobalt hydroxide crystalline surfaces were produced by chemical bath deposition and coated with lauric acid. These surfaces had individual nanofiber tips with diameters of 6.5 nm, ultimately resulting in a contact angle as high as 178 degrees.
Surface characterization methods
Scanning electron microscopy (SEM)
SEM is used to examine morphology of fabricated surfaces, enabling the comparison of natural surfaces with synthetic surfaces. The size of nanotopography can be measured. To prepare samples for SEM, surfaces are often sputter coated using platinum, gold/palladium, or silver, which reduces sample damage and charging and improves edge resolution.
Contact angle
As described above, contact angle is used to characterize surface wettability. A droplet of solvent, typically water for hydrophobic surfaces, is placed perpendicular to the surface. The droplet is imaged, and the angle between the solid/liquid and liquid/vapor interfaces is measured. Samples are considered superhydrophobic when the contact angle is greater than 150 degrees. Refer to the section on the Wenzel and Cassie-Baxter models for information on the different behaviors of droplets on topographical surfaces. For drops to roll effectively on a superhydrophobic surface, contact angle hysteresis is an important consideration: low contact angle hysteresis enhances the self-cleaning effect of a superhydrophobic surface.
Atomic force microscopy (AFM)
Atomic-force microscopy is used to study the local roughness and mechanical properties of a surface. AFM is also used to characterize adhesion and friction properties for micro- and nano-patterned superhydrophobic surfaces. Results can be used to fit a curve to the surface topography and determine the radius of curvature of nanostructures.
Biomimetic synthetic surfaces
Biomimicry is the imitation, or mimicry, of biological systems, models, or structures in synthetic areas. Oftentimes, biological materials can produce structures that have properties and qualities far exceeding what synthetic materials can achieve. Biomimicry is being used to create comparable properties in synthetic materials, particularly in the wettability and self-cleaning abilities of self-cleaning surfaces.
Superhydrophobic biomimetic surfaces
There are several biological surfaces that have superhydrophobic properties far superior to any synthetic materials: lotus leaves, rice leaves, cicada wings, and butterfly wings.
Lotus leaf
Researchers have been using carbon nanotubes (CNTs) to mimic the papillae of lotus leaves. CNT nanoforests can be made using chemical vapor deposition techniques, and CNTs can be applied to a surface to modify its water contact angle. Lau et al. created vertical CNT forests with a polytetrafluoroethylene (PTFE) coating that were both stable and superhydrophobic, with advancing and receding contact angles of 170° and 160°. Jung and Bhushan created a superhydrophobic surface by spray-coating CNTs with an epoxy resin. The spacing and alignment of the CNTs have been shown to affect the degree of hydrophobicity of a surface. Sun et al. found that CNTs aligned vertically with medium spacing display the best hydrophobic properties; small and large spacings show increased drop spreading, while horizontal orientation may even display hydrophilic properties.
Glass silica beads in an epoxy resin, and the electrochemical deposition of gold into dendritic structures, have also been used to create synthetic biomimetic surfaces similar to lotus leaves.
Rice leaves
Carbon nanotubes have also been used to create surfaces similar to rice leaves. As with the lotus leaf, a hierarchical structure provides the hydrophobicity of the rice leaf; unlike the lotus leaf, however, rice leaves have an anisotropic structure. When CNTs are made to mimic rice leaf papillae patterns, the contact angle differs along versus perpendicular to the CNT direction. Sun et al. observed anisotropic dewetting of such a CNT film. They then hypothesized and tested a three-dimensional anisotropic CNT array, which indeed exhibited anisotropic dewetting depending on the CNT spacing.
Cicada wing
Cicada wings have a surface of hexagonally close-packed nanopillars that has been shown to have self-cleaning properties. Similarly templated nanopatterned silica arrays have been shown to have hydrophobic, anti-reflective, and self-cleaning properties. These silica arrays begin as non-close-packed monolayers and are patterned in a series of etching steps involving chlorine and oxygen reactive ion etching and a hydrofluoric acid wash. These properties suggest that this surface pattern may prove useful in solar cell applications. Biomimetic materials based on the cicada wing have also been made from polytetrafluoroethylene films with carbon/epoxy supports treated with argon and oxygen ion beams. A nanoimprint-patterned surface based on cicada wings has been made by electrochemically templating an aluminum sheet with aluminum oxide and using this template to pattern a polymer surface.
Butterfly wing
Butterfly wings also exhibit anisotropic self-cleaning, superhydrophobic properties. Butterfly wings exhibit anisotropy on a one-dimensional level, compared to the other biological materials discussed, which exhibit anisotropy on a two-dimensional level. Butterfly wings are composed of overlapping layers of scales that have the best self-cleaning properties in the radial direction. This anisotropic interface may prove important for fluid-controllable interfaces. Alumina layers patterned from the original butterfly wing have been used to mimic the structure and properties of the wings. Additionally, butterfly-wing-mimetic structures have been used to fabricate anatase titania photoanodes. Butterfly wing structures have also been made using layer-by-layer sol-gel-based deposition and soft lithography molding.
Gecko feet
Gecko feet are hydrophobic, but that is not the only property that contributes to their self-cleaning nature. Estrada and Lin created polypropylene, polyethylene, and polycaprolactone nanofibers using a porous template; these nanofiber geometries were shown to be self-cleaning at fiber dimensions of 5, 0.6, and 0.2 microns. However, a hydrophobic surface alone does not explain the perpetually clean toe pad of the gecko, even in dry environments where water is not available for self-cleaning. Such fouling is a common problem for reversible adhesives modeled after the gecko toe pad. Digital hyperextension, a movement of the toe with each gecko step, contributes to the self-cleaning. A surface or system that mimics this dynamic self-cleaning process has yet to be developed.
Hydrophilic biomimetic surfaces
Snail shell
Snail shell is an aragonite-protein composite with a hierarchical groove structure. The regular roughness of this structure creates a hydrophilic surface that traps a thin layer of water, which prevents oil from attaching to the shell, thereby keeping it clean. These surface properties of snail shell have inspired the use of similar surface patterns on ceramic tiles and ceramic structures by the INAX corporation, which applies these techniques to kitchens and bathrooms.
Fish scale
Fish scales are calcium phosphate composites coated with a mucus layer. Their properties have been mimicked with polyacrylamide hydrogels, which are both hydrophilic and mimic the mucus layer's retention of water. Additionally, fish scales have been used as templates for a casting technique, and as a model for lithography and chemical etching techniques on silicon wafers, which exhibited oleophobic contact angles of oil in water of 163° and 175°, respectively.
Shark skin
Molded and laser-ablated shark skin replicas have been fabricated and shown to be oleophobic in water. The molded replicas used a negative made of polyvinylsiloxane dental wax, with the positive replica made of epoxy. These replicas have also shown that the structure of shark skin reduces the fluid drag caused by turbulent flow. The fluid dynamic properties of shark skin have been mimicked in swimsuit, nautical, and aerospace applications.
Super hydrophilic biomimetic surfaces
Pitcher plant
Wong et al. developed a surface inspired by the system of the pitcher plant. This surface, named "slippery liquid-infused porous surfaces" (SLIPS), is a micro- or nano-porous substrate with a lubricating liquid locked in place. For the system to work, the lubricating liquid must fully wet the substrate, the solid must be preferentially wetted by the lubricating liquid rather than by the liquid being repelled, and the lubricating and encroaching liquids must be immiscible. Although the concept of SLIPS is biomimetic of the pitcher plant, it is not superhydrophilic (its contact angle is 116°), though it does repel blood and oil.
References
Surface science | Self-cleaning surfaces | [
"Physics",
"Chemistry",
"Materials_science"
] | 5,512 | [
"Condensed matter physics",
"Surface science"
] |
53,946,357 | https://en.wikipedia.org/wiki/Stokes%20problem | In fluid dynamics, Stokes problem, also known as Stokes' second problem or sometimes referred to as the Stokes boundary layer or oscillating boundary layer, is a problem of determining the flow created by an oscillating solid surface, named after Sir George Stokes. This is considered one of the simplest unsteady problems that has an exact solution for the Navier–Stokes equations. In turbulent flow, this is still named a Stokes boundary layer, but now one has to rely on experiments, numerical simulations or approximate methods in order to obtain useful information on the flow.
Flow description
Consider an infinitely long plate oscillating with velocity $U\cos\omega t$ in the $x$ direction, located at $y = 0$ in an infinite domain of fluid, where $\omega$ is the frequency of the oscillations (Lagerstrom, Paco Axel. Laminar Flow Theory. Princeton University Press, 1996). The incompressible Navier–Stokes equations reduce to
$\frac{\partial u}{\partial t} = \nu \frac{\partial^2 u}{\partial y^2},$
where $\nu$ is the kinematic viscosity. The pressure gradient does not enter into the problem. The initial, no-slip condition on the wall is
$u(0, t) = U\cos\omega t,$
and the second boundary condition, $u(\infty, t) = 0$, is due to the fact that the motion at $y = 0$ is not felt at infinity. The flow is only due to the motion of the plate; there is no imposed pressure gradient.
Solution
The initial condition is not required because of periodicity (Landau, Lev Davidovich, and Evgenii Mikhailovich Lifshitz. Fluid Mechanics. 1987). Since both the equation and the boundary conditions are linear, the velocity can be written as the real part of a complex function:
$u(y, t) = \Re\!\left[U e^{i\omega t} f(y)\right],$
because $\cos\omega t = \Re\!\left[e^{i\omega t}\right]$.
Substituting this into the partial differential equation reduces it to the ordinary differential equation
$i\omega f = \nu f'',$
with boundary conditions
$f(0) = 1, \qquad f(\infty) = 0.$
The solution to the above problem is
$u(y, t) = U e^{-\sqrt{\omega/2\nu}\, y} \cos\!\left(\omega t - \sqrt{\frac{\omega}{2\nu}}\, y\right).$
The disturbance created by the oscillating plate travels as a transverse wave through the fluid, but it is highly damped by the exponential factor. The depth of penetration of this wave, $\delta = \sqrt{2\nu/\omega}$, decreases with the frequency of the oscillation but increases with the kinematic viscosity of the fluid.
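To make the exponential damping concrete, the following sketch evaluates the solution above; the parameter values are illustrative assumptions, roughly corresponding to water at room temperature.

```python
import numpy as np

def stokes_layer_velocity(y, t, U, omega, nu):
    """Velocity u(y, t) above a plate oscillating with U*cos(omega*t):
    a transverse wave damped by exp(-y*sqrt(omega/(2*nu)))."""
    k = np.sqrt(omega / (2.0 * nu))
    return U * np.exp(-k * y) * np.cos(omega * t - k * y)

# Assumed parameters: water-like viscosity (nu ~ 1e-6 m^2/s), 1 Hz forcing.
U, omega, nu = 0.1, 2.0 * np.pi, 1.0e-6
delta = np.sqrt(2.0 * nu / omega)       # penetration depth, ~0.56 mm here
y = np.array([0.0, delta, 2 * delta, 5 * delta])
print(delta)
print(stokes_layer_velocity(y, t=0.0, U=U, omega=omega, nu=nu))
# The amplitude drops by a factor e per penetration depth, so the motion
# is negligible a few multiples of delta away from the plate.
```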
The force per unit area exerted on the plate by the fluid is
$F = \mu \left.\frac{\partial u}{\partial y}\right|_{y=0} = \sqrt{\rho\mu\omega}\, U \cos\!\left(\omega t - \tfrac{3\pi}{4}\right).$
There is therefore a phase shift between the oscillation of the plate and the force it creates.
Vorticity oscillations near the boundary
An important observation from Stokes' solution for the oscillating Stokes flow is that vorticity oscillations are confined to a thin boundary layer and damp exponentially when moving away from the wall. This observation is also valid for the case of a turbulent boundary layer. Outside the Stokes boundary layer – which is often the bulk of the fluid volume – the vorticity oscillations may be neglected. To good approximation, the flow velocity oscillations are irrotational outside the boundary layer, and potential flow theory can be applied to the oscillatory part of the motion. This significantly simplifies the solution of these flow problems, and is often applied in the irrotational flow regions of sound waves and water waves.
Fluid bounded by an upper wall
If the fluid domain is bounded by an upper, stationary wall located at a height $y = h$, the flow velocity is given by
$u(y, t) = U\,\Re\!\left[\frac{\sinh\lambda(h-y)}{\sinh\lambda h}\, e^{i\omega t}\right],$
where $\lambda = \sqrt{i\omega/\nu}$.
Fluid bounded by a free surface
Suppose the extent of the fluid domain is $0 \le y \le h$, with $y = h$ representing a free surface. Then the solution, as shown by Chia-Shun Yih in 1968 for a stress-free surface, is given by
$u(y, t) = U\,\Re\!\left[\frac{\cosh\lambda(h-y)}{\cosh\lambda h}\, e^{i\omega t}\right],$
where $\lambda = \sqrt{i\omega/\nu}$.
Flow due to an oscillating pressure gradient near a plane rigid plate
The case of an oscillating far-field flow, with the plate held at rest, can easily be constructed from the previous solution for an oscillating plate by using linear superposition of solutions. Consider a uniform velocity oscillation $u(\infty, t) = U\cos\omega t$ far away from the plate and a vanishing velocity at the plate, $u(0, t) = 0$. Unlike the stationary fluid in the original problem, the pressure gradient here at infinity must be a harmonic function of time. The solution is then given by
$u(y, t) = U\left[\cos\omega t - e^{-\sqrt{\omega/2\nu}\, y} \cos\!\left(\omega t - \sqrt{\frac{\omega}{2\nu}}\, y\right)\right],$
which is zero at the wall y = 0, corresponding with the no-slip condition for a wall at rest. This situation is often encountered in sound waves near a solid wall, or for the fluid motion near the sea bed in water waves. The vorticity, for the oscillating flow near a wall at rest, is equal to the vorticity in case of an oscillating plate but of opposite sign.
Stokes problem in cylindrical geometry
Torsional oscillation
Consider an infinitely long cylinder of radius $a$ exhibiting torsional oscillation with angular velocity $\Omega\cos\omega t$, where $\omega$ is the frequency. Then, after the initial transient phase, the velocity approaches
$u_\theta(r, t) = a\Omega\,\Re\!\left[\frac{K_1\!\left(r\sqrt{i\omega/\nu}\right)}{K_1\!\left(a\sqrt{i\omega/\nu}\right)}\, e^{i\omega t}\right],$
where $K_1$ is the modified Bessel function of the second kind. This solution can be expressed with real argument as:
where
$\mathrm{ker}$ and $\mathrm{kei}$ are Kelvin functions and $R$ is the dimensionless oscillatory Reynolds number defined as $R = \omega a^2/\nu$, $\nu$ being the kinematic viscosity.
Axial oscillation
If the cylinder oscillates in the axial direction with velocity $U\cos\omega t$, then the velocity field is
$u(r, t) = U\,\Re\!\left[\frac{K_0\!\left(r\sqrt{i\omega/\nu}\right)}{K_0\!\left(a\sqrt{i\omega/\nu}\right)}\, e^{i\omega t}\right],$
where $K_0$ is the modified Bessel function of the second kind.
Stokes–Couette flow
In Couette flow, instead of the translational motion of one of the plates, an oscillation of one plane is executed. If the bottom wall is at rest at $y = 0$ and the upper wall at $y = h$ executes an oscillatory motion with velocity $U\cos\omega t$, then the velocity field is given by
$u(y, t) = U\,\Re\!\left[\frac{\sinh\lambda y}{\sinh\lambda h}\, e^{i\omega t}\right], \qquad \lambda = \sqrt{\frac{i\omega}{\nu}}.$
The frictional force per unit area on the moving plane is $\mu U\,\Re\!\left[\lambda\coth(\lambda h)\, e^{i\omega t}\right]$ and on the fixed plane is $\mu U\,\Re\!\left[\lambda\,\operatorname{csch}(\lambda h)\, e^{i\omega t}\right]$.
See also
Rayleigh problem
References
Fluid dynamics | Stokes problem | [
"Chemistry",
"Engineering"
] | 1,093 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
53,950,069 | https://en.wikipedia.org/wiki/Level%20shifter | In digital electronics, a level shifter, also called level converter or logic level shifter, or voltage level translator, is a circuit used to translate signals from one logic level or voltage domain to another, allowing compatibility between integrated circuits with different voltage requirements, such as TTL and CMOS. Modern systems use level shifters to bridge domains between processors, logic, sensors, and other circuits. In recent years, the three most common logic levels have been 1.8V, 3.3V, and 5V, though levels above and below these voltages are also used.
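As an illustration of why mixed-voltage systems need translation, the sketch below checks whether a driver's output levels can satisfy a receiver's input thresholds. The 0.7/0.3 input and 0.8/0.2 output fractions are generic CMOS-style rules of thumb assumed for demonstration, not values from any particular datasheet.

```python
from dataclasses import dataclass

@dataclass
class LogicDomain:
    """Simplified CMOS-style logic levels derived from the supply voltage.
    The threshold fractions are illustrative assumptions, not datasheet data."""
    vdd: float

    @property
    def vih(self): return 0.7 * self.vdd  # minimum voltage read as logic high
    @property
    def vil(self): return 0.3 * self.vdd  # maximum voltage read as logic low
    @property
    def voh(self): return 0.8 * self.vdd  # guaranteed output-high level
    @property
    def vol(self): return 0.2 * self.vdd  # guaranteed output-low level

def needs_level_shifter(driver: LogicDomain, receiver: LogicDomain) -> bool:
    """True if the driver cannot reliably toggle the receiver, either because
    its high level is read ambiguously or because it overstresses the input."""
    high_ok = receiver.vih <= driver.voh <= receiver.vdd
    low_ok = driver.vol <= receiver.vil
    return not (high_ok and low_ok)

print(needs_level_shifter(LogicDomain(1.8), LogicDomain(3.3)))  # True: 1.44 V < 2.31 V
print(needs_level_shifter(LogicDomain(5.0), LogicDomain(3.3)))  # True: 4.0 V exceeds the 3.3 V rail
```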
Types of level shifter
Uni-directional – All input pins are dedicated to one voltage domain, all output pins are dedicated to the other.
Bi-directional with Dedicated ports – Each voltage domain has both input and output pins, but the data direction of a pin does not change.
Bi-directional with external direction indicator – When an external signal is changed, inputs become outputs and vice versa.
Bi-directional, auto-sensing – A pair of I/O spanning voltage domains can act as either inputs or outputs depending on external stimulus without the need for a dedicated direction control pin.
Hardware implementation
Fixed function level shifter ICs - These ICs provide several different types of level shift in fixed function devices. Often lumped into 2-bit, 4-bit, or 8-bit level shift configurations offered with various VDD1 and VDD2 ranges, these devices translate logic levels without any additional integrated logic or timing adjustment.
Configurable mixed-signal ICs (CMICs) – Level shifter circuitry can also be implemented in a CMIC. The no-code programmable nature of CMICs allows designers to implement fully customizable level shifters with the added option to integrate configurable logic or timing adjustments in the same device.
Power management ICs realize level shifters using differential signaling: the differential pair steers the current into one of its two legs, which then drives a latch in the other voltage domain, level-shifting the signal.
Applications of level shifters
Since level shifters are used to resolve the voltage incompatibility between various parts of a system, they have a wide range of applications as well. Level shifters are widely used in interfacing legacy devices and also in SD cards, SIM cards, CF cards, audio codecs and UARTs.
Level shifters are also widely used in the gate driver circuits of power management ICs. In these applications, the level shifter translates the control logic signal to the high voltages used to drive power MOSFETs.
See also
Line level
References
External links
Voltage Level Translation Guide, Texas Instruments.
IC examples from three different logic families
74AXC1T45, 1-bit bidirectional with direction control, dual-supply of 0.65V-3.6V translated to 0.65V-3.6V, available only in SMD packages. 4-bit 74AXC4T245 and 8-bit 74AXC8T245 exist too.
74LVC1T45, 1-bit bidirectional with direction control, dual-supply of 1.65V-5.5V translated to 1.65V-5.5V, available only in SMD packages. 2-bit 74LVC2T45 and 8-bit 74LVC8T245 exist too.
4504B, 6-bit unidirectional, dual-supply of 5V TTL or 5V-18V CMOS translated to 5V-18V CMOS, available in DIP or SMD packages.
Digital electronics | Level shifter | [
"Engineering"
] | 730 | [
"Electronic engineering",
"Digital electronics"
] |
48,731,045 | https://en.wikipedia.org/wiki/Immune-related%20response%20criteria | The immune-related response criteria (irRC) is a set of published rules that define when tumors in cancer patients improve ("respond"), stay the same ("stabilize"), or worsen ("progress") during treatment, where the compound being evaluated is an immuno-oncology drug. Immuno-oncology, part of the broader field of cancer immunotherapy, involves agents which harness the body's own immune system to fight cancer. Traditionally, patient responses to new cancer treatments have been evaluated using two sets of criteria, the WHO criteria and the response evaluation criteria in solid tumors (RECIST). The immune-related response criteria, first published in 2009, arose out of observations that immuno-oncology drugs would fail in clinical trials that measured responses using the WHO or RECIST Criteria, because these criteria could not account for the time gap in many patients between initial treatment and the apparent action of the immune system to reduce the tumor burden.
Background
Part of the process of determining the effectiveness of anti-cancer agents in clinical trials involves measuring the amount of tumor shrinkage such agents can generate. The WHO Criteria, developed in the 1970s by the International Union Against Cancer and the World Health Organization, represented the first generally agreed specific criteria for the codification of tumor response evaluation. These criteria were first published in 1981. The RECIST criteria, first published in 2000, revised the WHO criteria primarily to clarify differences that remained between research groups. Under RECIST tumour size was measured unidimensionally rather than bidimensionally, fewer lesions were measured, and the definition of 'progression' was changed so that it was no longer based on the isolated increase of a single lesion. RECIST also adopted a different shrinkage threshold for definitions of tumour response and progression. For the WHO Criteria it had been >50% tumour shrinkage for a Partial Response and >25% tumour increase for Progressive Disease. For RECIST it was >30% shrinkage for a Partial Response and >20% increase for Progressive Disease. One outcome of all these revisions was that more patients who would have been considered 'progressors' under the old criteria became 'responders' or 'stable' under the new criteria. RECIST and its successor, RECIST 1.1 from 2009, is now the standard measurement protocol for measuring response in cancer trials.
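The numerical effect of the revised thresholds is easy to see in code. The sketch below is a deliberate simplification: in practice WHO percentages apply to bidimensional products and RECIST percentages to unidimensional sums of diameters, and real assessment also involves lesion selection and confirmation scans, so only the cutoffs quoted above are modeled.

```python
def classify(change_pct, criteria="RECIST"):
    """Classify a percentage change in measured tumour size using only the
    shrinkage/growth cutoffs quoted above (a simplified sketch)."""
    thresholds = {"WHO": (-50.0, 25.0), "RECIST": (-30.0, 20.0)}
    shrink, grow = thresholds[criteria]
    if change_pct < shrink:
        return "Partial Response"
    if change_pct > grow:
        return "Progressive Disease"
    return "Stable Disease"

# A 22% increase counts as progression under RECIST but not under WHO,
# and a 40% shrinkage responds under RECIST but not under WHO:
print(classify(22.0, "RECIST"), classify(22.0, "WHO"))    # Progressive / Stable
print(classify(-40.0, "RECIST"), classify(-40.0, "WHO"))  # Partial / Stable
```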
The key driver in the development of the irRC was the observation that, in studies of various cancer therapies derived from the immune system such as cytokines and monoclonal antibodies, the looked-for Complete and Partial Responses as well as Stable Disease only occurred after an increase in tumor burden that the conventional RECIST Criteria would have dubbed 'Progressive Disease'. Basically, RECIST failed to take account of the delay between dosing and an observed anti-tumour T cell response, so that otherwise 'successful' drugs - that is, drugs which ultimately prolonged life - failed in clinical trials. This led various researchers and drug developers interested in cancer immunotherapy, such as Axel Hoos at Bristol-Myers Squibb (BMS), to start discussing whether a new set of response criteria ought to be developed specifically for immuno-oncology drugs. Their ideas, first flagged in a key 2007 paper in the Journal of Immunotherapy, evolved into the immune-related response criteria (irRC), which was published in late 2009 in the journal Clinical Cancer Research.
The criteria
The developers of the irRC based their criteria on the WHO Criteria but modified it:
Measurement of tumour burden. In the irRC, tumour burden is measured by combining 'index' lesions with new lesions. Ordinarily tumour burden would be measured simply with a limited number of 'index' lesions (that is, the largest identifiable lesions) at baseline, with new lesions identified at subsequent timepoints counting as 'Progressive Disease'. In the irRC, by contrast, new lesions are simply a change in tumour burden. The irRC retained the bidimensional measurement of lesions that had originally been laid down in the WHO Criteria.
Assessment of immune-related response. In the irRC, an immune-related Complete Response (irCR) is the disappearance of all lesions, measured or unmeasured, and no new lesions; an immune-related Partial Response (irPR) is a 50% drop in tumour burden from baseline as defined by the irRC; and immune-related Progressive Disease (irPD) is a 25% increase in tumour burden from the lowest level recorded. Everything else is considered immune-related Stable Disease (irSD). The thinking here is that even if tumour burden is rising, the immune system is likely to 'kick in' some months after first dosing and lead to an eventual decline in tumour burden for many patients. The 25% threshold allows this apparent delay to be accounted for.
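Reduced to its arithmetic, an irRC assessment at a single timepoint compares total tumour burden, index lesions plus any new lesions, against baseline and nadir. The following sketch is a simplified illustration of the thresholds described above, not a clinical implementation; the burden figures are hypothetical.

```python
def irrc_assess(baseline, nadir, index_burden, new_lesion_burden,
                any_lesions_detectable=True):
    """Immune-related response at one timepoint (simplified sketch).
    Under irRC, new lesions add to tumour burden rather than
    automatically meaning progression."""
    total = index_burden + new_lesion_burden
    if not any_lesions_detectable:
        return "irCR"                 # all lesions gone, none new
    if total <= 0.5 * baseline:
        return "irPR"                 # >= 50% drop in burden from baseline
    if total >= 1.25 * nadir:
        return "irPD"                 # >= 25% rise from the lowest burden
    return "irSD"

# A burden that rises to 110% of baseline through new lesions would be
# 'Progressive Disease' under RECIST, but remains irSD under the irRC:
print(irrc_assess(baseline=100, nadir=100, index_burden=90, new_lesion_burden=20))  # irSD
print(irrc_assess(baseline=100, nadir=100, index_burden=40, new_lesion_burden=0))   # irPR
```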
Evidence of usefulness
The initial evidence cited by the creators of the irRC that their criteria were useful lay in the two Phase II melanoma trials described in the Clinical Cancer Research paper. The drug being trialled was a monoclonal antibody called ipilimumab, then under development at BMS with Axel Hoos as the medical lead. The drug targeted an immune checkpoint called CTLA-4, known as a key negative regulator of T cell activity. By blocking CTLA-4, ipilimumab was designed to potentiate antitumor T-cell responses. In the Phase IIs, which encompassed 227 treated patients and evaluated patients using the irRC, around 10% of these patients would have been deemed to have Progressive Disease by the WHO Criteria but actually experienced irPRs or irSDs, consistent with a response to ipilimumab.
The Phase III clinical failure of Pfizer's tremelimumab anti-CTLA-4 monoclonal antibody, which competed with ipilimumab, provided the first large-scale evidence of the utility of the irRC. The Pfizer study used conventional response criteria, and an early interim analysis found no survival advantage for the treated patients, leading to the termination of the trial in April 2008.
However, within a year of this development, Pfizer's investigators began to notice a separation of survival curves between the treatment and control groups. Tremelimumab's competitor, ipilimumab, which was trialled in Phase III using the irRC, went on to gain FDA approval in 2011, indicated for unresectable stage III or IV melanoma, after a 676-patient study that compared ipilimumab plus an experimental vaccine called gp100 with the vaccine alone. The median overall survival for the ipilimumab+vaccine group was 10 months versus only 6.4 months for the vaccine alone. Marketed as Yervoy, ipilimumab subsequently became a blockbuster for BMS.
Key people
The 2009 paper which described the new irRC had twelve authors, all associated with the ipilimumab clinical trials used as examples - Jedd Wolchok of Memorial Sloan Kettering Cancer Center, Axel Hoos and Rachel Humphrey of Bristol-Myers Squibb, Steven O'Day and Omid Hamid of the Angeles Clinic in Santa Monica, Ca., Jeffrey Weber of the University of South Florida, Celeste Lebbé of Hôpital Saint-Louis in Paris, Michele Maio of University Hospital of Siena, Michael Binder of Medical University of Vienna, Oliver Bohnsack of a Berlin-based clinical informatics firm called Perceptive Informatics, Geoffrey Nichol of the antibody engineering company Medarex (which had originally developed ipilimumab) and Stephen Hodi of the Dana–Farber Cancer Institute in Boston.
References
Immunology
Oncology | Immune-related response criteria | [
"Biology"
] | 1,588 | [
"Immunology"
] |
48,736,429 | https://en.wikipedia.org/wiki/Jussi%20Karlgren | Jussi Karlgren is a Swedish computational linguist, research scientist at Spotify, and co-founder of text analytics company Gavagai AB. He holds a PhD in computational linguistics from Stockholm University, and the title of docent (adjoint professor) of language technology at Helsinki University.
Jussi Karlgren is known for having pioneered the application of computational linguistics to stylometry, for having first formulated the notion of a recommender system, and for his continued work in bringing non-topical features of text to the attention of the information access research field.
Karlgren's research is focused on questions relating to information access, genre and stylistics, distributional pragmatics, and evaluation of information access applications and distributional models.
Karlgren is of half Finnish descent and is fluent in Finnish.
References
Karlgren's publications at Google Scholar
Jussi Karlgren's publication page at the Swedish Institute of Computer Science
Year of birth missing (living people)
Living people
Linguists from Sweden
Computational linguistics
Stockholm University alumni
Academic staff of the University of Helsinki
Scientists at PARC (company)
Swedish people of Finnish descent
Spotify people | Jussi Karlgren | [
"Technology"
] | 230 | [
"Natural language and computing",
"Computational linguistics"
] |
38,231,808 | https://en.wikipedia.org/wiki/Finite%20lattice%20representation%20problem | In mathematics, the finite lattice representation problem, or finite congruence lattice problem, asks whether every finite lattice is isomorphic to the congruence lattice of some finite algebra.
Background
A lattice is called algebraic if it is complete and compactly generated. In 1963, Grätzer and Schmidt proved that every algebraic lattice is isomorphic to the congruence lattice of some algebra. Thus there is essentially no restriction on the shape of a congruence lattice of an algebra. The finite lattice representation problem asks whether the same is true for finite lattices and finite algebras. That is, does every finite lattice occur as the congruence lattice of a finite algebra?
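To make the objects concrete, the brute-force sketch below (illustrative only, and feasible only for very small algebras) enumerates the congruence lattice of a finite algebra with unary operations; for the four-element cyclic unary algebra it recovers a three-element chain.

```python
def all_partitions(elements):
    """Yield every partition of `elements` as a list of blocks."""
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for smaller in all_partitions(rest):
        for i in range(len(smaller)):        # put `first` in an existing block
            yield smaller[:i] + [[first] + smaller[i]] + smaller[i + 1:]
        yield [[first]] + smaller            # or in a new singleton block

def is_congruence(partition, ops):
    """A partition is a congruence iff every (unary) operation maps
    related elements to related elements."""
    block = {x: i for i, blk in enumerate(partition) for x in blk}
    return all(block[f[x]] == block[f[y]]
               for f in ops
               for blk in partition
               for x in blk for y in blk)

# Example: the unary algebra ({0,1,2,3}, successor mod 4).
n = 4
succ = [(i + 1) % n for i in range(n)]
congs = [p for p in all_partitions(list(range(n))) if is_congruence(p, [succ])]
for c in sorted(congs, key=len):
    print(c)
# Exactly three congruences appear, and under refinement they form a chain:
# one block, {{0,2},{1,3}}, and all singletons.
```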
In 1980, Pálfy and Pudlák proved that this problem is equivalent to the problem of deciding whether every finite lattice occurs as an interval in the subgroup lattice of a finite group. For an overview of the group theoretic approach to the problem, see Pálfy (1993) and Pálfy (2001).
This problem should not be confused with the congruence lattice problem.
Significance
This is among the oldest unsolved problems in universal algebra. Until it is answered, the theory of finite algebras is incomplete since, given a finite algebra, it is unknown whether there are, a priori, any restrictions on the shape of its congruence lattice.
References
Further reading
External links
Finite Congruence Lattice Problem
Algebraic structures
Lattice theory
Mathematical problems
Unsolved problems in mathematics | Finite lattice representation problem | [
"Mathematics"
] | 295 | [
"Unsolved problems in mathematics",
"Mathematical structures",
"Order theory",
"Lattice theory",
"Mathematical objects",
"Fields of abstract algebra",
"Algebraic structures",
"Mathematical problems"
] |
40,990,955 | https://en.wikipedia.org/wiki/Factorial%20moment%20measure | In probability and statistics, a factorial moment measure is a mathematical quantity, function or, more precisely, measure that is defined in relation to mathematical objects known as point processes, which are types of stochastic processes often used as mathematical models of physical phenomena representable as randomly positioned points in time, space or both. Moment measures generalize the idea of factorial moments, which are useful for studying non-negative integer-valued random variables.
The first factorial moment measure of a point process coincides with its first moment measure or intensity measure, which gives the expected or average number of points of the point process located in some region of space. In general, if the number of points in some region is considered as a random variable, then the factorial moment measure of this region is the factorial moment of this random variable. Factorial moment measures completely characterize a wide class of point processes, which means they can be used to uniquely identify a point process.
If a factorial moment measure is absolutely continuous with respect to the Lebesgue measure, then it is said to have a density (which is a generalized form of a derivative), and this density is known by a number of names such as factorial moment density and product density, as well as coincidence density, joint intensity, correlation function, or multivariate frequency spectrum. The first and second factorial moment densities of a point process are used in the definition of the pair correlation function, which gives a way to statistically quantify the strength of interaction or correlation between points of a point process.
Factorial moment measures serve as useful tools in the study of point processes as well as the related fields of stochastic geometry and spatial statistics, which are applied in various scientific and engineering disciplines such as biology, geology, physics, and telecommunications.
Point process notation
Point processes are mathematical objects that are defined on some underlying mathematical space. Since these processes are often used to represent collections of points randomly scattered in space, time or both, the underlying space is usually d-dimensional Euclidean space denoted here by Rd, but they can be defined on more abstract mathematical spaces.
Point processes have a number of interpretations, which is reflected by the various types of point process notation. For example, if a point $x$ belongs to or is a member of a point process, denoted by $N$, then this can be written as
$x \in N,$
and represents the point process being interpreted as a random set. Alternatively, the number of points of $N$ located in some Borel set $B$ is often written as
$N(B),$
which reflects a random measure interpretation for point processes. These two notations are often used in parallel or interchangeably.
Definitions
n th factorial power of a point process
For some positive integer $n$, the $n$-th factorial power of a point process $N$ on $\mathbf{R}^d$ is defined as:
$N^{(n)}(B_1 \times \cdots \times B_n) = \sum_{x_1, \dots, x_n \in N}^{\neq} \prod_{i=1}^{n} \mathbf{1}_{B_i}(x_i),$
where $B_1, \dots, B_n$ is a collection of not necessarily disjoint Borel sets in $\mathbf{R}^d$, which form an $n$-fold Cartesian product of sets denoted by:
$B = B_1 \times \cdots \times B_n.$
The symbol $\mathbf{1}_{B_i}$ denotes an indicator function, so that $\mathbf{1}_{B_i}(x) = \delta_x(B_i)$ is a Dirac measure for the set $B_i$. The summation in the above expression is performed over all $n$-tuples of distinct points, including permutations, which can be contrasted with the definition of the $n$-th power of a point process. The symbol $\Pi$ denotes multiplication, while the existence of various point process notations means that the $n$-th factorial power of a point process is sometimes defined using other notation.
n th factorial moment measure
The n th factorial moment measure or n th order factorial moment measure is defined as:
$M^{(n)}(B_1 \times \cdots \times B_n) = E\!\left[N^{(n)}(B_1 \times \cdots \times B_n)\right],$
where the $E$ denotes the expectation (operator) of the point process $N$. In other words, the $n$-th factorial moment measure is the expectation of the n th factorial power of some point process.
The n th factorial moment measure of a point process $N$ is equivalently defined by:
$\int_{(\mathbf{R}^d)^n} f(x_1, \dots, x_n)\, M^{(n)}(dx_1 \cdots dx_n) = E\!\left[\sum_{x_1, \dots, x_n \in N}^{\neq} f(x_1, \dots, x_n)\right],$
where $f$ is any non-negative measurable function on $(\mathbf{R}^d)^n$, and the above summation is performed over all $n$-tuples of distinct points, including permutations. Consequently, the factorial moment measure is defined such that no point is repeated in the product set, as opposed to the moment measure.
First factorial moment measure
The first factorial moment measure coincides with the first moment measure:
$M^{(1)}(B) = E[N(B)] = \Lambda(B),$
where $\Lambda$ is known, among other terms, as the intensity measure or mean measure, and is interpreted as the expected number of points of $N$ found or located in the set $B$.
Second factorial moment measure
The second factorial moment measure for two Borel sets $B_1$ and $B_2$ is:
$M^{(2)}(B_1 \times B_2) = E\!\left[N(B_1)N(B_2)\right] - E\!\left[N(B_1 \cap B_2)\right].$
Name explanation
For some Borel set $B$, the namesake of this measure is revealed when the n th factorial moment measure reduces to:
$M^{(n)}(B \times \cdots \times B) = E\!\left[N(B)\,(N(B)-1)\cdots(N(B)-n+1)\right],$
which is the $n$-th factorial moment of the random variable $N(B)$.
Factorial moment density
If a factorial moment measure is absolutely continuous, then it has a density (or more precisely, a Radon–Nikodym derivative or density) with respect to the Lebesgue measure, and this density is known as the factorial moment density or product density, joint intensity, correlation function, or multivariate frequency spectrum. Denoting the $n$-th factorial moment density by $\rho^{(n)}$, it is defined with respect to the equation:
$M^{(n)}(B_1 \times \cdots \times B_n) = \int_{B_1} \cdots \int_{B_n} \rho^{(n)}(x_1, \dots, x_n)\, dx_1 \cdots dx_n.$
Furthermore, this means the following expression:
$E\!\left[\sum_{x_1, \dots, x_n \in N}^{\neq} f(x_1, \dots, x_n)\right] = \int_{(\mathbf{R}^d)^n} f(x_1, \dots, x_n)\, \rho^{(n)}(x_1, \dots, x_n)\, dx_1 \cdots dx_n,$
where $f$ is any non-negative bounded measurable function defined on $(\mathbf{R}^d)^n$.
Pair correlation function
In spatial statistics and stochastic geometry, to measure the statistical correlation relationship between points of a point process, the pair correlation function of a point process $N$ is defined as:
$g(x_1, x_2) = \frac{\rho^{(2)}(x_1, x_2)}{\rho^{(1)}(x_1)\,\rho^{(1)}(x_2)},$
where the points $x_1, x_2 \in \mathbf{R}^d$. In general, $g(x_1, x_2) \neq 1$, whereas $g(x_1, x_2) = 1$ corresponds to no correlation (between points) in the typical statistical sense.
Examples
Poisson point process
For a general Poisson point process with intensity measure $\Lambda$, the $n$-th factorial moment measure is given by the expression:
$M^{(n)}(B_1 \times \cdots \times B_n) = \prod_{i=1}^{n} \Lambda(B_i),$
where $\Lambda$ is the intensity measure or first moment measure of $N$, which for some Borel set $B$ is given by:
$\Lambda(B) = M^{(1)}(B) = E[N(B)].$
For a homogeneous Poisson point process with intensity $\lambda > 0$, the $n$-th factorial moment measure is simply:
$M^{(n)}(B_1 \times \cdots \times B_n) = \lambda^n \prod_{i=1}^{n} |B_i|,$
where $|B_i|$ is the length, area, or volume (or more generally, the Lebesgue measure) of $B_i$. Furthermore, the $n$-th factorial moment density is:
$\rho^{(n)}(x_1, \dots, x_n) = \lambda^n.$
The pair-correlation function of the homogeneous Poisson point process is simply
$g(x_1, x_2) = 1,$
which reflects the lack of interaction between points of this point process.
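These closed forms are easy to check by simulation: since $N(B)$ for a homogeneous Poisson process is a Poisson random variable with mean $\lambda|B|$, the sketch below (with arbitrary illustrative parameters) estimates the second factorial moment $E[N(B)(N(B)-1)]$ and compares it with $\lambda^2|B|^2$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Homogeneous Poisson process: N(B) ~ Poisson(lam * |B|).
lam, area, trials = 5.0, 1.0, 200_000
n = rng.poisson(lam * area, size=trials)

# Second factorial moment M^(2)(B x B) = E[N(B)(N(B) - 1)].
empirical = np.mean(n * (n - 1.0))
theoretical = (lam * area) ** 2
print(empirical, theoretical)  # both close to 25: no correlation between points
```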
Factorial moment expansion
The expectations of general functionals of simple point processes, provided certain mathematical conditions hold, have (possibly infinite) expansions or series consisting of the corresponding factorial moment measures. In comparison to the Taylor series, which consists of a series of derivatives of some function, the n-th factorial moment measure plays the role of the n-th derivative in the Taylor series. In other words, given a general functional $f$ of some simple point process, this Taylor-like theorem for non-Poisson point processes means an expansion exists for the expectation $E[f(N)]$, provided some mathematical condition is satisfied that ensures convergence of the expansion.
See also
Factorial moment
Moment
Moment measure
References
Point processes
Spatial analysis | Factorial moment measure | [
"Physics",
"Mathematics"
] | 1,373 | [
"Point (geometry)",
"Spatial analysis",
"Point processes",
"Space",
"Spacetime"
] |
40,994,286 | https://en.wikipedia.org/wiki/Leo%20Pharma | LEO Pharma A/S is a multinational Danish pharmaceutical company, founded in 1908, with a presence in about 100 countries. Its headquarters are in Ballerup, near Copenhagen. The company is wholly owned by the LEO Foundation, a private foundation. LEO Pharma develops and markets products for dermatology, bone remodeling, thrombosis and coagulation. In 1945, it was the first producer of penicillin outside the US and UK.
History
Formation & the 20th Century
In 1908, pharmacists August Kongsted and Anton Antons bought the LEO Pharmacy in Copenhagen, Denmark. With the purchase, they established 'Københavns Løveapoteks kemiske Fabrik', today known as LEO Pharma. LEO Pharma celebrated its centennial in 2008, with flags bearing the LEO logo flying in every country where LEO products are available, more than a hundred flags in total. Today, LEO Pharma has an ever-growing pipeline, with over 4,800 specialists focusing on dermatology and thrombosis.
1912 – The company launched its own Aspirin headache tablet
1917 – The company exported Denmark's first drug, Digisolvin
1940 – The company launched its own heparin product.
1958 – Patent filed for bendrofluazide.
1962 – The company launched Fucidin to be used to treat staphylococcus infections.
21st Century & onwards
In 2015, the company announced it would acquire Astellas Pharma's dermatology business for $725 million.
In 2018, the company acquired Bayer's dermatology unit for an undisclosed amount.
In April 2022, the company appointed Christophe Bourdon as its new CEO. Prior to this, he served as the CEO of Orphazyme A/S.
In January 2023, the company started extensive layoffs (of about 300 of its current employees, or ~5% of the workforce) as a part of major restructuring and reorganization in anticipation of a possibly planned IPO. Because of slimming down of the company's R&D program, new early-stage drug candidates will have to be sourced externally.
In August 2023, it was announced LEO Pharma had entered into a definitive agreement to acquire key assets of the Basking Ridge-headquartered biopharma company, Timber Pharmaceuticals, for $36 million. This transaction included TMB-001, a topical isotretinoin ointment currently under development for the treatment of moderate to severe subtypes of Congenital Ichthyosis (CI), which has no treatment options.
In September 2023, the company announced the implementation of a new capital structure with over 4 billion Danish kroner (approximately $587 million) allocated for business development and mergers and acquisitions. The company is focused on acquiring assets aimed at treating rare dermatological diseases with unmet medical needs.
In February 2024, LEO Pharma announced a net loss of 3.6 billion Danish kroner (equivalent to $528 million) for 2023 due to non-recurring project impairments, tax asset adjustments, and rising interest expenses. It also reported that it had cut its operating costs by 14% and increased its revenues by 7% in 2023.
Controversies
LEO Pharma, along with 21 other Danish companies, was accused of bribery and corruption in connection with the Oil-for-Food Programme that came to light in 2005. The accusation was that LEO Pharma had acted outside the UN system during the first Gulf War by bribing employees in the relief program, thereby helping Saddam Hussein. LEO Pharma quickly settled with the police and paid 8.5 million. The new CEO then cracked down on corruption both abroad and internally, an intervention that can affect employee flexibility and cause delays in production. In Berlingske Business on June 6, 2015, Gitte Aabo spoke about her personal responsibility and said that LEO was ready for a few years of lower earnings, a possible consequence of her intervention in employee relations.
References
Pharmaceutical companies of Denmark
Companies based in Ballerup Municipality
Pharmaceutical companies established in 1908
Danish companies established in 1908 | Leo Pharma | [
"Chemistry"
] | 850 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
40,996,300 | https://en.wikipedia.org/wiki/Central%20differencing%20scheme | In applied mathematics, the central differencing scheme is a finite difference method that optimizes the approximation for the differential operator in the central node of the considered patch and provides numerical solutions to differential equations. It is one of the schemes used to solve the integrated convection–diffusion equation and to calculate the transported property Φ at the e and w faces, where e and w are short for east and west (compass directions being customarily used to indicate directions on computational grids). The method's advantages are that it is easy to understand and implement, at least for simple material relations, and that its convergence rate is faster than that of some other finite differencing methods, such as forward and backward differencing. The right side of the convection–diffusion equation, which contains the diffusion terms, can be represented using a central difference approximation. To simplify the solution and analysis, linear interpolation can be used to compute the cell face values on the left side of the equation, which contains the convective terms. Therefore, the cell face values of the property for a uniform grid can be written as Φ_e = (Φ_P + Φ_E)/2 and Φ_w = (Φ_W + Φ_P)/2.
Steady-state convection diffusion equation
The convection–diffusion equation is a collective representation of the diffusion and convection equations, and describes or explains every physical phenomenon involving convection and diffusion in the transfer of particles, energy and other physical quantities inside a physical system:
∇·(ρuΦ) = ∇·(Γ ∇Φ) + S_Φ
where Γ is the diffusion coefficient and Φ is the transported property.
Formulation of steady-state convection diffusion equation
Formal integration of the steady-state convection–diffusion equation over a control volume gives
∫_A n·(ρuΦ) dA = ∫_A n·(Γ ∇Φ) dA + ∫_CV S_Φ dV.  (1)
This equation represents flux balance in a control volume. The left side gives the net convective flux, and the right side contains the net diffusive flux and the generation or destruction of the property within the control volume.
In the absence of a source term, equation (1) becomes
∫_A n·(ρuΦ) dA = ∫_A n·(Γ ∇Φ) dA.  (2)
Continuity equation:
∫_A n·(ρu) dA = 0.  (3)
Assuming a one-dimensional control volume around node P and integrating equation (2) over the control volume gives:
(ρuΦA)_e − (ρuΦA)_w = (ΓA ∂Φ/∂x)_e − (ΓA ∂Φ/∂x)_w
Integration of equation (3) yields:
(ρuA)_e − (ρuA)_w = 0
It is convenient to define two variables to represent the convective mass flux per unit area and the diffusion conductance at cell faces: F = ρu and D = Γ/δx.
Assuming A_e = A_w = A, we can write the integrated convection–diffusion equation as:
F_e Φ_e − F_w Φ_w = D_e (Φ_E − Φ_P) − D_w (Φ_P − Φ_W)
And the integrated continuity equation as:
F_e − F_w = 0
In a central differencing scheme, we try linear interpolation to compute cell face values for convection terms.
For a uniform grid, we can write the cell face values of the property as Φ_e = (Φ_P + Φ_E)/2 and Φ_w = (Φ_W + Φ_P)/2.
On substituting this into the integrated convection–diffusion equation, we obtain:
F_e (Φ_P + Φ_E)/2 − F_w (Φ_W + Φ_P)/2 = D_e (Φ_E − Φ_P) − D_w (Φ_P − Φ_W)
And on rearranging:
a_P Φ_P = a_W Φ_W + a_E Φ_E, with a_W = D_w + F_w/2, a_E = D_e − F_e/2 and a_P = a_W + a_E + (F_e − F_w).
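To make the discretised equation concrete, the following is a minimal numerical sketch, not taken from any particular source: it assembles and solves the central-difference system for a 1-D steady convection–diffusion problem with Dirichlet boundary values. The grid size, flow and material parameters are illustrative assumptions.

```python
# Minimal sketch: 1-D steady convection-diffusion with central differencing.
#   d(rho*u*phi)/dx = d/dx( Gamma*dphi/dx ),  phi(0) = 1, phi(L) = 0.
# All parameter values are illustrative assumptions.
import numpy as np

def solve_central(n=20, L=1.0, rho_u=1.0, gamma=0.1):
    dx = L / (n + 1)              # n interior nodes on a uniform grid
    F = rho_u                     # convective mass flux per unit area
    D = gamma / dx                # diffusion conductance
    aW = D + F / 2.0              # west coefficient
    aE = D - F / 2.0              # east coefficient (goes negative if Pe > 2)
    aP = aW + aE                  # central coefficient (since Fe = Fw)
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        A[i, i] = aP
        if i > 0:
            A[i, i - 1] = -aW
        else:
            b[i] += aW * 1.0      # boundary value phi(0) = 1
        if i < n - 1:
            A[i, i + 1] = -aE     # boundary value phi(L) = 0 adds nothing
    return np.linalg.solve(A, b), F / D

phi, peclet = solve_central()
print(f"cell Peclet number = {peclet:.2f}")  # ~0.48 here, safely below 2
print(phi)
```

Re-running the sketch with a coarser grid or a stronger flow, so that the cell Peclet number exceeds 2, makes a_E negative and produces the oscillations discussed below.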
Different aspects of central differencing scheme
Conservativeness
Conservation is ensured in the central differencing scheme, since the overall flux balance is obtained by summing the net flux through each control volume, taking into account the boundary fluxes for the control volumes around nodes 1 and 4.
The interior fluxes cancel in the sum, because the flux leaving one control volume face equals the flux entering the neighbouring control volume through the same face, leaving only the boundary fluxes around nodes 1 and 4.
Boundedness
The central differencing scheme satisfies the first condition of boundedness: since F_e − F_w = 0 from the continuity equation, the central coefficient reduces to a_P = a_W + a_E.
Another essential requirement for boundedness is that all coefficients of the discretised equations should have the same sign (usually all positive). This is only satisfied when the cell Peclet number Pe = F/D is less than 2, because for a unidirectional flow (F > 0) the east coefficient a_E = D_e − F_e/2 is positive only if Pe_e < 2.
Transportiveness
Transportiveness requires that the scheme's behaviour reflect the magnitude of the Peclet number: when Pe is zero, Φ is spread equally in all directions, while as Pe increases (convection > diffusion) the value of Φ at a point depends largely on the upstream value and less on the downstream value. The central differencing scheme does not possess transportiveness at higher Pe, since it computes Φ at a point as the average of the neighbouring nodes for all Pe.
Accuracy
The Taylor series truncation error of the central differencing scheme is second order.
Central differencing scheme will be accurate only if Pe < 2.
Owing to this limitation, central differencing is not a suitable discretisation practice for general purpose flow calculations.
Applications of central differencing schemes
They are currently used on a regular basis in the solution of the Euler equations and Navier–Stokes equations.
Results using central differencing approximation have shown noticeable improvements in accuracy in smooth regions.
Shock wave representation and boundary-layer definition can be improved on coarse meshes.
Advantages
Simpler to program, requires less computer time per step, and works well with multigrid acceleration techniques
Has a free parameter in conjunction with the fourth-difference dissipation, which is needed to approach a steady state.
More accurate than the first-order upwind scheme if the Peclet number is less than 2.
Disadvantages
Somewhat more dissipative
Leads to oscillations in the solution or divergence if the local Peclet number is larger than 2.
See also
Finite difference method
Finite difference
Taylor series
Taylor theorem
Convection–diffusion equation
Diffusion
Convection
Peclet number
Linear interpolation
Symmetric derivative
Upwind differencing scheme for convection
References
Further reading
Computational Fluid Dynamics: The Basics with Applications – John D. Anderson,
Computational Fluid Dynamics volume 1 – Klaus A. Hoffmann, Steve T. Chiang,
External links
One-Dimensional_Steady-State_Convection_and_Diffusion#Central_Difference_Scheme
Finite Differences
Central Difference Methods
A Conservative Finite Difference Scheme for Poisson–Nernst–Planck Equations
Computational fluid dynamics
Finite differences
Numerical differential equations | Central differencing scheme | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,050 | [
"Mathematical analysis",
"Computational fluid dynamics",
"Finite differences",
"Computational physics",
"Fluid dynamics"
] |
41,000,306 | https://en.wikipedia.org/wiki/Scandium%20triiodide | Scandium triiodide, also known as scandium iodide, is an inorganic compound with the formula ScI3, usually grouped with the rare-earth metal iodides. This salt is a yellowish powder. It is used in metal halide lamps together with similar compounds, such as caesium iodide, because of their ability to maximize emission of UV and to prolong bulb life. The maximized UV emission can be tuned to a range that can initiate photopolymerizations.
Scandium triiodide adopts a structure similar to that of iron trichloride (FeCl3), crystallizing into a rhombohedral lattice. Scandium has a coordination number of 6, while iodine has a coordination number of 3 and is trigonal pyramidal.
The purest scandium triiodide is obtained through direct reaction of the elements:
2 Sc + 3 I2 → 2 ScI3
Alternatively, but less effectively, one can produce anhydrous scandium triiodide by dehydrating ScI3(H2O)6.
Further information
Tomasz Mioduski, Cezary Gumiński, and Dewen Zeng, "Rare Earth Metal Iodides and Bromides in Water and Aqueous Systems. Part 1. Iodides", Journal of Physical and Chemical Reference Data 2012, vol. 41, 013104-1 to 013104-63.
References
Scandium compounds
Iodides
Metal halides | Scandium triiodide | [
"Chemistry"
] | 310 | [
"Inorganic compounds",
"Metal halides",
"Salts"
] |
41,005,189 | https://en.wikipedia.org/wiki/Peter%20Fulde | Peter Fulde (6 April 1936 – 11 April 2024) was a German physicist working in condensed matter theory and quantum chemistry.
Biography
Fulde received a PhD degree at the University of Maryland in 1963. After spending more than one year as a postdoc with Michael Tinkham in Berkeley, he returned in 1965 to Germany where he obtained a chair for theoretical physics in 1968 at the Johann Wolfgang Goethe University in Frankfurt/M. From 1971 to 1974 he was in charge of the theory group of the Institute Max von Laue-Paul Langevin in Garching. In 1971 he became a director at the Max Planck Institute for Solid State Research in Stuttgart where he served until 1993 when he became the founding director of the Max Planck Institute for the Physics of Complex Systems in Dresden. After his retirement in 2007 he became president of the Asia Pacific Center for Theoretical Physics and a faculty member at POSTECH in Pohang (Korea). He directed the center until 2013.
Fulde made numerous contributions to condensed matter physics, including superconductivity and correlated electrons in molecules and solids. He is particularly known for the Fulde–Ferrell–Larkin–Ovchinnikov (FFLO) phase, which may occur when fermions with imbalanced populations are paired.
Fulde was a founding member of the Berlin-Brandenburg Academy of Sciences and Humanities (the former Preussische Akademie der Wissenschaften). He was a member of the German Academy of Sciences Leopoldina and of Deutsche Akademie für Technikwissenschaften (acatech). Among his awards are the Order of Merit of the Free State of Saxony (2007), the Tsungming Tu Award of the National Science Council of Taiwan (2009) and the Marian Smoluchowski–Emil Warburg Award of the German and Polish Physical Societies (2011).
He was an Honorary Citizen of the Province Gyeongsangbuk do of the Republic of Korea (2014) and of the City of Pohang (2016).
Fulde died in Dresden, Saxony on 11 April 2024, at the age of 88.
Selected publications
P. Fulde, "Crystal fields", Chapter 17 in Handbook on the Physics and Chemistry of Rare Earths, ed. by K. A. Gschneidner and L. Eyring; North-Holland Publ. Comp. (1978)
"Electron Correlations in Molecules and Solids", 480 pp. (Springer, Heidelberg 1991, 1993, 1995)
"Correlated Electrons in Quantum Matter", 535 pp. (World Scientific, Singapore, 2012); ebook ; pbk
References
See also
Fulde-Ferrell-Larkin-Ovchinnikov phase
1936 births
2024 deaths
20th-century German physicists
Recipients of the Order of Merit of the Free State of Saxony
Members of the German National Academy of Sciences Leopoldina
University of Hamburg alumni
University of Maryland, College Park alumni
Max Planck Institute directors
Scientists from Wrocław
Academic staff of Goethe University Frankfurt
Academic staff of Pohang University of Science and Technology
German physicists
Condensed matter physicists
People from the Province of Lower Silesia | Peter Fulde | [
"Physics",
"Materials_science"
] | 633 | [
"Condensed matter physicists",
"Condensed matter physics"
] |
41,006,269 | https://en.wikipedia.org/wiki/Regasification | Regasification is the process of converting liquefied natural gas (LNG), stored at −162 °C (−260 °F), back to natural gas at ambient temperature. LNG regasification plants can be located on land as well as on floating barges, i.e. a Floating Storage and Regasification Unit (FSRU). Floating barge-mounted plants have the advantage that they can be towed to new offshore locations for better usage in response to changes in the business environment. In a conventional regasification plant, LNG is heated by sea water to convert it to natural gas/methane.
Byproducts
In addition to regasification, many valuable industrial byproducts can be produced using the cold energy of LNG. Using LNG cold energy to extract liquid oxygen and nitrogen from air makes LNG regasification plants more viable when they are located near integrated steel plants and/or urea plants. Using LNG cold energy in place of massive, energy-intensive cryogenic refrigeration units in natural-gas processing plants is also more economically viable. The natural gas processed with LNG cold energy, together with the imported LNG, can be readily injected into a conventional natural gas distribution system to reach the ultimate consumers.
The cold energy of LNG can also be used to cool the exhaust fluid of a gas turbine working in a closed Joule cycle with argon gas as the working fluid. Near-100% conversion efficiency to electricity is thus achieved for the LNG/natural gas consumed by the gas turbine, as its exhaust heat is fully used for the gasification of the LNG.
However, the abundant availability of natural gas, together with the mature and accepted technology for using LNG directly (without regasification) in road and rail vehicles, may lead to lower demand for LNG regasification plants.
See also
Gas-to-liquids
Existing regasification terminals
Liquid air
CNG carrier
Cryogenic energy storage
References
External links
Dynamic depressurisation calculations LNG regasification unit
Global LNG Regasification Markets
Liquefied natural gas
Fuel gas
Petroleum industry | Regasification | [
"Chemistry"
] | 435 | [
"Petroleum industry",
"Petroleum",
"Chemical process engineering"
] |
52,588,167 | https://en.wikipedia.org/wiki/High%20temperature%20hydrogen%20attack | High temperature hydrogen attack (HTHA), also called hot hydrogen attack or methane reaction, is a problem which concerns steels operating at elevated temperatures in hydrogen-rich atmospheres, such as refineries, petrochemical and other chemical facilities and, possibly, high-pressure steam boilers. It is not to be confused with hydrogen embrittlement.
If a steel is exposed to very hot hydrogen, the high temperature enables the hydrogen molecules to dissociate and to diffuse into the alloy as individual diffusible atoms. There are two stages to the damage:
First, dissolved carbon in the steel reacts with hydrogen at the surface and escapes into the gas as methane. This leads to superficial decarburization and a loss of strength at the surface. Initially, the damage is not visible.
Second, the reduction in the concentration of dissolved carbon creates a driving force which dissolves the carbides in the steel. This leads to a loss of strength deeper in the steel and is more serious. At the same time, some hydrogen atoms diffuse into the steel and combine with carbon to form tiny pockets of methane at internal surfaces, such as grain boundaries and defects. This methane gas cannot diffuse out of the metal; it collects in the voids at high pressure and initiates cracks in the steel. This selective leaching of carbon causes a more serious loss of strength and ductility.
HTHA can be managed by using a different steel alloy, one in which the carbides formed with other alloying elements, such as chromium and molybdenum, are more stable than iron carbides. Surface oxide layers are ineffective as protection, as they are immediately reduced by the hydrogen, forming water vapour.
Later-stage damage in a steel component can be seen using ultrasonic examination, which detects the large defects created by methane pressure. These large defects in a stressed component are usually the cause of failure in service, which is usually catastrophic as hot, flammable hydrogen gas escapes rapidly.
See also
2010 Tesoro Anacortes Refinery disaster
Corrosion engineering
Hydrogen embrittlement
Hydrogen safety
References
Corrosion
Hydrogen
Materials degradation | High temperature hydrogen attack | [
"Chemistry",
"Materials_science",
"Engineering"
] | 433 | [
"Metallurgy",
"Materials science",
"Corrosion",
"Electrochemistry",
"Electrochemistry stubs",
"Materials degradation",
"Physical chemistry stubs",
"Chemical process stubs"
] |
52,588,462 | https://en.wikipedia.org/wiki/Verwey%20transition | The Verwey transition is a low-temperature phase transition in the mineral magnetite associated with changes in its magnetic, electrical, and thermal properties. It typically occurs near a temperature of 120 K and is observed at temperatures between 80 and 125 K, although the spread is generally tight, around 118–120 K, in natural magnetites. Upon warming through the Verwey transition temperature (T_V), the magnetite crystal lattice changes from an insulating monoclinic structure to the metallic cubic inverse spinel structure that persists at room temperature. The phenomenon is named after Evert Verwey, a Dutch chemist who first recognized, in the 1940s, the connection between the structural transition and the changes in the physical properties of magnetite. This was the first metal-insulator transition to be found.
The Verwey transition is near in temperature, but distinct from, a magnetic isotropic point in magnetite, at which the first magnetocrystalline anisotropy constant changes sign from positive to negative.
The temperature and physical expression of the Verwey transition are highly sensitive to the stress state of magnetite and the stoichiometry. Non-stoichiometry in the form of metal cation substitution or partial oxidation can lower the transition temperature or suppress it entirely.
References
Phase transitions
Paleomagnetism
Rock magnetism
Magnetism | Verwey transition | [
"Physics",
"Chemistry"
] | 286 | [
"Physical phenomena",
"Phase transitions",
"Phases of matter",
"Critical phenomena",
"Statistical mechanics",
"Matter"
] |
47,034,171 | https://en.wikipedia.org/wiki/Construction%20of%20an%20irreducible%20Markov%20chain%20in%20the%20Ising%20model | Construction of an irreducible Markov chain is a mathematical method used to prove results about the behaviour of magnetic materials in the Ising model, enabling the study of phase transitions and critical phenomena.
The Ising model, a mathematical model in statistical mechanics, is utilized to study magnetic phase transitions and is a fundamental model of interacting systems. Constructing an irreducible Markov chain within a finite Ising model is essential for overcoming computational challenges encountered when achieving exact goodness-of-fit tests with Markov chain Monte Carlo (MCMC) methods.
Markov bases
In the context of the Ising model, a Markov basis is a set of integer vectors that enables the construction of an irreducible Markov chain. Every integer vector z can be uniquely decomposed as z = z⁺ − z⁻, where z⁺ and z⁻ are non-negative vectors. A Markov basis satisfies the following conditions:
(i) For every vector z in the basis, the sufficient statistics of z⁺ and z⁻ must agree, T(z⁺) = T(z⁻).
(ii) For any two configurations y and y′ with the same sufficient statistics, there always exist vectors z_1, ..., z_k in the basis that satisfy:
y′ = y + Σ_{m=1}^{k} z_m
and y + Σ_{m=1}^{l} z_m ≥ 0 for l = 1,...,k.
The elements of the Markov basis are used as the moves of the chain. An aperiodic, reversible, and irreducible Markov chain can then be obtained using the Metropolis–Hastings algorithm.
Persi Diaconis and Bernd Sturmfels showed that (1) a Markov basis can be defined algebraically for the Ising model and (2) any generating set for the corresponding ideal is a Markov basis for the Ising model.
Construction of an irreducible Markov Chain
To obtain uniform samples from the conditional sample space and avoid inaccurate p-values, it is necessary to construct an irreducible Markov chain without modifying the algorithm proposed by Diaconis and Sturmfels.
A simple swap of the form z = e_i − e_j, where e_i is the i-th canonical basis vector, changes the states of two lattice points in y. The set Z denotes the collection of simple swaps. Two configurations y and y′ are connected by Z if there exists a path between y and y′ consisting of simple swaps.
The algorithm, built from simple swaps, can be described as follows:
(i) Start with the Markov chain in an initial configuration y.
(ii) Select a simple swap z from Z at random and let y′ = y + z.
(iii) Accept y′ if it is a valid configuration (all entries non-negative); otherwise remain in y.
Although the resulting Markov chain may be unable to leave its initial state, this problem does not arise for the 1-dimensional Ising model. In higher dimensions, it can be overcome by using the Metropolis–Hastings algorithm in the smallest expanded sample space.
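As a rough illustration of such a swap chain, the sketch below runs a Metropolis–Hastings walk over 1-D Ising configurations in which each proposal exchanges the states of two lattice points, so the total magnetisation of the starting configuration is preserved. The lattice size, coupling and inverse temperature are illustrative assumptions, not values from the literature.

```python
# Illustrative sketch (assumed parameters): Metropolis-Hastings over 1-D
# Ising configurations using simple swaps, which exchange the states of two
# lattice points and therefore preserve the total magnetisation.
import random, math

def ising_energy(y, J=1.0):
    # nearest-neighbour energy of a 1-D configuration with free boundaries
    return -J * sum(y[i] * y[i + 1] for i in range(len(y) - 1))

def swap_chain(y, steps=10_000, beta=0.5, seed=0):
    rng = random.Random(seed)
    y = list(y)
    e = ising_energy(y)
    for _ in range(steps):
        i, j = rng.randrange(len(y)), rng.randrange(len(y))
        if y[i] == y[j]:
            continue                      # swap would not change the state
        y[i], y[j] = y[j], y[i]           # propose y' = y + z
        e_new = ising_energy(y)
        if rng.random() >= math.exp(-beta * (e_new - e)):
            y[i], y[j] = y[j], y[i]       # reject: undo the swap
        else:
            e = e_new                     # accept the proposal
    return y

start = [+1] * 10 + [-1] * 10             # fixed-magnetisation sector
print(swap_chain(start))
```

Because every move is a simple swap, the chain explores only the fixed-magnetisation sector it starts in, which is exactly the conditional sample space relevant to the goodness-of-fit tests discussed above.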
Irreducibility in the 1-Dimensional Ising Model
The proof of irreducibility in the 1-dimensional Ising model requires two lemmas.
Lemma 1: The max-singleton configuration for the 1-dimensional Ising model is unique (up to the location of its connected components) and consists of singletons and one connected component of size .
Lemma 2: For and , let denote the unique max-singleton configuration. There exists a sequence such that:
and
for
Since the expanded space is the smallest sample space containing the original one, any two configurations in it can be connected by simple swaps from Z without leaving the space. This is proved by Lemma 2, so the irreducibility of a Markov chain based on simple swaps can be achieved for the 1-dimensional Ising model.
It is also possible to reach the same conclusion for Ising models of dimension 2 or higher using the same steps outlined above.
References
Lattice models
Markov chain Monte Carlo | Construction of an irreducible Markov chain in the Ising model | [
"Physics",
"Materials_science"
] | 694 | [
"Statistical mechanics",
"Condensed matter physics",
"Lattice models",
"Computational physics"
] |
47,034,961 | https://en.wikipedia.org/wiki/Optimistic%20knowledge%20gradient | In statistics, the optimistic knowledge gradient is an approximation policy proposed by Xi Chen, Qihang Lin and Dengyong Zhou in 2013. The policy was created to address the computational intractability of large instances of the optimal computing budget allocation problem in binary/multi-class crowd labeling, where each label from the crowd has a certain cost.
Motivation
The optimal computing budget allocation problem is formulated as a Bayesian Markov decision process (MDP) and is solved by dynamic programming (DP); the optimistic knowledge gradient policy is used to overcome the computational intractability of the DP algorithm.
Consider a budget allocation issue in crowdsourcing. The particular crowdsourcing problem considered is crowd labeling: a large set of labeling tasks that are hard for machines to solve but turn out to be easy for human beings, and are therefore outsourced to an unidentified group of random people in a distributed environment.
Methodology
The aim is to finish these labeling tasks by relying on the power of the crowd. For example, suppose we want to classify pictures according to whether the person in each picture is an adult. This is a Bernoulli labeling problem, and any of us can answer it in one or two seconds; it is an easy task for a human being. With tens of thousands of such pictures, however, it is no longer easy, which is why a crowdsourcing framework is used. The framework consists of two steps. In step one, labels are dynamically acquired from the crowd: rather than sending each picture to everyone at once, we decide, based on the historical labeling results, which picture to send out next and which worker in the crowd to hire next. Each picture can be sent to multiple workers, and every worker can work on different pictures. In step two, after enough labels have been collected, the true label of each picture is inferred from the collected labels. There are multiple ways to do this inference; the simplest is majority vote. The problem is that there is no free lunch: workers must be paid for each label they provide, and the project budget is limited. So the question is how to spend the limited budget in a smart way.
Challenges
Before presenting the mathematical model, the paper describes the challenges being faced.
Challenge 1
First of all, items differ in how difficult their labels are to determine. In the previous example, some pictures are easy to classify, and in this case you will usually see very consistent labels from the crowd. If some pictures are ambiguous, however, people may disagree with each other, resulting in highly inconsistent labeling, so we may need to allocate more resources to such ambiguous tasks.
Challenge 2
Another common difficulty is that workers are not perfect: sometimes they are not responsible and just provide random labels, and of course we would not want to spend our budget on such unreliable workers. Both the difficulty of the pictures and the reliability of the workers are completely unknown at the beginning and can only be estimated during the procedure. We therefore naturally face a trade-off between exploration and exploitation, and our goal is to give a reasonably good policy for spending money the right way: maximizing the overall accuracy of the final inferred labels.
Mathematical model
In the mathematical model, there are K items and a total budget T; assuming each label costs 1, T labels will be collected in total. Each item has a true label, positive or negative; this is the binary case, which can be extended to multi-class labeling using the same idea. The positive set is defined as the set of items whose true label is positive. For each item, a soft label is also defined: a number between 0 and 1 giving the underlying probability of the item being labeled as positive by a member randomly picked from a group of perfect workers.
In this first case, we assume every worker is perfect, meaning they are all reliable. Being perfect does not mean that every worker gives the same answer, or the right answer; it just means that each worker will try his or her best to figure out the best answer in mind. If we randomly pick one of these perfect workers, then with probability equal to the item's soft label we get a person who believes the item is positive; that is how the soft label is interpreted. We therefore assume each label is drawn from a Bernoulli distribution with the soft label as its parameter, and that the soft label is consistent with the true label: it is greater than or equal to 0.5 if and only if the item's true label is positive. Our goal is to learn H*, the set of positive items. In other words, we want to construct an inferred positive set H based on the collected labels that maximizes:
It can also be written as:
Step 1: Bayesian decision process
Before presenting the Bayesian framework, the paper uses an example to explain why the Bayesian approach is chosen over the frequentist one: it lets us propose a prior (and hence a posterior) distribution on the soft label. We assume each soft label is drawn from a known Beta prior:
And the matrix:
Since the Beta distribution is the conjugate prior of the Bernoulli, once we get a new label for item i we update its posterior Beta distribution by (a_i, b_i) → (a_i + 1, b_i) or (a_i, b_i) → (a_i, b_i + 1), depending on whether the label is positive or negative.
Here is the whole procedure at a high level. There are T stages, and at the current stage we look at the matrix S, which summarizes the posterior distribution information for all the items.
We then make a decision: choosing the next item to label.
Depending on whether the received label is positive or negative, the matrix is updated accordingly:
Altogether, this is the whole framework.
Step 2: Inference on the positive set
When t labels have been collected, we can make an inference about the positive set H_t based on the posterior distribution given by S_t.
This becomes a Bernoulli selection problem: for each item we look at whether the conditional probability of being positive is greater than 0.5 or not. If it is, we put the item into the current inferred positive set. This gives a closed form for the current optimal solution based on the information in S_t.
Knowing the optimal solution, the paper then derives the optimal value by plugging in the optimal solution:
This function simply chooses the larger of the conditional probabilities of being positive and of being negative. Once we get one more label for item i, we take the difference of this value before and after the new label; the conditional probability can actually be simplified as follows:
The probability of an item being positive depends only on its Beta posterior, i.e. it is a function of the Beta distribution parameters a and b alone.
One more label for this particular item changes only its own posterior, so the terms for all other items cancel. This change in the whole accuracy is defined as the stage-wise reward: the improvement in inference accuracy obtained from one more sample. The label can take two values, positive or negative; taking the average of the two corresponding rewards gives the expected reward. We choose the item to label so that the expected reward is maximized, which is the knowledge gradient policy:
When multiple items attain the maximum, we must decide how to break the tie. If we break ties deterministically, always choosing the smallest index, we run into a problem: the policy is inconsistent, meaning the inferred positive set does not converge to the true positive set H*.
We can also try to break ties randomly; this works, but the performance is then almost the same as uniform sampling. The authors' policy is instead more greedy: rather than averaging the two possible stage-wise rewards, it takes the larger of the two, giving the optimistic knowledge gradient.
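The selection step can be sketched as follows under the perfect-worker model; the function names, the uniform Beta(1, 1) priors and the example posteriors are illustrative assumptions, not the paper's code.

```python
# Sketch of the optimistic knowledge gradient selection step. Posteriors are
# Beta(a_i, b_i); h(a, b) is the accuracy contribution of one item,
# max(P(theta > 0.5), P(theta < 0.5)). Priors and values are assumptions.
import numpy as np
from scipy.stats import beta

def h(a, b):
    p_pos = beta.sf(0.5, a, b)           # P(theta > 0.5 | a, b)
    return max(p_pos, 1.0 - p_pos)

def optimistic_kg_choice(ab):
    """ab: list of (a_i, b_i) posterior parameters; returns item to label."""
    scores = []
    for a, b_ in ab:
        r_pos = h(a + 1, b_) - h(a, b_)  # reward if the next label is positive
        r_neg = h(a, b_ + 1) - h(a, b_)  # reward if the next label is negative
        scores.append(max(r_pos, r_neg)) # optimistic: best of the two outcomes
    return int(np.argmax(scores))

posteriors = [(1, 1), (4, 3), (10, 2)]   # uniform prior plus some labels
i = optimistic_kg_choice(posteriors)
posteriors[i] = (posteriors[i][0] + 1, posteriors[i][1])  # got a positive label
print("label item", i, "->", posteriors)
```

Replacing the max with an average of r_pos and r_neg weighted by the predictive probability of each outcome would recover the ordinary knowledge gradient policy described above.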
Under the optimistic knowledge gradient, the final inference accuracy converges to 100%. All of the above assumes that every worker is perfect; in practice, however, workers are not always responsible. With imperfect workers, we assume there are K items and M workers, with each item's soft label again being the probability of the item being labeled as positive by a perfect worker, and each worker's reliability being the probability that the worker gives the same label as a perfect worker. The distribution of the label given by a worker to an item is then:
The action space consists of choices of an item and a worker to query; the collected responses form the label matrix.
The resulting posterior is difficult to calculate exactly, so variational Bayesian methods can be used to approximate it.
References
Mathematical optimization
Markov models | Optimistic knowledge gradient | [
"Mathematics"
] | 1,818 | [
"Mathematical optimization",
"Mathematical analysis"
] |
47,035,981 | https://en.wikipedia.org/wiki/Lead%20tin%20telluride | Lead tin telluride, also referred to as PbSnTe or Pb1−xSnxTe, is a ternary alloy of lead, tin and tellurium, generally made by alloying either tin into lead telluride or lead into tin telluride. It is a IV-VI narrow band gap semiconductor material.
The band gap of Pb1−xSnxTe is tuned by varying the composition (x) of the material. SnTe can be alloyed with Pb (or PbTe with Sn) in order to tune the band gap between 0.29 eV (PbTe) and 0.18 eV (SnTe). It is important to note that, unlike the II-VI chalcogenides, e.g. cadmium, mercury and zinc chalcogenides, the band gap in Pb1−xSnxTe does not change linearly between the two extremes. Instead, as the composition (x) is increased, the band gap decreases, approaches zero in the composition regime 0.32–0.65 (corresponding to temperatures of 4–300 K, respectively) and then increases towards the bulk band gap of SnTe. The lead tin telluride alloys therefore have narrower band gaps than their end-point counterparts, making lead tin telluride an ideal candidate for mid-infrared, 3–14 μm, opto-electronic applications.
Properties
Lead tin telluride is a p-type semiconductor at 300 K. The hole concentration increases as the tin content is increased, resulting in an increase in electrical conductivity. For the composition range x = 0 to 0.1, electrical conductivity decreases with increasing temperature up to 500 K and increases beyond 500 K. For the composition range x ≥ 0.25, electrical conductivity decreases with increasing temperature.
The Seebeck coefficient of Pb1−xSnxTe decreases with increasing Sn content at 300 K.
For compositions x > 0.25, the thermal conductivity of Pb1−xSnxTe increases with increasing Sn content. Thermal conductivity decreases with increasing temperature over the entire composition range x > 0.
For Pb1−xSnxTe, the optimum temperature corresponding to the maximum thermoelectric power factor increases with increasing composition x. The pseudobinary lead tin telluride alloy acts as a thermoelectric material over the 400–700 K temperature range.
Lead tin telluride has a positive temperature coefficient, i.e. for a given composition x the band gap increases with temperature. Temperature stability therefore has to be maintained while working with a lead tin telluride-based laser. The advantage, however, is that the operating wavelength of the laser can be tuned simply by varying the operating temperature.
The optical absorption coefficient of lead tin telluride is typically ~750 cm−1, compared to ~50 cm−1 for extrinsic semiconductors such as doped silicon. The higher absorption coefficient not only ensures higher sensitivity but also reduces the spacing required between individual detector elements to prevent optical cross-talk, making integrated circuit technology easily accessible.
Application
Due to its tunable narrow band gap and relatively higher operating temperature compared to mercury cadmium telluride, lead tin telluride has been a material of choice for commercial applications in IR sources, band-pass filters and IR detectors. It has found applications in photovoltaic devices for sensing radiation in the 8–14 μm window.
Single-crystal Pb1−xSnxTe diode lasers have been employed for the detection of gaseous pollutants such as sulfur dioxide.
Lead tin tellurides have been used in thermoelectric devices.
References
Tellurides
Lead alloys
Tin alloys
IV-VI semiconductors | Lead tin telluride | [
"Chemistry"
] | 763 | [
"Lead alloys",
"Semiconductor materials",
"IV-VI semiconductors",
"Tin alloys",
"Alloys"
] |
34,240,222 | https://en.wikipedia.org/wiki/Computational%20astrophysics | Computational astrophysics refers to the methods and computing tools developed and used in astrophysics research. Like computational chemistry or computational physics, it is both a specific branch of theoretical astrophysics and an interdisciplinary field relying on computer science, mathematics, and wider physics. Computational astrophysics is most often studied through an applied mathematics or astrophysics programme at PhD level.
Well-established areas of astrophysics employing computational methods include magnetohydrodynamics, astrophysical radiative transfer, stellar and galactic dynamics, and astrophysical fluid dynamics. A recently developed field with interesting results is numerical relativity.
Research
Many astrophysicists use computers in their work, and a growing number of astrophysics departments now have research groups specially devoted to computational astrophysics. Important research initiatives include the US Department of Energy (DoE) SciDAC collaboration for astrophysics and the now defunct European AstroSim collaboration. A notable active project is the international Virgo Consortium, which focuses on cosmology.
In August 2015, during the general assembly of the International Astronomical Union, a new commission C.B1 on Computational Astrophysics was inaugurated, thereby recognizing the importance of astronomical discovery by computing.
Important techniques of computational astrophysics include particle-in-cell (PIC) and the closely related particle-mesh (PM), N-body simulations, Monte Carlo methods, as well as grid-free (with smoothed particle hydrodynamics (SPH) being an important example) and grid-based methods for fluids. In addition, methods from numerical analysis for solving ODEs and PDEs are also used.
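As a toy illustration of the N-body technique mentioned above, the following sketch integrates a direct-summation gravitational system with a leapfrog (kick-drift-kick) scheme; the units with G = 1, the softening length and the two-body initial conditions are all illustrative assumptions.

```python
# Toy N-body sketch: direct-summation gravity with a leapfrog integrator.
# Units (G = 1), softening and initial conditions are illustrative choices.
import numpy as np

def accelerations(pos, mass, soft=1e-3):
    # pairwise Newtonian gravity with Plummer softening
    d = pos[None, :, :] - pos[:, None, :]            # displacement vectors
    r2 = (d ** 2).sum(-1) + soft ** 2
    np.fill_diagonal(r2, np.inf)                     # exclude self-force
    return (d * (mass[None, :, None] / r2[..., None] ** 1.5)).sum(1)

def leapfrog(pos, vel, mass, dt=1e-3, steps=1000):
    acc = accelerations(pos, mass)
    for _ in range(steps):
        vel += 0.5 * dt * acc                        # kick
        pos += dt * vel                              # drift
        acc = accelerations(pos, mass)
        vel += 0.5 * dt * acc                        # kick
    return pos, vel

# two equal masses on a circular orbit as a smoke test
pos = np.array([[-0.5, 0.0, 0.0], [0.5, 0.0, 0.0]])
vel = np.array([[0.0, -0.5, 0.0], [0.0, 0.5, 0.0]])
mass = np.array([0.5, 0.5])
print(leapfrog(pos, vel, mass)[0])
```

Production codes replace the O(N²) direct sum with tree, particle-mesh or fast multipole methods, but the time-stepping structure is essentially the same.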
Simulation of astrophysical flows is of particular importance as many objects and processes of astronomical interest such as stars and nebulae involve gases. Fluid computer models are often coupled with radiative transfer, (Newtonian) gravity, nuclear physics and (general) relativity to study highly energetic phenomena such as supernovae, relativistic jets, active galaxies and gamma-ray bursts and are also used to model stellar structure, planetary formation, evolution of stars and of galaxies, and exotic objects such as neutron stars, pulsars, magnetars and black holes. Computer simulations are often the only means to study stellar collisions, galaxy mergers, as well as galactic and black hole interactions.
In recent years the field has made increasing use of parallel and high performance computers.
Tools
Computational astrophysics as a field makes extensive use of software and hardware technologies. These systems are often highly specialized and made by dedicated professionals, and so generally find limited popularity in the wider (computational) physics community.
Hardware
Like other similar fields, computational astrophysics makes extensive use of supercomputers and computer clusters. Even on the scale of a normal desktop it is possible to accelerate the hardware. Perhaps the most notable computer architecture built specially for astrophysics is the GRAPE (gravity pipe) in Japan.
As of 2010, the biggest N-body simulations, such as DEGIMA, do general-purpose computing on graphics processing units.
Software
Many codes and software packages exist, along with the various researchers and consortia who maintain them. Most codes tend to be n-body packages or fluid solvers of some sort. Examples of n-body codes include ChaNGa, MODEST, nbodylab.org and Starlab.
For hydrodynamics there is usually a coupling between codes, as the motion of the fluids usually has some other effect (such as gravity, or radiation) in astrophysical situations. For example, for SPH/N-body there is GADGET and SWIFT; for grid-based/N-body RAMSES, ENZO, FLASH, and ART.
AMUSE takes a different approach (called Noah's Ark) than the other packages by providing an interface structure to a large number of publicly available astronomical codes for addressing stellar dynamics, stellar evolution, hydrodynamics and radiative transport.
See also
Millennium Simulation, Eris, and Bolshoi cosmological simulation are astrophysical supercomputer simulations
Plasma modeling
Computational physics
Theoretical astronomy and theoretical astrophysics
Center for Computational Relativity and Gravitation
University of California High-Performance AstroComputing Center
References
Further reading
Beginner/intermediate level:
Astrophysics with a PC: An Introduction to Computational Astrophysics, Paul Hellings. Willmann-Bell; 1st English edition.
Practical Astronomy with your Calculator, Peter Duffett-Smith. Cambridge University Press; 3rd edition 1988.
Advanced/graduate level:
Numerical Methods in Astrophysics: An Introduction (Series in Astronomy and Astrophysics): Peter Bodenheimer, Gregory P. Laughlin, Michal Rozyczka, Harold. W Yorke. Taylor & Francis, 2006.
Open cluster membership probability based on K-means clustering algorithm, Mohamed Abd El Aziz & I. M. Selim & A. Essam, Exp Astron., 2016
Automatic Detection of Galaxy Type From Datasets of Galaxies Image Based on Image Retrieval Approach, Mohamed Abd El Aziz, I. M. Selim & Shengwu Xiong Scientific Reports 7, 4463, 2017
Journals (Open Access):
Living Reviews in Computational Astrophysics
Computational Astrophysics and Cosmology
Astrophysics
Computational physics
Computational fields of study | Computational astrophysics | [
"Physics",
"Astronomy",
"Technology"
] | 1,045 | [
"Computational fields of study",
"Astrophysics",
"Computational physics",
"Computing and society",
"Astronomical sub-disciplines"
] |
34,243,712 | https://en.wikipedia.org/wiki/Beryllium%20borohydride | Beryllium borohydride is an inorganic compound with the chemical formula .
Preparation
Beryllium borohydride is formed by the reaction of beryllium hydride with diborane in an ether solution:
BeH2 + B2H6 → Be(BH4)2
It can also be formed by the reaction of beryllium chloride with lithium borohydride in a sealed tube at 120 °C:
BeCl2 + 2 LiBH4 → Be(BH4)2 + 2 LiCl
Structure
The chemical formula of beryllium borohydride can be written as Be(BH4)2. The crystal structure is made up of a helical polymer of beryllium and borohydride structural units. The borohydride ions, BH4−, adopt a tetrahedral geometry. Beryllium is 6-coordinate and adopts a distorted trigonal prismatic geometry.
Application
The purest beryllium hydride is obtained by the reaction of triphenylphosphine, PPh3, with beryllium borohydride at 180 °C:
Be(BH4)2 + 2 PPh3 → BeH2 + 2 Ph3P·BH3
References
Beryllium compounds
Borohydrides | Beryllium borohydride | [
"Chemistry"
] | 194 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
34,244,592 | https://en.wikipedia.org/wiki/Mapper%282%29 | Mapper(2) is a database of transcription factor binding sites in multiple genomes.
See also
Transcription factor
References
External links
http://genome.ufl.edu/mapperdb (not available)
Biological databases
Gene expression | Mapper(2) | [
"Chemistry",
"Biology"
] | 50 | [
"Gene expression",
"Bioinformatics",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry",
"Biological databases"
] |
34,245,192 | https://en.wikipedia.org/wiki/Political%20demography | Political demography is the study of the relationship between politics and population change. Population change is driven by classic demographic mechanisms – birth, death, age structure, and migration.
However, in political demography, there is always scope for assimilation as well as boundary and identity change, which can redraw the boundaries of populations in a way that is not possible with biological populations. Typically, political-demographic projections can account for both demographic factors and transitions caused by social change. A notable leader in the area of sub-state population projection is the World Population Program of the International Institute of Applied Systems Analysis (IIASA) in Laxenburg, Austria.
Some of the issues which are studied in the context of political demography are: surges of young people in the developing world, significantly increasing aging in the developed world, and the impact of increasing urbanization. Political demographers study issues like population growth in a political context. A population's growth is impacted by the relative balance of variables like mortality, fertility and immigration.
Many of the present world's most powerful nations are aging quickly, largely as a result of major decreases in fertility rates and major increases in life expectancies. As the labor pools in these nations shrink, and spending on the elderly increases, their economies are likely to slow down. By 2050, the workforce in Japan and Russia is predicted to decrease by more than 30 percent, while the German workforce is expected to decline by 25 percent by that year. The governments of these countries have made financial commitments to the elderly in their populations which will consume huge percentages of their national GDP. For example, based on current numbers, more than 25% of the national GDPs of Japan, France and Germany will be consumed by these commitments by 2040.
Political demography and evolution
Differential reproductive success is the mechanism through which evolution takes place. For much of human history this occurred through migrations and wars of conquest, with disease and mortality through famine and war affecting the power of empires, tribes and city-states. Differential fertility also played a part, though typically reflected resource availability rather than cultural factors. Though culture has largely usurped this role, some claim that differential demography continues to affect cultural and political evolution.
Uneven transition, democratization and globalization
The demographic transition from the late eighteenth century onwards opened up the possibility that significant change could occur within and between political units. Though the writings of Polybius and Cicero in classical times bemoaned the low fertility of the patrician elite as against their more fecund barbarian competitors, differential fertility has probably only recently emerged as a central aspect of political demography.
This has come about due to medical advances which have lowered infant mortality while conquest migrations have faded as a factor in world history. Differences in immunity levels to infectious diseases between populations also play no major role in our age of modern medicine and widespread exposure to a common disease pool.
It is not so much the trajectory of demographic transition that counts as the fact that it has become more intense and uneven in the late twentieth century as it has spread into the developing world. Uneven transitions lend themselves to differential growth rates between contending groups. These changes are in turn, magnified by democratization, which entrenches majority rule and privileges the power of numbers in politics as never before.
Indeed, in many new democracies riven by ethnic and religious conflicts, elections are akin to censuses while groups seek to 'win the census'. Ethnic parties struggle to increase their constituencies through pronatalism ('wombfare'), oppose family planning, and contest census and election results.
Ethnic, national and civilizational conflict
One branch of political demography examines how differences in population growth between nation-states, religions, ethnic groups and civilizations affects the balance of power between these political actors. For instance, Ethiopia was projected to have a larger population than Russia in 2020, and while there were 3.5 Europeans per African in 1900, there will be four Africans for each European in 2050. Population has always counted for national power to some degree and it is unlikely that these changes will leave the world system unaffected.
The same dynamic can be witnessed within countries due to differential ethnic population growth. Irish Catholics in Northern Ireland increased their share of the population through higher birthrates and the momentum of a youthful age structure from 35 to nearly 50 percent of the total between 1965 and 2011. Similar changes, also affected by in- and out-migration, have taken place in, amongst others, the United States (Hispanics), Israel-Palestine (Jews and Arabs), Kosovo (Albanians), Lebanon (Shia, with decline of Christians) and Nagorno-Karabakh (Armenians).
In the US, the growth of Hispanics and Asians, and Hispanics' youthful age profile as against whites, has the potential to tilt more states away from the Republican Party. On the other hand, the fertility advantage of conservative over liberal white voters is significant and rising, thus the Republicans are poised to win a larger share of the white vote - especially over the very long run of 50 to 100 years.
According to London-based scholar Eric Kaufmann, the high birth rates of religious fundamentalists as against seculars and moderates have contributed to an increase in religious fundamentalism and a decrease of moderate religion within religious groups, as in Israel, the US and the Muslim Middle East. Kaufmann, armed with empirical data from a number of countries, also posits that this will be further bolstered by the higher retention rates of religious fundamentalists, with individuals in religiously fundamentalist households less likely to become religiously non-observant than others.
Age structure and politics
Youth bulges
A second avenue of inquiry considers age structures: be these 'youth bulges' or aging populations. Young populations are associated with a ratio of dependents to producers: a high proportion of the population under age 16 puts pressure on resources. A 'youth bulge' of those in the 16-30 bracket creates a different set of problems.
A large population of adolescents entering the labor force and electorate strains at the seams of the economy and polity, which were designed for smaller populations. This creates unemployment and alienation unless new opportunities are created quickly enough - in which case a 'demographic dividend' accrues because productive workers outweigh young and elderly dependents. Yet the 16-30 age range is associated with risk-taking, especially among males.
In general, youth bulges in developing countries are associated with higher unemployment and, as a result, a heightened risk of violence and political instability. For some, the transition to more mature age structures is almost a sine qua non for democratization.
Population aging
Population aging presents the obverse effect: older populations are less risk-taking and less prone to violence and instability. However, like those under-16, they place great strain on the social safety net, especially in countries committed to old-age provision and high-quality medical care.
Some observers believe that the advent of a much older planet, courtesy of below-replacement fertility in Europe, North America, China and much of the rest of Asia and Latin America, will produce a 'geriatric peace'. Others are concerned that population aging will bankrupt the welfare state and handicap western liberal democracies' ability to project power abroad to defend their interests. A more cautious climate could also herald slower economic growth, less entrepreneurship and reduced productivity in mature democracies.
However, some argue that older people in the developed world have much higher productivity, human capital and better health than their counterparts in developing countries, so the economic effects of population aging will be largely mitigated.
Other branches of political demography
Other areas in political demography address the political impact of skewed sex ratios (typically caused by female infanticide or neglect), urbanization, global migration, and the links between population, environment and conflict.
Emerging discipline
The study of political demography is in its early stages and can be traced back to the works of figures such as Jack Goldstone, who is often considered to be the father of political demography. Since 2000 the subject has drawn the attention of policymakers and journalists and is now emerging as an academic subfield. Panels on political demography appear at demography conferences such as the Population Association of America (PAA) and the European Association for Population Studies (EAPS). There is now a political demography section at the International Studies Association. A number of important international conferences have also taken place since 2006 on the subject.
See also
Natalism
Religious demography
Quiverfull
Jack Goldstone
Philip Longman
Myron Weiner
Ben Wattenberg
World population
Demographic engineering
References
External links
The Political Demography of Ethnicity, Nationalism and Religion Eric Kaufmann's website
Webcast of book launch of Political Demography, at Woodrow Wilson Center, Jan. 10, 2012 - featuring Jack Goldstone, Eric Kaufmann, Mark Haas, Elizabeth Leahy, and chaired by Geoff Dabelko
Demography and Security: The Politics of Population Change, conference at Weatherhead Center, Harvard University, May 7-8, 2009
International Studies Association, Political Demography Section
Shall the Religious Inherit the Earth?: Religiosity, Fertility and Politics
Ruy Teixeira US political demographics website
William Frey US political demographics site
Demography
Population | Political demography | [
"Environmental_science"
] | 1,891 | [
"Demography",
"Environmental social science"
] |
34,248,077 | https://en.wikipedia.org/wiki/Bel%E2%80%93Robinson%20tensor | In general relativity and differential geometry, the Bel–Robinson tensor is a tensor defined in the abstract index notation by:
T_{abcd} = C_{aecf} C_b{}^e{}_d{}^f + {}^{*}C_{aecf} {}^{*}C_b{}^e{}_d{}^f
Alternatively,
T_{abcd} = C_{aecf} C_b{}^e{}_d{}^f + C_{aedf} C_b{}^e{}_c{}^f − \tfrac{1}{8} g_{ab} g_{cd} C_{efgh} C^{efgh}
where C_{abcd} is the Weyl tensor and {}^{*}C_{abcd} is its dual. It was introduced by Lluís Bel in 1959. The Bel–Robinson tensor is constructed from the Weyl tensor in a manner analogous to the way the electromagnetic stress–energy tensor is built from the electromagnetic tensor. Like the electromagnetic stress–energy tensor, the Bel–Robinson tensor is totally symmetric and traceless:
T_{abcd} = T_{(abcd)}, \qquad g^{ab} T_{abcd} = 0
In general relativity, there is no unique definition of the local energy of the gravitational field. The Bel–Robinson tensor is a possible definition for local energy, since it can be shown that whenever the Ricci tensor vanishes (i.e. in vacuum), the Bel–Robinson tensor is divergence-free:
\nabla^a T_{abcd} = 0
References
Tensors in general relativity
Differential geometry | Bel–Robinson tensor | [
"Physics",
"Engineering"
] | 168 | [
"Tensors",
"Physical quantities",
"Tensor physical quantities",
"Tensors in general relativity",
"Relativity stubs",
"Theory of relativity"
] |
34,248,218 | https://en.wikipedia.org/wiki/Buoyant%20density%20centrifugation | Buoyant density centrifugation (also isopycnic centrifugation or equilibrium density-gradient centrifugation) uses the concept of buoyancy to separate molecules in solution by their differences in density.
Implementation
Historically a cesium chloride (CsCl) solution was often used, but more commonly used density gradient media today are sucrose and Percoll. The application requires a solution with high density and yet relatively low viscosity, and CsCl suits it because of its high solubility in water, its high density owing to the large mass of Cs, and the low viscosity and high stability of CsCl solutions.
The sample is put on top of the solution, and the tube is then spun at a very high speed for an extended time, at times lasting days. The CsCl molecules become densely packed toward the bottom, so a continuous gradient of layers of different densities (and CsCl concentrations) forms. Each sample molecule migrates to the level where its density and the local CsCl density are the same, and there the molecules form a sharp, distinctive band.
Isotope separation
This method separates molecules very sharply, so sharply that it can even separate different molecular isotopes from one another. It was utilized in the Meselson–Stahl experiment.
DNA separation
The buoyant density of the majority of DNA is 1.7 g/cm3, which is equal to the density of a 6 M CsCl solution. The buoyant density of DNA changes with its GC content. The term "satellite DNA" refers to small bands of repetitive DNA sequences with a distinct base composition floating above (A+T-rich) or below (G+C-rich) the main-component DNA.
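The dependence on GC content is often summarized by the empirical relation of Schildkraut and co-workers, ρ = 1.660 + 0.098 × f_GC g/cm3, where f_GC is the GC fraction; the short sketch below applies it to made-up sequences purely for illustration.

```python
# Sketch of the commonly quoted empirical relation linking DNA buoyant
# density in CsCl to GC content: rho = 1.660 + 0.098 * f_GC (g/cm^3).
# The sequences below are made up for illustration.
def buoyant_density(seq):
    f_gc = sum(base in "GC" for base in seq.upper()) / len(seq)
    return 1.660 + 0.098 * f_gc

main_band = "ATGCATGCATGCATGCATGC"      # 50% GC -> ~1.709 g/cm^3
at_satellite = "AATTAATTAATTAATTAATT"   # 0% GC  -> ~1.660 g/cm^3
for name, s in [("main band", main_band), ("AT-rich satellite", at_satellite)]:
    print(f"{name}: {buoyant_density(s):.3f} g/cm^3")
```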
See also
Isopycnic
Satellite DNA
References
Further reading
Separation processes
Laboratory techniques | Buoyant density centrifugation | [
"Chemistry"
] | 371 | [
"nan",
"Separation processes"
] |
42,452,496 | https://en.wikipedia.org/wiki/Descent%20algebra | In algebra, Solomon's descent algebra of a Coxeter group is a subalgebra of the integral group ring of the Coxeter group, introduced by Louis Solomon in 1976.
The descent algebra of the symmetric group
In the special case of the symmetric group Sn, the descent algebra is given by the elements of the group ring such that permutations with the same descent set have the same coefficients. (The descent set of a permutation σ consists of the indices i such that σ(i) > σ(i+1).) The descent algebra of the symmetric group Sn has dimension 2^(n−1). It contains the peak algebra as a left ideal.
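A quick sanity check of that dimension count, written for illustration only, enumerates the descent sets realized in Sn and confirms there are 2^(n−1) of them:

```python
# Illustrative check: permutations of S_n realise all 2**(n-1) possible
# descent sets, matching the dimension of the descent algebra stated above.
from itertools import permutations

def descent_set(sigma):
    # 1-based indices i with sigma(i) > sigma(i+1)
    return frozenset(i + 1 for i in range(len(sigma) - 1)
                     if sigma[i] > sigma[i + 1])

for n in range(1, 7):
    sets = {descent_set(p) for p in permutations(range(1, n + 1))}
    print(n, len(sets), 2 ** (n - 1))   # the last two columns agree
```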
References
Reflection groups | Descent algebra | [
"Physics"
] | 139 | [
"Euclidean symmetries",
"Reflection groups",
"Symmetry"
] |
46,192,622 | https://en.wikipedia.org/wiki/Faradaic%20impedance | In electrochemistry, faradaic impedance is the resistance and capacitance acting jointly at the surface of an electrode of an electrochemical cell. The cell may be operating as a galvanic cell generating an electric current or, inversely, as an electrolytic cell using an electric current to drive a chemical reaction. In the simplest nontrivial case, faradaic impedance is modeled as a single resistor and a single capacitor connected in parallel, as opposed, say, to in series or as a transmission line with multiple resistors and capacitors.
Mechanism
The resistance arises from the prevailing limitations on the availability (local concentration) and mobility of the ions whose motion between the electrolyte and the electrode constitutes the faradaic current. The capacitance is that of the capacitor formed by the electrolyte and the electrode, separated by the Debye screening length and giving rise to the double-layer capacitance at the electrolyte-electrode interface. When the supply of ions does not meet the demand created by the potential, the resistance increases, the effect being that of a constant current source or sink, and the cell is then said to be polarized at that electrode. The extent of polarization, and hence the faradaic impedance, can be controlled by varying the concentration of electrolyte ions and the temperature, by stirring the electrolyte, etc. The chemistry of the electrolyte-electrode interface is also a crucial factor.
Electrodes constructed as smooth planar sheets of metal have the least surface area. The area can be increased by using a woven mesh or porous or sintered metals. In this case faradaic impedance may be more appropriately modeled as a transmission line consisting of resistors in series coupled by capacitors in parallel.
Dielectric spectroscopy
Over the past two decades faradaic impedance has emerged as the basis for an important technique in a form of spectral analysis applicable to a wide variety of materials. This technique depends on the capacitive component of faradaic impedance. Whereas the resistive component is independent of frequency and can be measured with DC, the impedance of the capacitive component is infinite at DC (zero admittance) and decreases inversely with the frequency of an applied AC signal. Varying this frequency while monitoring the faradaic impedance provides a method of spectral analysis of the composition of the materials at the electrode-electrolyte interface, in particular their electric dipole moments in their role as the dielectric of a capacitor. The technique yields insights into battery design, the performance of novel fuel cell designs, biomolecular interactions, etc.
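The frequency behaviour just described follows directly from the parallel R-C model; the sketch below evaluates Z(ω) = R / (1 + jωRC) over a frequency sweep, with component values chosen purely as illustrative assumptions.

```python
# Sketch of the parallel R-C faradaic impedance model: Z(w) = R/(1 + j*w*R*C).
# Component values are illustrative assumptions.
import numpy as np

R = 1_000.0      # charge-transfer resistance, ohms (assumed)
C = 20e-6        # double-layer capacitance, farads (assumed)

freq = np.logspace(-1, 5, 7)             # 0.1 Hz .. 100 kHz
omega = 2 * np.pi * freq
Z = R / (1 + 1j * omega * R * C)

for f, z in zip(freq, Z):
    print(f"{f:10.1f} Hz  |Z| = {abs(z):8.1f} ohm  "
          f"phase = {np.degrees(np.angle(z)):6.1f} deg")
```

At low frequency |Z| approaches the resistance R, while at high frequency the capacitor dominates and |Z| falls off as 1/(ωC), which is the frequency dependence exploited in dielectric spectroscopy.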
See also
Dielectric spectroscopy
Electrochemical cell
Faradaic current
References
Electrochemistry | Faradaic impedance | [
"Chemistry"
] | 561 | [
"Electrochemistry",
"Physical chemistry stubs",
"Electrochemistry stubs"
] |
46,194,161 | https://en.wikipedia.org/wiki/NodeMCU | NodeMCU is a low-cost open source IoT platform. It initially included firmware which runs on the ESP8266 Wi-Fi SoC from Espressif Systems, and hardware which was based on the ESP-12 module. Later, support for the ESP32 32-bit MCU was added.
Overview
NodeMCU is an open source firmware for which open source prototyping board designs are available. The name "NodeMCU" combines "node" and "MCU" (micro-controller unit). Strictly speaking, the term "NodeMCU" refers to the firmware rather than the associated development kits.
Both the firmware and prototyping board designs are open source.
The firmware uses the Lua scripting language. The firmware is based on the eLua project, and built on the Espressif Non-OS SDK for ESP8266. It uses many open source projects, such as lua-cjson and SPIFFS, a flash file system for embedded controllers. Due to resource constraints, users need to select the modules relevant for their project and build a firmware tailored to their needs. Support for the 32-bit ESP32 has also been implemented.
The prototyping hardware typically used is a circuit board functioning as a dual in-line package (DIP) which integrates a USB controller with a smaller surface-mounted board containing the MCU and antenna. The choice of the DIP format allows for easy prototyping on breadboards. The design was initially based on the ESP-12 module of the ESP8266, which is a Wi-Fi SoC integrated with a Tensilica Xtensa LX106 core, widely used in IoT applications (see related projects).
Types
Two versions of the NodeMCU development board are available, 0.9 and 1.0: version 0.9 contains the ESP-12 module, while version 1.0 contains the ESP-12E, where "E" stands for "Enhanced".
History
NodeMCU was created shortly after the ESP8266 came out. On December 30, 2013, Espressif Systems began production of the ESP8266. NodeMCU started on 13 Oct 2014, when Hong committed the first file of nodemcu-firmware to GitHub. Two months later, the project expanded to include an open-hardware platform when developer Huang R committed the gerber file of an ESP8266 board, named devkit v0.9. Later that month, Tuan PM ported the MQTT client library from Contiki to the ESP8266 SoC platform and committed it to the NodeMCU project; NodeMCU was then able to support the MQTT IoT protocol, using Lua to access the MQTT broker. Another important update was made on 30 Jan 2015, when Devsaurus ported u8glib to the NodeMCU project, enabling NodeMCU to easily drive LCD, OLED and even VGA displays.
In the summer of 2015 the original creators abandoned the firmware project and a group of independent contributors took over. By the summer of 2016 the NodeMCU included more than 40 different modules.
Related projects
ESP8266 Arduino Core
As Arduino.cc began developing new MCU boards based on non-AVR processors, such as the ARM/SAM MCU used in the Arduino Due, they needed to modify the Arduino IDE so that it could support alternate toolchains, allowing Arduino C/C++ to be compiled for these new processors. They did this with the introduction of the Board Manager and the SAM Core. A "core" is the collection of software components required by the Board Manager and the Arduino IDE to compile an Arduino C/C++ source file for the target MCU's machine language. Some ESP8266 enthusiasts developed an Arduino core for the ESP8266 WiFi SoC, popularly called the "ESP8266 Core for the Arduino IDE". This has become a leading software development platform for the various ESP8266-based modules and development boards, including NodeMCUs.
Pins
NodeMCU provides access to the GPIO (General Purpose Input/Output) and a pin mapping table is part of the API documentation.
[*] D0 (GPIO16) can only be used for GPIO read/write. It does not support open-drain/interrupt/PWM/I²C or 1-Wire.
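For reference, the mapping between the devkit's silkscreen labels and ESP8266 GPIO numbers can be expressed as a small lookup table; the Python sketch below is an added illustration of that convention (the values reflect the widely published v1.0 pin map and should be verified against the official API documentation):

```python
# Commonly cited NodeMCU devkit (v1.0) pin-label-to-GPIO map.
# Assumed/conventional values -- verify against the board's documentation.
NODEMCU_PINS = {
    "D0": 16,  # GPIO read/write only; no interrupt/PWM/I2C/1-Wire
    "D1": 5,
    "D2": 4,
    "D3": 0,
    "D4": 2,
    "D5": 14,
    "D6": 12,
    "D7": 13,
    "D8": 15,
}

def gpio(label: str) -> int:
    """Translate a silkscreen label such as 'D1' to its GPIO number."""
    return NODEMCU_PINS[label]

print(gpio("D0"))  # 16
```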
See also
MCU (Micro Controller Unit)
References
External links
Open hardware electronic devices
Internet of things
Robotics hardware | NodeMCU | [
"Engineering"
] | 983 | [
"Robotics hardware",
"Robotics engineering"
] |
46,197,689 | https://en.wikipedia.org/wiki/Holmberg%2015A | Holmberg 15A (abbreviated to Holm 15A) is a supergiant elliptical galaxy and the central dominant galaxy of the Abell 85 galaxy cluster in the constellation Cetus, about 700 million light-years from Earth. It was discovered by Erik Holmberg. It became well known when it was reported to have the largest core ever observed in a galaxy, spanning some 15,000 light-years; however, this was subsequently refuted.
Supermassive black hole
It has been postulated that the primary component of the galactic core is a supermassive black hole with a mass of 40 billion solar masses, although no direct measurement has yet been made. Previous estimates by Lauer et al. yielded a mass value as high as 310 billion solar masses using the break radius method. Kormendy and Bender gave a value of 260 billion solar masses in a 2009 paper. Lower estimates were given by Kormendy and Ho in 2013, at 2.1 and 9.2 billion solar masses. The paper by Lopez-Cruz et al. stated: "Therefore, we conservatively suggest that Holm 15A hosts an SMBH with a mass of ~10¹⁰ M☉." Kormendy and Ho derived these estimates using the M–sigma relation and the size of the outer bulge of the galaxy, which are indirect methods. Rusli et al. derived a value of 170 billion solar masses using break radius methodology. In addition, the velocity dispersion of the dark matter halo of Abell 85 is ~750 km/s, which could be explained only by a black hole with a mass greater than 150 billion solar masses, although Kormendy and Ho stated that "dark matter halos are scale-free, and the SMBH-dark matter coevolution is independent from the effects of baryons". This makes it one of the most massive black holes ever discovered, and it is classified as an ultramassive black hole.
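For context on the indirect methods mentioned above, M–σ-type estimates scale a galaxy's stellar velocity dispersion to a black-hole mass; one commonly quoted calibration takes roughly the following form (the coefficient and exponent here are stated from memory as assumptions, to be checked against Kormendy & Ho 2013, not taken from this article):

$$\frac{M_{\mathrm{BH}}}{10^{9}\,M_{\odot}} \approx 0.31\left(\frac{\sigma}{200\ \mathrm{km\,s^{-1}}}\right)^{4.38}$$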
See also
List of galaxies
References
Measures the SMBH mass as 4.0×10¹⁰ solar masses.
Supermassive black holes
Cetus
Elliptical galaxies
002501
Astronomical objects discovered in 1937 | Holmberg 15A | [
"Physics",
"Astronomy"
] | 424 | [
"Black holes",
"Unsolved problems in physics",
"Supermassive black holes",
"Constellations",
"Cetus"
] |
50,162,037 | https://en.wikipedia.org/wiki/Quaternionic%20polytope | In geometry, a quaternionic polytope is a generalization of a polytope in real space to an analogous structure in a quaternionic module, where each real dimension is accompanied by three imaginary ones. Similarly to complex polytopes, points are not ordered and there is no sense of "between", and thus a quaternionic polytope may be understood as an arrangement of connected points, lines, planes and so on, where every point is the junction of multiple lines, every line of multiple planes, and so on. Likewise, each line must contain multiple points, each plane multiple lines, and so on. Since the quaternions are non-commutative, a convention must be made for the multiplication of vectors by scalars, which is usually in favour of left-multiplication.
As is the case for the complex polytopes, the only quaternionic polytopes to have been systematically studied are the regular ones. Like the real and complex regular polytopes, their symmetry groups may be described as reflection groups. For example, the regular quaternionic lines are in a one-to-one correspondence with the finite subgroups of U1(H): the binary cyclic groups, binary dihedral groups, binary tetrahedral group, binary octahedral group, and binary icosahedral group.
References
Quaternions | Quaternionic polytope | [
"Mathematics"
] | 287 | [
"Geometry",
"Geometry stubs"
] |
50,165,057 | https://en.wikipedia.org/wiki/NKX%202-9 | Nkx 2.9 is a transcription factor responsible for the formation of the branchial and visceral motor neuron subtypes of cranial motor nerves in vertebrates. Nkx 2.9 works together with another transcription factor, Nkx 2.2, to direct neural progenitor cells to their cell fate.
Gene defects
Cell lineage analysis of Nkx 2.9 and Nkx 2.2 double knockout (deficient) mouse embryos shows that cranial nerve alterations are a result of changes in neuronal progenitor cell fate. The trigeminal nerve is not affected in the double knockout mouse embryos, indicating that the cell fate alteration is limited to the caudal hindbrain, and that the Nkx 2.9 and Nkx 2.2 proteins do not play a role in branchial or visceral motor neuron development in the portion of the hindbrain superior to neuromere 4.
Disturbance of Nkx 2.9 and Nkx 2.2 in mouse embryos results in the total loss of the spinal accessory and vagal motor nerves, and a partial loss of the glossopharyngeal and facial motor nerves. However, the somatic hypoglossal and abducens motor nerves are not disrupted.
References
Transcription factors
Developmental neuroscience | NKX 2-9 | [
"Chemistry",
"Biology"
] | 267 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
50,165,151 | https://en.wikipedia.org/wiki/Transient%20receptor%20potential%20calcium%20channel%20family | The transient receptor potential Ca2+ channel (TRP-CC) family (TC# 1.A.4) is a member of the voltage-gated ion channel (VIC) superfamily and consists of cation channels conserved from worms to humans. The TRP-CC family also consists of seven subfamilies (TRPC, TRPV, TRPM, TRPN, TRPA, TRPP, and TRPML) based on their amino acid sequence homology:
the canonical or classic TRPs,
the vanilloid receptor TRPs,
the melastatin or long TRPs,
ankyrin (whose only member is ankyrin-like transmembrane protein 1 [TRPA1]),
TRPN, named after the no mechanoreceptor potential C (nompC) channel, and the more distant cousins,
the polycystins
and mucolipins.
A representative list of members belonging to the TRP-CC family can be found in the Transporter Classification Database.
Function
Members of the TRP-CC family are characterized as cellular sensors with polymodal activation and gating properties. Many TRP channels are activated by a variety of different stimuli and function as signal integrators. These mammalian proteins have been tabulated revealing their accepted designations, activators and inhibitors, putative interacting proteins and proposed functions. The founding members of the TRP superfamily are the TRPC (TRP canonical) channels, which can be activated following the stimulation of phospholipase C and/or depletion of internal calcium stores. However, the precise mechanisms leading to TRPC activation remain unclear. TRPC channels regulate nicotine-dependent behavior.
One member of the TRP-CC family, TRP-PLIK (1862 aas; AF346629), has been implicated in the regulation of cell division. It has an N-terminal TRP-CC-like sequence and a C-terminal protein kinase-like sequence. It was shown to autophosphorylate and exhibits an ATP phosphorylation-dependent, non-selective, Ca2+-permeable, outward rectifying conductance. Another long homologue, Melastatin, is associated with melanocytic tumor progression whereas another homologue, MTR1, is associated with Beckwith-Wiedemann syndrome and a predisposition for neoplasia. Each of these proteins may be present in the cell as several splice variants.
The ability to detect variations in humidity is critical for many animals. Birds, reptiles and insects all show preferences for specific humidities that influence their mating, reproduction and geographic distribution. Because of their large surface area to volume ratio, insects are particularly sensitive to humidity, and its detection can influence their survival. Two types of hygroreceptors exist in insects: one responds to an increase (moist receptor) and the other to a reduction (dry receptor) in humidity. Although previous data indicated that mechanosensation might contribute to hygrosensation, the cellular basis of hygrosensation and the genes involved in detecting humidity remain unknown. To better understand the molecular bases of humidity sensing, researchers have investigated several genes encoding channels associated with mechanosensation, thermosensing or water transport.
Transport reaction
The generalized transport reaction catalyzed by TRP-CC family members is:
Ca2+ (out) ⇌ Ca2+ (in)
or
C+ and Ca2+ (out) ⇌ C+ and Ca2+ (in).
Anesthesia
Most local anaesthetics used clinically are relatively hydrophobic molecules that gain access to their blocking site on the sodium channel by diffusing into or through the cell membrane. These anaesthetics block sodium channels and the excitability of neurons. Binshtok et al. (2007) tested the possibility that the excitability of primary sensory nociceptor (pain-sensing) neurons could be blocked by introducing the charged, membrane-impermeant lidocaine derivative QX-314 through the pore of the noxious-heat-sensitive TRPV1 channel (TC #1.A.4.2.1). They found that charged sodium-channel blockers can be targeted into nociceptors by the application of TRPV1 agonists to produce a pain-specific local anaesthesia. QX-314 applied externally had no effect on the activity of sodium channels in small sensory neurons when applied alone, but when applied in the presence of the TRPV1 agonist capsaicin, QX-314 blocked sodium channels and inhibited excitability.
Structure
Members of the VIC (TC# 1.A.1), RIR-CaC (TC# 1.A.3) and TRP-CC (TC# 1.A.4) families have similar transmembrane domain structures, but very different cytosolic domain structures.
The proteins of the TRP-CC family exhibit the same topological organization with a probable KcsA-type 3-dimensional structure. They consist of about 700-800 (VR1, SIC or ECaC) or 1300 (TRP proteins) amino acyl residues (aas) with six transmembrane spanners (TMSs) as well as a short hydrophobic 'loop' region between TMSs 5 and 6. This loop region may dip into the membrane and contribute to the ion permeation pathway.
All members of the vanilloid family of TRP channels (TRPV) possess an N-terminal ankyrin repeat domain (ARD), which regulates calcium uptake and homeostasis. It is essential for channel assembly and regulation. The 1.7 Å crystal structure of the TRPV6-ARD revealed conserved structural elements unique to the ARDs of TRPV proteins. First, a large twist between the fourth and fifth repeats is induced by residues conserved in all TRPV ARDs. Second, the third finger loop is the most variable region in sequence, length and conformation. In TRPV6, a number of putative regulatory phosphorylation sites map to the base of this third finger. The TRPV6-ARD does not assemble as a tetramer and is monomeric in solution. Voltage sensing in thermo-TRP channels has been reviewed by Brauchi et al.
TRP channels have six TMS helices. These channels can be classified into six groups: TRPV (1-6), TRPM (1-8), TRPC (1-7), TRPA1, TRPP (1-3), and TRPML (1-3). TRP channels are involved in intracellular calcium mobilization and reabsorption. TRP channelopathies are involved in neurodegenerative disorders, diabetes mellitus, bowel diseases, epilepsy and cancer. Some TRP receptors act as molecular thermometers of the body. Some of them also play a role in pain and nociception.
Crystal structures
There are several crystal structures available for members of the TRP-CC family. Some of these include:
VR1 (TRPV1)
TRPV2 (also known as VRL-1)
Transient receptor potential cation channel subfamily A member 1 (TRPA1)
See also
Voltage-gated ion channel
Ion channel
Transporter Classification Database
References
Protein families
Membrane proteins
Transmembrane proteins
Transmembrane transporters
Transport proteins
Integral membrane proteins | Transient receptor potential calcium channel family | [
"Biology"
] | 1,551 | [
"Protein families",
"Protein classification",
"Membrane proteins"
] |
36,817,859 | https://en.wikipedia.org/wiki/The%20Void%20%28philosophy%29 | The concept of "The Void" in philosophy encompasses the ideas of nothingness and emptiness, a notion that has been interpreted and debated across various schools of metaphysics. In ancient Greek philosophy, the Void was discussed by thinkers like Democritus, who saw it as a necessary space for atoms to move, thereby enabling the existence of matter. Contrasting this, Aristotle famously denied the existence of a true Void, arguing that nature inherently avoids a vacuum.
In Eastern philosophical traditions, the Void takes on significant spiritual and metaphysical meanings. In Buddhism, Śūnyatā refers to the emptiness inherent in all things, a fundamental concept in understanding the nature of reality. In Taoism, the Void is represented by Wuji, the undifferentiated state from which all existence emerges, embodying both the potential for creation and the absence of form.
Throughout the history of Western thought, the Void has also been explored in the context of existentialism and nihilism, where it often symbolizes the absence of intrinsic meaning in life and the human condition's confrontation with nothingness. Modern scientific discussions have further engaged with the concept of the Void, particularly in the study of quantum mechanics and cosmology, where it is linked to ideas such as the quantum vacuum and the structure of the universe.
In Western esotericism, aphairesis ("clearing aside"), or the via negativa, is a method used to approach the transcendent 'Ground of Being' by systematically negating all finite concepts and attributes associated with the divine. This process allows mystics to move beyond the limitations of human understanding and language, ultimately seeking a direct experience of the divine as the ineffable source of all existence, beyond any specific attributes or definitions.
Historical background
The concept of the Void has its origins in ancient Greek philosophy, where it was central to discussions on the nature of the cosmos and space. Parmenides suggested it did not exist and used this to argue for the non-existence of change, motion, and differentiation, among other things. In response to Parmenides, Democritus, one of the early proponents of atomism, posited that the universe was composed of atoms moving through the Void. According to Democritus, the Void was a necessary empty space that allowed for the movement and interaction of atoms, making it essential for the existence of matter itself. This view framed the Void as a real and foundational component of the universe, contrasting with the notion of it being mere nothingness.
Aristotle, in contrast, rejected the existence of a true Void, arguing that nature abhors a vacuum (horror vacui). In Book IV of Physics, Aristotle contended that the Void (kenon), understood as an absolute absence of matter, could not exist because it would contradict the natural laws governing movement and change. He believed that movement required a medium through which it could occur, and a completely empty space would prevent such movement. This Aristotelian view became highly influential, shaping medieval and Renaissance perspectives on the nature of space and matter.
Stoic philosophers admitted the subsistence of four incorporeals among which they included void: "Outside of the world is diffused the infinite void, which is incorporeal. By incorporeal is meant that which, though capable of being occupied by body, is not so occupied. The world has no empty space within it, but forms one united whole. This is a necessary result of the sympathy and tension which binds together things in heaven and earth." Chrysippus discusses the Void in his work On Void and in the first book of his Physical Sciences; so too Apollophanes in his Physics, Apollodorus, and Posidonius in his Physical Discourse, book ii.
During the medieval period, Christian theologians engaged with the concept of the Void from a metaphysical and theological perspective. Classical theologians like Thomas Aquinas integrated Aristotelian philosophy with Christian theology, arguing that God's omnipresence precluded the existence of a Void. For Aquinas, the idea of a Void was incompatible with the belief in a God who is present everywhere, thus reinforcing the rejection of any absolute emptiness in creation.
Despite Aristotle's rejection, the concept of the Void reemerged during the Renaissance and early modern period, particularly in the context of scientific inquiry. The development of vacuum experiments by scientists like Evangelista Torricelli in the 17th century challenged Aristotelian physics by demonstrating the possibility of creating a vacuum, thereby reigniting philosophical discussions about the nature of the Void and its place in the physical world. These experiments laid the groundwork for later scientific advancements, including the study of space and the vacuum in modern physics. There were questions as to whether the Void was truly nothing, or if it was in fact filled with something, with theories of aether being suggested in the 18th century to fill the Void.
In The Void (2007), particle physicist Frank Close discusses the concept of 'empty space' from Aristotle through Newton, Mach, Einstein and beyond (including the idea of an 'aether' and current examinations of the Higgs field).
In Eastern philosophy
The concept of the Void holds significant spiritual and metaphysical importance in Eastern philosophy, particularly in Buddhism and Taoism. While each tradition interprets the Void differently, both see it as central to understanding the nature of reality and existence.
Buddhism: Śūnyatā
In Buddhism, the concept of the Void is most closely associated with Śūnyatā, often translated as "emptiness". This idea is central to Mahayana Buddhist philosophy and is most elaborately discussed in the works of Nagarjuna, a foundational figure in the Madhyamaka school. Śūnyatā refers to the absence of inherent existence in all phenomena; nothing possesses an independent, permanent self-nature. Instead, everything exists interdependently, arising and ceasing due to a web of causes and conditions. This understanding is meant to free practitioners from attachment and the delusion of a permanent self, leading to enlightenment.
Nagarjuna's analysis in the Mūlamadhyamakakārikā (Fundamental Verses on the Middle Way) elaborates on Śūnyatā by deconstructing various concepts and phenomena to show that they lack intrinsic essence. This deconstruction is not nihilistic; rather, it opens the way to seeing reality as a dynamic interplay of conditions, without clinging to any fixed viewpoints. Śūnyatā, therefore, is both a philosophical insight and a meditative realization that leads to the understanding of the true nature of reality.
Taoism: Wuji and Taiji
In Taoism, the concept of the Void is represented by Wuji (無極), which denotes a state of undifferentiated emptiness or non-being. Wuji is the source of all existence, preceding the dualistic manifestation of Taiji (太極), the Supreme Ultimate, which gives rise to the interplay of yin and yang. This cosmological framework is central to Taoist metaphysics, where Wuji symbolizes the limitless potential and the unmanifest state from which all things emerge and to which they ultimately return.
The Tao Te Ching, attributed to Laozi, discusses the concept of the Tao (道) as the ultimate source and underlying principle of the universe, which can be understood as synonymous with Wuji. The Tao is described as something that cannot be named or defined, embodying the qualities of the Void—emptiness, potentiality, and the origin of all phenomena. This understanding of the Void as the root of existence reflects a non-dualistic view, where the apparent multiplicity of the world is ultimately grounded in an ineffable, empty source.
Zhou Dunyi, a Song dynasty philosopher, synthesized Taoist and Confucian ideas in his Taijitu shuo (Explanation of the Diagram of the Supreme Ultimate), where he describes Wuji and Taiji as interconnected aspects of the same reality. Wuji represents the boundless void from which the dynamism of Taiji emerges, leading to the generation of the yin-yang duality and, consequently, the entire cosmos.
In modern philosophy
The concept of the Void takes on new dimensions in modern philosophy, particularly in the realms of existentialism and nihilism. These philosophical movements, emerging primarily in the 19th and 20th centuries, grapple with the implications of the Void for human existence, meaning, and morality.
Nihilism and the rejection of meaning
Nihilism, particularly as articulated by Friedrich Nietzsche, presents a more radical confrontation with the Void, often characterized by the rejection of all moral, religious, and metaphysical beliefs. Nietzsche famously declared the "death of God" in The Gay Science (1882), a metaphor for the collapse of traditional values and the rise of the Void as a central concern in modernity. With the death of God, Nietzsche argues, humanity faces a profound Void—an absence of any external source of meaning or value. This leads to what Nietzsche calls "nihilism", where the previous foundations of meaning are exposed as baseless, leaving individuals in a state of existential crisis.
However, Nietzsche does not view the Void purely negatively. Instead, he sees it as an opportunity for the Übermensch (lit. 'Overman') to create new values and meanings. In this way, the Void becomes a space of potential, where the destruction of old beliefs clears the way for the creation of new ones. Nietzsche's vision of the Void is thus both a challenge and an invitation to re-evaluate and re-create meaning in a world devoid of inherent purpose.
Existentialism: The existential void
In existentialist thought, the Void often symbolizes the absence of inherent meaning in the universe and the individual's confrontation with this emptiness. Philosophers such as Albert Camus and Jean-Paul Sartre explore the Void as a fundamental aspect of the human condition, where individuals must create their own meaning in a world that offers none.
Albert Camus
Camus, in The Myth of Sisyphus (1942), elaborates on this existential dilemma by discussing the concept of the absurd—the conflict between humans' desire to find meaning and the universe's indifferent silence. For Camus, the Void is the backdrop against which the absurd plays out, as individuals grapple with the realization that life is inherently meaningless. However, rather than succumbing to despair, Camus advocates for a defiant embrace of the absurd, where one finds freedom and meaning through personal choice and action, even in the face of the Void.
Jean-Paul Sartre
Sartre, in his seminal work Being and Nothingness (1943), describes human existence as being "condemned to be free", where the Void represents the nothingness at the core of existence that individuals must confront when they realize that life has no preordained purpose. Jean-Paul Sartre's exploration of the Void is central to his existentialist philosophy. Sartre argues that consciousness itself is a form of nothingness, or néant, that introduces a fundamental gap between the self and the world. This gap creates a sense of the Void, as consciousness is constantly aware of what it is not—what it lacks or desires. Sartre describes this as a perpetual state of "lack" or "nothingness", where human beings are always confronted with their own freedom to choose, yet burdened by the responsibility that this freedom entails.
For Sartre, the Void is not just an abstract concept but an experiential reality. It manifests in moments of existential anxiety, where individuals confront the absence of any inherent meaning or purpose in life. This confrontation with the Void reveals the radical freedom that defines human existence: we are not bound by any predetermined essence or external authority, but are free to define ourselves through our choices. However, this freedom is accompanied by a sense of vertigo or anguish, as it exposes the individual to the vast, empty space of potential that they must navigate without any guarantees.
Sartre's famous statement that "existence precedes essence" encapsulates this idea. It implies that there is no pre-existing blueprint for what it means to be human; instead, individuals must create their own essence through their actions. This creation, however, occurs against the backdrop of the Void—an absence of inherent meaning that forces individuals to take full responsibility for their choices and the meanings they create.
Moreover, Sartre discusses the Void in the context of interpersonal relationships, particularly in his analysis of "the look" (le regard). When one person gazes at another, it objectifies the other, reducing them to an object within the world. This objectification creates a sense of the Void, as it strips away the subject's freedom and exposes the emptiness at the core of their being. Sartre uses this concept to illustrate how the Void operates not only on an individual level but also in social interactions, where the awareness of others' perceptions can lead to feelings of alienation and nothingness.
In science and cosmology
The scientific understanding of the Void has evolved dramatically, particularly from the 17th century onward. Evangelista Torricelli's vacuum experiments in the 1640s demonstrated the possibility of an empty space devoid of matter, challenging the longstanding Aristotelian belief that nature abhors a vacuum (horror vacui). These experiments laid the groundwork for a new understanding of the Void as a physical reality rather than a mere conceptual possibility.
The concept of the Void underwent further transformation with the rejection of the aether theory in the late 19th and early 20th centuries. Aether was once believed to be a subtle, invisible medium that filled all of space and carried light waves. However, the Michelson-Morley experiment in 1887 failed to detect any evidence of aether, leading to the theory's eventual abandonment. This shift was further reinforced by Albert Einstein's theory of relativity, which revolutionized the understanding of space itself. According to relativity, space is not a passive backdrop but a dynamic field influenced by mass and energy, fundamentally altering the traditional notion of the Void.
In the context of quantum mechanics, the Void is no longer seen as a simple vacuum but as a quantum vacuum—a field filled with fluctuating energy. As Lawrence Krauss describes it in A Universe from Nothing (2012), even "empty" space is not truly empty but contains a seething field of virtual particles that continuously pop in and out of existence. This quantum vacuum is a foundational aspect of modern physics, underlying the particles and forces that constitute the universe.
In art and literature
The concept of the Void has had a profound influence on both art and literature, where it is often used to explore themes of emptiness, the unknown, and the boundaries of human experience. Through visual and literary expressions, the Void becomes a metaphor for existential questions, psychological states, and the nature of reality itself.
Literary themes
In literature, the Void often serves as a metaphor for existential despair, the search for meaning, or the confrontation with the unknown. Samuel Beckett's Waiting for Godot (1953) is a quintessential example, where the Void is both literal and metaphorical. The play's setting is a barren, empty landscape, and the characters are caught in an endless wait for something that never arrives. The Void here represents the absence of meaning, purpose, and resolution, reflecting the existentialist idea that life is fundamentally devoid of intrinsic meaning.
Franz Kafka's works also engage deeply with the concept of the Void. In The Trial (1925), the protagonist, Josef K., finds himself entangled in a nightmarish legal system where the rules are arbitrary and the authority figures remain unseen. The Void in Kafka's work often symbolizes the oppressive and incomprehensible nature of modern life, where individuals struggle against forces that they cannot understand or control.
In more contemporary literature, the Void is explored in works like Don DeLillo's White Noise (1985), where the pervasive sense of emptiness and alienation in modern society is a central theme. The characters in White Noise are constantly bombarded by the noise of consumer culture and media, creating a metaphorical Void that reflects the absence of authentic human connection and meaning in their lives.
Artistic representations
In the visual arts, the Void is frequently represented as an absence, a space that invites contemplation or evokes a sense of the infinite. One of the most notable artists who explored the Void is Yves Klein, a French artist known for his monochrome works and his exploration of immateriality. Klein's Le Vide (The Void) exhibition in 1958 featured an empty gallery space, painted white, intended to focus the viewer's attention on the emptiness and the absence of material objects. This work challenges traditional notions of art by making the Void itself the subject of the experience.
Alberto Giacometti, another prominent artist, frequently engaged with the concept of the Void in his sculptures. His elongated figures, such as Walking Man (1960), evoke a sense of isolation and alienation, with the surrounding space emphasizing the emptiness and solitude of the figures. Giacometti's work reflects existential themes, where the Void becomes a metaphor for the human condition and the pervasive sense of nothingness that can accompany it.
Japanese artist Yayoi Kusama also explores the Void through her immersive installations, such as the Infinity Mirror Rooms. These rooms use mirrors and lights to create an illusion of infinite space, allowing viewers to experience the disorienting and transcendent qualities of the Void. Kusama's work often reflects her own struggles with mental illness, using the Void as both a personal and universal symbol of the unknown and the infinite.
Film
The Void is a recurring motif in cinema, often used to symbolize existential dread, the unknown, or the metaphysical boundaries between life and death. Stanley Kubrick's 2001: A Space Odyssey (1968) is one of the most iconic examples, where the vast emptiness of space represents both the awe-inspiring and terrifying aspects of the Void. The film's minimal dialogue and expansive visual sequences emphasize the isolation and mystery of space, which serves as a metaphor for the human condition and the search for meaning in an indifferent universe.
Another film that delves into the concept of the Void is The Void (2016), a Canadian horror film directed by Steven Kostanski and Jeremy Gillespie. The film blends Lovecraftian horror with surreal imagery, depicting a hospital that becomes a gateway to a nightmarish otherworld. The Void in this film is not just a physical space but also a symbolic representation of terror and the unknown, drawing on cosmic horror traditions to explore the fear of the incomprehensible.
Scholarly perspectives and criticism
In analytical philosophy, the Void has often been a subject of scrutiny, particularly regarding the treatment of "nothingness" as a substantive concept. Bertrand Russell, a prominent figure in analytical philosophy, expressed skepticism about metaphysical discussions that involve the Void, arguing that such concepts often arise from linguistic and conceptual confusions. Russell posited that the idea of the Void or nothingness can be misleading, as it seems to ascribe existence to a non-existent entity, thereby generating paradoxes rather than resolving philosophical problems.
This critique of the Void extends into contemporary discussions, particularly in the context of scientific theories. Lawrence Krauss's book A Universe from Nothing presents a scientific perspective on the Void, arguing that the quantum vacuum—an apparently empty space filled with fluctuating energy and virtual particles—requires a rethinking of what "nothing" truly means. While Krauss's approach attempts to bridge the gap between physics and metaphysics, it has drawn criticism from philosophers like David Albert, who argue that Krauss conflates scientific and philosophical concepts, leading to oversimplified conclusions about the nature of existence and the origins of the universe.
In popular culture
See also
References
Works cited
3 vols.
Further reading
Aether theories
Buddhism
Concepts in ancient Greek metaphysics
Concepts in metaphysics
Existentialism
Nihilism
Philosophy of physics
Quantum field theory
Taoism
Vacuum | The Void (philosophy) | [
"Physics"
] | 4,133 | [
"Quantum field theory",
"Philosophy of physics",
"Applied and interdisciplinary physics",
"Quantum mechanics",
"Vacuum",
"Matter"
] |
36,821,570 | https://en.wikipedia.org/wiki/Riemann%E2%80%93Silberstein%20vector | In mathematical physics, in particular electromagnetism, the Riemann–Silberstein vector or Weber vector, named after Bernhard Riemann, Heinrich Martin Weber and Ludwik Silberstein (or sometimes ambiguously called the "electromagnetic field"), is a complex vector that combines the electric field E and the magnetic field B.
History
Heinrich Martin Weber published the fourth edition of "The partial differential equations of mathematical physics according to Riemann's lectures" in two volumes (1900 and 1901). However, Weber pointed out in the preface of the first volume (1900) that this fourth edition was completely rewritten based on his own lectures, not Riemann's, and that the reference to "Riemann's lectures" only remained in the title because the overall concept remained the same and because he continued the work in Riemann's spirit. In the second volume (1901, §138, p. 348), Weber demonstrated how to consolidate Maxwell's equations using a single complex combination of the fields (in modern notation, $\mathbf{F} = \mathbf{E} + ic\mathbf{B}$). The real and imaginary components of the equation

$$\frac{\partial \mathbf{F}}{\partial t} = -ic\,\nabla \times \mathbf{F}$$

are an interpretation of Maxwell's equations without charges or currents. It was independently rediscovered and further developed by Ludwik Silberstein in 1907.
Definition
Given an electric field E and a magnetic field B defined on a common region of spacetime, the Riemann–Silberstein vector is

$$\mathbf{F} = \mathbf{E} + ic\,\mathbf{B},$$

where $c$ is the speed of light, with some authors preferring to multiply the right-hand side by an overall constant $\sqrt{\varepsilon_0/2}$, where $\varepsilon_0$ is the permittivity of free space. It is analogous to the electromagnetic tensor F, a 2-vector used in the covariant formulation of classical electromagnetism.
In Silberstein's formulation, i was defined as the imaginary unit, and F was defined as a complexified 3-dimensional vector field, called a bivector field.
Application
The Riemann–Silberstein vector is used as a point of reference in the geometric algebra formulation of electromagnetism. Maxwell's four equations in vector calculus reduce to one equation in the algebra of physical space:

$$\left(\frac{1}{c}\frac{\partial}{\partial t} + \boldsymbol{\nabla}\right)\mathbf{F} = \frac{1}{\varepsilon_0}\left(\rho - \frac{\mathbf{j}}{c}\right).$$

Expressions for the fundamental invariants and the energy density and momentum density also take on simple forms:

$$\mathbf{F}\cdot\mathbf{F} = E^2 - c^2B^2 + 2ic\,\mathbf{E}\cdot\mathbf{B}, \qquad u = \frac{\varepsilon_0}{2}\,\mathbf{F}\cdot\mathbf{F}^{*}, \qquad \mathbf{S} = \frac{ic\,\varepsilon_0}{2}\,\mathbf{F}\times\mathbf{F}^{*},$$

where S is the Poynting vector, the momentum density being S/c².
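As a quick numerical illustration (an added sketch, not from the article; the field values are arbitrary), one can build F from sample E and B fields with NumPy and check that the energy and Poynting expressions above reproduce the textbook formulas:

```python
import numpy as np

c = 299_792_458.0          # speed of light, m/s
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
mu0 = 1.0 / (eps0 * c**2)

rng = np.random.default_rng(0)
E = rng.normal(size=3)            # arbitrary sample field, V/m
B = rng.normal(size=3) * 1e-8     # arbitrary sample field, T

F = E + 1j * c * B                # Riemann-Silberstein vector (unnormalized convention)

u_rs = 0.5 * eps0 * np.real(np.dot(F, np.conj(F)))       # (eps0/2) F.F*
u_em = 0.5 * (eps0 * np.dot(E, E) + np.dot(B, B) / mu0)  # textbook energy density

S_rs = np.real(0.5j * c * eps0 * np.cross(F, np.conj(F)))  # (ic eps0/2) F x F*
S_em = np.cross(E, B) / mu0                                # textbook Poynting vector

assert np.allclose(u_rs, u_em) and np.allclose(S_rs, S_em)
print(u_rs, S_rs)
```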
The Riemann–Silberstein vector is used for exact matrix representations of Maxwell's equations in an inhomogeneous medium with sources.
Photon wave function
In a 1996 contribution to quantum electrodynamics, Iwo Bialynicki-Birula used the Riemann–Silberstein vector as the basis for an approach to the photon, noting that it is a "complex vector-function of space coordinates r and time t that adequately describes the quantum state of a single photon". To put the Riemann–Silberstein vector in contemporary parlance, a transition is made to the normalized form

$$\mathbf{F} = \sqrt{\frac{\varepsilon_0}{2}}\left(\mathbf{E} + ic\,\mathbf{B}\right).$$
With the advent of spinor calculus that superseded the quaternionic calculus, the transformation properties of the Riemann-Silberstein vector have become even more transparent ... a symmetric second-rank spinor.
Bialynicki-Birula acknowledges that the photon wave function is a controversial concept and that it cannot have all the properties of Schrödinger wave functions of non-relativistic wave mechanics. Yet defense is mounted on the basis of practicality: it is useful for describing quantum states of excitation of a free field, electromagnetic fields acting on a medium, vacuum excitation of virtual positron-electron pairs, and presenting the photon among quantum particles that do have wave functions.
Schrödinger equation for the photon and the Heisenberg uncertainty relations
Multiplying the two time-dependent Maxwell equations by $i\hbar$, the Schrödinger equation for the photon in the vacuum is given by

$$i\hbar\,\frac{\partial \mathbf{F}}{\partial t} = c\,(\mathbf{S}\cdot\hat{\mathbf{p}})\,\mathbf{F}, \qquad \hat{\mathbf{p}} = -i\hbar\nabla,$$

where $\mathbf{S}$ is the vector built from the spin matrices of length 1, $(S_i)_{jk} = -i\epsilon_{ijk}$, generating full infinitesimal rotations of a 3-spinor particle. One may therefore notice that the Hamiltonian in the Schrödinger equation of the photon is the projection of its spin 1 onto its momentum, since the normal momentum operator appears there from combining parts of rotations.

In contrast to the electron wave function, the modulus square of the wave function of the photon (the Riemann–Silberstein vector) is not dimensionless and must be multiplied by the "local photon wavelength" with the proper power to give a dimensionless expression to normalize, i.e. it is normalized in an exotic way with the integral kernel $1/|\mathbf{r}-\mathbf{r}'|^{2}$:

$$\langle \mathbf{F}_1 | \mathbf{F}_2 \rangle = \frac{1}{2\pi^{2}\hbar c}\int \mathrm{d}^{3}r\,\mathrm{d}^{3}r'\,\frac{\mathbf{F}_1^{*}(\mathbf{r})\cdot\mathbf{F}_2(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|^{2}}.$$

The two residual Maxwell equations are only constraints, i.e.

$$\nabla\cdot\mathbf{F} = 0,$$

and they are automatically fulfilled for all time if they are only fulfilled at the initial time $t = 0$, i.e.

$$\mathbf{F}(\mathbf{r}, 0) = \nabla\times\mathbf{G}(\mathbf{r}),$$

where $\mathbf{G}$ is any complex vector field with non-vanishing rotation, i.e. it is a vector potential for the Riemann–Silberstein vector.
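The algebraic identity behind this Hamiltonian, $(\mathbf{S}\cdot\hat{\mathbf{p}})\mathbf{F} = \hbar\,\nabla\times\mathbf{F}$ for the spin-1 matrices above, is easy to check numerically on a single Fourier mode; the Python sketch below is an added illustration, not from the article:

```python
import numpy as np

hbar = 1.0  # work in units where hbar = 1

# Spin-1 matrices (S_i)_{jk} = -i * epsilon_{ijk}
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0
S = -1j * eps  # S[i] is the 3x3 matrix S_i

rng = np.random.default_rng(1)
k_vec = rng.normal(size=3)                          # wave vector of a plane-wave mode
F0 = rng.normal(size=3) + 1j * rng.normal(size=3)   # mode amplitude

# For F(r) = F0 exp(i k.r): p-hat F = hbar k F, and curl F = i k x F0 exp(i k.r)
SdotP = sum(S[i] * hbar * k_vec[i] for i in range(3))
lhs = SdotP @ F0                       # (S . p-hat) acting on the mode amplitude
rhs = hbar * 1j * np.cross(k_vec, F0)  # hbar * (curl F) amplitude: i k x F0

assert np.allclose(lhs, rhs)
print("identity (S.p)F = hbar curl F verified on a plane wave")
```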
Having the wave function of the photon, one can estimate the uncertainty relations for the photon. It turns out that photons are "more quantum" than the electron: their combined uncertainties of position and momentum are higher. The natural candidates for the estimate are the uncertainty $\Delta p$ of the momentum (the momentum known simply from the Einstein formula for the photoelectric effect and the simplest theory of quanta) and $\Delta r$, the uncertainty of the position length vector.

We will use the general uncertainty relation for operators $\hat{A}$ and $\hat{B}$:

$$\Delta A\,\Delta B \geq \frac{1}{2}\left|\langle[\hat{A},\hat{B}]\rangle\right|.$$

We want the uncertainty relation for $\Delta r\,\Delta p$, i.e. for the operators $\hat{\mathbf{r}} = (\hat{x},\hat{y},\hat{z})$ and $\hat{\mathbf{p}} = (\hat{p}_x,\hat{p}_y,\hat{p}_z)$. The first step is to find an auxiliary operator such that this relation can be used directly. First, we make the same trick that Dirac made to calculate the square root of the Klein–Gordon operator to get the Dirac equation:

$$\sqrt{\hat{p}_x^{2}+\hat{p}_y^{2}+\hat{p}_z^{2}} = \boldsymbol{\alpha}\cdot\hat{\mathbf{p}},$$

where $\alpha_i$ are matrices from the Dirac equation, satisfying

$$\alpha_i\alpha_j + \alpha_j\alpha_i = 2\delta_{ij}\,I.$$

Because the spin-1 matrices are only $3\times 3$, to calculate the commutator in the same space we approximate the spin matrices by angular momentum matrices of a particle with length $j = 3/2$, dropping the multiplying factor, since the resulting Maxwell equations in 4 dimensions would look too artificial compared with the original (alternatively, we can keep the original factors but normalize the new 4-spinor to 2, as 4 scalar particles normalized to 1/2).

We can now readily calculate the commutator, computing the commutators of the $\alpha$ matrices and the scaled position and momentum operators, and noticing that for the symmetric Gaussian state the average of terms containing mixed variables like $\hat{x}\hat{y}$ vanishes. Calculating the 9 commutators (the mixed ones vanish by the Gaussian example and because those matrices are counter-diagonal), and estimating terms from the norm of the resulting matrix using the norm inequality, one obtains a lower bound on $\Delta r\,\Delta p$ that is much larger than the bound for a particle with mass in 3 dimensions,

$$\Delta r\,\Delta p \geq \frac{3}{2}\hbar,$$

and therefore photons turn out to be almost 3 times "more quantum" than particles with mass, like electrons.
See also
Matrix representation of Maxwell's equations
References
Electromagnetism
Geometric algebra
Bernhard Riemann | Riemann–Silberstein vector | [
"Physics"
] | 1,379 | [
"Electromagnetism",
"Physical phenomena",
"Fundamental interactions"
] |
57,297,034 | https://en.wikipedia.org/wiki/Pauthenier%20equation | The Pauthenier equation states that the maximum charge accumulated by a particle modelled by a small sphere passing through an electric field is given by:

$$Q_{\max} = 4\pi\varepsilon_0\,R^{2}\,E\,p,$$

where $\varepsilon_0$ is the permittivity of free space, $R$ is the radius of the sphere, $E$ is the electric field strength, and $p$ is a material-dependent constant.
For conductors, $p = 3$.
For dielectrics: $p = \dfrac{3\varepsilon_r}{\varepsilon_r + 2}$, where $\varepsilon_r$ is the relative permittivity.
Low charges on nanoparticles and microparticles are stable over time scales of more than 10³ seconds.
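As a worked illustration of the formula (an added sketch with assumed sample values, not taken from the article):

```python
import math

EPS0 = 8.8541878128e-12  # permittivity of free space, F/m

def pauthenier_charge(radius_m, field_V_per_m, eps_r=None):
    """Maximum (saturation) charge on a small sphere in a field.

    eps_r=None models a conductor (p = 3); otherwise p = 3*eps_r/(eps_r + 2).
    """
    p = 3.0 if eps_r is None else 3.0 * eps_r / (eps_r + 2.0)
    return 4.0 * math.pi * EPS0 * radius_m**2 * field_V_per_m * p

# Assumed example: a 1 micrometre dielectric sphere (eps_r = 4) in a 5 kV/cm field.
q = pauthenier_charge(1e-6, 5e5, eps_r=4.0)
print(f"{q:.3e} C")  # on the order of 1e-16 C
```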
References
Physics theorems | Pauthenier equation | [
"Physics"
] | 104 | [
"Particle physics",
"Equations of physics",
"Particle physics stubs",
"Physics theorems"
] |