Dataset fields per record: id (int64), url (string), text (string), source (string), categories (list), token_count (int64), subcategories (list).
44,120,932
https://en.wikipedia.org/wiki/Sznajd%20model
The Sznajd model or United we stand, divided we fall (USDF) model is a sociophysics model introduced in 2000 to gain fundamental understanding about opinion dynamics. The Sznajd model implements a phenomenon called social validation and thus extends the Ising spin model. In simple words, the model states: Social validation: If two people share the same opinion, their neighbors will start to agree with them. Discord destroys: If a block of adjacent persons disagree, their neighbors start to argue with them. Mathematical formulation For simplicity, one assumes that each individual i has an opinion Si which might be Boolean (Si = -1 for no, Si = +1 for yes) in its simplest formulation, which means that each individual either agrees or disagrees with a given question. In the original 1D formulation, each individual has exactly two neighbors, just like beads on a bracelet. At each time step a pair of individuals Si and Si+1 is chosen at random to change their nearest neighbors' opinions (or: Ising spins) Si-1 and Si+2 according to two dynamical rules: If Si = Si+1, then Si-1 = Si and Si+2 = Si. This models social validation: if two people share the same opinion, their neighbors will change their opinion. If Si = -Si+1, then Si-1 = Si+1 and Si+2 = Si. Intuitively: if the given pair of people disagrees, each neighbor adopts the opinion of the more distant member of the pair. Findings for the original formulations In a closed (1-dimensional) community, two steady states are always reached, namely complete consensus (which is called the ferromagnetic state in physics) or stalemate (the antiferromagnetic state). Furthermore, Monte Carlo simulations showed that these simple rules lead to complicated dynamics, in particular to a power law in the decision time distribution with an exponent of -1.5. Modifications The final (antiferromagnetic) state of alternating all-on and all-off is unrealistic as a representation of the behavior of a community. It would mean that the complete population uniformly changes its opinion from one time step to the next. For this reason an alternative dynamical rule was proposed. One possibility is that the two spins Si and Si+1 change their nearest neighbors according to the two following rules: Social validation remains unchanged: if Si = Si+1, then Si-1 = Si and Si+2 = Si. If Si = -Si+1, then Si-1 = Si and Si+2 = Si+1. Relevance In recent years, statistical physics has been accepted as a modeling framework for phenomena outside traditional physics. Fields such as econophysics and sociophysics formed, and many quantitative analysts in finance are physicists. The Ising model in statistical physics has been a very important step in the history of studying collective (critical) phenomena. The Sznajd model is a simple yet important variation of the prototypical Ising system. In 2007, Katarzyna Sznajd-Weron was recognized with the Young Scientist Award for Socio- and Econophysics of the Deutsche Physikalische Gesellschaft (German Physical Society) for an outstanding original contribution using physical methods to develop a better understanding of socio-economic problems. Applications The Sznajd model belongs to the class of binary-state dynamics on networks, also referred to as Boolean networks. This class of systems includes the Ising model, the voter model and the q-voter model, the Bass diffusion model, threshold models, and others. The Sznajd model can be applied to various fields: the finance interpretation considers the spin state Si = +1 as a bullish trader placing buy orders, whereas Si = -1 would correspond to a trader who is bearish and places sell orders.
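The update rules above translate directly into a simple Monte Carlo simulation. The following sketch is an illustration, not part of the article; the function and variable names are my own, and it implements the original one-dimensional rules on a ring, running until consensus or a step limit.

```python
import random

def sznajd_step(spins):
    """One update of the original 1D Sznajd rules on a ring of +1/-1 opinions."""
    n = len(spins)
    i = random.randrange(n)          # pick the pair (i, i+1) at random
    j = (i + 1) % n
    left, right = (i - 1) % n, (i + 2) % n
    if spins[i] == spins[j]:         # social validation: both neighbors follow the pair
        spins[left] = spins[i]
        spins[right] = spins[i]
    else:                            # discord: each neighbor follows the farther member
        spins[left] = spins[j]
        spins[right] = spins[i]

def simulate(n=100, max_steps=1_000_000, seed=0):
    random.seed(seed)
    spins = [random.choice([-1, 1]) for _ in range(n)]
    for step in range(max_steps):
        sznajd_step(spins)
        if abs(sum(spins)) == n:     # complete consensus (ferromagnetic state)
            return step, sum(spins) // n
    return max_steps, None           # no consensus within the step limit (e.g. stalemate)

if __name__ == "__main__":
    steps, opinion = simulate()
    print(f"finished after {steps} steps, consensus opinion: {opinion}")
```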
References External links Katarzyna Sznajd-Weron currently works at the Wrocław University of Technology performing research on interdisciplinary applications of statistical physics, complex systems, critical phenomena, sociophysics and agent-based modeling. Spin models Concepts in physics Statistical mechanics Lattice models Social physics Computational physics
Sznajd model
[ "Physics", "Materials_science" ]
747
[ "Applied and interdisciplinary physics", "Spin models", "Quantum mechanics", "Lattice models", "Computational physics", "Social physics", "Condensed matter physics", "nan", "Statistical mechanics" ]
44,121,193
https://en.wikipedia.org/wiki/UV%20curing
UV curing (ultraviolet curing) is the process by which ultraviolet light initiates a photochemical reaction that generates a crosslinked network of polymers through radical polymerization or cationic polymerization. UV curing is adaptable to printing, coating, decorating, stereolithography, and the assembly of a variety of products and materials. UV curing is a low-temperature, high-speed, and solventless process, as curing occurs via polymerization. Originally introduced in the 1960s, this technology has streamlined and increased automation in many industries in the manufacturing sector. Applications UV curing is used for converting or curing inks, adhesives, and coatings. UV-cured adhesive has become a high-speed replacement for two-part adhesives, eliminating the need for solvent removal, ratio mixing, and pot-life concerns. It is used in flexographic, offset, pad, and screen printing processes, where UV curing systems are used to polymerize images on screen-printed products, ranging from T-shirts to 3D and cylindrical parts. It is used in fine instrument finishing (guitars, violins, ukuleles, etc.), pool cue manufacturing, and other wood craft industries. Printing with UV curable inks provides the ability to print on a very wide variety of substrates such as plastics, paper, canvas, glass, metal, foam boards, tile, films, and many other materials. Industries that use UV curing include medicine, automobiles, cosmetics (for example artificial fingernails and gel nail polish), food, science, education, and art. UV curable inks have successfully met the demands of the publication sector in terms of print quality, durability, and compatibility with different substrates, making them a suitable choice for printing applications in this industry. Advantages of UV curing A primary advantage of curing with ultraviolet light is the speed at which a material can be processed. Speeding up the curing, or drying, step in a process can reduce flaws and errors by decreasing the time that an ink or coating spends wet. This can increase the quality of a finished item, and potentially allow for greater consistency. Another benefit of decreasing manufacturing time is that less space needs to be devoted to storing items which cannot be used until the drying step is finished. Because UV energy has unique interactions with many different materials, UV curing allows for the creation of products with characteristics not achievable via other means. This has led to UV curing becoming fundamental in many fields of manufacturing and technology, where changes in strength, hardness, durability, chemical resistance, and many other properties are required. Constituents of a UV curing system Main components in UV cured solution The main components of a UV curing solution include resins, monomers, and photoinitiators. The resin is an oligomer that imparts specific properties to the final polymer. A monomer is used as a cross-linking agent and regulates the viscosity of the mixture to suit the application. The photoinitiator is responsible for absorbing the light and kickstarting the reaction, which helps control the cure rate and depth of cure. Each of these elements has a role to play in the crosslinking process and is linked to the composition of the final polymer. Types of UV curing lamps Medium-pressure lamps Medium-pressure mercury-vapor lamps have historically been the industry standard for curing products with ultraviolet light.
The bulbs work by sending an electric discharge to excite a mixture of mercury and noble gases, generating a plasma. Once the mercury reaches a plasma state, it emits a high spectral output in the UV region of the electromagnetic spectrum. Major peaks in light intensity occur in the 240-270 nm and 350-380 nm regions. These intense peaks, when matched with the absorption profile of a photoinitiator, cause the rapid curing of materials. By modifying the bulb mixture with different gases and metal halides, the distribution of wavelength peaks can be altered, and material interactions are changed. Medium-pressure lamps can either be standard gas-discharge lamps or electrodeless lamps, and typically use an elongated bulb to emit energy. By incorporating optical designs such as an elliptical or even an aconic reflector, light can either be focused or projected over a long distance. These lamps can often operate at over 900 degrees Celsius and produce UV energy levels over 10 W/cm2. Low-pressure lamps Low-pressure mercury-vapor lamps generate primarily 254 nm 'UVC' energy, and are most commonly used in disinfection applications. Operated at lower temperatures and with less voltage than medium-pressure lamps, they, like all UV sources, require shielding when operated to prevent excess exposure of skin and eyes. UV LED Since the development of the aluminium gallium nitride LED in the early 2000s, UV LED technology has seen sustained growth in the UV curing marketplace. UV LEDs generate energy most efficiently at the 365-405 nm 'UVA' wavelengths, and continued technological advances have allowed for improved electrical efficiency of UV LEDs as well as significant increases in output. Benefiting from lower-temperature operation and the lack of hazardous mercury, UV LEDs have replaced medium-pressure lamps in many applications. Major limitations include difficulties in designing optics for curing on complex three-dimensional objects, and poor efficiency at generating lower-wavelength energy, though development work continues. Mechanisms of UV curing Radical Polymerization Radical polymerization is used in the industrial UV curing of acrylic resins. Light energy from the UV source breaks apart photoinitiators, forming radicals. The radicals then react with the polymers, forming polymers with radical groups that then react with additional monomers. The monomer chain extends until it reaches another polymer and reacts with that polymer. Polymers thus form with monomer bridges between them, leading to a cross-linked network. Cationic Polymerization Cationic polymerization is used in the industrial UV curing of epoxy resins. Light energy from the UV source breaks apart photoinitiators, forming an acid which then donates a proton to the polymer. The monomers then attach themselves to the polymer, forming longer and longer chains leading to a cross-linked network. See also Photopolymer UV stabilizers in plastics Weather testing of polymers Radical Polymerization Cationic Polymerization References Curing agents Ultraviolet radiation
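The article notes that the photoinitiator absorbs the lamp's output and thereby controls cure rate and depth of cure. One simple way to picture that relationship is Beer-Lambert attenuation of irradiance with depth into a coating. The sketch below is an illustrative calculation, not from the article; the irradiance, absorption coefficient, and depths are assumed values chosen purely for demonstration.

```python
import math

def irradiance_at_depth(surface_irradiance_w_cm2, absorption_coeff_per_um, depth_um):
    """Beer-Lambert attenuation: I(z) = I0 * exp(-alpha * z)."""
    return surface_irradiance_w_cm2 * math.exp(-absorption_coeff_per_um * depth_um)

# Assumed example values: 10 W/cm^2 at the coating surface and an effective
# absorption coefficient of 0.05 per micron for the photoinitiator/pigment mix.
I0 = 10.0      # W/cm^2 (assumption)
alpha = 0.05   # 1/um (assumption)

for depth in (0, 10, 25, 50, 100):  # depths in microns
    print(f"{depth:4d} um: {irradiance_at_depth(I0, alpha, depth):6.2f} W/cm^2")
```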
UV curing
[ "Physics", "Chemistry" ]
1,302
[ "Spectrum (physical sciences)", "Electromagnetic spectrum", "Ultraviolet radiation" ]
44,121,355
https://en.wikipedia.org/wiki/PandaX
The Particle and Astrophysical Xenon Detector, or PandaX, is a dark matter detection experiment at China Jinping Underground Laboratory (CJPL) in Sichuan, China. The experiment occupies the deepest underground laboratory in the world, and is among the largest of its kind. Participants The experiment is run by an international team of about 40 scientists, led by researchers at China's Shanghai Jiao Tong University. The project began in 2009 with researchers from Shanghai Jiao Tong University, Shandong University, the Shanghai Institute of Applied Physics (zh), and the Chinese Academy of Sciences. Researchers from the University of Maryland, Peking University, and the University of Michigan joined two years later. The PandaX team also includes members from the Ertan Hydropower Development Company. Scientists from the University of Science and Technology of China, the China Institute of Atomic Energy, and Sun Yat-Sen University joined PandaX in 2015. Design and construction PandaX is a direct-detection experiment, consisting of a dual-phase xenon time projection chamber (TPC) detector. The use of both liquid and gaseous phases of xenon, similarly to the XENON and LUX experiments, allows the location of events to be determined, and gamma ray events to be vetoed. In addition to searching for dark matter events, PandaX is designed to detect Xe-136 neutrinoless double beta decay. Laboratory PandaX is located at China Jinping Underground Laboratory (CJPL), the deepest underground laboratory in the world at more than 2,400 metres below ground. The depth of the laboratory means the experiment is better shielded from cosmic ray interference than similar detectors, allowing the instrument to be scaled up more easily. The muon flux at CJPL is 66 events per square meter per year, compared with 950 events/m2/year at the Sanford Underground Research Facility, home of the LUX experiment, and 8,030 events/m2/year at the Gran Sasso lab in Italy, home to the XENON detector. The marble at Jinping is also less radioactive than the rock at Homestake and Gran Sasso, further reducing the frequency of false detections. Wolfgang Lorenzon, a collaborating researcher from the University of Michigan, has commented that "the big advantage is that PandaX is much cheaper and doesn't need as much shielding material" as similar detectors. Operational stages Like most low-background physics experiments, PandaX is constructing multiple generations of detectors, each serving as a prototype for the next. A larger size allows greater sensitivity, but this is only useful if unwanted "background events" can be kept from swamping the desired ones; ever more stringent limits on radioactive contamination are also required. Lessons learned in earlier generations are used to construct later ones. The first generation, PandaX-I, operated until late November 2014. It used a target of more than 100 kg of xenon (part of which served as a fiducial mass) to probe the low-mass regime (<10 GeV) and verify dark matter signals reported by other detector experiments. PandaX-I was the first dark matter experiment in China to use more than 100 kg of xenon in its detector, and its size was second only to the LUX experiment in the United States. PandaX-II, completed in March 2015 and currently operational, uses a much larger xenon target (approximately 300 kg fiducial) to probe the 10–1,000 GeV regime.
PandaX-II reuses the shield, outer vessel, cryogenics, purification hardware, and general infrastructure from the first version, but uses a much larger time projection chamber, an inner vessel of higher purity (much less radioactive 60Co) stainless steel, and a cryostat. The construction cost of PandaX is estimated at US$15 million, with an initial cost of $8 million for the first stage. PandaX-II produced some preliminary physics results from a brief commissioning run in late 2015 (November 21 to December 14) before the main physics run through 2018. PandaX-II is significantly more sensitive than both the 100-kg XENON100 and 250-kg LUX detectors. XENON100, in Italy has, in the three to four years prior to 2014, produced the highest sensitivities over a wide range of WIMP masses, but was leapfrogged by PandaX-II. The most recent results on the spin-independent WIMP-nucleon scattering cross-section of PandaX-II were published in 2017. In September 2018 the XENON1T experiment published its results from 278.8 days of collected data and set a new record limit for WIMP-nucleon spin-independent elastic interactions. The next stages of PandaX are called PandaX-xT. An intermediate stage with a four-ton target (PandaX-4T) is under construction in the second-phase CJPL-II laboratory. The ultimate goal is to build a third generation dark matter detector, which will contain thirty tons of xenon in the sensitive region. In July 2021, a publication was made, searching for dark matter using results from the PandaX-4T commissioning run. Initial results The majority of the PandaX experimental equipment was transported from Shanghai Jiao Tong University to China Jinping Underground Laboratory in August 2012, and two engineering test runs were conducted in 2013. The initial data-collection run (PandaX-I) began in May 2014. Results from this run were reported in September 2014 in the journal Science China Physics, Mechanics & Astronomy. In the initial run, about 4 million raw events were recorded, with around 10,000 in the expected energy region for WIMP dark matter. Of these, only 46 events were recorded in the quiet inner core of the xenon target. These events were consistent with background radiation, rather than dark matter. The lack of an observed dark-matter signal in the PandaX-I run places strong constraints on previously-reported dark matter signals from similar experiments. Reception Stefan Funk of the SLAC National Accelerator Laboratory has questioned the wisdom of having many separate direct-detection dark matter experiments in different countries, commenting that "spending all our money on different direct-detection experiments is not worth it." Xiangdong Ji, spokesperson for PandaX and a physicist at Shanghai Jiao Tong University, concedes that the international community is unlikely to support more than two multi-tonne detectors, but argues that having many groups working will lead to faster improvement in detection technology. Richard Gaitskell, a spokesperson for the LUX experiment and a physics professor at Brown University, commented, "I'm excited about seeing China developing a fundamental physics program." References Experiments for dark matter search Neutrino experiments
PandaX
[ "Physics" ]
1,362
[ "Dark matter", "Experiments for dark matter search", "Unsolved problems in physics" ]
44,122,084
https://en.wikipedia.org/wiki/Eames%20Fiberglass%20Armchair
The Eames Molded Plastic & Fiberglass Armchair is a fiberglass chair, designed by Charles and Ray Eames, that appeared on the market in 1950. The chair was designed specifically for the International Competition for Low-Cost Furniture Design. This competition, sponsored by the Museum of Modern Art, was motivated by the urgent need in the post-war period for low-cost housing and furnishing designs adaptable to small housing units. The chair was offered in a variety of colors and bases, such as the "Eiffel Tower" metal base, a wooden base, and a rocker base. The plastic fiberglass armchair is one of the most famous designs of Charles and Ray Eames, and is still popular today. Designing with plastic "Getting the most of the best to the greatest number of people for the least": with these words, Charles and Ray Eames described one of their main goals as furniture designers. Of all their designs, the Plastic Chairs come closest to achieving this ideal. They found that the use of plastic in furniture design has several advantages: it has pleasant tactile qualities, it has malleability and static strength combined with a high degree of flexibility, and it makes feasible, via mass-production, their goal of low-cost furniture. Production The early production of the chair The material of the chair, Zenaloy, which is polyester reinforced with fiberglass, was first developed by the US Army during World War II. Using this material, Ray and Charles Eames designed a prototype chair for the 1948 'International Competition of Low-Cost Furniture Design' held by the Museum of Modern Art. The chairs were made using the latest machines, such as hydraulic press molds from shipbuilding, by the manufacturer Zenith Plastics. Mass-producing the molded fiberglass chairs involved a tremendous amount of design and tooling effort, a long period of product development, and considerable investment. The basic technology involved shaping the fiberglass material with metal molds using a hydraulic press. The armchair was the first one-piece plastic chair whose surface was left uncovered and not upholstered. In 1950, Zenith began mass-producing the fiberglass shell armchairs for Herman Miller, who offered them for sale that year. The fiberglass armchair was included in the collection of the Museum of Modern Art in 1950. Production in Europe The Vitra company entered the furniture market in 1957 with the licensed production of furniture from the Herman Miller Collection for the European market. In 1984, the partnership that had been formed with Herman Miller was terminated by mutual consent. Subsequently, Vitra obtained the European and Middle Eastern rights to designs by Charles and Ray Eames and George Nelson. Variety of forms At first, the chair was available in three colors: greige, elephant-hide gray, and parchment. The palette of colors was later expanded. After that, a choice of several possible bases was offered: the early "H" metal base (the SAX standard model and the LAX lounge lower model), an "X" metal base (the DAX dining model), a lower model with a metal rod base (the LAR model), a wooden base (the DSW model), a steel-wire base (the DSR model, also known as the "Eiffel-Tower base"), a cast aluminium base with castors (the PACC model), and a wood-rocker base (the RAR model). All of the bases were attached to the seat using hard rubber disks to allow flexibility.
Despite the fact that Herman Miller ceased production of the rocker in 1968 (until they reintroduced it 30 years later), pregnant employees continued to receive these chairs as a company gift until 1984, solidifying the rocker as a token of high-end nursery decor. The plastic shell became available in an upholstered (fabric or vinyl) version a year after the introduction of the chair. After the success of the arm chair, the side chair (without arms) was introduced (in the DSW, DSX, and DSR models). Over the years, the plastic chair has undergone some modifications: the curve of the back has become more inclined and upholstery is now glued to the plastic shell. The Eames plastic armchair immediately became an iconic design and eventually the chair was used in schools, airports, restaurants, and offices around the world. From 1954, the chairs were used as stadium seating with metal rods put together in rows, the Tandem Shell Seating. Naming convention DAL = Dining (height) Armchair (arms) La Fonda (base) DAR = Dining (height) Armchair (arms) Rod (base) DAW = Dining (height) Armchair (arms) Wood (base) DAX = Dining (height) Armchair (arms) X-Base (base) DAG = Dining (height) Armchair (arms) Wall Guard (base) DAT = Desk (height) Armchair (arms) Tilt (base) MAX = Medium (height) Armchair (arms) X-Base (base) SAX = Small (height) Armchair (arms) X-Base (base) RAR = Rocker (height) Armchair (arms) Rod (base) PAW = Pivot (pivoting) Armchair (arms) Wood (base) PAC = Pivot (pivoting) Armchair (arms) Contract (base) LAR = Low (height) Armchair (arms) Rod (base) LAX = Low (height) Armchair (arms) X-Base (base) Current production The chairs are still in production by Herman Miller and Vitra. However, each producer uses different material for their chair. In 1993, Vitra discontinued production of the fiberglass shells for ecological reasons. The company resumed manufacture of the shells in 1989 and 2004, respectively, making them available in polypropylene, a more environmentally friendly material. Also, Herman Miller uses the polypropylene material for their production of the chairs. The production process for the new fiberglass chairs by both manufacturers is now emission-free and uses a new, monomer-free resin which creates a safer environment for workers and a more environmentally friendly, recyclable shell. The chairs are still available and, after more than seventy years, still commonly used by popular interior designers and featured in many magazines. References External links Eames Office Herman Miller - Eames Vitra - Eames Fiberglass Works by Charles and Ray Eames Chairs Mid-century modern Individual models of furniture Stacking chairs
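The naming convention listed above is a simple positional code (height or function, arm style, base). The snippet below is a hypothetical decoder, not part of the article, built only from the expansions given in the list; the dictionaries, special case, and function name are my own, and codes not covered by the list fall back to "?".

```python
# First letter: height or function; second letter: arms; remaining letters: base.
FIRST = {"D": "Dining height", "M": "Medium height", "S": "Small height",
         "R": "Rocker", "P": "Pivot", "L": "Low height"}
SECOND = {"A": "Armchair", "S": "Side chair (no arms)"}
BASE = {"L": "La Fonda base", "R": "Rod base", "W": "Wood base", "X": "X-base",
        "G": "Wall Guard base", "T": "Tilt base", "C": "Contract base"}
SPECIAL = {"DAT": "Desk height, Armchair, Tilt base"}  # 'D' doubles as 'Desk' in this code

def decode(model_code: str) -> str:
    """Expand an Eames shell-chair model code such as 'DAX' or 'RAR'."""
    code = model_code.upper()
    if code in SPECIAL:
        return f"{code}: {SPECIAL[code]}"
    first, second, base = code[0], code[1], code[2:]
    parts = [FIRST.get(first, "?"), SECOND.get(second, "?"), BASE.get(base, "?")]
    return f"{code}: " + ", ".join(parts)

if __name__ == "__main__":
    for code in ("DAX", "RAR", "LAR", "DSW", "DAT"):
        print(decode(code))
```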
Eames Fiberglass Armchair
[ "Chemistry", "Materials_science" ]
1,305
[ "Fiberglass", "Polymer chemistry" ]
44,123,744
https://en.wikipedia.org/wiki/Spot%20test%20%28lichen%29
A spot test in lichenology is a spot analysis used to help identify lichens. It is performed by placing a drop of a chemical reagent on different parts of the lichen and noting the colour change (or lack thereof) associated with application of the chemical. The tests are routinely encountered in dichotomous keys for lichen species, and they take advantage of the wide array of lichen products (secondary metabolites) produced by lichens and their uniqueness among taxa. As such, spot tests reveal the presence or absence of chemicals in various parts of a lichen. They were first proposed as a method to help identify species by the Finnish lichenologist William Nylander in 1866. Three common spot tests use either 10% aqueous KOH solution (K test), saturated aqueous solution of bleaching powder or calcium hypochlorite (C test), or 5% alcoholic p-phenylenediamine solution (P test). The colour changes occur due to presence of particular secondary metabolites in the lichen. In identification key reference literature, the outcome of chemical spot tests serves as a primary characteristic for determining the species of lichens. There are several other less frequently used spot tests of more limited use that are employed in specific situations, such as to distinguish between certain species. Variations of the technique, including using filter paper to enhance visibility of reactions or examining under a microscope, accommodate different lichen types and pigmentations, with results typically summarised by a short code indicating the substance and reaction observed. Other diagnostic methods like ultraviolet (UV) light exposure can help identify lichen metabolites and distinguish between species, as some substances fluoresce under UV, aiding in the differentiation of closely related species. Tests Four spot tests are used most commonly to help with lichen identification. K test The reagent for the K test is an aqueous solution of potassium hydroxide (KOH) (10–25%), or, in the absence of KOH, a 10% aqueous solution of sodium hydroxide (NaOH, lye), which provides nearly identical results. A 10% solution of KOH will retain its effectiveness for about 6 months to a year. The test depends on salt formation and requires the presence of at least one acidic functional group in the molecule. Lichen compounds that contain a quinone as part of their structure will produce a dark red to violet colour. Example compounds include the pigments that are anthraquinones, naphthoquinones, and terphenylquinones. Yellow to red colours are produced with the K test and some depsides (including atranorin and thamnolic acid), and many β-orcinol depsidones. In contrast, xanthones, pulvinic acid derivatives, and usnic acid do not have any reaction. Some common and widely distributed lichens that have lichen products with a positive reaction to K include Xanthoria parietina, which is K+ (red-purple) due to the parietin (an anthraquinone), and Dibaeis baeomyces, which is K+ (yellow), due to the didepside compound baeomycesic acid. C test This test uses a saturated solution of calcium hypochlorite (bleaching powder), or alternatively a dilute solution (5.25% is typically used) of sodium hypochlorite, or undiluted household bleach. These solutions are typically replaced daily since they break down within 24–48 hours; they break down even more rapidly when exposed to sunlight (less than an hour) and so are recommended to keep in a dark-coloured bottle. 
Other factors that accelerate the decomposition of these solutions are heat, humidity, and carbon dioxide. Colours typically observed with the C test are red and orange-rose. Chemicals causing a red reaction include anziaic acid, erythrin, and lecanoric acid, while those resulting in orange-red include gyrophoric acid. Rarely, an emerald-green colour is produced, caused by reaction with dihydroxy dibenzofurans, such as the chemical strepsilin. Another rare colour produced by this test is yellow, which is observed with Cladonia portentosa as a result of the dibenzofuran usnic acid. Some common and widely distributed lichens that have lichen products with a positive reaction to C include Lecanora expallens, which is C+ (orange) because of the xanthone thiophanic acid, and Diploschistes muscorum, which is C+ (red) because of the didepside diploschistesic acid. PD test This is also known as the P test. It uses a 1–5% ethanolic solution of para-phenylenediamine (PD), made by placing a drop of ethanol (70–95%) over a few crystals of the chemical; this yields an unstable, light-sensitive solution that lasts for about a day. An alternative form of this solution, called Steiner's solution, is much longer lasting, although it produces less intense colour reactions. It is typically prepared by dissolving 1 gram of PD, 10 grams of sodium sulfite, and 0.5 millilitres of detergent in 100 millilitres of water; initially pink in colour, the solution becomes purple with age. Steiner's solution will last for months. The phenylenediamine reacts with aldehydes to yield Schiff bases, broadly following the reaction Ar–NH2 + RCHO → Ar–N=CH–R + H2O. Products of this reaction are yellow to red in colour. Most β-orcinol depsidones and some β-orcinol depsides will react positively. The PD test, known for its high specificity towards substances that yield K+ yellow or red reactions, has largely replaced the simpler yet less conclusive K test. PD is poisonous both as a powder and a solution, and surfaces that come in contact with it (including skin) will discolour. Some common and widely distributed lichens that have lichen products with a positive reaction to P include Parmelia subrudecta, which is PD+ (yellow) because of the didepside atranorin, and Hypogymnia physodes, which is PD+ (orange) because of the depsidone physodalic acid. KC test This spot test may be performed by wetting the thallus with K followed immediately by C. The initial application of K breaks down (via hydrolysis) ester bonds in depsides and depsidones. If a phenolic hydroxyl group is released that is meta to another hydroxyl, then a red to orange colour is produced as C is applied. Alectoronic acid and physodic acid produce this colour, while a violet colour results when picrolichenic acid is present. The CK test is a less commonly used variation that reverses the order of the application of chemicals. It is used in special cases when testing for the orange colour produced by barbatic acid or diffractaic acid, such as is present in Cladonia floerkeana. Lugol's iodine is another reagent that may be useful in identifying certain species. Hypogymnia tubulosa is a lichen that is KC+ (orange-pink) because of the depsidone physodic acid; Cetrelia olivetorum is KC+ (pink-red) due to the depsidone alectoronic acid. Less common tests There are several spot tests that are infrequently used due to their limited applicability, but may be useful in situations where particular lichen metabolites need to be detected, or to distinguish between certain species when other tests are negative.
A 10% solution of barium hydroxide (Ba(OH)2) gives a violet colour when tested with diploschistesic acid, a chemical found in some Diploschistes species. A saturated solution of barium peroxide (BaO2), when tested with olivetoric acid, will turn a yellow colour that becomes green after a few minutes. A 1% (weight per volume) solution of ferric chloride (FeCl3) in ethanol produces several possible colours when tested with compounds that have phenolic groups. The N test uses a 35% solution of nitric acid, which can be used to distinguish species of Melanelia from brown species of Xanthoparmelia. The S test uses a sulphuric acid solution (0.5% to 10%) brushed over an acetone-extracted, dried sample from a lichen thallus, followed by heating over a flame for 30 seconds or until colour develops. A persistent violet to bright pink colour indicates the presence of miriquidic acid and can be used to distinguish between the two morphologically similar snow lichens, Stereocaulon alpinum and S. groenlandicum, without having to resort to more laborious chemical analysis. The Beilstein test involves heating a small sample of the substance to be tested on a copper wire; halogenated compounds cause a temporary deep green flame colour. Performing spot tests Spot tests are performed by placing a small amount of the desired reagent on the portion of the lichen to be tested. Often, both the cortex and medulla of the lichen are tested, and at times it is useful to test other structures such as soralia. One method is to draw up a small amount of the chemical into a glass capillary and touch it to the lichen thallus; a small paint brush is also used for this purpose. Reactions are best visualised with a hand lens or a stereo microscope. A razor blade may be used to remove the cortex and access the medulla. Alternatively, the solution can be applied to lichen features that lack a cortex or that leave the medulla exposed, such as soralia, pseudocyphellae, or the underside of squamules. In a variation of this technique, suggested by the Swedish chemist Johan Santesson, a piece of filter paper is used to try to make the colour reaction more readily observable. The lichen fragment is pressed on the paper, and lichen substances are extracted with 10–20 drops of acetone. After evaporating the acetone, the lichen substances are left on the paper in a ring around the lichen fragment. The filter paper can then be spot tested in the usual way. In cases where the results of a spot test on the thallus are uncertain, it is possible to squash a thin section of the tissue on a microscope slide in a minimal amount of water and reagent under a cover slip. A colour change is visible under a low-power microscope objective, or when the slide is placed against a white background. This technique is useful when testing lichens with dark pigments, such as Bryoria. Spot tests may be used individually or in combination. The results of a spot test are typically represented with a short code that includes, in order, (1) a letter indicating the reagent used, (2) a "+" or "−" sign indicating a colour change or lack of colour change, respectively, and (3) a letter or word indicating the colour observed. In addition, care should be taken to indicate which part of the lichen was tested. For example, "Cortex K+ orange, C−, P−" means the cortex of the test specimen turned orange with application of KOH and did not change under bleach or para-phenylenediamine.
Similarly, "Medulla K−, KC+R" would indicate the medulla of the lichen was insensitive to application of KOH, but application of KOH followed immediately by bleach caused the medulla to turn red. Occasionally, it takes some time for the colour reaction to develop. For example, in certain Cladonia species, the PD reaction with fumarprotocetraric acid can take up to half a minute. In contrast, the reactions with C and KC are usually fleeting and occur within a second of applying the reagent, so a colour change can easily be missed. There are several possible reasons that an anticipated test result does not occur. Causes include old and chemically inactive reagents, and low concentrations of lichen substances in the sample. If the colour of the thallus is dark, a colour change might be obscured, and other techniques are more appropriate, like the filter paper technique. Other tests It may sometimes be useful to perform other diagnostic measures in addition to spot tests. For example, some lichen metabolites fluoresce under ultraviolet radiation such that exposing certain parts of the lichen to a UV light source can reveal the presence or absence of those metabolites similarly to spot tests. Examples of lichen substances that give a bright fluorescence in UV are alectoronic, lobaric, and divaricatic acids, and lichexanthone. In some cases, the UV light test can be used to help distinguish between closely related species, such as Cladonia deformis (UV−) and Cladonia sulphurina (UV+, due to presence of squamatic acid). Only long-wavelength UV is useful for observing lichens directly. More advanced analytical techniques, such as thin-layer chromatography, high-performance liquid chromatography, and mass spectrometry may also be useful in initially characterising the chemical composition of lichens or when spot tests are unrevealing. History Finnish lichenologist William Nylander is generally considered to have been the first to demonstrate the use of chemicals to help with lichen identification. In papers published in 1866, he suggested spot tests using KOH and bleaching powder to get characteristic colour reactions—typically yellow, red, or green. In these studies he showed, for example, that the lichens now known as Cetrelia cetrarioides and C. olivetorum could be distinguished as distinct species due to their different colour reactions: C+ red in the latter, contrasted with no reaction in the former. Nylander showed how KOH could be used to distinguish between the lookalikes Xanthoria candelaria and Candelaria concolor because the presence of parietin in the former species results in a strong colour reaction. He also knew that in some cases the lichen chemicals were not evenly distributed throughout the cortex and the medulla due to the differing colour reactions on these areas. In the mid-1930s, Yasuhiko Asahina created the test with para-phenylendiamine, which gives yellow to red reactions with secondary metabolites that have a free aldehyde group. This spot test was later shown to be particularly useful in the taxonomy of the family Cladoniaceae. See also Microcrystallization References Cited literature Chemical tests Lichenology
Spot test (lichen)
[ "Chemistry", "Biology" ]
3,120
[ "Lichenology", "Chemical tests" ]
54,175,973
https://en.wikipedia.org/wiki/Toughening
In materials science, toughening refers to the process of making a material more resistant to the propagation of cracks. When a crack propagates, the associated irreversible work differs between materials classes. Thus, the most effective toughening mechanisms differ among different materials classes. Crack tip plasticity is important in the toughening of metals and long-chain polymers. Ceramics have limited crack tip plasticity and primarily rely on different toughening mechanisms. Toughening in metals For the case of a ductile material such as a metal, this toughness is typically proportional to the fracture stress and strain as well as the gauge length of the crack. The plane strain toughness of a metal increases with the tensile flow stress at fracture, the tensile fracture strain, and the radius of the crack tip, through a constant that incorporates the stress state. In a low yield strength material, the crack tip can be blunted easily and a larger crack tip radius is formed. Thus, in a given metallic alloy, toughness in a low-strength condition is usually higher than in higher-strength conditions, where less plasticity is available for toughening. Therefore, some safety-critical structural parts, from pressure vessels and pipelines to aluminum alloy airframes, are manufactured in relatively low-strength versions. Nonetheless, it is desirable to improve the toughness of a metal without sacrificing its strength; designing a new alloy or improving its processing can achieve this goal. The alloy-design route can be illustrated by the different toughness of several ferrous alloys. 18% Ni maraging steel has a higher toughness than the martensitic steel AISI 4340. In an AISI 4340 alloy, interstitial carbon exists in a bcc (body centered cubic) matrix and has an adverse effect on toughness. In 18% Ni maraging steel, the carbon content is lower and the martensite is strengthened by substitutional Ni atoms. In addition, transformation induced plasticity (TRIP) effects in steel can provide additional toughness. In TRIP steel, the matrix is metastable and can be transformed to martensite during deformation. The work associated with the phase transformation contributes to the improvement of toughness. In a monolithic Pd–Ag–P–Si–Ge glass alloy, the combination of high bulk modulus and low shear modulus leads to a proliferation of shear bands. These bands are self-constrained and the toughness is improved. Metals can be toughened by improvement of processing. With a high affinity for oxygen, titanium alloys absorb oxygen easily. Oxygen can promote the formation of the α2 phase. These coherent α2 particles lead to easy crack nucleation and fast crack propagation within the planar slip bands. Therefore, the toughness of the titanium alloy is decreased. The multiple vacuum arc remelting (VAR) technique can be used to minimize the oxygen content and increase the toughness of the alloy. Similarly, phosphorus in steels can decrease toughness dramatically. Phosphorus can segregate to grain boundaries and lead to intergranular fracture. If dephosphorization is improved during steelmaking, the steel will be toughened by the lower phosphorus content. After appropriate processing of steel, crystalline grains and second phases that are oriented along the rolling direction can improve the toughness of the material by delamination, which can relax triaxial stress and blunt the crack tip.
Metals can also be strengthened by the methods described below for ceramics, but these methods generally have a lesser impact on toughening than plasticity-induced crack blunting. Toughening in ceramics Ceramics are more brittle than most metals and plastics. The irreversible work associated with plastic deformation is largely absent in ceramics. Hence, the methods that improve the toughness of ceramics are different from those used for metals. There are several toughening mechanisms: crack deflection, microcrack toughening, transformation toughening, and crack bridging. Crack deflection In polycrystalline ceramics, the crack can propagate in an intergranular way. The associated irreversible work per unit area is 2γ-γgb, where γ is the surface energy of the material and γgb is the grain boundary energy. Though the irreversible work is decreased because of the grain boundary energy, the fracture area is increased in intergranular crack propagation. Moreover, Mode II cracking can be caused by deflection from the normal fracture plane during intergranular crack propagation, which further improves the toughness of ceramics. As a result, ceramics with intergranular fracture show a higher toughness than those with transgranular fracture. In SiC, the fracture toughness is ~2-3 MPa·m1/2 if it fractures transgranularly, and the fracture toughness is improved to 10 MPa·m1/2 when it fractures intergranularly. Crack deflection mechanisms bring about increased toughness in ceramics exhibiting abnormal grain growth (AGG). The heterogeneous microstructures produced by AGG form materials that can be considered as "in-situ composites" or "self-reinforced materials". Crack deflections around second-phase particles have also been used in fracture mechanics approaches to predict fracture toughness increases. Microcrack toughening Microcrack toughening means that the formation of microcracks ahead of the main crack can toughen the ceramic. Additional microcracks will cause stress to concentrate in front of the main crack. This leads to additional irreversible work required for crack propagation. In addition, these microcracks can cause crack branches, so one crack can form multiple cracks. Because of the formation of these cracks, the irreversible work is increased. The increment of toughness due to microcrack toughening depends on the distance between the microcracks and the fracture plane, the residual stress, the difference in thermal expansion coefficient between adjacent grains, the temperature difference causing the thermal strain, and the fraction of grains in the affected volume that are associated with microcracks. In this description, it has been assumed that residual stress is dominant in nucleating microcracks and that the formation of microcracks is caused by elastic work. In order to retard crack propagation, these microcracks must form during crack propagation. The grain size should be smaller than a critical grain size to avoid spontaneous formation of microcracks. The distance between the microcracks and the fracture plane should be larger than the grain size to have a toughening effect. As demonstrated most prominently by Katherine Faber in 1981, the toughening induced by the incorporation of second-phase particles subject to microcracking becomes appreciable for a narrow size distribution of particles of appropriate size. Transformation toughening The TRIP effect is found in partially stabilized zirconia.
Partially stabilized zirconia is composed of the tetragonal phase at high temperature and the monoclinic and cubic phases at lower temperature in equilibrium. In some compositions, the onset temperature of the tetragonal-to-monoclinic martensite transformation is lower than room temperature. The stress field near the crack tip triggers the martensitic transformation at velocities hypothesized to approach that of sound in the material. The martensitic transformation causes volume expansion (volumetric/dilatational strain) and shear strains of about 4% and 16% respectively. It applies compressive stress at the crack tip to prevent crack propagation, as well as closure tractions at the crack wake. From another point of view, the work associated with this phase transformation contributes to the improvement of toughness. The increment of toughness caused by transformation toughening depends on the distance between the boundary of the transformed region and the fracture plane, the stress triggering the martensite transformation, the strain of the martensite transformation, and the fraction of tetragonal grains in the affected volume. The tetragonal particle size should be controlled properly: too large a particle size leads to spontaneous transformation, while too small a particle size gives only a very small toughening effect. Crack bridging When a crack propagates along an irregular path, some grains on each side of the main crack may protrude into the other side. This leads to additional work for a complete fracture. This irreversible work is related to the residual stress. The increment of toughness depends on the coefficient of friction, the residual stress, the edge length of the grain, and the fraction of grains associated with crack bridging. There are some other approaches to improve the toughness of ceramics through crack bridging. The phenomenon of abnormal grain growth, or AGG, can be harnessed to impart a crack-bridging microstructure within a single-phase ceramic material. The presence of abnormally long grains serves to bridge crack wakes and hinders their opening. This has been demonstrated in silicon carbide and silicon nitride. Abnormally large grains may also serve to toughen ceramics through crack deflection mechanisms. Formation of a textured internal structure within ceramics can be used as a toughening approach. Silicon carbide materials have been toughened by this approach. Because the interfacial surface area is increased due to the internal structure, the irreversible fracture work is increased in this material. Toughening in composites In metal matrix composites (MMCs), the additions strengthen the metal and reduce the toughness of the material. In ceramic matrix composites (CMCs), the additions can toughen the material, but they do not strengthen it at the same time. In carbon fiber reinforced polymers (CFRPs), graphite fibers can toughen and strengthen the polymer at the same time. In bulk metallic glass composites (BMGs), dendrites are added to hinder the movement of shear bands, and the toughness is improved. If the fibers have a larger fracture strain than the matrix, the composite is toughened by crack bridging. The toughness of a composite can be expressed as Gc = Vm·Gm + Vf·Gf + ΔGb, where Gm and Gf are the toughness of the matrix and fibers respectively, Vm and Vf are the volume fractions of the matrix and fibers respectively, and ΔGb is the additional toughness caused by bridging toughening. After the crack propagates across a fiber, the fiber is elongated and is pulled out from the matrix.
These processes correspond to plastic deformation and pull-out work and contribute to the toughening of the composite. When the fiber is brittle, the pull-out work dominates the irreversible work contributing to toughening. The increment of toughness caused by pull-out work depends on the ratio between the debond length and the critical length, the strength of the fibers, the width of the fiber, the volume fraction of fibers, and the interface friction stress. From this relation, it can be seen that a higher fiber volume fraction, higher fiber strength, and lower interfacial friction stress give a better toughening effect. Ductile phase crack bridging When the fiber is ductile, the work from plastic deformation mainly contributes to the improvement of toughness. The additional toughness contributed by plastic deformation depends on a constant between 1.5 and 6, the flow stress of the fibers, the fracture strain of the fibers, the volume fraction of fibers, and the debond length. From this, it can be seen that a higher flow stress and a longer debond length improve the toughening. However, a longer debond length usually leads to a decrease in flow stress because of the loss of constraint on plastic deformation. The toughness of a composite with ductile phase toughening can also be expressed using the stress intensity factor, by linear superposition of the matrix toughness and crack bridging, based on solutions by Tada. This model can predict behavior for small-scale bridging (bridge length << crack length) under monotonic loading conditions, but not large-scale bridging. In this description, the total toughness is the fracture toughness of the matrix plus the toughening due to crack bridging, which depends on the bridge length, the distance behind the crack tip, the uniaxial yield stress, and a constraint/triaxiality factor. Toughening in polymers Toughening mechanisms in polymers are similar to those discussed above; only a few examples are used here to illustrate toughening in polymers. In high-impact polystyrene (HIPS), an elastomeric dispersion is used to improve crack propagation resistance. When the main crack propagates, microcracks form around the elastomeric dispersion above or below the fracture plane. The HIPS is toughened by the additional work associated with the formation of these microcracks. In epoxies, glass particles are used to improve the toughness of the material. The toughening mechanism is similar to crack deflection. The addition of plasticizers to polymers is also a good way to improve their toughness. References Fracture mechanics
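The composite section above describes toughness as a volume-fraction-weighted combination of matrix and fiber contributions plus an extra bridging term. The sketch below is a hypothetical illustration of that bookkeeping, not a validated model; the numerical values, units, and function name are assumptions chosen purely for demonstration.

```python
def composite_toughness(g_matrix, g_fiber, v_fiber, delta_g_bridging):
    """Rule-of-mixtures estimate plus a crack-bridging contribution:
    Gc = Vm*Gm + Vf*Gf + dGb, with Vm = 1 - Vf.
    In practice dGb itself grows with fiber content and fiber strength and
    shrinks with interfacial friction stress, per the pull-out discussion."""
    v_matrix = 1.0 - v_fiber
    return v_matrix * g_matrix + v_fiber * g_fiber + delta_g_bridging

# Assumed example values (J/m^2), for illustration only.
g_matrix = 50.0            # toughness of the matrix (assumption)
g_fiber = 20.0             # toughness of the fibers (assumption)
delta_g_bridging = 300.0   # extra irreversible work from bridging/pull-out (assumption)

for v_f in (0.1, 0.3, 0.5):
    g_c = composite_toughness(g_matrix, g_fiber, v_f, delta_g_bridging)
    print(f"Vf = {v_f:.1f}: Gc ~ {g_c:.0f} J/m^2")
```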
Toughening
[ "Materials_science", "Engineering" ]
2,645
[ "Structural engineering", "Materials degradation", "Materials science", "Fracture mechanics" ]
54,176,686
https://en.wikipedia.org/wiki/Junghuhnia%20africana
Junghuhnia africana is a species of crust fungus in the family Steccherinaceae. The type specimen was collected in Bwindi Impenetrable National Park, Uganda, growing on a rotting hardwood log. Its ellipsoid spores measure 5–6 by 4–4.5 μm. The fungus was described as new to science in 2005 by mycologists Perpetua Ipulet & Leif Ryvarden. References Fungi described in 2005 Fungi of Africa Steccherinaceae Taxa named by Leif Ryvarden Fungus species
Junghuhnia africana
[ "Biology" ]
118
[ "Fungi", "Fungus species" ]
54,179,011
https://en.wikipedia.org/wiki/Ellipticine
Ellipticine is a tetracyclic alkaloid first extracted from trees of the species Ochrosia elliptica and Rauvolfia sandwicensis, which inhibits the enzyme topoisomerase II via intercalative binding to DNA. Natural occurrence and synthesis Ellipticine is an organic compound present in several trees of the genera Ochrosia, Rauvolfia, and Aspidosperma in the family Apocynaceae. It was first isolated in 1959 from Ochrosia elliptica Labill., a flowering tree native to Australia and New Caledonia which gives the alkaloid its name, and was synthesised by Robert Burns Woodward later the same year. Biological activity Ellipticine is a known intercalator, capable of inserting into a DNA strand between base pairs. In its intercalated state, ellipticine binds strongly and lies parallel to the base pairs, increasing the superhelical density of the DNA. Intercalated ellipticine binds directly to topoisomerase II, an enzyme involved in DNA replication, inhibiting the enzyme and resulting in powerful antitumour activity. In clinical trials, ellipticine derivatives have been observed to induce remission of tumour growth, but they are not used for medical purposes due to their high toxicity; side effects include nausea and vomiting, hypertension, cramp, pronounced fatigue, mouth dryness, and mycosis of the tongue and oesophagus. Further DNA damage results from the formation of covalent DNA adducts following enzymatic activation of ellipticine by cytochromes P450 and peroxidases, meaning that ellipticine is classified as a prodrug. References Indole alkaloids Isoquinoline alkaloids Carbazoles Heterocyclic compounds with 4 rings Nitrogen heterocycles DNA replication inhibitors Prodrugs Topoisomerase inhibitors DNA intercalaters Plant toxins
Ellipticine
[ "Chemistry" ]
389
[ "Chemical ecology", "Indole alkaloids", "Isoquinoline alkaloids", "Plant toxins", "Prodrugs", "Alkaloids by chemical classification", "Chemicals in medicine" ]
54,179,231
https://en.wikipedia.org/wiki/Architecture%20of%20Lagos
The architecture of Lagos is an eclectic mix of different types, styles and periods. Buildings range from traditional vernacular architecture to tropical modern architecture, or a mixture of the two. The oldest European-styled buildings date back to the 17th century. Elements of Portuguese architecture introduced by returnee ex-slaves from Brazil and the Caribbean, although present all over the city, predominate in places like Lagos Island, Surulere and Yaba municipalities. Colonial-styled architecture flourished during the Lagos Colony. The Lagos skyline is a mixture of modern high-rise buildings, skyscrapers, dilapidated buildings and slums. Lagos has the tallest skyline in Nigeria. Skyscraper construction commenced in the 1960s. Several office and mixed-use buildings have been built by international developers and private equity firms. Modern buildings and structures have continued to be developed to the present day. Pre-colonial architecture The pre-colonial architecture of the ancient city of "Eko" ('Warcamp'), as Lagos was initially known by its Bini and then Awori colonists, was largely of the type that characterised the Yoruba, namely rectangular houses with central inner courtyards and, in well-planned areas, potsherd-tiled pavements. The palace of the king differed slightly in style, with carved pillars and a series of inter-connected impluvia present. Earlier Benin influence was evident in the persistence in coastal areas of round, hip-roofed homes. Both sets of colonists extensively utilized fractals in their architecture, as was the widespread practice in Africa at the time. Colonial architecture The advent of colonialism in the 1800s was one of the key factors in the drastic, irreversible alteration of 'indigenous' Lagosian architecture. The desire of the English Crown for inexpensive colonialism in humid, tropical West Africa, coupled with the construction of the C.M.S. building in Badagry by European missionaries, set the precedent for the mass importation of cheap building materials, in particular cement and corrugated iron sheeting. These two materials continue to dominate the building industry to the present day. From the British standpoint, adobe structures were out of the question, so standard colonial-issue tropical houses characterized by deep verandahs, overhanging eaves and classical forms were introduced, particularly in planned European quarters like the central areas of Yaba, Surulere and Lagos Island. This inevitably led to emulation by the resident natives, who immediately took to the foreign building materials to indicate their progressiveness. The other significant factor that impacted upon Lagos's architectural landscape was the slave abolition act, passed on 25 May 1807, which saw the repatriation of thousands of Yoruba ex-slaves and freemen (known as Agudas from Cuba or Saros from Brazil) from all over the Americas, but particularly Brazil and Cuba, to the country of their roots. Most of these were skilled artisans and masons and brought with them a much grander style of architecture: Brazilian Baroque architecture. This style incorporated mostly Portuguese architecture with a few trademark motifs of their own, like floral motifs and chunky concrete columns. The refinement of Brazilian Baroque quickly found acceptance among the local elite who, before long, made Afro-Brazilian architects much in demand. Many of these buildings have since been pulled down to make room for newer building projects, and calls for conservation have not been heeded by authorities.
Examples of Brazilian Baroque include Ilojo Bar, Lagos island, which was designed in 1856 by Afro-Brazilian architect Victor Olaiya and Shitta-Bey mosque with its Ottoman influences by João Baptista Da Costa in 1894. These homes are now mostly historical museums. Post-Colonial Post-colonial architecture in Lagos is a preponderance of imported motifs, regional trendism, and differing architectural ideals. The trendy nature of Lagosians has resulted in the accumulation of different building styles over the years. Of these, the most dominant strain is the post-modernist style. According to Pruncal-Ogunsote; "It usually explores simple geometrical forms but often with exposed parapet walls. Characteristic is the use of concrete external walls supplemented by concrete, steel or aluminum sun shading devices (Senate Building at ABU Zaria, Management House in Lagos, CSS Bookshop House in Lagos). This style is well represented by the structures created by architects of the older generation who were trained abroad in modern ideas such as Low-trop buildings for the sprawling masses, although the influential John Godwin (GHK Architects), was a notable exception to this." In recent years, however, Afromodernism as a movement has been gaining traction particularly amongst the younger generation of architects and it is not unusual in the present day to stumble across seemingly post-modernist architecture with an African twist a la Sterling Bank, Jakande. Notable buildings Iga Idunganran Ilojo Bar Jaekel House Water House National Arts Theatre National Stadium Bookshop House Independence House NECOM House City Hall, Lagos St. Nicholas Building Federal Palace Hotel Union Bank Building Eko Hotels and Suites Tejuosho Market Ikeja City Mall The Wings Towers Shitta-Bey Mosque Cathedral Church of Christ, Lagos Heritage Place Nestoil Tower Presidential Lodge 4 Bourdillon Kingsway Tower References Architecture in Lagos Lagos
Architecture of Lagos
[ "Engineering" ]
1,057
[ "Architecture by city", "Architecture" ]
54,182,471
https://en.wikipedia.org/wiki/Anomalous%20oxygen
Anomalous oxygen is hot atomic and singly ionized oxygen believed to be present in Earth's exosphere above 500 km near the poles during their respective summers. This additional component augmenting mainly the hydrogen and helium exosphere is able to explain the unexpectedly high drag forces on satellites passing near the poles in their summers. Anomalous oxygen densities are included in the NRLMSISE-00 models of Earth's atmosphere. References Oxygen Atmospheric chemistry Ionosphere
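The connection to satellite drag can be made concrete with a back-of-the-envelope estimate. The snippet below is an illustrative sketch only: the density, drag coefficient, area and orbital speed are assumed round numbers, not outputs of the NRLMSISE-00 model, and it simply shows that the drag force scales linearly with whatever neutral density (including any anomalous-oxygen contribution) an atmosphere model supplies.

```python
# Illustrative satellite drag estimate: F_drag = 0.5 * rho * Cd * A * v^2.
# All values below are assumed round numbers for demonstration, not outputs
# of NRLMSISE-00; a hot-oxygen enhancement of rho raises F proportionally.

rho = 1e-13     # assumed total mass density near 500 km altitude, kg/m^3
Cd = 2.2        # typical satellite drag coefficient
A = 1.0         # cross-sectional area, m^2
v = 7.6e3       # orbital speed, m/s

F_drag = 0.5 * rho * Cd * A * v ** 2
print(f"drag force: {F_drag:.2e} N")   # roughly 6e-6 N for these numbers
```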
Anomalous oxygen
[ "Physics", "Chemistry", "Astronomy" ]
96
[ "Astrophysics stubs", "Astronomy stubs", "nan", "Astrophysics" ]
41,263,617
https://en.wikipedia.org/wiki/Quantum%20stochastic%20calculus
Quantum stochastic calculus is a generalization of stochastic calculus to noncommuting variables. The tools provided by quantum stochastic calculus are of great use for modeling the random evolution of systems undergoing measurement, as in quantum trajectories. Just as the Lindblad master equation provides a quantum generalization to the Fokker–Planck equation, quantum stochastic calculus allows for the derivation of quantum stochastic differential equations (QSDE) that are analogous to classical Langevin equations. For the remainder of this article stochastic calculus will be referred to as classical stochastic calculus, in order to clearly distinguish it from quantum stochastic calculus. Heat baths An important physical scenario in which a quantum stochastic calculus is needed is the case of a system interacting with a heat bath. It is appropriate in many circumstances to model the heat bath as an assembly of harmonic oscillators. One type of interaction between the system and the bath can be modeled (after making a canonical transformation) by the following Hamiltonian: where is the system Hamiltonian, is a vector containing the system variables corresponding to a finite number of degrees of freedom, is an index for the different bath modes, is the frequency of a particular mode, and are bath operators for a particular mode, is a system operator, and quantifies the coupling between the system and a particular bath mode. In this scenario the equation of motion for an arbitrary system operator is called the quantum Langevin equation and may be written as: where and denote the commutator and anticommutator (respectively), the memory function is defined as: and the time dependent noise operator is defined as: where the bath annihilation operator is defined as: Oftentimes this equation is more general than is needed, and further approximations are made to simplify the equation. White noise formalism For many purposes it is convenient to make approximations about the nature of the heat bath in order to achieve a white noise formalism. In such a case the interaction may be modeled by the Hamiltonian where: and where are annihilation operators for the bath with the commutation relation , is an operator on the system, quantifies the strength of the coupling of the bath modes to the system, and describes the free system evolution. This model uses the rotating wave approximation and extends the lower limit of to in order to admit a mathematically simple white noise formalism. The coupling strengths are also usually simplified to a constant in what is sometimes called the first Markov approximation: Systems coupled to a bath of harmonic oscillators can be thought of as being driven by a noise input and radiating a noise output. The input noise operator at time is defined by: where , since this operator is expressed in the Heisenberg picture. Satisfaction of the commutation relation allows the model to have a strict correspondence with a Markovian master equation. In the white noise setting described so far, the quantum Langevin equation for an arbitrary system operator takes a simpler form: For the case most closely corresponding to classical white noise, the input to the system is described by a density operator giving the following expectation value: Quantum Wiener process In order to define quantum stochastic integration, it is important to define a quantum Wiener process: This definition gives the quantum Wiener process the commutation relation . 
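As an illustrative sketch of the standard forms these passages refer to (common quantum-optics conventions, e.g. those of Gardiner and Collett, and not necessarily the notation originally used here), the white-noise quantum Langevin equation, the input noise operator and the quantum Wiener process can be written as follows, where H_sys is the system Hamiltonian, c the system coupling operator, gamma the damping rate and b_0(omega) the initial bath operators:

```latex
% Illustrative standard forms (Gardiner--Collett conventions); symbols are
% the usual H_sys (system Hamiltonian), c (system coupling operator),
% gamma (damping rate), b_0(omega) (initial bath operators).
\begin{align}
  % white-noise quantum Langevin equation for a system operator a(t):
  \dot a(t) &= -\frac{i}{\hbar}\,[a, H_{\mathrm{sys}}]
      - [a, c^\dagger]\!\left(\tfrac{\gamma}{2}\, c + \sqrt{\gamma}\, b_{\mathrm{in}}(t)\right)
      + \left(\tfrac{\gamma}{2}\, c^\dagger + \sqrt{\gamma}\, b_{\mathrm{in}}^\dagger(t)\right)[a, c], \\
  % input noise operator built from the initial bath modes:
  b_{\mathrm{in}}(t) &= \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \!d\omega\;
      e^{-i\omega (t - t_0)}\, b_0(\omega), \\
  % quantum Wiener process and its commutation relation:
  B(t, t_0) &= \int_{t_0}^{t} b_{\mathrm{in}}(t')\, dt', \qquad
  \left[ B(t, t_0),\, B^\dagger(t, t_0) \right] = t - t_0 .
\end{align}
```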
The property of the bath annihilation operators in () implies that the quantum Wiener process has an expectation value of: The quantum Wiener processes are also specified such that their quasiprobability distributions are Gaussian by defining the density operator: where . Quantum stochastic integration The stochastic evolution of system operators can also be defined in terms of the stochastic integration of given equations. Quantum Itô integral The quantum Itô integral of a system operator is given by: where the bold (I) preceding the integral stands for Itô. One of the characteristics of defining the integral in this way is that the increments and commute with the system operator. Itô quantum stochastic differential equation In order to define the Itô QSDE, it is necessary to know something about the bath statistics. In the context of the white noise formalism described earlier, the Itô QSDE can be defined as: where the equation has been simplified using the Lindblad superoperator: This differential equation is interpreted as defining the system operator as the quantum Itô integral of the right-hand side, and is equivalent to the Langevin equation (). Quantum Stratonovich integral The quantum Stratonovich integral of a system operator is given by: where the bold (S) preceding the integral stands for Stratonovich. Unlike the Itô formulation, the increments in the Stratonovich integral do not commute with the system operator, and it can be shown that: Stratonovich quantum stochastic differential equation The Stratonovich QSDE can be defined as: This differential equation is interpreted as defining the system operator as the quantum Stratonovich integral of the right-hand side, and is in the same form as the Langevin equation (). Relation between Itô and Stratonovich integrals The two definitions of quantum stochastic integrals relate to one another in the following way, assuming a bath with defined as before: Calculus rules Just as with classical stochastic calculus, the appropriate product rule can be derived for Itô and Stratonovich integration, respectively: As is the case in classical stochastic calculus, the Stratonovich form is the one which preserves the ordinary calculus (which in this case is noncommuting). A peculiarity of the quantum generalization is the necessity to define both Itô and Stratonovich integration in order to prove that the Stratonovich form preserves the rules of noncommuting calculus. Quantum trajectories Quantum trajectories can generally be thought of as the path through Hilbert space that the state of a quantum system traverses over time. In a stochastic setting, these trajectories are often conditioned upon measurement results. The unconditioned Markovian evolution of a quantum system (averaged over all possible measurement outcomes) is given by a Lindblad equation. In order to describe the conditioned evolution in these cases, it is necessary to unravel the Lindblad equation by choosing a consistent unraveling. In the case where the conditioned system state is always pure, the unraveling could be in the form of a stochastic Schrödinger equation (SSE). If the state may become mixed, then it is necessary to use a stochastic master equation (SME). Example unravelings Consider the following Lindblad master equation for a system interacting with a vacuum bath: This describes the evolution of the system state averaged over the outcomes of any particular measurement that might be made on the bath. 
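For orientation, the Itô QSDE, the vacuum Itô rules and product rule, and the vacuum-bath Lindblad master equation referred to in this section take the following standard forms in one common convention (an illustrative sketch; factors and signs differ between references, and the notation is not necessarily that of the original article):

```latex
% Illustrative standard forms (one common convention; not the article's own notation).
\begin{align}
  % Ito QSDE for a system operator a with a single decay channel c into a vacuum bath:
  (\mathbf{I})\; da &= -\frac{i}{\hbar}[a, H_{\mathrm{sys}}]\, dt
      + \gamma\!\left( c^\dagger a\, c - \tfrac{1}{2} c^\dagger c\, a - \tfrac{1}{2} a\, c^\dagger c \right) dt
      + \sqrt{\gamma}\,[a, c^\dagger]\, dB(t) - \sqrt{\gamma}\, dB^\dagger(t)\,[a, c], \\
  % vacuum Ito rules and the Ito product rule (the Stratonovich rule has no (da)(db) term):
  dB\, dB^\dagger &= dt, \qquad dB^\dagger dB = dB\, dB = dB^\dagger dB^\dagger = 0, \qquad
  d(ab) = (da)\, b + a\,(db) + (da)(db), \\
  % Lindblad master equation for a system coupled to a vacuum bath:
  \dot\rho &= -\frac{i}{\hbar}\,[H_{\mathrm{sys}}, \rho]
      + \gamma\!\left( c\,\rho\, c^\dagger - \tfrac{1}{2} c^\dagger c\, \rho - \tfrac{1}{2} \rho\, c^\dagger c \right).
\end{align}
```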
The following describes the evolution of the system conditioned on the results of a continuous photon-counting measurement performed on the bath: where are nonlinear superoperators and is the photocount, indicating how many photons have been detected at time and giving the following jump probability: where denotes the expected value. Another type of measurement that could be made on the bath is homodyne detection, which results in quantum trajectories given by the following SME: where is a Wiener increment satisfying: Although these two SMEs look wildly different, calculating their expected evolution shows that they are both indeed unravelings of the same Lindblad master equation: Computational considerations One important application of quantum trajectories is reducing the computational resources required to simulate a master equation. For a Hilbert space of dimension d, the number of real numbers required to store the density matrix is of order d², and the time required to compute the master equation evolution is of order d⁴. Storing the state vector for an SSE, on the other hand, only requires a number of real numbers of order d, and the time to compute the trajectory evolution is only of order d². The master equation evolution can then be approximated by averaging over many individual trajectories simulated using the SSE, a technique sometimes referred to as the Monte Carlo wave-function approach. Although the number of calculated trajectories n must be very large in order to accurately approximate the master equation, good results can be obtained for trajectory counts much less than d². Not only does this technique yield a faster computation time, but it also allows for the simulation of master equations on machines that do not have enough memory to store the entire density matrix. References Quantum optics Stochastic calculus
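The Monte Carlo wave-function idea can be made concrete with a short numerical sketch. The example below is an illustrative toy (the two-level system, the jump operator and the rate gamma are assumptions for the demonstration, not taken from the article): it unravels spontaneous emission into quantum-jump trajectories and checks that the trajectory-averaged excited-state population approaches the Lindblad-equation result exp(-gamma t).

```python
import numpy as np

# Toy Monte Carlo wave-function (quantum-jump) unraveling of spontaneous emission.
# A two-level atom starting in |e> decays at rate gamma into a vacuum bath; averaging
# many stochastic trajectories approximates the Lindblad-equation prediction
# <P_e(t)> = exp(-gamma * t).  All parameter values are purely illustrative.

rng = np.random.default_rng(seed=1)

gamma = 1.0      # decay rate (arbitrary units)
dt = 2e-3        # time step; requires gamma * dt << 1 for this first-order scheme
steps = 2000
n_traj = 500

# Basis ordering |e> = [1, 0], |g> = [0, 1]; jump operator c = sqrt(gamma) |g><e|.
c = np.sqrt(gamma) * np.array([[0.0, 0.0],
                               [1.0, 0.0]])
cdc = c.conj().T @ c                      # c^dagger c = gamma |e><e|
no_jump = np.eye(2) - 0.5 * dt * cdc      # first-order no-jump propagator (H_sys = 0)

pe_avg = np.zeros(steps)                  # trajectory-averaged excited-state population

for _ in range(n_traj):
    psi = np.array([1.0 + 0.0j, 0.0 + 0.0j])          # start in |e>
    for k in range(steps):
        pe_avg[k] += abs(psi[0]) ** 2
        p_jump = dt * np.real(psi.conj() @ cdc @ psi)  # photon-detection probability
        if rng.random() < p_jump:
            psi = c @ psi                              # quantum jump: a photon is counted
        else:
            psi = no_jump @ psi                        # smooth non-Hermitian evolution
        psi = psi / np.linalg.norm(psi)                # renormalize after either branch

pe_avg /= n_traj
t = dt * np.arange(steps)
print("max deviation from exp(-gamma t):", np.max(np.abs(pe_avg - np.exp(-gamma * t))))
```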
Quantum stochastic calculus
[ "Physics" ]
1,738
[ "Quantum optics", "Quantum mechanics" ]
41,264,425
https://en.wikipedia.org/wiki/Elasto-capillarity
Elasto-capillarity is the ability of capillary force to deform an elastic material. From the viewpoint of mechanics, elastocapillarity phenomena essentially involve competition between the elastic strain energy in the bulk and the energy on the surfaces/interfaces. In the modeling of these phenomena, some challenging issues are, among others, the exact characterization of energies at the micro scale, the solution of strongly nonlinear problems of structures with large deformation and moving boundary conditions, and instability of either solid structures or droplets/films. The capillary forces are generally negligible in the analysis of macroscopic structures but often play a significant role in many phenomena at small scales. Bulk elasticity When depositing a droplet on a solid surface with contact angle θ, the horizontal force balance is described by Young's equation. However, there is also a vertical force balance which, while often ignored, can be written as: Where is the force per unit length in the vertical direction, is the surface tension of a liquid, is the Young's modulus of a substrate, and is the deformation of the substrate. This gives a length scale for the deformation of bulk materials caused by the surface tension force. For example, if a water ( ~ 72 mN/m) droplet is deposited on glass ( ~ 70 GPa), this gives ~10⁻¹² m, which is typically negligible. However, if a water droplet is deposited on PDMS ( ~ 300 kPa), this causes the deformation to be ~10⁻⁶ m, which is on the micron scale. This can have a great impact on micro/nanotechnology applications where the length scale is comparable and "soft" photoresists are used. Bendocapillary length The bendo-capillary length of a flexible sheet is defined as: where B is the bending modulus of the elastic material and γ is the surface tension of the liquid. This provides a comparison between bending stiffness (elasticity) and surface tension (capillarity). An elastic structure will be significantly deformed once its length is larger than the elasto-capillary length, which can be explained by the gain in surface energy of the material being larger than the elastic energy stored in bending. Capillary rise between parallel plates In the case of capillary rise between two parallel plates, the height of capillary rise can be predicted by Jurin's height if the plates are rigid. The longer the plates, the more flexible they become; consequently, the plates coalesce as a result of the deformation induced by the capillary force. As observed, the length of capillary rise Lwet between elastic plates increases linearly with the total length of the plates L, which keeps the dry length Ld = L - Lwet nearly constant. By balancing the gain in surface energy from the capillary force against the loss of elastic energy from bending a flexible sheet, and minimizing with respect to Ld, the dry length was found to be: Where is the elastocapillary length of the sheets and w is the distance between the two parallel sheets. This Ld sets the minimum length for parallel sheets to collapse: sheets spontaneously coalesce if they are longer than Ld. The above result can be generalized to multiple parallel plates when N elastic plates are used. By assuming that a bundle of N sheets is N times more rigid than a single sheet, such a system can be treated as two bundles of N/2 sheets separated by a distance Nw/2. Thus the dry length can be written as: Capillary origami Unlike normal origami, capillary origami is the phenomenon in which the folding of an elastic sheet is driven by capillary force. 
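The scales quoted above can be checked directly. The snippet below is a minimal numerical sketch using the approximate material constants mentioned in the text; the bending modulus B used for the sheet example is an assumed illustrative value, since none is given here.

```python
# Quick order-of-magnitude check of the elastocapillary scales discussed above.
# gamma / E gives the surface-tension-induced deformation scale of a bulk substrate;
# sqrt(B / gamma) gives the bendo-capillary length of a thin sheet.
# Material values are the approximate figures quoted in the text; the sheet's
# bending modulus B is an assumed illustrative value.

gamma = 72e-3          # surface tension of water, N/m

E_glass = 70e9         # Young's modulus of glass, Pa (approximate)
E_pdms = 300e3         # Young's modulus of soft PDMS, Pa (approximate)

print(f"deformation scale on glass: {gamma / E_glass:.1e} m")   # ~1e-12 m, negligible
print(f"deformation scale on PDMS:  {gamma / E_pdms:.1e} m")    # ~1e-6 m, micron scale

B = 1e-9               # assumed bending modulus of a thin elastic sheet, J (i.e. N*m)
L_ec = (B / gamma) ** 0.5
print(f"bendo-capillary length for B = {B:g} J: {L_ec:.1e} m")  # ~1e-4 m for this B
```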
This phenomenon can only be seen when the characteristic length of an elastic sheet is longer than the elasto-capillary length, and it can be used for self-assembly in micro- and nano-scale applications. In some cases, a high voltage has been used to actuate a folded structure using electrostatic energy. Young–Laplace Equation The capillary pressure developed within a liquid droplet/film can be calculated using the Young–Laplace equation: where: is the difference in pressure across the liquid interface (Pa), is the surface tension of the liquid (N/m), is the unit normal pointing out of the surface, and are the principal radii of curvature at any point on the free surface of the liquid film or droplet (m). If the liquid wets the contacting surfaces, then this pressure difference is negative, i.e. the pressure inside the liquid is less than the ambient pressure; if the liquid does not wet the contacting surfaces, then the pressure difference is positive and the liquid pressure is higher than the ambient pressure. Examples of elastocapillarity The coalescence that happens in a brush after it is removed from water is an example of elastocapillarity. Elastocapillary wrapping driven by drop impact is another example. Most small-scale devices, such as microelectromechanical systems (MEMS), the magnetic head-disk interface (HDI), and the tip of an atomic force microscope (AFM), in which liquids are present in confined regions during fabrication or during operation, can experience elastocapillary phenomena. In these devices, where the spacing between solid structures is small, intermolecular interactions become significant. The liquid can exist in these small-scale devices due to contamination, condensation or lubrication. The liquid present in these devices can increase the adhesive forces drastically and cause device failure. Elastocapillarity in contact between rough surfaces Every surface, though it appears smooth at the macro scale, has roughness at the micro scale, which can be measured by a profilometer. The wetting liquid between contacting rough surfaces develops a sub-ambient pressure inside itself, which forces the surfaces toward more intimate contact. Since the pressure drop across the liquid is proportional to the curvature at the free surface, and this curvature, in turn, is approximately inversely proportional to the local spacing, the thinner the liquid bridge, the greater the pull effect: where are the liquid-solid contact angles for the lower and upper surfaces, respectively, and is the gap between the two solids at the location of the free surface of the liquid. These tensile stresses pull the two surfaces into closer contact, while the compressive stresses due to the elastic deformation of the surfaces tend to resist them. Two scenarios can occur in this case: 1. The tensile and compressive stresses come into balance, in which case the gap between the two surfaces is on the order of the roughness of the surfaces, or 2. The tensile stresses overcome the compressive stresses and the two surfaces come into near-complete contact, in which case the gap between the surfaces is a small fraction of the surface roughness. The latter case is the reason for the failure of most microscale devices. An estimate of the tensile stresses exerted by the capillary film can be obtained by dividing the adhesion force, , between the two surfaces by the area wetted by the liquid film, . 
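The two pressure expressions this section relies on can be stated explicitly. The block below is a hedged reconstruction using the standard forms (the quantities match those listed above, but the exact expressions and sign conventions of the original article are not shown here): the Young–Laplace pressure jump across a curved interface, and the capillary pressure in a thin wetting bridge of local gap h between surfaces with contact angles θ1 and θ2.

```latex
% Standard forms corresponding to the quantities listed above (illustrative
% reconstruction; sign conventions vary between references).
\begin{align}
  \Delta P &= \gamma \left( \frac{1}{R_1} + \frac{1}{R_2} \right)
    && \text{Young--Laplace pressure across a curved interface,} \\
  p_{\mathrm{cap}} &\approx -\,\gamma\, \frac{\cos\theta_1 + \cos\theta_2}{h}
    && \text{pressure in a thin liquid bridge of local gap } h .
\end{align}
```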
For relatively smooth surfaces, the magnitude of the capillary pressure is predicted to be large, so capillary pressures of large magnitude are anticipated. Much work has been done to ascertain whether there may be some practical limit to the development of such negative pressures. References Elasticity (physics) Fluid dynamics Articles containing video clips
Elasto-capillarity
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,515
[ "Physical phenomena", "Elasticity (physics)", "Deformation (mechanics)", "Chemical engineering", "Piping", "Physical properties", "Fluid dynamics" ]
41,270,450
https://en.wikipedia.org/wiki/Thermal%20transpiration
Thermal transpiration (or thermal diffusion) refers to the thermal force on a gas due to a temperature difference. Thermal transpiration causes a flow of gas in the absence of any other pressure difference, and is able to maintain a certain pressure difference called thermomolecular pressure difference in a steady state. The effect is strongest when the mean free path of the gas molecules is comparable to the dimensions of the gas container. Thermal transpiration appears as an important correction in the readings of vapor pressure thermometers, and the effect is historically famous as being an explanation for the rotation of the Crookes radiometer. See also Knudsen pump — a gas pump with no moving parts which functions via thermal transpiration. Thermophoresis (Soret effect) — diffusion of colloidal particles in a liquid, induced by a temperature gradient. References Non-equilibrium thermodynamics
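In the free-molecular (Knudsen) limit, where the mean free path is much larger than the connecting channel, the steady-state thermomolecular pressure difference follows a well-known relation, shown below as a standard illustration (it is not quoted from the article itself): two vessels held at different temperatures settle at different pressures.

```latex
% Knudsen's free-molecular-flow limit for the steady-state thermomolecular
% pressure ratio between two vessels held at temperatures T_1 and T_2:
\[
  \frac{p_1}{p_2} = \sqrt{\frac{T_1}{T_2}}
  \qquad \Longleftrightarrow \qquad
  \frac{p_1}{\sqrt{T_1}} = \frac{p_2}{\sqrt{T_2}} \, .
\]
```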
Thermal transpiration
[ "Physics", "Chemistry", "Mathematics" ]
182
[ "Thermodynamics stubs", "Non-equilibrium thermodynamics", "Thermodynamics", "Physical chemistry stubs", "Dynamical systems" ]
41,270,601
https://en.wikipedia.org/wiki/Hi%20Score%20Girl
is a Japanese manga series written and illustrated by Rensuke Oshikiri that ran from October 2010 to September 2018. The story revolves around the life of gamer Haruo Yaguchi, the arcade game scene of the 1990s (particularly fighting games), and his relationship with quiet gamer Akira Ono, as we follow the characters from about age 12 to about age 17. Known as a 1990s arcade romantic comedy, the series is notable for its unique art style, and very precise depictions of the multitude of gaming software and hardware featured. An anime television series adaptation by J.C.Staff and SMDE aired from July to September 2018. A second season aired from October to December 2019. Characters / (Japanese); Johnny Yong Bosch (English) A young man with an affinity for gaming, nicknamed "Beastly Fingers Haruo". He met his match during a fateful encounter with Akira Ono while playing Street Fighter II. Undeterred after losing, he still sees her as an opponent he must challenge and eventually beat. While he starts as a snarky brat with an ego, he eventually grows out of it. However, he never abandons his dedication and love for gaming, which almost borders on unhealthy obsession. On the bright side, this pure passion for gaming is what leads him to find some of his closest friends. (Japanese); Christine Marie Cabanos (English) A daughter of the Ono Zaibatsu, Akira is rich, popular, and multi-talented – the polar opposite of Haruo. To escape the strict educational regimen she faces at home, she sneaks away to play in arcades where she showcases her exceptional gaming ability. She initially encounters Haruo during a match of Street Fighter II, and bonds with him over their love for gaming. She never talks and communicates solely through gestures and facial expressions. She also appears as a guest support character in Million Arthur: Arcana Blood. (Japanese); Erika Harlacher (English) Introduced as a junior high classmate of Haruo, Hidaka was introverted, lonely and mainly studied. However, during the years when Ono is forced to be overseas by the Ono family, Hidaka begins spending time with Haruo after Hidaka's family store adds arcade machines. Hidaka's natural instinct for fighting games proves to be almost as remarkable as Ono's. As the third person in the love triangle, her battles with both Haruo and Oono are eventually realized as fateful video game battles. Nikotama, leader of the local gaming team, mentors Koharu. In Hi Score Girl DASH, a spinoff manga, we meet Hidaka as a mature woman who has become a middle-school teacher. Hidaka Shop's operators. (Japanese); Lucien Dodge (English) One of Haruo's classmates during junior high and high school who is his best friend. He also enjoys arcades, though not to the same degree as Haruo. He has a knack for attracting the ladies, and is quick to pick up on the bizarre love triangle formed by his classmates. Haruo's middle school years classroom 2-3 teacher. (Japanese); Kyle McCarley (English) Haruo's classmate in classroom 6-2 and again in high school. A snobby kid who tries to come off as cultured and suave, his attempts to woo Akira are met with failure. During high school, he begins to hang around Haruo and Miyao. (Japanese); Cherami Leigh (English) The bespectacled official instructor of the Ono household. 
A totalitarian authoritarian who will stop at nothing to make sure Akira is nothing short of perfect and worthy as an heir to the Ono family name, she is absolutely against any kind of fun within the Ono household, which creates friction amongst its inhabitants. After seeing the effect Haruo's had on Akira and the error of her ways, she begins to relent a little, with emphasis on the word "little". If one doesn't work hard enough, she piles on more work. If one works too hard, she rewards them by piling on more work. (Japanese); Cristina Vee (English) A girl who went to school with Haruo from elementary to junior high, then seen at the same all-girl high school with Koharu. She is grotesque in appearance and crass in demeanor, though she apparently isn't self aware of that. She also has a noticeable lisp. (Japanese); Cindy Robinson (English) Haruo's energetic mother. (The disposition of Yagouchi's father is unknown and deliberately never mentioned in the show or anime; he may be deceased, divorced or perhaps simply absent due to work.) Despite his shortcomings, she's very supportive of her son in her own quirky and loving manner. Whenever there's company, she's quick to offer her special stack of "Hotcakes Straight from a Manga". (Japanese); Joe Ochman (English) An elderly man that works as Akira's chauffeur. He is a self-proclaimed pachinko addict, and has a nasty habit of running over Haruo with the family limousine. (Japanese); Cristina Vee (English) Late in season one, Haruo is shocked to learn that Akira has an older sister, the similar-looking but very different college-age Ono Makoto. Makoto, both flakey and defiant, explicitly rejects the harsh responsibilities of the Ono family. Those responsibilities fall to Akira, when Akira is just a grade-schooler. Conflicted by the effect of her actions, Makoto interacts with the three main characters, the school-friends, and Haruo's mother, in Makoto's attempts to support Akira's side in the final years of the love triangle. The Makoto character oscillates between broad comedy and the most intense moments of the story. (Japanese); Joe Ochman (English) A guidance counselor at Haruo's middle school, who likes to play video games as well. He resembles Lau Chan from Virtua Fighter series. Daughter of an arcade proprietor, Felicia is the head of the "Mizonokuchi Force", a band of gamers who operate in Kawasaki City. She takes Koharu under her wing after witnessing her skill. Aulbath Ōimachi Sagat Takdanobaba Blanka Kuhombutsu Sasquatch Tamagawagakuenmae Video game characters Various video game characters were credited for redubbing for the television series, except for Phobos/Huitzil, Driver, Hell Chaos, EDI.E, Holmes, Watson, and Geese. Street Fighter A USA fighter introduced in Street Fighter II: The World Warrior. The voices of 'Sonic Boom' and 'Faneffu' were dubbed for the television series. A Soviet Union fighter introduced in Street Fighter II: The World Warrior, and Akira's favourite character. /Akuma A hidden character from Japan, introduced in Super Street Fighter II Turbo. A Japanese fighter introduced in Street Fighter II: The World Warrior. An Indian fighter introduced in Street Fighter II: The World Warrior. A Brazilian fighter introduced in Street Fighter II: The World Warrior. A Chinese fighter introduced in Street Fighter II: The World Warrior. /M.Bison A fighter from the Thailand stage, introduced in Street Fighter II: The World Warrior. /Charlie A USA fighter introduced in Street Fighter Alpha: Warriors' Dreams. 
Final Fight A Final Fight Round 1 boss. A Final Fight Round 3 boss. A Final Fight playable character. Darkstalkers /Huitzil A Darkstalkers fighter, and Koharu's favourite character. A Darkstalkers fighter. A Darkstalkers fighter. Ghosts 'n Goblins The player character from Ghosts 'n Goblins. Out Run The driver from Out Run. Puzzle & Action: Tant-R A detective from Puzzle & Action: Tant-R, resembles Sherlock Holmes. A detective from Puzzle & Action: Tant-R, resembles Dr. Watson. Genpei Tōma Den A Genpei Tōma Den character. A Genpei Tōma Den stage 46 (Kamakura) boss. Puzzle Bobble /Bub The green dinosaur player character in Puzzle Bobble. Fatal Fury A Fatal Fury fighter. Splatterhouse The Splatterhouse stage 7 final boss. Hammerin' Harry /Harry The player character from Hammerin' Harry. Gaming machines Haruo's video game devices. Media Manga Oshikiri launched the manga in Square Enix's Monthly Big Gangan on October 25, 2010, and ended its serialization on September 25, 2018 in the tenth 2018 issue of the magazine. The series has been published in ten tankōbon volumes, with the first volume released on February 25, 2012, and the tenth and final volume released on March 25, 2019. Square Enix Manga & Books licensed the manga in English, with the first volume released on February 25, 2020, and the last on January 17, 2023. The December 2019 issue of Monthly Big Gangan announced that a spinoff manga titled Hi Score Girl DASH focusing on Koharu Hidaka, now a middle school teacher, would be in the magazine's next issue on December 25. Anime Monthly Big Gangan announced in December 2013 that an anime adaptation was green-lit. In March 2018, the anime adaptation was confirmed to be a television series animated by SMDE, with production by J.C. Staff. It aired from July 13 to September 28, 2018. It is directed by Yoshiki Yamakawa and written by Tatsuhiko Urahata, featuring character designs by Michiru Kuwabata, and music by Yoko Shimomura. The series runs at 60fps (mainly for the game footage, due to having a 60hz rate) in selected scenes, as opposed to 24fps. The series' opening theme song "New Stranger" was performed by Sora tob sakana, while the series' ending theme song "Hōkago Distraction" was performed by Etsuko Yakushimaru. Netflix streamed the anime on December 24, 2018, with an English dub. The series received 3 OVA episodes titled Extra Stage that premiered on March 20, 2019. A second, nine episode long season aired from October 25 to December 20, 2019, with the staff and cast reprising their roles. The second season's opening theme song "Flash" was performed by Sora tob sakana, while the second season's ending theme song "Unknown World Map" was performed by Etsuko Yakushimaru. Season 2 premiered on Netflix on April 9, 2020 outside of Japan and China. Reception It was number two on the 2013 Takarajimasha's Kono Manga ga Sugoi! Top 20 Manga for Male Readers survey. It was also nominated for the 6th Manga Taishō and the 17th Tezuka Osamu Cultural Prize. It was number nine in the 2013 Comic Natalie Grand Prize. As of December 30, 2012, volume 3 has sold 59,016 copies and as of July 7, 2013, volume 4 has sold 103,734 copies. Legal issues On August 5, 2014, Osaka District Police searched the offices of Square Enix, the publishers of Hi Score Girl, acting on an IP violation claim by SNK Playmore stating that the manga allegedly features over 100 instances of characters from The King of Fighters, Samurai Shodown, and other fighting games. 
In response, Square Enix voluntarily recalled all printed volumes and temporarily suspended publication of future volumes and digital sales. However, the manga continued its run in Monthly Big Gangan. In August 2015, it was reported that Square Enix and SNK Playmore had reached a settlement, cancelling the lawsuit and enabling the manga to be sold again in different formats. See also Pupipō!, another manga series by Rensuke Oshikiri Semai Sekai no Identity, another manga series by Rensuke Oshikiri Geniearth, another manga series by Rensuke Oshikiri References External links Hi Score Girl on Netflix High Score Girl at Square Enix Monthly Big Gangan Square Enix Manga and Books page: Hi Score Girl J.C.STAFF page: HSG, HSG2 Hi Score Girl Anime Official Site Comics set in the 1990s Gangan Comics manga J.C.Staff Netflix original anime Romantic comedy anime and manga Seinen manga Shogakukan franchises Square Enix franchises Tokyo MX original programming Works about video games
Hi Score Girl
[ "Technology" ]
2,608
[ "Works about video games", "Works about computing" ]
61,708,396
https://en.wikipedia.org/wiki/Addition%E2%80%93elimination%20reaction
In chemistry, an addition–elimination reaction is a two-step reaction process consisting of an addition reaction followed by an elimination reaction. This gives an overall effect of substitution, and is the mechanism of the common nucleophilic acyl substitution often seen with esters, amides, and related structures. Another common type of addition–elimination is the reversible reaction of amines with carbonyls to form imines in the alkylimino-de-oxo-bisubstitution reaction, and the analogous interconversion of imines with alternative amine reactants. The hydrolysis of nitriles to carboxylic acids is also a form of addition–elimination. References Addition reactions Elimination reactions Reaction mechanisms
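Schematically, the two steps of a nucleophilic acyl substitution can be written for a generic acyl compound R–C(=O)–X and nucleophile Nu⁻ (a generic textbook scheme, included here only as an illustration of the addition and elimination steps):

```latex
% Generic addition--elimination (nucleophilic acyl substitution) scheme:
% addition of the nucleophile gives a tetrahedral intermediate, and elimination
% of the leaving group X^- restores the carbonyl.
\[
  \mathrm{R{-}C({=}O){-}X} \;+\; \mathrm{Nu}^-
  \;\xrightarrow{\ \text{addition}\ }\;
  \mathrm{R{-}C(O^-)(Nu){-}X}
  \;\xrightarrow{\ \text{elimination}\ }\;
  \mathrm{R{-}C({=}O){-}Nu} \;+\; \mathrm{X}^-
\]
```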
Addition–elimination reaction
[ "Chemistry" ]
152
[ "Reaction mechanisms", "Chemical kinetics", "Physical organic chemistry" ]
61,716,327
https://en.wikipedia.org/wiki/Xie%20Chen
Xie Chen () is a Chinese physicist and a professor of theoretical physics at the California Institute of Technology. Her work spans the fields of condensed matter physics and quantum information, with a focus on many-body quantum mechanical systems with unconventional emergent phenomena. She won the 2020 New Horizons in Physics Prize for "incisive contributions to the understanding of topological states of matter and the relationships between them". Early life and education Chen received her bachelor's degree in physics from Tsinghua University in 2006 and her Ph.D. in theoretical physics from the Massachusetts Institute of Technology in 2012 under the supervision of Isaac Chuang and Xiao-Gang Wen. From 2012 to 2014, Chen was a Miller Research Fellow at the University of California, Berkeley. In 2014, she joined the California Institute of Technology as an assistant professor, and in 2017 she was promoted to associate professor. Research Chen's research centers on novel phases and phase transitions in quantum condensed matter systems. Her major research topics include topological order in strongly correlated systems, dynamics in many-body systems, tensor network representation, and quantum information applications. Awards and honors Simons Investigator (2021) New Horizons in Physics Prize (2020) Sloan Research Fellowship (2017) National Science Foundation Faculty Early Career Award (2017) Miller Research Fellowship, UC Berkeley (2012) Outstanding Chinese Self-financed Students Abroad (one of nine extraordinary prizewinners, 2012) Andrew M. Lockett III Memorial Fund Award for best theoretical physics graduate student at MIT (2011) Whiteman Fellowship for graduate study of physics at MIT (2006) References Living people Year of birth missing (living people) Chinese women physicists Tsinghua University alumni Massachusetts Institute of Technology School of Science alumni California Institute of Technology faculty Theoretical physicists Chinese expatriates in the United States Sloan Research Fellows
Xie Chen
[ "Physics" ]
360
[ "Theoretical physics", "Theoretical physicists" ]
61,716,683
https://en.wikipedia.org/wiki/Erica%20Ollmann%20Saphire
Erica Ollmann Saphire is an American structural biologist and immunologist and a professor at the La Jolla Institute for Immunology. She investigates the structural biology of viruses that cause hemorrhagic fever, such as Ebola, Sudan, Marburg, Bundibugyo, and Lassa viruses. Saphire has served as president and CEO of the La Jolla Institute for Immunology since 2021. Early life and education Saphire earned a Bachelor of Arts in biochemistry and cell biology from Rice University in 1993. She then moved to Scripps Research, where she earned a PhD in molecular biology in 2000. Her doctoral research focused on the crystal structure of a neutralizing antibody against HIV-1. She was an avid rugby player throughout college and graduate school, and toured twice with the United States women's national rugby union team. Career and research After an immunology postdoctoral fellowship at Scripps Research, Saphire joined the faculty in the department of immunology as an assistant professor in 2003. She was promoted to associate professor in 2008 and full professor in 2012. In 2019, she joined the faculty at the La Jolla Institute for Immunology. Saphire solved the first structure of the entire human IgG. The hexameric array predicted the assembly by which IgG could recruit C1q and launch the complement cascade, which Saphire confirmed by obtaining the cryoEM structure of the C1q-IgG complex and hexameric IgG preparations. Saphire is best known for her research on Ebola virus and other causes of viral hemorrhagic fever. She was the first to discover the structure of the Ebola virus surface glycoprotein and predicted that the Ebola virus receptor was located in the endosome rather than on the cell surface. Later, she showed that the Ebola virus VP40 matrix protein can fold into multiple distinct structures. In 2024, Saphire used in situ cryo-electron tomography to illuminate Ebola virus replication factories inside living cells, revealing a hitherto unresolved third and outer layer of the nucleoprotein. Her laboratory has also discovered the structure of the glycoproteins of Sudan virus, Marburg virus, Bundibugyo virus, Lassa virus and LCMV. During field work in West Africa, she followed rodents to study how they spread viruses such as Ebola and Lassa. Saphire attracted national media attention in 2014 when she launched a crowdfunding appeal to raise funds for equipment to assist in research to fight Ebola virus. In recent work, Saphire determined the cryo-electron microscopy structure of the measles virus fusion protein in complex with an antibody and determined that the antibody can trap the fusion protein in an intermediate state, thus halting fusion. Saphire directs the Viral Hemorrhagic Fever Immunotherapeutic Consortium (VIC) and is a strong advocate for strategic collaborations to rapidly develop treatments for Ebola and other severe threats. In 2020, Saphire was named director of the Coronavirus Immunotherapy Consortium (CoVIC), an international effort to evaluate human antibodies against the novel coronavirus, SARS-CoV-2. Her lab also co-led research into COVID-19 mutations with scientists at the Los Alamos National Laboratory. Saphire is also spearheading "America's SHIELD: Strategic Herpesvirus Immune Evasion and Latency Defense" as part of ARPA-H's Antigens Predicted for Broad Viral Efficacy through Computational Experimentation (APECx) program. In 2021, Saphire was appointed president and CEO of the La Jolla Institute for Immunology. She succeeded Dr. 
Mitchell Kronenberg, who had served as institute president since 2003. Saphire is the institute's fifth president and is the first woman to serve in that role. Awards Saphire received the Presidential Early Career Award for Scientists and Engineers and the Global Virus Network's Gallo Award for Scientific Excellence and Leadership. She received the American Society for Biochemistry and Molecular Biology's Young Investigator Award in 2015, the Pantheon Award for Academia, Non-Profit, & Research in 2023, the Marion Spencer Fay Award in 2023 and the Bert & Natalie Vallee Award in Biomedical Science (2023). References Year of birth missing (living people) Living people Structural biologists American molecular biologists American immunologists Rice University alumni Scripps Research faculty American women immunologists 21st-century American biologists 21st-century American women scientists Recipients of the Presidential Early Career Award for Scientists and Engineers
Erica Ollmann Saphire
[ "Chemistry" ]
935
[ "Structural biologists", "Structural biology" ]
61,717,182
https://en.wikipedia.org/wiki/Marija%20Drndic
Marija Drndic (born February 11, 1971) is the Fay R. and Eugene L. Langberg Professor of Physics at the University of Pennsylvania. She works on two-dimensional materials and novel spectroscopic techniques. Early life and education Drndic studied physics and mathematics at Harvard University and spent a year at the University of Cambridge in the Semiconductor Physics Group. At Cambridge, Drndic worked on quantum transport of coupled gases with Michael Pepper. At Harvard University she was a member of Phi Beta Kappa and graduated summa cum laude. Drndic was awarded a Clare Boothe Luce Fellowship, the Harold T. White Prize for Excellence in Teaching and the Robbins Prize from Harvard University. She remained there for her doctoral studies, working with Robert Westervelt on microelectromagnets for cold-atom experiments, before joining the Massachusetts Institute of Technology as a Pappalardo Fellow. As a postdoctoral researcher Drndic worked on electron transport in cadmium selenide nanocrystals. She worked alongside Marc A. Kastner and Moungi Bawendi on novel spectroscopies. Research and career In 2003 Drndic joined the University of Pennsylvania. She was awarded an American Chemical Society Petroleum Research Fund Award in 2004, and has since been supported by the National Science Foundation, the Alfred P. Sloan Foundation and DARPA. In 2005 Drndic was awarded a Presidential Early Career Award for Scientists and Engineers. Her work considers low-dimensional materials including nanowires, nanocrystals and biomaterials. Drndic uses electron beams to image and shape materials. In particular, Drndic works on two-dimensional nanopores, which are nanoscale-sized holes that can be used to detect single molecules. They are typically used for biomolecular analysis, but were unable to sequence DNA. Drndic demonstrated it is possible to use light to control the shape of nanopores, indicating it may be possible to fabricate them using light. By removing individual atoms from the nanopores using ion beams, Drndic demonstrated that the nanopores can also be used in water desalination. She has shown that nanopores can be integrated with field-effect transistors to sense nearby ionic and electrical currents. They can also provide information on the physical and chemical properties of biomolecules including DNA and proteins. Drndic holds several patents for electronic devices and thin-film structures. References 1971 births Living people University of Pennsylvania faculty Alumni of the University of Cambridge Harvard College alumni 21st-century women physicists Nanotechnologists Fellows of the American Physical Society Recipients of the Presidential Early Career Award for Scientists and Engineers
Marija Drndic
[ "Materials_science" ]
552
[ "Nanotechnology", "Nanotechnologists" ]
59,085,826
https://en.wikipedia.org/wiki/Kiwi%20drive
A Kiwi drive is a holonomic drive system of three omni-directional wheels (such as omni wheels or Mecanum wheels), 120 degrees from each other, that enables movement in any direction using only three motors. This is in contrast with non-holonomic systems such as traditionally wheeled or tracked vehicles which cannot move sideways without turning first. This drive system is similar to the Killough platform which achieves omni-directional travel using traditional non-omni-directional wheels in a three-wheel configuration. It was named for the flightless national bird of New Zealand, the Kiwi. Motion When only the front wheel is powered, the chassis will turn and strafe at once. If the back wheels turn the same amount in the same direction, the strafe is cancelled out, so the chassis will only turn. If the back wheels turn half as much in the opposite direction, the turn is cancelled out, so the chassis will only strafe. If the front wheel is not powered, and the back wheels turn the same amount in opposite directions, the chassis will only drive. References Robotics engineering
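The motion rules above follow from simple inverse kinematics. The sketch below assumes a common layout (wheel drive directions tangent to the chassis circle, wheels at 90°, 210° and 330° from the chassis x-axis at radius R); given a desired chassis velocity (vx, vy) and spin rate omega, each wheel's surface speed is the projection of the chassis velocity onto its drive direction plus a rotation term.

```python
import math

# Minimal inverse-kinematics sketch for a kiwi drive (three omni wheels, 120 degrees apart).
# Assumed geometry: wheel i sits at angle a_i from the chassis x-axis at radius R, and its
# rolling (drive) direction is tangent to that circle.  Given a desired chassis velocity
# (vx, vy) and angular rate omega, each wheel's required surface speed is the projection of
# the chassis velocity onto the wheel's drive direction plus the contribution from rotation.

def kiwi_wheel_speeds(vx: float, vy: float, omega: float, R: float = 0.15):
    """Return the three wheel surface speeds (m/s) for the requested chassis motion."""
    wheel_angles = [math.radians(90), math.radians(210), math.radians(330)]  # assumed layout
    speeds = []
    for a in wheel_angles:
        # drive direction is tangent to the chassis circle: (-sin a, cos a)
        speeds.append(-math.sin(a) * vx + math.cos(a) * vy + R * omega)
    return speeds

# Example: pure sideways strafe along +x with no rotation.
print(kiwi_wheel_speeds(vx=0.5, vy=0.0, omega=0.0))
# Example: spin in place.
print(kiwi_wheel_speeds(vx=0.0, vy=0.0, omega=1.0))
```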
Kiwi drive
[ "Technology", "Engineering" ]
232
[ "Computer engineering", "Robotics engineering" ]
59,087,742
https://en.wikipedia.org/wiki/Killough%20platform
A Killough platform is a three-wheel drive system that uses traditional wheels to achieve omni-directional movement without the use of omni-directional wheels (such as omni wheels/Mecanum wheels). The platform was designed by Stephen Killough, after whom it is named, with help from Francois Pin. Killough wanted to achieve omni-directional movement without using the complicated six-motor arrangement required for a controllable three-caster-wheel system (one motor to control wheel rotation and one motor to control pivoting of each wheel). He first looked into solutions by other inventors that used rollers on the rims of larger wheels but considered them flawed in some critical way. This led to the Killough system. With Francois Pin, who helped with the computer control and choreography aspects of the design, Killough readied a public demonstration in 1994. This led to a partnership with Cybertrax Innovative Technologies in 1996, which was developing a motorized wheelchair. By combining the motion of two of the wheels, the vehicle can move in the direction of the perpendicular wheel, or, by rotating all the wheels in the same direction, the vehicle can rotate in place. By using the resultant motion from the vector addition of the wheel velocities, a Killough platform is able to achieve omni-directional motion. References Robotics engineering
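The vector-addition argument in the final paragraph can be written compactly; the formulation below is illustrative rather than taken from the article.

```latex
% Illustrative vector-addition form of the omni-directional motion argument:
% wheel i contributes a velocity v_i along its unit drive direction \hat{d}_i,
% so the chassis translates with the resultant
\[
  \vec{v}_{\mathrm{chassis}} \;=\; \sum_{i=1}^{3} v_i\, \hat{d}_i \, .
\]
% Driving two wheels equally, with drive directions symmetric about a target axis,
% cancels the transverse components and moves the platform along that axis, while
% equal speeds on all three wheels produce pure rotation in place.
```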
Killough platform
[ "Technology", "Engineering" ]
267
[ "Computer engineering", "Robotics engineering" ]
62,696,900
https://en.wikipedia.org/wiki/Protective%20colloid
A protective colloid is a lyophilic colloid that, when present in small quantities, keeps lyophobic colloids from precipitating under the coagulating action of electrolytes. Need for protective colloids When a small amount of hydrophilic colloid is added to hydrophobic colloids it may coagulate the latter. This is due to neutralisation of the charge on the hydrophobic colloidal particles. However, the addition of a large amount of hydrophilic colloid increases the stability of the hydrophobic colloidal system. This is due to adsorption. When lyophilic sols are added to lyophobic sols, depending on their sizes, either the lyophobic sol is adsorbed on the surface of the lyophilic sol or the lyophilic sol is adsorbed on the surface of the lyophobic sol. The layer of the protective colloid prevents direct collision between the hydrophobic colloidal particles and thus prevents coagulation. Examples Lyophilic sols like starch and gelatin act as protective colloids. Measurement of protective action For a comparative study, Zsigmondy introduced a scale of protective action for different protective colloids in terms of the gold number. The gold number is the weight in milligrams of a protective colloid which checks the coagulation of 10 ml of a given gold sol on adding 1 ml of a 10% sodium chloride solution. Thus, the smaller the gold number, the greater the protective action. Gold numbers of some materials: Gelatin 0.005-0.01, Albumin 0.1, Acacia 0.1-0.2, Sodium oleate 1-5, Tragacanth 2. References Colloids
Protective colloid
[ "Physics", "Chemistry", "Materials_science" ]
358
[ "Chemical mixtures", "Condensed matter physics", "Colloids" ]
62,701,080
https://en.wikipedia.org/wiki/Canadian%20Energy%20Centre
The Canadian Energy Centre Limited (CEC), also commonly called the "Energy War Room", was an Alberta provincial corporation mandated to promote Alberta's energy industry and rebut "domestic and foreign-funded campaigns against Canada's oil and gas industry". The creation of an organization to promote Alberta's oil and gas industries was a campaign promise by United Conservative Party leader Jason Kenney during the 2019 Alberta general election. After winning a majority of seats in the election, Kenney's government inaugurated the CEC with a $2.84 million budget in December 2019. The CEC originally had an annual budget of CA$30 million, which was decreased to CA$12 million. The CEC has been the subject of several controversies since its establishment, including accusations of plagiarizing logo designs. The CEC attracted widespread media attention when it launched a campaign against the Netflix animated children's movie Bigfoot Family because it cast Alberta's oil and gas industry in a negative light. In June 2024, the CEC was shut down and merged into Alberta Intergovernmental Relations. Background The creation of a 'war room' capable of challenging "energy industry critics' inaccuracies" was an election promise made by then-candidate Jason Kenney as part of his campaign leading up to the 16 April 2019 Alberta general election. In the founding speech of the UCP on 9 May 2018, Kenney announced that he would engage in "national and international advocacy" including a "fully staffed rapid response war room in government to quickly and effectively rebut every lie told by the green left about our world-class energy industry. If companies like HSBC decide to boycott our oil sands, our government will boycott them. It's called a market decision." Premier Kenney, whose United Conservative Party (UCP) won a majority of seats in the Alberta Legislature, announced the creation of the Calgary-based $30 million "Energy War Room" on 7 June 2019 to "fight misinformation related to oil and gas". On 6 May 2019, Nick Koolsbergen, who was the UCP's Alberta campaign manager for the winning election, announced the establishment of the Wellington Advocacy government relations firm with Harper & Associates' Rachel Curran. Both Koolsbergen and Curran had worked in the office of former Prime Minister Stephen Harper. According to a 17 May 2019 CBC article, Postmedia contracted Wellington Advocacy to "lobby" the UCP on "how it could be involved with" the new 'energy war room'. In July 2019, Kenney announced the establishment of a one-year, $2.5 million "Public Inquiry into Anti-Alberta Energy Campaigns". Kenney cited the work of Vivian Krause, who has spent ten years examining foreign funding of Canadian environmental non-profit organizations (ENGOs) and who claimed that Alberta's interests were being "challenged by well-funded foreign actors who have been waging a decade-long campaign to land lock Alberta's oil." The public inquiry, which was officially established in July 2019 with a "mandate to investigate foreign-funded efforts", is led by the former board chair of Calgary Economic Development, forensic accountant Steve Allan. The inquiry will include interviews, research and, potentially, public hearings. On 9 October 2019, Energy Minister Sonya Savage announced that the CEC had been incorporated. The centre (CEC) was officially launched on 11 December by Premier Kenney at a press conference at the Southern Alberta Institute of Technology (SAIT). 
Mandate and description Its mandate is to "highlight achievements in Alberta's oil and gas sector" and to "refute what it deems to be misinformation about the industry." Kenney said the centre will "counter misinformation" "coming from some environmental groups and others seeking to landlock Alberta's oil and gas". At the 11 December launch, Olsen described the centre as a place to tell the story of the oil and gas industry in Alberta, which includes rebutting its critics respectfully. While explaining the war room's strategy, Olsen stated, "we are not about attacking, we are about disproving true facts." Funding The Canadian Energy Centre is funded by the Alberta provincial government with an original budget of $30 million. During the COVID-19 pandemic, the Canadian Energy Centre's budget was decreased to $2.84 million for a period of 90 days. In 2020, CEC's budget was about $4 million. Postmedia's Financial Post described the CEC as an "Alberta government corporation partly funded by industry." According to a March 2022 CBC article, the CEC is funded by the Technology, Innovation and Emissions Reduction (TIER) fund, which is financed by the province's industrial carbon tax. On 21 March 2022, Minister Savage, who is CEC's "sole voting shareholder", said that the CEC, although not included in the province's proposed budget for 2022-2023, has a budget of approximately $12 million a year. Private corporation The Canadian Energy Centre Limited is a private corporation, which means that it is not subject to Alberta's Freedom of Information and Protection of Privacy Act (FOIP Act). Premier Kenney's press secretary Christine Myatt said that keeping CECL's internal operations secret is a "tactical and/or strategic advantage to the very foreign-funded special interests the CEC is looking to counter." CBC's Jennie Russell submitted a request in May 2021 for further information on how the CEC awarded its contracts. The request was denied because the CEC is protected from any FOI request due to its status as a private corporation. Russell challenged the decision and the case was sent to Alberta's information and privacy commissioner, who appointed an external adjudicator, Catherine Tully, to decide on the issue. Tully found that the CEC did not qualify as either a provincial government office or branch and therefore Russell's FOI request did not apply. University of Victoria's Sean Holman, an expert on freedom of information laws, said that the way in which the CEC uses information and spends money is of public interest, as it is not a "run-of-the-mill government operation" but rather a "spin centre" for the world's "most controversial industry". Governance The CEC is governed by a three-member board of directors composed of Sonya Savage (Minister of Energy), Doug Schweitzer (Minister of Justice and Solicitor General), and Jason Nixon (Minister of Environment and Parks). The appointment of Tom Olsen as the Canadian Energy Centre's first chief executive officer and managing director was announced in November 2019 by Savage. Olsen, who had run unsuccessfully as a United Conservative Party candidate in the 2019 election, is a former veteran political journalist who previously worked as a spokesman for Ed Stelmach. 
Themes In an 18 December rebuttal to the 14 December Medicine Hat News critical opinion piece that said that the CEC was not "subject to freedom-of-information searches" and that the Centre "could be used to stifle legitimate dissent and commentary on the oil and gas industry", Olsen, who is a former Calgary Herald journalist, said that "oversight" of the CEC is "rigorous" and that the centre is subject to the Fiscal Planning and Transparency Act, the Whistle Blowers Act and audits by Alberta's auditor general. Olsen added that "campaigns to shut down new pipeline projects and damage the reputation of our oil and gas industry have received tens of millions of dollars from U.S. environmental foundations." This has resulted in the "landlocking of Alberta energy", which in turn has resulted in the loss of jobs, "tens of billions of dollars" in capital, less money for public services, as well as "lower value for their shareholders that include many of the country's biggest pension plans and investment funds." In his CEC post, Grady Semmens responded to the 27 December 2019 opinion piece published in The Globe and Mail by Bill McKibben, an American author and environmentalist, who called on Canada to go beyond cutting emissions to "stop digging up oil and gas and selling it around the world." Semmens said that Canada was only "responsible for less than 1.6 per cent of global greenhouse gas emissions." Semmens cited the Canadian Association of Petroleum Producers (CAPP), which cited a 9 January 2007 Statistics Canada report. Economist Andrew Leach, who described the centre as a "pro-energy corporation", is providing a rebuttal of truth claims made by the CEC on their website. Canadian Energy Centre Logo The CEC logo, which was unveiled at the launch, was also used in the 11 December promotional video, on the CEC's website, "on the wall of its downtown Calgary office, and on signs". By the evening of 18 December, "social media users" on Twitter began to share side-by-side versions of the CEC logo and the "trademarked symbol" of Progress Software Corporation, a Massachusetts-headquartered "software giant". A 19 December Canadian Press report said that the icons were "identical, stylized sharp-angled depictions of what appear to be radiating waves... the Progress one is emerald-green and the war room version is two shades of blue." According to the Canadian Press report, Progress Software sent an email that morning saying that it was "looking into whether Alberta's new energy war room has violated the company's trademarked logo." In a 19 December statement, the energy centre's CEO and managing director, Tom Olsen, said that the logo was pulled and was to be replaced. Olsen said that the "design debacle" "mistake" was an "unfortunate situation". He said that the CEC was in "discussions" with the marketing agency, Lead & Anchor, "to determine how it happened". The CEC had selected Lead & Anchor over eight other contractors proposed to the CEC by the Calgary marketing agency, Communo. On 27 December, the Calgary Herald reported that Pasadena, California-based ATK Technologies Inc.—a company that developed the mobile phone Alpha Browser app launched in 2018—claimed that the logo the CEC was using to replace its original logo was "similar" to the Alpha Browser app logo, a "stylized, red-striped letter 'a'." The new CEC logo appears to take that same letter "a" and turn it on its side, with a red maple leaf added to the top right corner. 
According to the Herald, a member of ATK said that the logo was ATK's "intellectual property" and that their legal team was "on top of it." Currently, the Canadian Energy Centre's Facebook, Twitter and website are not using either of these logos. Where an image is needed, the centre simply uses its name in plain black text. Bigfoot Family Controversy On 12 March 2021, the Canadian Energy Centre launched a website and petition against the Netflix animated children's movie Bigfoot Family. The website hosted an online petition titled "Tell the truth Netflix" addressed to Netflix Canada's Head of Communications. The form letter, which could not be edited by users, asked Netflix to use its "powerful platform to tell the true story of Canada's peerless oil and gas industry, and not contribute to misinformation targeting your youngest, most vulnerable and impressionable viewers." Of particular concern was the animated film's representation of "oil being extracted by blowing up a valley using glowing red bombs" which, the CEC claimed, looked "like something out of an action movie". Of note, there have been experimental projects where oil has been extracted using explosives. One such project, Project Oilsand, was proposed for Pony Creek, Alberta, but was never carried out. On 12 March 2021, when Canadian Energy Centre CEO Tom Olsen was asked why the Bigfoot Family campaign had been launched, the CEC released a statement saying it responded after "a parent flagged" the film. However, a 17 March 2021 column in the Calgary Herald stated that the idea for the campaign came from a CEC staff member. The Bigfoot Family controversy, also known as Bigfootgate, has received provincial, national and international media attention in the US, the UK and elsewhere. In Alberta, opposition MLAs have pointed to the Bigfoot controversy to question the value and effectiveness of the Canadian Energy Centre, which has a budget of $12 million for 2021–2022. Meanwhile, Jason Kenney has publicly defended the CEC's campaign against Bigfoot Family, saying that the film was deliberately designed to "defame in the most vicious way possible, in the impressionable minds of kids, the largest industry in the province". Twitter account On 12 February 2020, CEC's CEO Tom Olsen apologized for "the tone" of tweets posted by CEC's official Twitter account "attacking" The New York Times. CEC's Twitter account, @CDNEnergyCentre, had posted a 20-tweet thread on 12 February in response to an article in The New York Times by Christopher Flavelle, in which Flavelle described how some of the "largest financial institutions" in the world had stopped investing in Alberta's oil sands. Flavelle said that the oil sands was "one of the world's most extensive, and also dirtiest, oil reserves." The Times said that in June 2017, when Sweden's largest pension fund, AP7, divested from six companies that it said "breached" the 2016 Paris Agreement, it began a shift in the "campaign against the oil sands...to the world of finance". Since then, HSBC (Europe's largest bank), the insurance giant The Hartford, the central bank of Sweden, one of BlackRock's fast-growing green-oriented funds, France's BNP Paribas and Société Générale, and Norway's sovereign wealth fund have also announced divestment from pipelines, the oil sands and/or fossil fuels, according to the Times. 
In the tweet thread, the CEC account said that the Times had been "called out for anti-Semitism countless times", has a "dodgy" track record, is "routinely accused of bias" and is "not the most dependable source." The tweets have since been deleted, and Olsen said that "The tone did not meet CEC's standard for public discourse. This issue has been dealt with internally." On 11 February, CEC's social media manager apologized for retweeting "factually incorrect information" about how clean Teck Frontier's oil would be compared to other North American oil streams, after Andrew Leach, a University of Alberta economist, pointed out the error. See also Canadian Centre for Energy Information (CCEI), a Canadian federal government website and portal that was announced on 23 May 2019. The Canadian Energy Information Portal was launched by Statistics Canada, in partnership with Natural Resources Canada, Environment and Climate Change Canada, and the now-defunct National Energy Board. The regularly updated and expanded online interactive site provides a "single point" for accessing information on the Canadian energy sector, including energy production, consumption, international trade, transportation and prices, with monthly federal and provincial statistics. Notes References Petroleum industry in Alberta Natural gas in Alberta Energy organizations Politics of Alberta Defunct Alberta government departments and agencies Energy in Canada Energy in Alberta 2019 establishments in Alberta Environmental impact of fossil fuels Environmental impact of the petroleum industry Petroleum politics 2024 disestablishments in Alberta
Canadian Energy Centre
[ "Chemistry", "Engineering" ]
3,221
[ "Petroleum", "Energy organizations", "Petroleum politics" ]
62,704,502
https://en.wikipedia.org/wiki/Stratification%20%28water%29
Stratification in water is the formation in a body of water of relatively distinct and stable layers by density. It occurs in all water bodies where there is stable density variation with depth. Stratification is a barrier to the vertical mixing of water, which affects the exchange of heat, carbon, oxygen and nutrients. Wind-driven upwelling and downwelling of open water can induce mixing of different layers through the stratification, and force the rise of denser cold, nutrient-rich, or saline water and the sinking of lighter warm or fresher water, respectively. Layers are based on water density: denser water remains below less dense water in stable stratification in the absence of forced mixing. Stratification occurs in several kinds of water bodies, such as oceans, lakes, estuaries, flooded caves, aquifers and some rivers. Mechanism The driving force in stratification is gravity, which sorts adjacent arbitrary volumes of water by local density, operating on them by buoyancy and weight. A volume of water of lower density than the surroundings will have a resultant buoyant force lifting it upwards, and a volume with higher density will be pulled down by the weight, which will be greater than the resultant buoyant forces, following Archimedes' principle. Each volume will rise or sink until it has either mixed with its surroundings through turbulence and diffusion to match the density of the surroundings, reached a depth where it has the same density as the surroundings, or reached the top or bottom boundary of the body of water, and spreads out until the forces are balanced and the body of water reaches its lowest potential energy. The density of water, which is defined as mass per unit of volume, is a function of temperature (T), salinity (S) and pressure (p), where pressure is itself a function of depth and the density distribution of the overlying water column, and is denoted as ρ(T, S, p). The dependence on pressure is not significant, since water is almost perfectly incompressible. An increase in the temperature of the water above 4 °C causes expansion and the density will decrease. Water expands when it freezes, and a decrease in temperature below 4 °C also causes expansion and a decrease in density. An increase in salinity, the mass of dissolved solids, will increase the density. Density is the decisive factor in stratification. It is possible for a combination of temperature and salinity to result in a density that is less or more than the effect of either one in isolation, so it can happen that a layer of warmer saline water is layered between a colder fresher surface layer and a colder more saline deeper layer. A pycnocline is a layer in a body of water where the change in density is relatively large compared to that of other layers. The thickness of the pycnocline is not constant everywhere and depends on a variety of variables. Just as a pycnocline is a layer with a large change in density with depth, similar layers can be defined for a large change in temperature (a thermocline) and in salinity (a halocline). Since the density depends on both the temperature and the salinity, the pycno-, thermo-, and haloclines have a similar shape. Mixing Mixing is the breakdown of stratification. Once a body of water has reached a stable state of stratification, and no external forces or energy are applied, it will slowly mix by diffusion until homogeneous in density, temperature and composition, varying only due to minor effects of compressibility. 
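The density-driven layering described in the Mechanism section can be illustrated with a minimal numerical sketch. It uses a simplified linear equation of state whose coefficient values are illustrative assumptions rather than measured constants (real seawater density is computed with far more elaborate formulations such as TEOS-10, and the linear form ignores the density maximum of fresh water near 4 °C), and it simply checks whether density increases monotonically with depth, i.e. whether a given column of water parcels is stably stratified.

```python
# Minimal sketch: stable-stratification check with a simplified linear
# equation of state. All coefficient and profile values are illustrative
# assumptions, not measured data.

RHO0 = 1027.0   # reference density (kg/m^3) near T0, S0
T0, S0 = 10.0, 35.0
ALPHA = 1.7e-4  # thermal expansion coefficient (1/K), assumed constant
BETA = 7.6e-4   # haline contraction coefficient (1/(g/kg)), assumed constant

def density(temp_c, salinity):
    """Approximate density (kg/m^3) from temperature (deg C) and salinity (g/kg).
    Note: this linear form ignores the density maximum of fresh water near 4 degC."""
    return RHO0 * (1.0 - ALPHA * (temp_c - T0) + BETA * (salinity - S0))

# Water-column profile from surface to depth: (temperature, salinity)
profile = [
    (18.0, 34.5),  # warm, slightly fresher surface layer
    (12.0, 34.8),  # thermocline / pycnocline region
    (6.0, 34.9),   # colder intermediate water
    (4.0, 35.0),   # cold, saline deep water
]

densities = [density(t, s) for t, s in profile]
stable = all(upper <= lower for upper, lower in zip(densities, densities[1:]))

for (t, s), rho in zip(profile, densities):
    print(f"T={t:5.1f} degC  S={s:4.1f} g/kg  rho={rho:8.2f} kg/m^3")
print("stably stratified" if stable else "unstable: denser water overlies lighter water")
```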
This does not usually occur in nature, where there are a variety of external influences to maintain or disturb the equilibrium. Among these is heat input from the sun, which warms the upper volume, making it expand slightly and decreasing its density; this tends to increase or stabilise stratification. Heat input from below, as occurs from tectonic plate spreading and vulcanism, is a disturbing influence, causing heated water to rise, but these are usually local effects and small compared to the effects of wind, heat loss and evaporation from the free surface, and changes of direction of currents. Wind generates waves and wind-driven currents and increases evaporation at the surface; evaporation cools the surface water and concentrates solutes, increasing salinity, and both effects increase density. The movement of waves creates some shear in the water, which increases mixing in the surface water, as does the development of currents. Mass movement of water between latitudes is affected by Coriolis forces, which impart motion across the current direction, and movement towards or away from a land mass or other topographic obstruction may leave a deficit or excess which lowers or raises the sea level locally, driving upwelling and downwelling to compensate. The major upwellings in the ocean are associated with the divergence of currents that bring deeper waters to the surface. There are at least five types of upwelling: coastal upwelling, large-scale wind-driven upwelling in the ocean interior, upwelling associated with eddies, topographically associated upwelling, and broad-diffusive upwelling in the ocean interior. Downwelling also occurs in anti-cyclonic regions of the ocean where warm rings spin clockwise, causing surface convergence. When these surface waters converge, the surface water is pushed downwards. These mixing effects destabilise and reduce stratification. By water body type Oceans Ocean stratification is the natural separation of an ocean's water into horizontal layers by density, and occurs in all ocean basins. Denser water is below lighter water, representing a stable stratification. The pycnocline is the layer where the rate of change in density is largest. Ocean stratification is generally stable because warmer water is less dense than colder water, and most heating is from the sun, which directly affects only the surface layer. Stratification is reduced by mechanical mixing induced by wind, but reinforced by convection (warm water rising, cold water sinking). Stratified layers act as a barrier to the mixing of water, which impacts the exchange of heat, carbon, oxygen and other nutrients. The surface mixed layer is the uppermost layer in the ocean and is well mixed by mechanical (wind) and thermal (convection) effects. Due to wind-driven movement of surface water away from and towards land masses, upwelling and downwelling can occur, breaking through the stratification in those areas, where cold nutrient-rich water rises and warm water sinks, respectively, mixing surface and bottom waters. The thickness of the thermocline is not constant everywhere and depends on a variety of variables. Between 1960 and 2018, upper ocean stratification increased between 0.7 and 1.2% per decade due to climate change. This means that the differences in density of the layers in the oceans increase, leading to larger mixing barriers and other effects. Global upper-ocean stratification has continued its increasing trend in 2022. 
The southern oceans (south of 30°S) experienced the strongest rate of stratification since 1960, followed by the Pacific, Atlantic, and the Indian Oceans. Increasing stratification is predominantly affected by changes in ocean temperature; salinity only plays a role locally. Estuaries An estuary is a partially enclosed coastal body of brackish water with one or more rivers or streams flowing into it, and with a free connection to the open sea. The residence time of water in an estuary is dependent on the circulation within the estuary that is driven by density differences due to changes in salinity and temperature. Less dense freshwater floats over saline water and warmer water floats above colder water for temperatures greater than 4 °C. As a result, near-surface and near-bottom waters can have different trajectories, resulting in different residence times. Vertical mixing determines how much the salinity and temperature will change from the top to the bottom, profoundly affecting water circulation. Vertical mixing occurs at three levels: from the surface downward by wind forces, the bottom upward by turbulence generated at the interface between the estuarine and oceanic water masses, and internally by turbulent mixing caused by the water currents which are driven by the tides, wind, and river inflow. Different types of estuarine circulation result from vertical mixing: Salt wedge estuaries are characterized by a sharp density interface between the upper layer of freshwater and the bottom layer of saline water. River water dominates in this system, and tidal effects have a small role in the circulation patterns. The freshwater floats on top of the seawater and gradually thins as it moves seaward. The denser seawater moves along the bottom up the estuary forming a wedge shaped layer and becoming thinner as it moves landward. As a velocity difference develops between the two layers, shear forces generate internal waves at the interface, mixing the seawater upward with the freshwater. An example is the Mississippi estuary. As tidal forcing increases, the control of river flow on the pattern of circulation in the estuary becomes less dominating. Turbulent mixing induced by the current creates a moderately stratified condition. Turbulent eddies mix the water column, creating a mass transfer of freshwater and seawater in both directions across the density boundary. Therefore, the interface separating the upper and lower water masses is replaced with a water column with a gradual increase in salinity from surface to bottom. A two layered flow still exists however, with the maximum salinity gradient at mid depth. Partially stratified estuaries are typically shallow and wide, with a greater width to depth ratio than salt wedge estuaries. An example is the Thames. In vertically homogeneous estuaries, tidal flow is greater relative to river discharge, resulting in a well mixed water column and the disappearance of the vertical salinity gradient. The freshwater-seawater boundary is eliminated due to the intense turbulent mixing and eddy effects. The width to depth ratio of vertically homogeneous estuaries is large, with the limited depth creating enough vertical shearing on the seafloor to mix the water column completely. If tidal currents at the mouth of an estuary are strong enough to create turbulent mixing, vertically homogeneous conditions often develop. 
Fjords are usually examples of highly stratified estuaries; they are basins with sills and have freshwater inflow that greatly exceeds evaporation. Oceanic water is imported in an intermediate layer and mixes with the freshwater. The resulting brackish water is then exported into the surface layer. A slow import of seawater may flow over the sill and sink to the bottom of the fjord (deep layer), where the water remains stagnant until flushed by an occasional storm. Inverse estuaries occur in dry climates where evaporation greatly exceeds the inflow of freshwater. A salinity maximum zone is formed, and both riverine and oceanic water flow close to the surface towards this zone. This water is pushed downward and spreads along the bottom in both the seaward and landward direction. The maximum salinity can reach extremely high values and the residence time can be several months. In these systems, the salinity maximum zone acts like a plug, inhibiting the mixing of estuarine and oceanic waters so that freshwater does not reach the ocean. The high salinity water sinks seaward and exits the estuary. Lakes Lake stratification, generally a form of thermal stratification caused by density variations due to water temperature, is the formation of separate and distinct layers of water during warm weather, and sometimes when frozen over. Typically stratified lakes show three distinct layers, the epilimnion comprising the top warm layer, the thermocline (or metalimnion): the middle layer, which may change depth throughout the day, and the colder hypolimnion extending to the floor of the lake. The thermal stratification of lakes is a vertical isolation of parts of the water body from mixing caused by variation in the temperature at different depths in the lake, and is due to the density of water varying with temperature. Cold water is denser than warm water of the same salinity, and the epilimnion generally consists of water that is not as dense as the water in the hypolimnion. However, the temperature of maximum density for freshwater is 4 °C. In temperate regions where lake water warms up and cools through the seasons, a cyclical pattern of overturn occurs that is repeated from year to year as the water at the top of the lake cools and sinks (see stable and unstable stratification). For example, in dimictic lakes the lake water turns over during the spring and the fall. This process occurs more slowly in deeper water and as a result, a thermal bar may form. If the stratification of water lasts for extended periods, the lake is meromictic. In shallow lakes, stratification into epilimnion, metalimnion, and hypolimnion often does not occur, as wind or cooling causes regular mixing throughout the year. These lakes are called polymictic. There is not a fixed depth that separates polymictic and stratifying lakes, as apart from depth, this is also influenced by turbidity, lake surface area, and climate. The lake mixing regime (e.g. polymictic, dimictic, meromictic) describes the yearly patterns of lake stratification that occur in most years. However, short-term events can influence lake stratification as well. Heat waves can cause periods of stratification in otherwise mixed, shallow lakes, while mixing events, such as storms or large river discharge, can break down stratification. Recent research suggests that seasonally ice-covered dimictic lakes may be described as "cryostratified" or "cryomictic" according to their wintertime stratification regimes. 
Cryostratified lakes exhibit inverse stratification near the ice surface and have depth-averaged temperatures near 4 °C, while cryomictic lakes have no under-ice thermocline and have depth-averaged winter temperatures closer to 0 °C. Anchialine systems An anchialine system is a landlocked body of water with a subterranean connection to the ocean. Depending on its formation, these systems can exist in one of two primary forms: pools or caves. The primary differentiating characteristic between pools and caves is the availability of light; cave systems are generally aphotic while pools are euphotic. The difference in light availability has a large influence on the biology of a given system. Anchialine systems are a feature of coastal aquifers which are density stratified, with water near the surface being fresh or brackish, and saline water intruding from the coast at depth. Depending on the site, it is sometimes possible to access the deeper saline water directly in the anchialine pool, or sometimes it may be accessible by cave diving. Anchialine systems are extremely common worldwide, especially along neotropical coastlines where the geology and aquifer systems are relatively young, and there is minimal soil development. Such conditions occur notably where the bedrock is limestone or recently formed volcanic lava. Many anchialine systems are found on the coastlines of the island of Hawaii, the Yucatán Peninsula, South Australia, the Canary Islands, Christmas Island, and other karst and volcanic systems. Karst caves which drain into the sea may have a halocline separating the fresh water from the seawater underneath, which can be visible even when both layers are clear, due to the difference in refractive indices. References Hydrology
Stratification (water)
[ "Chemistry", "Mathematics", "Engineering", "Environmental_science" ]
3,198
[ "Hydrology", "Functions and mappings", "Mathematical objects", "Vertical distributions", "Mathematical relations", "Environmental engineering" ]
42,677,761
https://en.wikipedia.org/wiki/Graph%20amalgamation
In graph theory, a graph amalgamation is a relationship between two graphs (one graph is an amalgamation of another). Similar relationships include subgraphs and minors. Amalgamations can provide a way to reduce a graph to a simpler graph while keeping certain structure intact. The amalgamation can then be used to study properties of the original graph in an easier-to-understand context. Applications include embeddings, computing genus distributions, and Hamiltonian decompositions. Definition Let G1 and G2 be two graphs with the same number of edges, where G1 has more vertices than G2. Then we say that G2 is an amalgamation of G1 if there is a bijection φ: E(G1) → E(G2) and a surjection ψ: V(G1) → V(G2) and the following hold: If u, v are two vertices in G1 where ψ(u) ≠ ψ(v), and u and v are adjacent by edge e in G1, then ψ(u) and ψ(v) are adjacent by edge φ(e) in G2. If e is a loop on a vertex u, then φ(e) is a loop on ψ(u). If e joins u and v, where u ≠ v, but ψ(u) = ψ(v), then φ(e) is a loop on ψ(u). Note that while G1 can be a graph or a pseudograph, it will usually be the case that G2 is a pseudograph. Properties Edge colorings are invariant to amalgamation. This is obvious, as all of the edges between the two graphs are in bijection with each other. However, what may not be obvious is that if the original graph is a complete graph of the form K2n+1, and we color the edges so as to specify a Hamiltonian decomposition (a decomposition into Hamiltonian cycles), then those edges also form a Hamiltonian decomposition in its amalgamation. Example Figure 1 illustrates an amalgamation of a complete graph. The invariance of edge coloring and Hamiltonian decomposition can be seen clearly. The function φ is a bijection and is given as letters in the figure. The function ψ is given in the table below. Hamiltonian decompositions One of the ways in which amalgamations can be used is to find Hamiltonian decompositions of complete graphs with 2n + 1 vertices. The idea is to take such a graph and produce an amalgamation of it which is edge colored in n colors and satisfies certain properties (called an outline Hamiltonian decomposition). We can then 'reverse' the amalgamation and we are left with K2n+1 colored in a Hamiltonian decomposition. Hilton outlines a method for doing this, as well as a method for finding all Hamiltonian decompositions without repetition. The methods rely on a theorem he provides which states (roughly) that if we have an outline Hamiltonian decomposition, we could have arrived at it by first starting with a Hamiltonian decomposition of the complete graph and then finding an amalgamation for it. Notes References Bahmanian, Amin; Rodger, Chris (2012), "What Are Graph Amalgamations?", Auburn University Hilton, A. J. W. (1984), "Hamiltonian Decompositions of Complete Graphs", Journal of Combinatorial Theory, Series B 36, 125–134 Gross, Jonathan L.; Tucker, Thomas W. (1987), Topological Graph Theory, Courier Dover Publications, 151 Gross, Jonathan L. (2011), "Genus Distributions of Cubic Outerplanar Graphs", Journal of Graph Algorithms and Applications, Vol. 15, no. 2, pp. 295–316 Graph theory
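As a rough illustration of the definition above, the following sketch checks whether one small multigraph is an amalgamation of another under a given vertex map. The particular graphs, edge lists and map are invented for this example and do not come from the article or the cited references; the check simply verifies that mapping each edge of the larger graph through the vertex map reproduces the edge multiset of the smaller graph, which is exactly the existence of the required edge bijection.

```python
# Minimal sketch: check whether H is an amalgamation of G under a vertex map psi.
# Graphs are multigraphs given as lists of unordered vertex pairs; a pair (x, x)
# is a loop. The example graphs and the map are invented for illustration.
from collections import Counter

def is_amalgamation(g_edges, h_edges, psi):
    """True if mapping each G-edge {u, v} to {psi(u), psi(v)} reproduces the
    edge multiset of H, i.e. a suitable edge bijection exists."""
    if len(g_edges) != len(h_edges):
        return False  # the definition requires the same number of edges
    mapped = Counter(frozenset({psi[u], psi[v]}) for u, v in g_edges)
    target = Counter(frozenset({u, v}) for u, v in h_edges)
    return mapped == target

# G: a 4-cycle on vertices 1..4.  H: two parallel edges between a and b plus a
# loop on each vertex (4 edges in total, matching G).
g_edges = [(1, 2), (2, 3), (3, 4), (4, 1)]
h_edges = [("a", "b"), ("a", "b"), ("a", "a"), ("b", "b")]
psi = {1: "a", 2: "a", 3: "b", 4: "b"}  # surjective vertex map

print(is_amalgamation(g_edges, h_edges, psi))  # True under these assumptions
```

In this invented example the two edges of the cycle whose endpoints are identified by ψ become loops, so the amalgamation is a pseudograph, as the article notes is usually the case.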
Graph amalgamation
[ "Mathematics" ]
626
[ "Discrete mathematics", "Mathematical relations", "Graph theory", "Combinatorics" ]
42,681,375
https://en.wikipedia.org/wiki/Anaerolinea%20thermolimosa
Anaerolinea thermolimosa is a thermophilic, non-spore-forming, non-motile, Gram-negative, filamentous bacterium with type strain IMO-1T (=JCM 12577T =DSM 16554T). References Further reading Beatty, Tom J. Genome Evolution of Photosynthetic Bacteria. Vol. 66. Academic Press, 2013. Tewari, Vinod, Vinod C. Tewari, and Joseph Seckbach, eds. Stromatolites: Interaction of Microbes with Sediments. Vol. 18. Springer, 2011. Dilek, Yıldırım. Links Between Geological Processes, Microbial Activities & Evolution of Life: Microbes and Geology. Eds. Yildirim Dilek, H. Furnes, and Karlis Muehlenbachs. Vol. 4. Springer, 2008. External links LPSN Type strain of Anaerolinea thermolimosa at BacDive - the Bacterial Diversity Metadatabase Gram-negative bacteria Thermophiles Chloroflexota Bacteria described in 2006
Anaerolinea thermolimosa
[ "Biology" ]
245
[ "Bacteria stubs", "Bacteria" ]
42,682,101
https://en.wikipedia.org/wiki/Strichartz%20estimate
In mathematical analysis, Strichartz estimates are a family of inequalities for linear dispersive partial differential equations. These inequalities establish size and decay of solutions in mixed norm Lebesgue spaces. They were first noted by Robert Strichartz and arose out of connections to the Fourier restriction problem. Examples Consider the linear Schrödinger equation in $\mathbb{R}^{d+1}$ with h = m = 1. Then the solution for initial data $u_0$ is given by $e^{it\Delta/2}u_0$. Let q and r be real numbers satisfying $2 \le q, r \le \infty$; $\tfrac{2}{q} + \tfrac{d}{r} = \tfrac{d}{2}$; and $(q, r, d) \ne (2, \infty, 2)$. In this case the homogeneous Strichartz estimates take the form: $\| e^{it\Delta/2} u_0 \|_{L_t^q L_x^r(\mathbb{R}\times\mathbb{R}^d)} \lesssim_{d,q,r} \| u_0 \|_{L_x^2(\mathbb{R}^d)}.$ Further suppose that $\tilde q, \tilde r$ satisfy the same restrictions as $q, r$ and $\tilde q', \tilde r'$ are their dual exponents, then the dual homogeneous Strichartz estimates take the form: $\left\| \int_{\mathbb{R}} e^{-is\Delta/2} F(s)\, ds \right\|_{L_x^2(\mathbb{R}^d)} \lesssim_{d,\tilde q,\tilde r} \| F \|_{L_t^{\tilde q'} L_x^{\tilde r'}(\mathbb{R}\times\mathbb{R}^d)}.$ The inhomogeneous Strichartz estimates are: $\left\| \int_{s<t} e^{i(t-s)\Delta/2} F(s)\, ds \right\|_{L_t^q L_x^r(\mathbb{R}\times\mathbb{R}^d)} \lesssim_{d,q,r,\tilde q,\tilde r} \| F \|_{L_t^{\tilde q'} L_x^{\tilde r'}(\mathbb{R}\times\mathbb{R}^d)}.$ References Theorems in analysis Inequalities
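As a worked illustration of the admissibility conditions above (the particular exponent pairs below are standard examples chosen here for illustration, not additional content from the article), one can check the scaling relation $\tfrac{2}{q} + \tfrac{d}{r} = \tfrac{d}{2}$ directly:

```latex
% Worked check of Schrödinger-admissible pairs (q, r) in dimension d:
% 2/q + d/r = d/2 with q, r >= 2 and (q, r, d) != (2, \infty, 2).
\[
d = 3:\quad (q,r) = (\infty, 2):\ \tfrac{2}{\infty} + \tfrac{3}{2} = \tfrac{3}{2},
\qquad
(q,r) = (2, 6):\ \tfrac{2}{2} + \tfrac{3}{6} = 1 + \tfrac{1}{2} = \tfrac{3}{2}.
\]
\[
d = 2:\quad (q,r) = (4, 4):\ \tfrac{2}{4} + \tfrac{2}{4} = 1 = \tfrac{2}{2},
\qquad
\text{while } (q,r) = (2, \infty) \text{ is excluded by assumption.}
\]
```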
Strichartz estimate
[ "Mathematics" ]
171
[ "Theorems in mathematical analysis", "Mathematical analysis", "Mathematical analysis stubs", "Binary relations", "Mathematical relations", "Inequalities (mathematics)", "Mathematical problems", "Mathematical theorems" ]
42,684,750
https://en.wikipedia.org/wiki/Contribution%20of%20epigenetic%20modifications%20to%20evolution
Epigenetics is the study of changes in gene expression that occur via mechanisms such as DNA methylation, histone acetylation, and microRNA modification. When these epigenetic changes are heritable, they can influence evolution. Current research indicates that epigenetics has played a role in the evolution of a number of organisms, including plants and animals. In plants Overview DNA methylation is a process by which methyl groups are added to the DNA molecule. Methylation can change the activity of a DNA segment without changing the sequence. Histones are proteins found in cell nuclei that package and order the DNA into structural units called nucleosomes. DNA methylation and histone modification are two mechanisms used to regulate gene expression in most organisms, including plants and animals. DNA methylation can be stable during cell division, allowing for methylation states to be passed to other orthologous genes in a genome. DNA methylation can be reversed via enzymes known as DNA de-methylases, while histone modifications can be reversed by removing histone acetyl groups with deacetylases. The process of DNA methylation reversal is known as DNA demethylation. Interspecific differences due to environmental factors are shown to be associated with the difference between annual and perennial life cycles. There can be varying adaptive responses based on this. Arabidopsis thaliana Forms of histone methylation cause repression of certain genes that are stably inherited through mitosis but that can also be erased during meiosis or with the progression of time. The induction of flowering by exposure to low winter temperatures in Arabidopsis thaliana shows this effect. Histone methylation participates in repression of expression of an inhibitor of flowering during cold. In annual, semelparous species such as Arabidopsis thaliana, this histone methylation is stably inherited through mitosis after return from cold to warm temperatures, giving the plant the opportunity to flower continuously during spring and summer until it senesces. However, in perennial, iteroparous relatives the histone modification rapidly disappears when temperatures rise, allowing expression of the floral inhibitor to increase and limiting flowering to a short interval. Epigenetic histone modifications control a key adaptive trait in Arabidopsis thaliana, and their pattern changes rapidly during evolution associated with reproductive strategy. Another study tested several epigenetic recombinant inbred lines (epiRILs) of Arabidopsis thaliana, lines with similar genomes but varying levels of DNA methylation, for their drought sensitivity and their sensitivity to nutritional stress. It was found that there was a significant amount of heritable variation in the lines with regard to traits important for survival from drought and nutrient stress. This study showed that variation in DNA methylation could result in heritable variation of ecologically important plant traits, such as root allocation, drought tolerance, and nutrient plasticity. It also hinted that epigenetic variation alone could result in rapid evolution. Dandelions Scientists found that changes in DNA methylation induced by stress were inherited in asexual dandelions. Genetically similar plants were exposed to different ecological stresses, and their offspring were raised in an unstressed environment. Amplified fragment-length polymorphism markers that were methylation-sensitive were used to test for methylation on a genome-wide scale. 
It was found that many of the environmental stresses caused induction of pathogen and herbivore defenses, which caused methylation in the genome. These modifications were then genetically transmitted to the offspring dandelions. The transgenerational inheritance of a stress response can contribute to the heritable plasticity of the organism, allowing it to better survive environmental stresses. It also helps add to the genetic variation of specific lineages with little variability, giving a greater chance of reproductive success. In animals Primates A comparative analysis of CpG methylation patterns between humans and other primates found that there were more than 800 genes that varied in their methylation patterns among orangutans, gorillas, chimpanzees, and bonobos. Despite these apes having the same genes, methylation differences are what account for their phenotypic variation. The genes in question are involved in development. It is not the protein sequences that account for the differences in physical characteristics between humans and apes; rather, it is the epigenetic changes to the genes. Since humans and the great apes share 99% of their DNA, it is thought that the differences in methylation patterns account for their distinction. So far, there are known to be 171 genes that are uniquely methylated in humans, 101 genes that are uniquely methylated in chimpanzees, 101 genes that are uniquely methylated in gorillas, and 450 genes that are uniquely methylated in orangutans. For example, genes involved in blood pressure regulation and the development of the inner ear's semicircular canal are highly methylated in humans, but not in apes. There are also 184 genes that are conserved at the protein level between humans and chimpanzees, but have epigenetic differences. Enrichments in multiple independent gene categories show that regulatory changes to these genes have given humans their specific traits. This research shows that epigenetics plays an important role in the evolution of primates. It has also been shown that changes in cis-regulatory elements affect the transcription start sites (TSS) of genes. A total of 471 DNA sequences were found to be enriched or depleted with regard to trimethylation of histone H3 at lysine 4 (H3K4) in chimpanzee, human, and macaque prefrontal cortices. Among these sequences, 33 are selectively methylated in neuronal chromatin from children and adults, but not from non-neuronal chromatin. One locus that was selectively methylated was DPP10, a regulatory sequence that showed evidence of hominid adaptation, such as higher nucleotide substitution rates and certain regulatory sequences that were missing in other primates. Epigenetic regulation of TSS chromatin has been identified as an important development in the evolution of gene expression networks in the human brain. These networks are thought to play a role in cognitive processes and neurological disorders. An analysis of the methylation profiles of human and primate sperm cells reveals that epigenetic regulation plays an important role here as well. Since mammalian cells undergo reprogramming of DNA methylation patterns during germ cell development, the methylomes of human and chimp sperm can be compared to methylation in embryonic stem cells (ESCs). There were many hypomethylated regions in both sperm cells and ESCs that showed structural differences. Also, many of the promoters in human and chimp sperm cells had different amounts of methylation. 
In essence, DNA methylation patterns differ between germ cells and somatic cells as well as between human and chimpanzee sperm cells. This means that differences in promoter methylation could account for the phenotypic differences between humans and primates. Research has also shown surprising amounts of conserved tissue-specific methylation, in line with phylogenetic relatedness. Chickens Studies of the Red Junglefowl, an ancestor of domestic chickens, show that its gene expression and methylation profiles in the thalamus and hypothalamus differ significantly from those of a domesticated egg-laying breed. Methylation differences and gene expression were maintained in the offspring, indicating that epigenetic variation is inherited. Some of the inherited methylation differences were specific to certain tissues, and the differential methylation at specific loci was not altered much after intercrossing between Red Junglefowl and domesticated laying hens for eight generations. The results hint that domestication has led to epigenetic changes, as domesticated chickens maintained a higher level of methylation for more than 70% of the genes. Role in evolution The role of epigenetics in evolution is clearly linked to the selective pressures that regulate that process. As organisms leave offspring that are best suited to their environment, environmental stresses change gene expression, and these changes are passed down to their offspring, allowing them also to better thrive in their environment. The classic case study, in which rats who experience licking and grooming from their mothers pass this trait on to their own offspring, shows that a mutation in the DNA sequence is not required for a heritable change. Basically, a high degree of maternal nurturing makes the offspring of that mother more likely to nurture their own children with a high degree of care as well. Rats with a lower degree of maternal nurturing are less likely to nurture their own offspring with so much care. Also, rates of epigenetic mutations, such as DNA methylation, are much higher than rates of mutations transmitted genetically and are easily reversed. This provides a way for variation within a species to increase rapidly in times of stress, providing opportunity for adaptation to newly arising selection pressures. Lamarckism Lamarckism supposes that species acquire characteristics to deal with challenges experienced during their lifetimes, and that such accumulations are then passed to their offspring. In modern terms, this transmission from parent to offspring could be considered a method of epigenetic inheritance. Scientists are now questioning the framework of the modern synthesis, as epigenetics to some extent is Lamarckist rather than Darwinian. While some evolutionary biologists have dismissed epigenetics' impact on evolution entirely, others are exploring a fusion of epigenetic and traditional genetic inheritance. See also Transgenerational epigenetic inheritance References Epigenetics Evolutionary biology
Contribution of epigenetic modifications to evolution
[ "Biology" ]
1,944
[ "Evolutionary biology" ]
64,073,069
https://en.wikipedia.org/wiki/Spectroelectrochemistry
Spectroelectrochemistry (SEC) is a set of multi-response analytical techniques in which complementary chemical information (electrochemical and spectroscopic) is obtained in a single experiment. Spectroelectrochemistry provides a complete picture of the phenomena that take place in the electrode process. The first spectroelectrochemical experiment was carried out by Theodore Kuwana in 1964. The main objective of spectroelectrochemical experiments is to obtain simultaneous, time-resolved and in-situ electrochemical and spectroscopic information on reactions taking place on the electrode surface. The basis of the technique consists of studying the interaction of a beam of electromagnetic radiation with the compounds involved in these reactions. The changes in the optical and electrical signals allow us to understand the evolution of the electrode process. The techniques on which spectroelectrochemistry is based are: Electrochemistry, which studies the interaction between electrical energy and chemical changes. This technique allows us to analyse reactions that involve electron transfer processes (redox reactions). Spectroscopy, which studies the interaction between electromagnetic radiation and matter (absorption, dispersion or emission). Spectroelectrochemistry provides molecular, thermodynamic and kinetic information about reagents, products and/or intermediates involved in the electron transfer process. Classification of spectroelectrochemical techniques There are different spectroelectrochemical techniques based on the combination of spectroscopic and electrochemical techniques. Regarding electrochemistry, the most common techniques used are: Chronoamperometry, which measures current intensity as a function of time by applying a constant potential difference to the working electrode. Chronopotentiometry, which measures the potential difference as a function of time by applying a constant current. Voltammetry, which measures the change of current as a function of the linear change of the working electrode potential. Pulse techniques, which measure the change of current as a function of potential difference, applying pulsed potential functions to the working electrode. The general classification of the spectroelectrochemical techniques is based on the spectroscopic technique chosen. Ultraviolet-visible absorption spectroelectrochemistry Ultraviolet-visible (UV-Vis) absorption spectroelectrochemistry is a technique that studies the absorption of electromagnetic radiation in the UV-Vis regions of the spectrum, providing molecular information related to the electronic levels of molecules. It provides qualitative as well as quantitative information. UV-Vis spectroelectrochemistry helps to characterize compounds and materials, and determines concentrations and different parameters such as absorptivity coefficients, diffusion coefficients, formal potentials or electron transfer rates. Photoluminescence spectroelectrochemistry Photoluminescence (PL) is a phenomenon related to the ability of some compounds to relax to a lower energy state through the emission of photons after absorbing specific electromagnetic radiation. This spectroelectrochemical technique is limited to those compounds with fluorescent or luminescent properties. The experiments are strongly affected by interference from ambient light. This technique provides structural information and quantitative information with excellent detection limits. 
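As a rough sketch of how the electrical and optical responses complement each other in a UV-Vis absorption spectroelectrochemistry experiment of the kind described above, the code below couples a potential-step (chronoamperometric) current given by the Cottrell equation, integrated to charge, with the Beer-Lambert law for the absorbance of the electrogenerated product. All numerical values (electrode area, concentration, diffusion coefficient, molar absorptivity, path length, probed volume) are invented assumptions for illustration, and treating the product as uniformly distributed in the probed volume is a simplification rather than a model of any particular cell.

```python
# Rough sketch linking the electrochemical and optical responses in a UV-Vis
# absorption spectroelectrochemistry experiment after a potential step.
# All parameter values are illustrative assumptions, not data from any study.
import math

F = 96485.0          # Faraday constant (C/mol)
n = 1                # electrons transferred per molecule (assumed)
A_el = 0.5e-4        # electrode area (m^2, i.e. 0.5 cm^2), assumed
c0 = 1.0             # bulk reactant concentration (mol/m^3 = 1 mM), assumed
D = 7.0e-10          # diffusion coefficient (m^2/s), assumed
eps = 1.0e3          # molar absorptivity of the product (m^2/mol), assumed
path = 1.0e-3        # optical path length (m), assumed
V_cell = 50e-9       # probed solution volume (m^3 = 50 microlitres), assumed

def cottrell_current(t):
    """Diffusion-limited current after a potential step (Cottrell equation)."""
    return n * F * A_el * c0 * math.sqrt(D / (math.pi * t))

def charge(t):
    """Charge passed up to time t (time integral of the Cottrell current)."""
    return 2 * n * F * A_el * c0 * math.sqrt(D * t / math.pi)

def absorbance(t):
    """Beer-Lambert absorbance of the product accumulated in the probed volume."""
    moles_product = charge(t) / (n * F)
    c_product = moles_product / V_cell
    return eps * path * c_product

for t in (0.1, 1.0, 10.0):
    print(f"t={t:5.1f} s  i={cottrell_current(t)*1e6:8.2f} uA  "
          f"Q={charge(t)*1e3:7.3f} mC  A={absorbance(t):6.3f}")
```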
Infrared spectroelectrochemistry Infrared spectroscopy is based on the fact that molecules absorb electromagnetic radiation at characteristic frequencies related to their vibrational structure. Infrared (IR) spectroelectrochemistry is a technique that allows the characterization of molecules based on the strength, stiffness and number of bonds present. It also detects the presence of compounds and determines the concentration of species during a reaction, the structure of compounds, the properties of the chemical bonds, etc. Raman spectroelectrochemistry Raman spectroelectrochemistry is based on the inelastic scattering or Raman scattering of monochromatic light when it strikes a specific molecule, providing information about the vibrational energy of that molecule. The Raman spectrum provides highly specific information about the structure and composition of the molecules, acting as a true fingerprint of them. It has been extensively used to study single-wall carbon nanotubes and graphene. X-ray spectroelectrochemistry X-ray spectroelectrochemistry is a technique that studies the interaction of high-energy radiation with matter during an electrode process. X-rays can give rise to absorption, emission or scattering phenomena, allowing both quantitative and qualitative analysis to be performed depending on the phenomenon taking place. All these processes involve electronic transitions in the inner layers of the atoms involved. In particular, it is of interest to study the absorption and emission processes that take place during an electron transfer reaction. In these processes, the promotion or relaxation of an electron can occur between an outer shell and an inner shell of the atom. Nuclear magnetic resonance spectroelectrochemistry Nuclear magnetic resonance (NMR) is a technique used to obtain physical, chemical, electronic and structural information about molecules due to the chemical shift of the resonance frequencies of nuclear spins in the sample. Its combination with electrochemical techniques can provide detailed and quantitative information about the functional groups, topology, dynamics and the three-dimensional structure of molecules in solution during a charge transfer process. The area under an NMR peak is related to the number of nuclear spins involved, so peak integrals can be used to determine the composition quantitatively. Electron paramagnetic resonance spectroelectrochemistry Electron paramagnetic resonance (EPR) is a technique that allows the detection of free radicals formed in chemical or biological systems. In addition, it studies the symmetry and electronic distribution of paramagnetic ions. This is a highly specific technique because the magnetic parameters are characteristic of each ion or free radical. The physical principles of this technique are analogous to those of NMR, but in the case of EPR, electron spins are excited instead of nuclear spins, which is of interest in certain electrode reactions. Advantages and applications The versatility of spectroelectrochemistry is increasing due to the possibility of using several electrochemical techniques in different spectral regions depending on the purpose of the study and the information of interest. The main advantages of spectroelectrochemical techniques are: The simultaneous information is obtained by different techniques in a single experiment, increasing the selectivity and the sensitivity. Both qualitative and quantitative information can be obtained. 
The possibility of working with a small amount of sample, saving it for future analysis. Due to the high versatility of the technique, the field of applications is considerably wide. Study of reaction mechanisms, where the oxidation and reduction of the species involved in the reaction can be observed, as well as the generation of reaction intermediates. Characterization of organic and inorganic materials, which allows the structure and properties of the material to be understood when it is perturbed by a signal (electric, light, etc.). Development of spectroelectrochemical sensors, which are based on optical and electrical responses, capable of providing two independent signals about the same sample and offering a self-validated determination. Study of catalysts, obtaining relationships between the electrochemical and spectroscopic properties and their photochemical and photophysical behaviour. Study of different processes and molecules in biotechnology, biochemistry or medicine. Determination of specific properties and characteristics of new materials in fields such as energy or nanotechnology. References Physical chemistry Spectroscopy Electrochemistry
Spectroelectrochemistry
[ "Physics", "Chemistry" ]
1,484
[ "Applied and interdisciplinary physics", "Spectrum (physical sciences)", "Molecular physics", "Instrumental analysis", "Spectroscopy", "Electrochemistry", "nan", "Physical chemistry" ]
64,080,717
https://en.wikipedia.org/wiki/TET%20enzymes
The TET enzymes are a family of ten-eleven translocation (TET) methylcytosine dioxygenases. They are instrumental in DNA demethylation. 5-Methylcytosine (see first Figure) is a methylated form of the DNA base cytosine (C) that often regulates gene transcription and has several other functions in the genome. Demethylation by TET enzymes (see second Figure), can alter the regulation of transcription. The TET enzymes catalyze the hydroxylation of DNA 5-methylcytosine (5mC) to 5-hydroxymethylcytosine (5hmC), and can further catalyse oxidation of 5hmC to 5-formylcytosine (5fC) and then to 5-carboxycytosine (5caC). 5fC and 5caC can be removed from the DNA base sequence by base excision repair and replaced by cytosine in the base sequence. TET enzymes have central roles in DNA demethylation required during embryogenesis, gametogenesis, memory, learning, addiction and pain perception. TET proteins The three related TET genes, TET1, TET2 and TET3 code respectively for three related mammalian proteins TET1, TET2, and TET3. All three proteins possess 5mC oxidase activity, but they differ in terms of domain architecture. TET proteins are large (~180- to 230-kDa) multidomain enzymes. All TET proteins contain a conserved double-stranded β-helix (DSBH) domain, a cysteine-rich domain, and binding sites for the cofactors Fe(II) and 2-oxoglutarate (2-OG) that together form the core catalytic region in the C terminus. In addition to their catalytic domain, full-length TET1 and TET3 proteins have an N-terminal CXXC zinc finger domain that can bind DNA. The TET2 protein lacks a CXXC domain, but the IDAX gene, that's a neighbor of the TET2 gene, encodes a CXXC4 protein. IDAX is thought to play a role in regulating TET2 activity by facilitating its recruitment to unmethylated CpGs. TET isoforms The three TET genes are expressed as different isoforms, including at least two isoforms of TET1, three of TET2 and three of TET3. Different isoforms of the TET genes are expressed in different cells and tissues. The full-length canonical TET1 isoform appears virtually restricted to early embryos, embryonic stem cells and primordial germ cells (PGCs). The dominant TET1 isoform in most somatic tissues, at least in the mouse, arises from alternative promoter usage which gives rise to a short transcript and a truncated protein designated TET1s. The three isoforms of TET2 arise from different promoters. They are expressed and active in embryogenesis and differentiation of hematopoietic cells. The isoforms of TET3 are the full length form TET3FL, a short form splice variant TET3s, and a form that occurs in oocytes designated TET3o. TET3o is created by alternative promoter use and contains an additional first N-terminal exon coding for 11 amino acids. TET3o only occurs in oocytes and the one cell stage of the zygote and is not expressed in embryonic stem cells or in any other cell type or adult mouse tissue tested. Whereas TET1 expression can barely be detected in oocytes and zygotes, and TET2 is only moderately expressed, the TET3 variant TET3o shows extremely high levels of expression in oocytes and zygotes, but is nearly absent at the 2-cell stage. It appears that TET3o, high in oocytes and zygotes at the one cell stage, is the major TET enzyme utilized when almost 100% rapid demethylation occurs in the paternal genome just after fertilization and before DNA replication begins (see DNA demethylation). TET specificity Many different proteins bind to particular TET enzymes and recruit the TETs to specific genomic locations. 
In some studies, further analysis is needed to determine whether the interaction per se mediates the recruitment or instead the interacting partner helps to establish a favourable chromatin environment for TET binding. TET1-depleted and TET2-depleted cells revealed distinct target preferences of these two enzymes, with TET1 preferring promoters and TET2 preferring the gene bodies of highly expressed genes and enhancers. The three mammalian DNA methyltransferases (DNMTs) show a strong preference for adding a methyl group to the 5-carbon of a cytosine where a cytosine nucleotide is followed by a guanine nucleotide in the linear sequence of bases along its 5' → 3' direction (at CpG sites). This forms a 5mCpG site. More than 98% of DNA methylation occurs at CpG sites in mammalian somatic cells. Thus TET enzymes largely initiate demethylation at 5mCpG sites. Oxoguanine glycosylase (OGG1) is one example of a protein that recruits a TET enzyme. TET1 is able to act on 5mCpG if a reactive oxygen species (ROS) has first acted on the guanine to form 8-hydroxy-2'-deoxyguanosine (8-OHdG or its tautomer 8-oxo-dG), resulting in a 5mCp-8-OHdG dinucleotide (see Figure). After formation of 5mCp-8-OHdG, the base excision repair enzyme OGG1 binds to the 8-OHdG lesion without immediate excision (see Figure). Adherence of OGG1 to the 5mCp-8-OHdG site recruits TET1, allowing TET1 to oxidize the 5mC adjacent to 8-OHdG. This initiates the demethylation pathway. EGR1 is another example of a protein that recruits a TET enzyme. EGR1 has an important role in learning and memory. When a new event such as fear conditioning causes a memory to be formed, EGR1 messenger RNA is rapidly and selectively up-regulated in subsets of neurons in specific brain regions associated with learning and memory formation. TET1s is the predominant isoform of TET1 that is expressed in neurons. When EGR1 proteins are expressed, they appear to bring TET1s to about 600 sites in the neuron genome. Then EGR1 and TET1 appear to cooperate in demethylating and thereby activating the expression of genes downstream of the EGR1 binding sites in DNA. TET processivity TET processivity can be viewed at three levels: the physical, chemical and genetic levels. Physical processivity refers to the ability of a TET protein to slide along the DNA from one CpG site to another. An in vitro study showed that DNA-bound TET does not preferentially oxidize other CpG sites on the same DNA molecule, indicating that TET is not physically processive. Chemical processivity refers to the ability of TET to catalyze the oxidation of 5mC iteratively to 5caC without releasing its substrate. It appears that TET can work through both chemically processive and non-processive mechanisms depending on reaction conditions. Genetic processivity refers to the genetic outcome of TET-mediated oxidation in the genome, as shown by mapping of the oxidized bases. In mouse embryonic stem cells, many genomic regions or CpG sites are modified so that 5mC is changed to 5hmC but not to 5fC or 5caC, whereas at many other CpG sites 5mCs are modified to 5fC or 5caC but not 5hmC, suggesting that 5mC is processed to different states at different genomic regions or CpG sites. TET enzyme activity TET enzymes are dioxygenases in the family of alpha-ketoglutarate-dependent hydroxylases. 
A TET enzyme is an alpha-ketoglutarate (α-KG) dependent dioxygenase that catalyses an oxidation reaction by incorporating a single oxygen atom from molecular oxygen (O2) into its substrate, 5-methylcytosine in DNA (5mC), to produce the product 5-hydroxymethylcytosine in DNA. This conversion is coupled with the oxidation of the co-substrate α-KG to succinate and carbon dioxide (see Figure). The first step involves the binding of α-KG and 5-methylcytosine to the TET enzyme active site. The TET enzymes each harbor a core catalytic domain with a double-stranded β-helix fold that contains the crucial metal-binding residues found in the family of Fe(II)/α-KG- dependent oxygenases. α-KG coordinates as a bidentate ligand (connected at two points) to Fe(II) (see Figure), while the 5mC is held by a noncovalent force in close proximity. The TET active site contains a highly conserved triad motif, in which the catalytically-essential Fe(II) is held by two histidine residues and one aspartic acid residue (see Figure). The triad binds to one face of the Fe center, leaving three labile sites available for binding α-KG and O2 (see Figure). TET then acts to convert 5-methylcytosine to 5-hydroxymethylcytosine while α-ketoglutarate is converted to succinate and CO2. Alternate TET activities The TET proteins also have activities that are independent of DNA demethylation. These include, for instance, TET2 interaction with O-linked N-acetylglucosamine (O-GlcNAc) transferase to promote histone O-GlcN acylation to affect transcription of target genes. TET functions Early embryogenesis The mouse sperm genome is 80–90% methylated at its CpG sites in DNA, amounting to about 20 million methylated sites. After fertilization, early in the first day of embryogenesis, the paternal chromosomes are almost completely demethylated in six hours by an active TET-dependent process, before DNA replication begins (blue line in Figure). Demethylation of the maternal genome occurs by a different process. In the mature oocyte, about 40% of its CpG sites in DNA are methylated. In the pre-implantation embryo up to the blastocyst stage (see Figure), the only methyltransferase present is an isoform of DNMT1 designated DNMT1o. It appears that demethylation of the maternal chromosomes largely takes place by blockage of the methylating enzyme DNMT1o from entering the nucleus except briefly at the 8 cell stage (see DNA demethylation). The maternal-origin DNA thus undergoes passive demethylation by dilution of the methylated maternal DNA during replication (red line in Figure). The morula (at the 16 cell stage), has only a small amount of DNA methylation (black line in Figure). Gametogenesis The newly formed primordial germ cells (PGC) in the implanted embryo devolve from the somatic cells at about day 7 of embryogenesis in the mouse. At this point the PGCs have high levels of methylation. These cells migrate from the epiblast toward the gonadal ridge. As reviewed by Messerschmidt et al., the majority of PGCs are arrested in the G2 phase of the cell cycle while they migrate toward the hindgut during embryo days 7.5 to 8.5. Then demethylation of the PGCs takes place in two waves. There is both passive and active, TET-dependent demethylation of the primordial germ cells. At day 9.5 the primordial germ cells begin to rapidly replicate going from about 200 PGCs at embryo day 9.5 to about 10,000 PGCs at day 12.5. During days 9.5 to 12.5 DNMT3a and DNMT3b are repressed and DNMT1 is present in the nucleus at a high level. 
But DNMT1 is unable to methylate cytosines during days 9.5 to 12.5 because the UHRF1 gene (also known as NP95) is repressed and UHRF1 is an essential protein needed to recruit DNMT1 to replication foci where maintenance DNA methylation takes place. This is a passive, dilution form of demethylation. In addition, from embryo day 9.5 to 13.5 there is an active form of demethylation. As indicated in the Figure of the demethylation pathway above, two enzymes are central to active demethylation. These are a ten-eleven translocation (TET) methylcytosine dioxygenase and thymine-DNA glycosylase (TDG). One particular TET enzyme, TET1, and TDG are present at high levels from embryo day 9.5 to 13.5, and are employed in active TET-dependent demethylation during gametogenesis. PGC genomes display the lowest levels of DNA methylation of any cells in the entire life cycle of the mouse by embryonic day 13.5. Learning and memory Learning and memory have levels of permanence, differing from other mental processes such as thought, language, and consciousness, which are temporary in nature. Learning and memory can be either accumulated slowly (multiplication tables) or rapidly (touching a hot stove), but once attained, can be recalled into conscious use for a long time. Rats subjected to one instance of contextual fear conditioning create an especially strong long-term memory. At 24 hours after training, 9.17% of the genes in the genomes of rat hippocampus neurons were found to be differentially methylated. This included more than 2,000 differentially methylated genes at 24 hours after training, with over 500 genes being demethylated. Similar results to that in the rat hippocampus were also obtained in mice with contextual fear conditioning. The hippocampus region of the brain is where contextual fear memories are first stored (see Figure), but this storage is transient and does not remain in the hippocampus. In rats contextual fear conditioning is abolished when the hippocampus is subjected to hippocampectomy just one day after conditioning, but rats retain a considerable amount of contextual fear when hippocampectomy is delayed by four weeks. In mice, examined at 4 weeks after conditioning, the hippocampus methylations and demethylations were reversed (the hippocampus is needed to form memories but memories are not stored there) while substantial differential CpG methylation and demethylation occurred in cortical neurons during memory maintenance. There were 1,223 differentially methylated genes in the anterior cingulate cortex (see Figure) of mice four weeks after contextual fear conditioning. Thus, while there were many methylations in the hippocampus shortly after memory was formed, all these hippocampus methylations were demethylated as soon as four weeks later. Li et al. reported one example of the relationship between expression of a TET protein, demethylation and memory while using extinction training. Extinction training is the disappearance of a previously learned behavior when the behavior is not reinforced. A comparison between infralimbic prefrontal cortex (ILPFC) neuron samples derived from mice trained to fear an auditory cue and extinction-trained mice revealed dramatic experience-dependent genome-wide differences in the accumulation of 5-hmC in the ILPFC in response to learning. Extinction training led to a significant increase in TET3 messenger RNA levels within cortical neurons. TET3 was selectively activated within the adult neo-cortex in an experience-dependent manner. 
A short hairpin RNA (shRNA) is an artificial RNA molecule with a tight hairpin turn that can be used to silence target gene expression via RNA interference. Mice trained in the presence of TET3-targeted shRNA showed a significant impairment in fear extinction memory. Addiction The nucleus accumbens (NAc) has a significant role in addiction. In the nucleus accumbens of mice, repeated cocaine exposure resulted in reduced TET1 messenger RNA (mRNA) and reduced TET1 protein expression. Similarly, there was a ~40% decrease in TET1 mRNA in the NAc of human cocaine addicts examined postmortem. As indicated above under learning and memory, shRNA can be used to silence target gene expression via RNA interference. Feng et al. injected shRNA targeted to TET1 into the NAc of mice. This reduced TET1 expression in a manner similar to the reduction seen with cocaine exposure. They then used an indirect measure of addiction, conditioned place preference. Conditioned place preference measures the amount of time an animal spends in an area that has been associated with cocaine exposure, and this can indicate an addiction to cocaine. Reduced Tet1 expression caused by shRNA injected into the NAc robustly enhanced cocaine place conditioning. Pain (nociception) As described in the article Nociception, nociception is the sensory nervous system's response to harmful stimuli, such as a toxic chemical applied to a tissue. In nociception, chemical stimulation of sensory nerve cells called nociceptors produces a signal that travels along a chain of nerve fibers via the spinal cord to the brain. Nociception triggers a variety of physiological and behavioral responses and usually results in a subjective experience, or perception, of pain. Work by Pan et al. first showed that TET1 and TET3 proteins are normally present in the spinal cords of mice. They used a pain-inducing model of intraplantar injection of 5% formalin into the dorsal surface of the mouse hindpaw and measured the time spent licking the hindpaw as a measure of induced pain. Protein expression of TET1 and TET3 increased by 152% and 160%, respectively, by 2 hours after formalin injection. Forced reduction of TET1 or TET3 expression by spinal injection of Tet1-siRNA or Tet3-siRNA for three consecutive days before formalin injection alleviated the mice's perception of pain. On the other hand, forced overexpression of TET1 or TET3 for 2 consecutive days produced significant pain-like behavior, as evidenced by a decrease in the thermal pain threshold of the mice. They further showed that the nociceptive effects occurred through TET-mediated conversion of 5-methylcytosine to 5-hydroxymethylcytosine in the promoter of a microRNA designated miR-365-3p, thus increasing its expression. This microRNA, in turn, ordinarily targets (decreases expression of) the messenger RNA of Kcnh2, which codes for a protein known as Kv11.1 or KCNH2. KCNH2 is the alpha subunit of a potassium ion channel in the central nervous system. Forced decrease in expression of TET1 or TET3 through pre-injection of siRNA reversed the decrease of KCNH2 protein in formalin-treated mice. References Gene expression Epigenetics Further reading
TET enzymes
[ "Chemistry", "Biology" ]
4,141
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
64,088,128
https://en.wikipedia.org/wiki/Baubotanik
Baubotanik is a building method in which architectural structures are created through the interaction of technical joints and plant growth. The term entails the practice of designing and building living structures using living plants. In this regard, living and non-living elements are intertwined in such a way that they grow together into plant-technical composite structures. The Baubotanik method combines the aesthetic and ecological qualities of living trees with the static functions and structural requirements of buildings, thereby reducing the need for artificial building materials. The structures provide valuable habitats for a variety of animal species and make conventional foundations redundant, due to their root anchorage. The use of Baubotanik is not a new invention and can be found in various historical and cultural contexts, such as the Tanzlinden (“dancing lime”) tree in Germany and living root bridge in North-East India. Common in the Indian state of Meghalaya and grown by the Khasi and Jaintia, the bridges consist of the aerial roots of rubber fig trees (Ficus elastica) and are grown over rivers to form walkable bridges. While the process can take fifteen years to complete, the bridges can be reinforced with natural materials and can withstand the strongest tropical storms. Furthermore, since the turn of the millennium, ‘willow churches’ (made of willow rods and lacking a fixed roof) have been constructed on various former garden show grounds, yet provide only limited functionality as buildings. Research An early publication in this field of study was the article Baubotanik: Mit lebenden Pflanzen konstruieren (translating to “Baubotanik: Designing with Living Plants) by Ferdinand Ludwig and Oliver Storz in 2005 in the magazine Baumeister. The term “Baubotanik” was defined in 2007 at the Institute of Theory of Architecture and Design (Institut für Grundlagen moderner Architektur und Entwerfen) at the University of Stuttgart, where its concept was scientifically further developed. Within the scope of the research, simple experimental buildings were constructed, such as a footbridge and a Baubotanik tower that illustrated the possibilities of creating larger Baubotanik structures by adding individual plants. Moreover, a two-story bird-watching station was planted in the town of Waldkirchen as part of the Bavarian State Horticultural Show 2007. Subsequently, a three-story plane tree cube was created for the Baden-Württemberg State Horticultural Show 2012 in Nagold. Since 2017, the Baubotanik field of research has been based at the Professorship for Green Technologies in Landscape Architecture at the Technical University of Munich. See also References Literature Middleton, Wilfrid & Habibi, Amin & Shankar, Sanjeev & Ludwig, Ferdinand. (2020). Characterizing Regenerative Aspects of Living Root Bridges. Sustainability. 12. 10.3390/su12083267. Open access article link Well, Friederike & Ludwig, Ferdinand. (2020). Blue-green architecture: A case study analysis considering the synergetic effects of water and vegetation. 9. 191–202. 10.1016/j.foar.2019.11.001. Open access article link Ludwig, Ferdinand & Middleton, Wilfrid & Gallenmüller, Friederike & Rogers, Patrick & Speck, Thomas. (2019). Living bridges using aerial roots of ficus elastica – an interdisciplinary perspective. Scientific Reports. 9. 10.1038/s41598-019-48652-w. Open access article link Ludwig, Ferdinand & Schönle, Daniel & Vees, Ute. (2016). Baubotanik - Building Architecture with Nature. 
International Online Journal Biotope City. PDF download and open access article link Ludwig, Ferdinand & Mihaylov, Boyan & Schwinn, Tobias. (2013). Emergent Timber: A tool for designing the growth process of Baubotanik structures PDF download and open access article link External links Ferdinand Ludwig, TEDxTUM, Designing living buildings with trees Faculty of Architecture, Technical University of Munich, GTLA research (Professorship of Green Technologies in Landscape Architecture) Baubotanik shapes living tree branches into building facades Youtube video: Kirsten Dirksen Baubotanik: Ein Hybrid von Natur und Technik Youtube video: EGGER Group ArchDaily: Baubotanik - The Botanically Inspired Design System That Creates Living Buildings ArchDaily Grow Your Own Building with Baubotanik Architecture Grow Your Own Building with Baubotanik Architecture Sustainable architecture Buildings and structures Architectural design Sustainable building Environmental engineering Trees
Baubotanik
[ "Chemistry", "Engineering", "Environmental_science" ]
947
[ "Sustainable building", "Sustainable architecture", "Building engineering", "Chemical engineering", "Construction", "Civil engineering", "Architectural design", "Buildings and structures", "Environmental engineering", "Design", "Environmental social science", "Architecture" ]
67,019,071
https://en.wikipedia.org/wiki/Lianhua%20Qingwen
Lianhua Qingwen (, LHQW) is a traditional Chinese medicine (TCM) formulation used for the treatment of influenza. Background Lianhua Qingwen was developed by Shijiazhuang Yiling Pharmaceutical in 2003 as a treatment for severe acute respiratory syndrome (SARS) following the outbreak of the disease in 2002, and was listed by the National Health Commission of China in 2004 as a treatment for influenza and other respiratory diseases. Its formulation includes 13 herbs and minerals which are said to have been used in Chinese traditional medicine as early as the Han dynasty. The medication is approved in China as a Chinese patent medicine. As a result, the package insert includes a list of herbs, but not their amounts. Sources of its formulation reportedly consist of: 北板蓝根 Isatis root 连翘 Weeping forsythia 金银花 Lonicera japonica 炙麻黄 Ephedra 甘草 Licorice root 绵马贯众 Male fern rhizome 石膏 Gypsum fibrosum 广藿香 Cablin patchouli herb 红景天 Herba rhodiolae 鱼腥草 Houttuynia cordata 大黄 Rhubarb root and rhizome 炒苦杏仁 Bitter apricot kernel 薄荷醇 menthol The medicine is available in both capsule and granular form. Ethnopharmacology As an approved Chinese patent medicine, LHQW must also carry information regarding its supposed function in the practice of TCM. The package insert text reads: [Functions and Indications] Clears epidemics and removes toxins. Ventilates lung and discharges heat. Used to treat influenza of the heat-toxin invading lung pattern, with symptoms such as: fever or high fever, aversion to cold, muscle soreness and pain, nasal congestion and runny nose, coughing, headache, dry and sore throat, reddish tongue, yellow or greasy yellow tongue coating, etc. Addition approved in April 2020: In the routine treatment of novel coronavirus pneumonia, it can be used for fever, cough, and fatigue caused by mild and moderate types. Contraindication Lianhua Qingwen should be avoided by patients with G6PD deficiency, since its active ingredient, Lonicera japonica, will lead to hemolysis in such patients. Due to the inclusion of Ephedra, people with high blood pressure, anxiety, a history of seizures, irregular heartbeats, or other heart conditions should avoid taking Lianhua Qingwen. Adverse effects The official package insert of LHQW states that the adverse effects are "unclear". A January 2022 meta-analysis from China reports that it may cause GI discomfort, rashes and itches, dry mouth, and dizziness. Uses and controversies of Lianhua Qingwen in relation to COVID-19, by region In China During the COVID-19 pandemic, the government of the People's Republic of China (PRC) approved the use of Lianhua Qingwen for mild to moderate COVID-19 cases in January 2020, and promotes the use of the medicine abroad. In March 2022, during the Shanghai COVID-19 outbreak, the medication was distributed en masse to residents. Reports emerged indicating that this process consumed significant logistical capacity, drawing criticism about misuse of resources at a time when people were struggling with shortages of basic needs such as food and medication. An article on the telemedicine and medical news platform Dr. Lilac pointed out that there was no scientific evidence available to indicate that LHQW was effective as prophylaxis to prevent infection, nor did there exist any official government recommendation for such usage; instead, taking the drug unnecessarily carried a risk of side effects. 
It argued that there was thus no reasonable basis for the mass distribution of the medication to healthy individuals to begin with, let alone doing so in a way that took up transportation capacity and resources that were urgently needed elsewhere. Lianhua Qingwen has also been promoted and distributed by the government in Hong Kong (HK). The pro-establishment DAB alliance was found to have distributed unregistered doses of LHQW, in breach of health regulations. Conflict of Interest Controversy In April 2022, the Financial Times reported that the leading COVID-19 health official in the PRC, the famous epidemiologist and pulmonologist Zhong Nanshan - who has also been the most prominent scientific promoter of Lianhua Qingwen - had undisclosed prior investments in large stakeholdings in corporations producing Lianhua Qingwen and other treatments under question. As the Financial Times report showed, these appeared to be serious conflicts of interest as the investments benefited from the PRC and HK governments' rapid approval and then widespread national & international promotion of Lianhua Qingwen and other suspect treatments for purportedly helping COVID-19 sufferers. A 2020 "randomized controlled trial" of LHQW involving Zhong was also found in April 2022 to have undisclosed funding from Yiling Pharmaceuticals, forcing an erratum. Retraction Watch also notes that author Jia Zheng-hua is the son-in-law of Wu Yi-ling, the founder of the company in question. Elsewhere in Asia In the Philippines, its Food and Drug Administration approved Lianhua Qingwen on 7 August 2020 as a traditional herbal product that helps remove "heat-toxin invasion of the lungs, including symptoms such as fever, aversion to cold, muscle soreness, stuffy and runny nose". It is not registered as a COVID-19 medication, and a doctor's prescription is required for its use. A Filipino TCM physician interviewed by ABS-CBN clarified that although the medicine can be used for symptomatic treatment of flu-like symptoms in COVID-19 patients, it is not an antibiotic nor anti-viral, and cannot cure the disease itself. It cannot be taken as prophylaxis or as a health supplement. In Singapore, the Health Sciences Authority has issued an advisory to clarify that although it has been approved for sale as a Chinese proprietary medicine for the relief of cold and flu symptoms, Lianhua Qingwen is not approved to treat or alleviate symptoms of COVID-19. It warned that sellers who make claims that it can prevent, protect against or treat COVID-19 may face prosecution. In Cambodia at least 50,000 boxes were handed over to the Ministry of Health around April 2021, but it remains unclear both who the sponsor of the donation was, exactly how many capsules were donated and where the products were to be put to clinical use. After local pharmaceutical distributor Argon, a subsidiary of Dynamic Group claimed exclusive rights to distribute the product, a number of private importers and online resellers were shut down. North America Although the medicine has been allowed to be sold in Canada since 2012, Health Canada has cautioned against the use of the Chinese traditional medicine to prevent, treat, and cure COVID-19. In the United States, the FDA is advising consumers not to purchase or use Lianhua Qingwen, stating that it has not been approved or authorized by FDA and is being misleadingly represented as safe and/or effective for the treatment or prevention of COVID-19. 
Australia In Australia, the Therapeutic Goods Administration has not given approval to Lianhua Qingwen, as it contains ephedra, a source of ephedrine, which is a key precursor used to make the drug methamphetamine. Despite the ban, Lianhua Qingwen has been sold illegally in Australia as a COVID-19 treatment. See also NRICM101 References Traditional Chinese medicine pills COVID-19 drug development COVID-19 pandemic in China
Lianhua Qingwen
[ "Chemistry" ]
1,577
[ "COVID-19 drug development", "Drug discovery" ]
39,882,238
https://en.wikipedia.org/wiki/Biotechnology%20and%20genetic%20engineering%20in%20Bangladesh
Biotechnology and genetic engineering in Bangladesh is one of the thriving fields of science and technology in the country. History The research for biotechnology in Bangladesh started in the late 1970s. The root cause behind the initiation was the significance of agricultural sector, which had been the backbone of the national economy since the ancient times. The research first started in the department of Genetics and Plant Breeding in Bangladesh Agricultural University through Tissue culture on jute. Subsequently, within the next 10–12 years, similar research programs began to take place in the Faculty of Biotechnology & Genetic Engineering at Mawlana Bhashani Science and Technology University, University of Rajshahi, University of Chittagong, University of Khulna, Islamic University, Kushtia, Jagannath University, Jahangirnagar University, Shahjalal University of Science and Technology, Bangladesh Rice Research Institute, Bangladesh Jute Research Institute, Bangabandhu Sheikh Mujibur Rahman Science and Technology University, Sylhet Agricultural University, Bangabandhu Shiekh Mujibur Rahman Agricultural University, Bangladesh Agricultural Research Institute, Bangladesh Agricultural University, Bangladesh Forest Research Institute, Bangladesh Institute of Nuclear Agriculture, Bangladesh Council of Scientific and Industrial Research, Bangladesh Livestock Research Institute and Bangladesh Atomic Energy Commission. In 1990, Bangladesh Association for Plant Tissue Culture (BAPTC) was formed which has been organising several international conferences since its inception. In September 1993, the government of Bangladesh formed a National Committee on Biotechnology Product Development to select potential biotechnological projects which could be leased out for commercialisation. In collaboration with BAPTC, the Ministry of Science and Technology organised a workshop on Biosafety Regulation in 1997, after which a task force was formed to formulate biosafety guidelines and biosafety regulations in the light of the regulation of the workshop. In the late 1990s, Bangladesh became a member of the International Centre for Genetic Engineering and Biotechnology (ICGEB). In 1999, the National Institute of Biotechnology was established as the centre of excellence in biotechnological education. To accelerate multidimensional biotechnological research, in 2006, the government adopted the national policy guidelines on biotechnology which was approved by the National Task force on biotechnology. In 2012, the cabinet approved the draft of National Biotechnology Policy, 2012 which was aimed at eradicating poverty through increasing productivity in agriculture and industrial sectors. Genome sequencing projects Jute In 2008, with the funding of the government, the University of Dhaka, DataSoft IT firm and Bangladesh Jute Research Institute initiated a collaborative genome research program on jute under the leadership of Dr. Maqsudul Alam who had previously sequenced the genomes of papaya and rubber. Subsequently, in 2010, the group of scientists successfully sequenced the genome of jute, through which, Bangladesh became only the second country after Malaysia, among the developing nations, to have successfully sequenced a plant genome. Fungus In 2012, the same group of scientists decoded the genome of Macrophomina phaseolina, a Botryosphaeriaceae fungus, which is responsible for causing seedling blight, root rot, and charcoal rot of more than 500 crop and non-crop species throughout the world. 
The sequencing took place at the laboratory of the Bangladesh Jute Research Institute and was done as part of the Basic and Applied Research on Jute project. Rice In 2021, scientists at the Bangladesh Institute of Nuclear Agriculture (BINA) and Bangladesh Agricultural University (BAU) unveiled the full genome sequence of salinity- and submergence-tolerant rice for the first time. Biotechnology industry The biotechnology industry is yet to become a major contributor to the national economy; however, according to experts, the results of some ongoing research show the potential of this sector. BCSIR has undertaken the production of Spirulina, and a certain quantity of it is being marketed as tablets by several private manufacturers. BCSIR has also explored the production of baker's yeast using molasses, a by-product of the sugarcane processing plants in the northern part of the country. The net production of molasses totals about 100,000 million tons per year, about half of which is used in distilleries for the production of ethanol. The production of Rhizobium is also perceived to have commercial potential. Several private pharmaceutical companies have started to develop separate, dedicated biotech units. Some private firms like BRAC Biotechnology Center, Square Agric-tech and Aman Agro Industries are producing virus-free potato seeds in substantial quantities, gradually reducing the dependency on imported potato seeds. Proshika Tissue Culture Center is now exporting varieties of tissue-culture-derived orchid plants. Pharmaceutical companies like Incepta Pharmaceuticals have begun to produce and market insulin and are preparing to export it abroad. Incepta has also signed an agreement with ICGEB to receive the technological know-how for commercially manufacturing hepatitis B vaccine. References Biotechnology by country Ban Science and technology in Bangladesh
Biotechnology and genetic engineering in Bangladesh
[ "Engineering", "Biology" ]
1,005
[ "Genetic engineering", "Genetic engineering by country", "Biotechnology by country" ]
39,888,138
https://en.wikipedia.org/wiki/Hantavirus%20hemorrhagic%20fever%20with%20renal%20syndrome
Hantavirus hemorrhagic fever with renal syndrome (HFRS) is a hemorrhagic fever caused by hantaviruses. Symptoms usually occur 12–16 days after exposure to the virus and come in five distinct phases: febrile, hypotensive, low urine production (oliguric), high urine production (diuretic), and recovery. Early symptoms include headache, lower back pain, nausea, vomiting, diarrhea, bloody stool, the appearance of spots on the skin, bleeding in the respiratory tract, and renal symptoms such as kidney swelling, excess protein in urine, and blood in urine. During the hypotensive phase, blood pressure lowers due to microvascular leakage. Renal failure then causes low urine production, before the kidneys recover and urine production increases in the diuretic phase as the disease improves. The severity of symptoms varies depending on which virus causes HFRS and ranges from mild to severe illness. The case fatality rate likewise varies by virus, from less than 1% up to 15%. HFRS is caused mainly by four viruses in Asia and Europe: Hantaan virus, Seoul virus, Puumala virus, and Dobrava-Belgrade virus. In East Asia, Hantaan virus is the most common cause of HFRS, causes a severe form of HFRS, and is spread by striped field mice. Seoul virus accounts for about a quarter of HFRS cases, causes a moderate form of the disease, and is found worldwide due to the global distribution of its natural reservoir, the brown rat. Puumala virus is the most common cause of HFRS in Russia and northern and central Europe, usually causes a mild form of HFRS, and is transmitted by the bank vole. Dobrava-Belgrade virus is the most common cause of HFRS in southern Europe, and varies in disease severity and natural reservoir depending on its genotype. A mild form of HFRS often called nephropathia epidemica is caused by Puumala virus and Dobrava-Belgrade virus. Transmission occurs mainly through inhalation of aerosols that contain rodent saliva, urine, or feces, but can also occur through contaminated food, bites, and scratches. Vascular endothelial cells and macrophages are the primary cells infected by hantaviruses, and infection causes abnormalities in blood clotting, all of which result in the fluid leakage responsible for the more severe symptoms. Recovery from infection likely confers life-long protection. The main way to prevent HFRS is to avoid or minimize contact with rodents that carry hantaviruses. Removing sources of food for rodents, safely cleaning up after them, and preventing them from entering one's house are all important means of protection. People who are at risk of interacting with infected rodents can wear masks to protect themselves. Bivalent vaccines that protect against Hantaan virus and Seoul virus are in use in China and South Korea. Initial diagnosis of infection can be made based on epidemiological information and symptoms. Infection can be confirmed by testing for hantavirus nucleic acid, proteins, or hantavirus-specific antibodies. Treatment of HFRS is supportive and depends on the phase of disease and clinical presentation. Intravenous hydration, electrolyte therapy, and platelet transfusions may be performed, as well as intermittent hemodialysis for renal failure and continuous renal replacement therapy in critical cases. No specific antiviral drugs exist for hantavirus infection. More than 100,000 cases of HFRS occur each year. China is the most affected country in Asia while Finland is the most affected country in Europe. More than 10,000 cases of NE are diagnosed annually. 
The distribution of viruses that cause HFRS is directly tied to the distribution of their natural reservoirs. Transmission is also greatly influenced by environmental factors such as rainfall, temperature, and humidity, which affect the rodent population and virus transmissibility. Outbreaks of HFRS have occurred throughout history, especially among soldiers living in poor conditions during wartime. During the Korean War in the 1950s, an epidemic of HFRS occurred among United Nations soldiers stationed near the Hantan river. The outbreak was determined in the 1970s and 1980s to be caused by Hantaan virus, which was named after the river and which was the first hantavirus discovered. Other HFRS epidemics include an outbreak in Finland in World War Two among German and Finnish soldiers, caused by Puumala virus, and an outbreak in Croatia during the Balkan Wars, caused by Puumala virus and Dobrava-Belgrade virus. Signs and symptoms Hantavirus hemorrhagic fever with renal syndrome (HFRS) is characterized by five phases: febrile, hypotensive, low urine production (oliguria), high urine production (polyuria or diuretic), and recovery. Symptoms usually occur 12–16 days after exposure to the virus, but may appear as early as 5 days or as late as 42 days after exposure. A hallmark of HFRS is acute kidney disease with kidney swelling, excess protein in urine (proteinuria), and blood in urine (hematuria). Other symptoms include headache, lower back pain, impaired vision, nausea, vomiting, diarrhea, and bloody stool. These early symptoms last 3–7 days. Hemorrhagic symptoms include the appearance of red, purple, or brown spots on the skin (petechiae) and mucosa within 3–4 days after the onset of symptoms, coughing up blood or blood-stained mucus or airway bleeding (hemoptysis), congestion of the conjunctiva in the eye, gastrointestinal bleeding, and intracranial bleeding. In severe cases, excess blood clotting throughout the body, called disseminated intravascular coagulation (DIC), can occur. The severity of symptoms varies depending on the virus: Hantaan virus causes severe HFRS; Seoul virus moderate HFRS; Puumala virus mild HFRS; and Dobrava-Belgrade virus varies depending on genotype. During the hypotensive phase, which lasts 1–3 days, there is a sudden onset of low blood pressure and shock due to microvascular leakage, which can result in sudden death. Urine production and platelet count decrease, white blood cell count in blood increases, and hemorrhaging begins during the hypotensive phase. The oliguric phase, which lasts 2–6 days, is caused by renal failure with oliguria and proteinuria. During the polyuria phase, which lasts about 1–2 weeks, renal function gradually recovers, which leads to increased urine production. Complete renal function can be restored after a long recovery process, about 2–6 months, but chronic renal failure, endocrine dysfunction, and hypertension may occur. Extra-renal complications include acute respiratory distress syndrome (ARDS), pituitary gland injury or insufficient pituitary hormone production, and inflammation of the gallbladder (cholecystitis), the pericardium (pericarditis), the brain (encephalitis), and the pancreas (pancreatitis). Infection with Puumala virus or Dobrava-Belgrade virus often causes a mild form of HFRS known as nephropathia epidemica (NE). For pregnant women and fetuses, symptoms are more severe, whereas for children symptoms are milder with a greater frequency of abdominal symptoms. 
Serological surveys suggest that many HFRS infections go unnoticed, either as asymptomatic infections or as a mild flu-like illness with symptoms such as high fever, malaise, and muscle pain. In more mild cases, the different phases of illness may be hard to distinguish, or some phases may be absent, while in more severe cases, the phases may overlap. While HFRS is typically associated with renal disease, symptoms sometimes include cardiopulmonary symptoms associated with hantavirus pulmonary syndrome. Some symptoms are associated with specific hantaviruses: Puumala virus often causes ocular symptoms such as blurry vision and nearsightedness, Hantaan virus can affect the pituitary gland and cause empty sella syndrome, and Dobrava-Belgrade virus commonly causes acute respiratory distress syndrome (ARDS). Repeated infections of hantaviruses have not been observed, so recovering from infection likely grants life-long immunity. Virology Genome and structure The genome of hantaviruses is segmented into three parts: the large (L), medium (M), and small (S) segments. Each part is a single-stranded negative-sense RNA strand, consisting of 10,000–15,000 nucleotides in total. The segments form into circles via non-covalent bonding of the ends of the genome. The L segment is about 6.6 kilobases (kb) in length and encodes RNA-dependent RNA polymerase (RdRp), which mediates transcription and replication of viral RNA. The M segment, about 3.7 kb in length, encodes a glycoprotein precursor that is co-translated and cleaved into Gn and Gc. Gn and Gc bind to cell receptors, regulate immune responses, and induce protective antibodies. The S segment is around 2.1 kb in length and encodes the N protein, which binds to and protects viral RNA. An open reading frame in the N gene on the S segment of some hantaviruses also encodes the non-structural protein NS that inhibits interferon production in host cells. The untranslated regions at the ends of the genome are highly conserved and participate in the replication and transcription of the genome. Individual hantavirus particles (virions) are usually spherical, but may be oval, pleomorphic, or tubular. The diameter of the virion is 70–350 nanometers (nm). The lipid envelope is about 5 nm thick. Embedded in the envelope are the surface spike glycoproteins Gn and Gc, which are arranged in a lattice pattern. Each surface spike is composed of a tetramer of Gn and Gc (four units each) that has four-fold rotational symmetry, extending about 10 nm out from the envelope. Gn forms the stalk of the spike and Gc the head. Inside the envelope are helical nucleocapsids made of many copies of the nucleocapsid protein N, which interact with the virus's genome and RdRp. Hantaviruses do not encode matrix proteins to assist with structuring the virion, so how surface proteins organize into a sphere with a symmetrical lattice is not yet known. Life cycle Vascular endothelial cells and macrophages are the primary cells infected by hantaviruses. Podocytes, tubular cells, dendritic cells, and lymphocytes can also be infected. Attachment and entry into the host cell is mediated by the binding of the viral glycoprotein spikes to host cell receptors, particularly β1 and β3 integrins. Decay acceleration factors and complement receptors have also been proposed to be involved in attachment. After attachment, hantaviruses rely on several ways to enter a cell, including micropinocytosis, clathrin-independent receptor-mediated endocytosis and cholesterol- or caveolae-dependent endocytosis. 
Old World hantaviruses use clathrin-dependent endocytosis while New World hantaviruses use clathrin-independent endocytosis. After entering a cell, virions form vesicles that are transported to early endosomes, then late endosomes and lysosomal compartments. A decrease in pH then causes the viral envelope to fuse with the endosome or lysosome. This fusion releases viral ribonucleoprotein complexes into the cell cytoplasm, initiating transcription and replication by RdRp. RdRp transcribes viral -ssRNA into complementary positive-sense strands, then snatches 5′ ("five prime") ends of host messenger RNA (mRNA) to prepare mRNA for translation by host ribosomes to produce viral proteins. Complementary RNA strands are also used to produce copies of the genome, which are encapsulated by N proteins to form RNPs. During virion assembly, the glycoprotein precursor is cleaved in the endoplasmic reticulum into the Gn and Gc glycoproteins by host cell signal peptidases. Gn and Gc are modified by N-glycan chains, which stabilize the spike structure and assist in assembly in the Golgi apparatus for Old World hantaviruses or at the cell membrane for New World hantaviruses. Old World hantaviruses obtain their viral envelope from the Golgi apparatus and are then transported to the cell membrane in vesicles to leave the cell via exocytosis. On the other hand, New World hantavirus RNPs are transported to the cell membrane, where they bud from the surface of the cell to obtain their envelope and leave the cell. Evolution The most common form of evolution for hantaviruses is mutations through single nucleotide substitutions, insertions, and deletions. Hantaviruses are usually restricted to individual natural reservoir species and evolve alongside their hosts, but this one-species-one-hantavirus relationship is not true for all hantaviruses. The exact evolutionary history of hantaviruses is likely obscured by many instances of genome reassortment, host spillover, and host-switching. Because hantaviruses have segmented genomes, they are capable of genetic recombination and reassortment in which segments from different viruses can combine to form new viruses. This occurs often in nature and facilitates the adaptation of hantaviruses to multiple hosts and ecosystems. Within species, geography has also affected the evolution of hantaviruses. For example, Hantaan virus and Seoul virus have both formed multiple lineages corresponding to their geographic distribution. Mechanism Transmission Hantaviruses that cause illness in humans are mainly transmitted by rodents. In rodents, hantaviruses usually cause an asymptomatic, persistent infection. Infected animals can spread the virus to uninfected animals through aerosols or droplets from their feces, urine, saliva, and blood, through consumption of contaminated food, from virus particles shed from skin or fur, via grooming, or through biting and scratching. Hantaviruses can also spread through the fecal-oral route and across the placenta during pregnancy from mother to child. They can survive for 10 days at room temperature, 15 days in a temperate environment, and more than 18 days at 4 degrees Celsius, which aids in the transmission of the virus. Environmental conditions favorable to the reproduction and spread of rodents are known to increase disease transmission. 
Living in a rural environment, in unhygienic settings, and interacting with environments shared with hosts are the biggest risk factors for infection, especially people who are hikers, farmers and forestry workers, as well as those in mining, the military, and zoology. Rodents can transmit hantaviruses to humans through aerosols or droplets from the excretions and through consumption of contaminated food. Rodent bites and scratches can also transmit hantaviruses to humans. The prevalence of hantavirus among rodent breeders and rodent pet owners is up to 80%. In one outbreak in North America in 2017, Seoul virus infected 31 people through contact with pet rats. Andes virus has often been claimed by researchers to be the only hantavirus known to be spread from person to person, usually after coming into close contact with an infected person. It can also reportedly spread through human saliva, airborne droplets from coughing and sneezing, and to newborns through breast milk and the placenta. A 2021 systematic review, however, found human-to-human transmission of the Andes virus to not be strongly supported by evidence but nonetheless possible in limited circumstances, especially between close household contacts such as sexual partners. Puumala virus may be transmissible from person to person through blood and platelet transfusions. Hantaviruses that cause HFRS can be transmitted through the bites of mites and ticks. Research has also shown that pigs can be infected with Hantaan virus without severe symptoms and sows can transmit the virus to offspring through the placenta. Pig-to-human transmission may also be possible, as one swine breeder was infected with hantavirus with no contact with rodents or mites. Hantaan virus and Puumala virus have been detected in cattle, deer, and rabbits, and antibodies to Seoul virus have been detected in cats and dogs, but the role of these hosts for hantaviruses is unknown. Infection in these other animals can potentially facilitate the evolution of hantaviruses by genome reassortment. In addition to rodents, some hantaviruses are found in small insectivorous mammals, such as shrews, and bats. Hantavirus antigen has also been detected in a variety of bird species, indicative of infection. Man-made built environments are important in hantavirus transmission. Deforestation and excess agriculture may destroy rodents' natural habitat. The expansion of agricultural land is associated with a decline in predator populations, which enables hantavirus host species to use farm monocultures as nesting and foraging sites. Agricultural sites built in close proximity to rodents' natural habitats can facilitate the proliferation of rodents as they may be attracted to animal feed. Sewers and stormwater drainage systems may be inhabited by rodents, especially in areas with poor solid waste management. Maritime trade and travel have also been implicated in the spread of hantaviruses. Research results are inconsistent on whether urban living increases or decreases hantavirus incidence. Seroprevalence, showing past infection to hantavirus, is consistently higher in occupations and areas that have greater exposure to rodents. Poor living conditions on battlefields, in military camps, and in refugee camps make soldiers and refugees at great risk of exposure as well. Pathophysiology The main cause of illness is increased vascular permeability, decreased platelet count, and overreaction by the immune system. 
The increased vascular permeability appears to be the result of infected cells producing vascular endothelial growth factor (VEGF), which activates VEGFR2 receptors on endothelial cells, which in turn increases paracellular permeability. Oxygenation problems and bradykinin are also thought to play a role in increased vascular permeability during infection. Coagulation abnormalities may also occur. Virus particles cluster on the surface of endothelial cells, which causes a misallocation of platelets to infected endothelial cells. Disseminated coagulation without signs of hemorrhaging, major blood clots, and damage to vascular endothelial cells during infection may negatively affect coagulation and platelet levels and promote further vascular leaking and hemorrhaging. Infection begins with the interaction of the viral glycoproteins Gn and Gc with β-integrin receptors on target cell membranes. Immature dendritic cells near endothelial cells transport virions from lymphatic vessels to local lymph nodes to infect more endothelial cells. These cells produce antigens that induce an immune response, especially from macrophages and CD8+ T lymphocytes. After activation of the immune system, cytotoxic T lymphocytes produce pro-inflammatory cytokines that can damage infected endothelial cells, which can lead to increased vascular permeability and inflammatory reactions. These cytokines include interferon (IFN), interleukins (IL-1, IL-6, and IL-10), and tumor necrosis factor-α (TNF-α). Elevated IL-6 levels are associated with low platelet count and renal failure. HFRS mainly affects the kidneys and blood vessels, though other parts of the body such as the nervous system, spleen, and liver can also be affected. While most major organs become infected, organ failure does not occur in most of them, as the pathology differs from organ to organ. In the tubular epithelium of the kidneys, tight junction proteins are redistributed and tubular necrosis occurs, which impairs the kidney tubules and causes proteinuria and hematuria. Infection of the glomerular endothelium in the kidneys decreases glomerular ZO-1 expression, which reduces the function of the glomeruli as molecular filters by increasing glomerular permeability, causing proteinuria and hematuria. Liver infection does not lead to significant dysfunction since hepatic blood vessels are already relatively permeable. In the spleen, infection of immune cells can cause over-activation of immature lymphocytes elsewhere and facilitate prolonged spread of the virus throughout the body. Immunology The innate immune system recognizes hantavirus infection by detection of viral RNA. This triggers production of interferons, immune cytokines, and chemokines and activation of signaling pathways to respond to viral infection. Monocytes respond to infection by using phagocytosis to consume virus particles. IgM antibodies to the viral surface glycoproteins are created to bind to and disable virus particles. During infection, the anti-Gc IgM response is stronger than the anti-Gn IgM response. Long term, the anti-Gc IgG response is stronger than the anti-Gn IgG response. Anti-N antibodies are produced during infection but are not involved in neutralizing virions. Long non-coding RNA and microRNA are involved in inhibiting hantavirus infection. 
Pathogenic hantaviruses are able to modify the immune response and evade interferon-mediated antiviral signaling pathways in various ways, including by inhibiting interferon activation, inhibiting the activation of transcription factors, and inhibiting downstream JAK/STAT signaling. They can also regulate cell death to aid in completing their life cycle through autophagy, apoptosis, and pyroptosis. Hantaan virus infection and NP and GP protein expression have been shown to promote production of micro-RNAs that reduce expression of pro-inflammatory cytokines. Furthermore, hantaviruses appear to induce cell stress via endoplasmic reticulum stress while inhibiting the cellular response to stress, which helps the virus escape host stress signaling. Prevention Reducing the risk of exposure to rodents at home, at work, and when camping prevents hantavirus infection. Rodent control methods such as rodenticides, traps, and cats have been proposed as ways to control the rodent population. Cleaning and disinfecting human living spaces by removing rodent food sources can prevent the contamination of food and other items with hantaviruses from rodent excretions and secretions. Preventing rodents from entering one's house, removing potential nesting sites around one's house, sweeping areas likely inhabited by rodents, covering trash cans, cutting grass, spraying water to prevent dust prior to activities, and installing public warning signs in endemic areas can help to reduce contact with rodents. People at high risk of infection, including pest exterminators and people who work in agriculture, forestry, and animal husbandry, can take preventive measures such as wearing masks to prevent exposure to hantaviruses. In high-risk groups, vaccines may be necessary. Ventilation of rooms before entering, using rubber gloves and disinfectants, and using respirators to avoid inhaling contaminated particles while cleaning up rodent-infested areas reduce the risk of hantavirus infections. Hantaviruses can be inactivated by heating at 60 degrees Celsius for 30 minutes, or with organic solvents, hypochlorite solutions, and ultraviolet light. Vaccines against hantavirus infection have been approved in South Korea and China. In South Korea, an inactivated whole virus vaccine called Hantavax has been marketed since 1990 to protect against HFRS caused by Hantaan virus and Seoul virus. The vaccine has not been shown to prevent infection but is associated with reduced disease severity. Due to a diminishing vaccine-induced antibody response, frequent booster doses are required. A similar vaccine was approved for use in China in 2005, which provides immunity for up to 33 months. Other vaccines, such as recombinant vaccines, DNA vaccines, virus-like particle (VLP) vaccines, recombinant vector vaccines, and subunit vaccines, have been researched in animal models. These vaccines have shown varying degrees of effectiveness. Diagnosis Initial diagnosis of hantavirus infection can be made based on epidemiological information and clinical symptoms. Confirmation of infection also includes detection of hantavirus nucleic acid, proteins, or hantavirus-specific antibodies. Key laboratory findings include thrombocytopenia, leukocytosis, hemoconcentration, elevated serum creatinine levels, hematuria, and proteinuria. Hantavirus-specific IgM and IgG antibodies are usually present at the onset of symptoms. IgM is detectable in the acute phase of infection but declines over a period of 2–6 months. 
The response of IgG antibodies is low during infection but grows over time and lasts for one's lifetime. Neutralization tests, immunofluorescent assays (IFAs), and enzyme-linked immunosorbent assays (ELISAs) can be used to detect antibodies to hantavirus infection in blood, usually anti-N or anti-Gc antibodies. ELISA is inexpensive and can be used at any point during the illness, but results may need to be confirmed by other methods. Rapid immunochromatographic IgM antibody tests can also be used for diagnosis as they are simple to carry out and inexpensive. Western blotting can detect hantavirus antigen in tissue samples, but is costly and time-consuming. Both traditional and real-time polymerase chain reaction (PCR) tests of blood, saliva, BAL fluids, and tissue samples can be used. There is a possibility of false negatives with PCR if there are low levels of virus in the blood, and PCR testing is prone to cross-contamination, but when performed during the onset of infection it may predict disease severity. PCR can also be used for postmortem diagnosis and for analysis of organ involvement, and it can be used to sequence the virus's genome to identify which specific virus is causing illness. Management Treatment of HFRS is supportive in nature. The specific form of treatment depends on the phase of the disease and the clinical presentation. Intravenous hydration and electrolyte therapy are essential to maintain blood pressure and electrolyte balance. Platelet transfusions can be used to control bleeding and reduce mortality in cases of severe thrombocytopenia and disseminated intravascular coagulation. No specific antiviral drugs exist for hantavirus infection, but ribavirin and favipiravir have shown varying efficacy and safety. Prophylactic use of ribavirin and favipiravir in early infection or post-exposure shows some efficacy, and both have shown some anti-hantavirus activity in vivo and in vitro. Ribavirin is effective in the early treatment of HFRS with some limitations, such as toxicity at high doses and the potential to cause hemolytic anemia. Anemia is reversible upon completion of ribavirin treatment. In some instances, ribavirin may cause excess bilirubin in the blood (hyperbilirubinemia), abnormally slow heartbeat (sinus bradycardia), and rashes. Administering ribavirin after the onset of the cardiopulmonary phase of HPS has not been shown to be an effective treatment, and currently there is no recommendation for the use of ribavirin to treat HFRS or HPS. Favipiravir, in comparison to ribavirin, has shown greater efficacy without anemia as a side effect. In hamster models, oral administration of favipiravir at 100 mg/kg twice per day significantly reduced viral load in the blood and antigen load in the lungs. Oral administration before the onset of viremia prevented HPS, but administration after this point did not. A number of other approaches have been researched as potential anti-hantavirus treatments, including small-molecule compounds that target the virus or host, peptides, alligator weed, antibodies, and classical antiviral drugs, tested mainly to block hantavirus entry into cells or restrain virus replication. Host-targeting medicines are designed to improve vascular function or rebuild homeostasis. Prognosis Prognosis is good in most cases. The case fatality rate ranges from less than 1% to 15%, depending on the virus and location. 
In China, where most cases are caused by Hantaan virus and Seoul virus, the case fatality rate is about 2.89%. In South Korea, the case fatality rate is 1–2%. The global case fatality rate for Seoul virus is around 1%. Pathogenicity of Dobrava-Belgrade virus varies by genotype: infection with the deadlier genotypes results in death for 10–15% of those infected, while for the milder genotypes the case fatality rate is 0.3–0.9%. Puumala virus infection has a case fatality rate of 0.1–0.4%. Post-infection Guillain-Barré Syndrome has been reported on rare occasions, indicated by acute limb weakness. Weakness and fatigue are common during recovery. People who are over 60 years of age are at an increased likelihood of death, as are tobacco smokers and people who have diabetes or high blood pressure (hypertension). Deaths typically involve the development of severe complications, and death is more common among those who experience septic shock, ARDS, heart failure, irregular heartbeat, and pancreatitis. Death is also associated with lower platelet count, elevated white blood cell count, and higher aspartate aminotransferase and alanine aminotransferase. The antibody response to hantavirus infection is strong and long-lasting. Early production of neutralizing antibodies (nAbs) that target the surface glycoproteins is directly associated with increased likelihood of survival. High nAb counts can be detected as long as ten years after infection. Higher levels of IL-6, in contrast, are associated with more severe disease, and deceased individuals have higher IL-6 levels than survivors. Genetic susceptibility to severe illness is related to one's human leukocyte antigen (HLA) type, which also depends on the hantavirus, as increased susceptibility to different hantaviruses is associated with different HLA haplogroups. Epidemiology Most cases of HFRS are caused by just four viruses: Hantaan virus, Seoul virus, Puumala virus, and Dobrava-Belgrade virus. The geographic distribution of individual hantaviruses is directly tied to the geographic distribution of their natural reservoirs. The Seoul virus is found worldwide due to the global distribution of its host, the brown rat, but mainly circulates in China and South Korea and accounts for about a quarter of all HFRS cases. In Europe, Dobrava-Belgrade virus and Puumala virus circulate, the latter of which is more common. Infections in Europe are most common in Germany, Finland, and Russia. The number of cases in Europe has gradually increased over time, whereas in China and Korea the number of cases has declined significantly. More than 100,000 cases of HFRS are reported each year. From 1950 to 2007, more than 1.5 million cases and more than 45,000 deaths were recorded in China, where 70–90% of HFRS cases are reported, most of which are caused by Hantaan virus and Seoul virus. These cases are distributed throughout China but are more common in the east, and spike in winter and spring. In South Korea, 400–600 cases occur each year. Finland is the most affected country in Europe, with 1,000–3,000 cases every year. More than 10,000 cases of NE are diagnosed annually. Infection is more common in men. Environment Rodent species that carry hantaviruses inhabit a diverse range of habitats, including desert-like biomes, equatorial and tropical forests, swamps, savannas, fields, and salt marshes. 
The seroprevalence of hantaviruses in their host species has been observed to range from 5.9% to 38% in the Americas, and 3% to about 19% worldwide, depending on testing method and location. In some places such as South Korea, routine trapping of wild rodents is performed to surveil hantavirus circulation. High humidity can benefit rodent populations in warm climates, where it may positively impact plant growth and thus food availability. Increased forest coverage is associated with increased hantavirus incidence, particularly in Europe. Rainfall is consistently associated with hantavirus incidence in various patterns. Heavy rainfall is a risk factor for outbreaks in the following months, but may negatively affect incidence by flooding rodent burrows and nests. Infections are more common in the wet season than the dry season. Low rainfall and drought are associated with decreased incidence since such conditions result in a smaller rodent population, but displacement of rodent populations via drought or flood can lead to an increase in rodent-human interactions and infections. In Europe, however, no association between rainfall and incidence has been observed. Temperature has varying effects on hantavirus transmission. Higher temperatures create unfavorable environments for virus survival, but it can cause rodents to seek shelter from heat in human settings and is beneficial for aerosol production. Lower temperature can prolong virus survival outside a host. Higher average winter temperature is associated with reduced survival of bank voles, the natural reservoir of Puumala virus, but increased survival of striped field mice in China, the natural reservoirs of Hantaan virus. Extreme temperatures, whether hot or cold, are associated with lower disease incidence. History Hantavirus hemorrhagic disease was likely first described in the Yellow Emperor's Internal Canon in Imperial China during the Warring States Period of 475-221 BCE. Hantaviruses have been suggested as a cause of "trench nephritis" in soldiers during the US Civil War and in British soldiers in Flanders, Belgium during the First World War. The disease was also mentioned in East Asia, where it was probably endemic, and was first described scientifically in Vladivostok in 1913–1914. During the Second World War in 1942, an outbreak of disease with symptoms characteristic of hantavirus infection occurred in Salla, Eastern Lapland, Finland among German and Finnish soldiers. This outbreak was later reported in 1980 to be caused by a virus transmitted by bank voles and was named the Puumala virus. Also during the war, around 10,000 Japanese soldiers stationed in Manchuria developed HFRS. HFRS was common amongst United Nations soldiers stationed near the Hantan river during the Korean War, where it was first identified in 1951 and named "Korean hemorrhagic fever" and "epidemic hemorrhagic fever". About 3,200 cases occurred from 1951 to 1954. After the war, in 1976 in South Korea, trapped striped field mice were tested and antigens in their lungs were shown to react to antibodies in sera from survivors of Korean hemorrhagic fever. A specific agent could not be isolated in cell cultures, but the disease was shown to be caused by an infectious agent and other known causes of hemorrhagic fever were ruled out. In 1978, the virus was isolated for the first time from the lung tissue of a striped field mouse and named the Hantaan virus. 
Retrospective analysis on sera from war veterans later confirmed that Hantaan virus was responsible for the epidemic during the Korean war. The first successful culturing of the virus was in 1981, in which a hantavirus extracted from naturally infected rodents trapped near Songnaeri, South Korea was transmitted through mice with no history of infection four times. Originally named "KHF strain 76-118", it was renamed to "Hantaan virus, strain 76-118" after the Hantan River and is now commonly called Hantaan virus. Initially thought to be related to arenaviruses, which are also transmitted by rodents, analysis of the structure of hantaviruses suggested it instead belonged to the bunyavirus family, which previously were only known to be transmitted by arthropods. By 1980, studies confirmed that other viruses related to Hantaan virus caused long-known diseases throughout Eurasia. These diseases had a variety of names, so in 1982, the World Health Organization agreed to name the disease "hemorrhagic fever with renal syndrome (HFRS)". Further analysis of Hantaan virus showed that it possessed the characteristics of bunyaviruses but did not serologically cross-react with other known bunyaviruses, so a new genus, Hantavirus, was proposed to accommodate Hantaan virus and related viruses. In 1985, this group of viruses were given the name "hantaviruses", and in 1987, the genus was recognized by International Committee on Taxonomy of Viruses. References External links CDC's Hantavirus Technical Information Index page Viralzone: Hantavirus Virus Pathogen Database and Analysis Resource (ViPR): Bunyaviridae Hantavirus infections Rodent-carried diseases Biological agents Hemorrhagic fevers
Hantavirus hemorrhagic fever with renal syndrome
[ "Biology", "Environmental_science" ]
7,877
[ "Biological agents", "Toxicology", "Biological warfare" ]
39,892,856
https://en.wikipedia.org/wiki/Bifurcation%20memory
Bifurcation memory is a general name for certain specific features of the behaviour of a dynamical system near a bifurcation. An example is recurrent neuron memory. General information The phenomenon is also known under the names "stability loss delay for dynamical bifurcations" and "ghost attractor". The essence of the bifurcation memory effect lies in the appearance of a special type of transition process. An ordinary transition process is characterized by the asymptotic approach of the dynamical system from the state defined by its initial conditions to the state corresponding to the stable stationary regime in whose basin of attraction the system finds itself. Near the bifurcation boundary, however, two types of transition process can be observed: while passing through the place of the vanished stationary regime, the dynamical system temporarily slows down its asymptotic motion, "as if recollecting the defunct orbit", with the number of revolutions of the phase trajectory in this region of bifurcation memory depending on how close the corresponding parameter of the system is to its bifurcation value; only then does the phase trajectory rush towards the state corresponding to the stable stationary regime of the system. In the literature, the effect of bifurcation memory is associated with a dangerous "bifurcation of merging". Twice-repeated bifurcation memory effects in dynamical systems have also been described in the literature; they were observed when the parameters of the dynamical system under consideration were chosen either in the region where two different bifurcation boundaries cross or in its close neighbourhood. The known definitions It is claimed that the term "bifurcation memory": History of studying Perhaps the earliest result described on this subject in the scientific literature is the one presented in 1973, obtained under the guidance of a Soviet academician, which then initiated a number of foreign studies of the mathematical problem known as "stability loss delay for dynamical bifurcations". A new wave of interest in the study of this strange behaviour of dynamical systems in a certain region of the state space was caused by the desire to explain the non-linear effects revealed when ships lose controllability. Subsequently, similar phenomena were also found in biological systems — in the blood coagulation system and in one of the mathematical models of the myocardium. Topicality The topicality of scientific studies of bifurcation memory is driven largely by the desire to prevent conditions of reduced controllability of vehicles. In addition, a special sort of tachycardia connected with the effects of bifurcation memory is considered in cardiophysics. See also Bifurcation (disambiguation) Bifurcation diagram Bifurcation theory Phase portrait Rulkov map FitzHugh-Nagumo model Notes References Books Papers Biophysics Nonlinear systems Dynamical systems Non-equilibrium thermodynamics
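A minimal numerical sketch of the slowdown described above, not drawn from the cited studies and using the saddle-node normal form dx/dt = ε + x² purely for illustration: for ε > 0 the fixed points have vanished, yet a trajectory still lingers near x = 0, and the time spent in that "ghost" region grows as ε approaches the bifurcation value ε = 0.

```python
def dwell_time(eps, x0=-2.0, x_end=2.0, dt=1e-4):
    """Integrate dx/dt = eps + x**2 (saddle-node normal form) and return the
    time spent near x = 0, where a stable fixed point existed for eps < 0."""
    x, t_near = x0, 0.0
    while x < x_end:
        if abs(x) < 0.5:            # neighbourhood of the vanished fixed point
            t_near += dt
        x += (eps + x * x) * dt     # explicit Euler step
    return t_near

for eps in (0.1, 0.01, 0.001):
    print(f"eps = {eps:>6}   time spent near the ghost ~ {dwell_time(eps):6.1f}")
```

The printed dwell times grow roughly as ε^(-1/2), the usual scaling for slow passage past a saddle-node ghost.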
Bifurcation memory
[ "Physics", "Mathematics", "Biology" ]
595
[ "Applied and interdisciplinary physics", "Non-equilibrium thermodynamics", "Nonlinear systems", "Biophysics", "Mechanics", "Dynamical systems" ]
51,314,085
https://en.wikipedia.org/wiki/Ethinylestriol
Ethinylestriol (EE3), or 17α-ethynylestriol, also known as 17α-ethynylestra-1,3,5(10)-triene-3,16α,17β-triol, is a synthetic estrogen which was never marketed. Nilestriol, the 3-cyclopentyl ether of ethinylestriol, is a prodrug of ethinylestriol, and is a more potent estrogen in comparison, but, in contrast to ethinylestriol, has been marketed. Ethinylestriol has been found to reduce the risk of 7,12-dimethylbenz(a)anthracene (DMBA)-induced mammary cancer when given as a prophylactic in animal models, while other estrogens like ethinylestradiol and diethylstilbestrol were ineffective. See also List of estrogens References Abandoned drugs Ethynyl compounds Estranes Hydroxyarenes Synthetic estrogens Triols
Ethinylestriol
[ "Chemistry" ]
225
[ "Drug safety", "Abandoned drugs" ]
51,317,356
https://en.wikipedia.org/wiki/4-Vinylbenzyl%20chloride
4-Vinylbenzyl chloride is an organic compound with the formula ClCH2C6H4CH=CH2. It is a bifunctional molecule, featuring both a vinyl group and a benzylic chloride functional group. It is a colorless liquid that is typically stored with a stabilizer to suppress polymerization. In combination with styrene, vinylbenzyl chloride is used as a comonomer in the production of chloromethylated polystyrene. It is produced by the chlorination of vinyltoluene. Often vinyltoluene consists of a mixture of 3- and 4-vinyl isomers, in which case the vinylbenzyl chloride will also be produced as a mixture of isomers. References Monomers Vinylbenzenes Organochlorides
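As a quick worked check (standard atomic weights assumed here, not taken from the article), the formula given above, ClCH2C6H4CH=CH2, corresponds to C9H9Cl and a molar mass of roughly 152.6 g/mol.

```python
# Approximate standard atomic weights (g/mol)
weights = {"C": 12.011, "H": 1.008, "Cl": 35.45}

# ClCH2C6H4CH=CH2 collapses to the molecular formula C9H9Cl
formula = {"C": 9, "H": 9, "Cl": 1}

molar_mass = sum(weights[el] * n for el, n in formula.items())
print(f"Molar mass of 4-vinylbenzyl chloride: {molar_mass:.2f} g/mol")
```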
4-Vinylbenzyl chloride
[ "Chemistry", "Materials_science" ]
165
[ "Monomers", "Polymer chemistry" ]
51,320,802
https://en.wikipedia.org/wiki/Proton%20radius%20puzzle
The proton radius puzzle is an unanswered problem in physics relating to the size of the proton. Historically the proton charge radius was measured by two independent methods, which converged to a value of about 0.877 femtometres (1 fm = 10−15 m). This value was challenged by a 2010 experiment using a third method, which produced a radius about 4% smaller than this, at 0.842 femtometres. New experimental results reported in the autumn of 2019 agree with the smaller measurement, as does a re-analysis of older data published in 2022. While some believe that this difference has been resolved, this opinion is not yet universally held. Radius definition The radius of the proton is defined by a formula which can be calculated by quantum electrodynamics and be derived from either atomic spectroscopy or by electron–proton scattering. The formula involves a form-factor related to the two-dimensional parton diameter of the proton. Problem Prior to 2010, the proton charge radius was measured using one of two methods: one relying on spectroscopy, and one relying on nuclear scattering. Spectroscopy method The spectroscopy method compares the energy levels of spherically symmetric 2s orbitals to asymmetric 2p orbitals of hydrogen, a difference known as the Lamb shift. The exact values of the energy levels are sensitive to the distribution of charge in the nucleus since the 2s levels overlap more with the nucleus. Measurements of hydrogen's energy levels are now so precise that the accuracy of the proton radius is the limiting factor when comparing experimental results to theoretical calculations. This method produces a proton radius of about , with approximately 1% relative uncertainty. Electron–proton scattering Similar to Rutherford's scattering experiments that established the existence of the nucleus, modern electron–proton scattering experiments send beams of high energy electrons into 20cm long tube of liquid hydrogen. The resulting angular distribution of the electron and proton are analyzed to produce a value for the proton charge radius. Consistent with the spectroscopy method, this produces a proton radius of about . 2010 experiment In 2010, Pohl et al. published the results of an experiment relying on muonic hydrogen as opposed to normal hydrogen. Conceptually, this is similar to the spectroscopy method. However, the much higher mass of a muon causes it to orbit 207 times closer than an electron to the hydrogen nucleus, where it is consequently much more sensitive to the size of the proton. The resulting radius was recorded as , 5 standard deviations (5σ) smaller than the prior measurements. The newly measured radius is 4% smaller than the prior measurements, which were believed to be accurate within 1%. (The new measurement's uncertainty limit of only 0.1% makes a negligible contribution to the discrepancy.) A follow-up experiment by Pohl et al. in August 2016 used a deuterium atom to create muonic deuterium and measured the deuteron radius. This experiment allowed the measurements to be 2.7 times more accurate, but also found a discrepancy of 7.5 standard deviations smaller than the expected value. Proposed resolutions The anomaly remains unresolved and is an active area of research. There is as yet no conclusive reason to doubt the validity of the old data. The immediate concern is for other groups to reproduce the anomaly. The uncertain nature of the experimental evidence has not stopped theorists from attempting to explain the conflicting results. 
Among the postulated explanations are the three-body force, interactions between gravity and the weak force, a flavour-dependent interaction, higher-dimensional gravity, a new boson, and the quasi-free hypothesis. Measurement artefact Randolf Pohl, the original investigator of the puzzle, stated that while it would be "fantastic" if the puzzle led to a discovery, the most likely explanation is not new physics but some measurement artefact. His personal assumption is that past measurements have misgauged the Rydberg constant and that the current official proton size is inaccurate. Quantum chromodynamic calculation In a paper by Belushkin et al. (2007), including different constraints and perturbative quantum chromodynamics, a smaller proton radius than the then-accepted 0.877 femtometres was predicted. Proton radius extrapolation Papers from 2016 suggested that the problem was with the extrapolations that had typically been used to extract the proton radius from the electron scattering data, though these explanations would require that there was also a problem with the atomic Lamb shift measurements. Data analysis method In one of the attempts to resolve the puzzle without new physics, Alarcón et al. (2018) of Jefferson Lab have proposed that a different, theoretically and analytically justified technique for fitting the experimental scattering data produces a proton charge radius from the existing electron scattering data that is consistent with the muonic hydrogen measurement. Effectively, this approach attributes the cause of the proton radius puzzle to a failure to use a theoretically motivated function for the extraction of the proton charge radius from the experimental data. Another recent paper has pointed out how a simple, yet theory-motivated change to previous fits will also give the smaller radius. More recent spectroscopic measurements In 2017, a new approach used cryogenic hydrogen and Doppler-free laser excitation to prepare the source for spectroscopic measurements; this gave results ~5% smaller than the previously accepted spectroscopic values, with much smaller statistical errors. This result was close to the 2010 muon spectroscopy result. These authors suggest that the older spectroscopic analysis did not include quantum interference effects that alter the shape of the hydrogen lines. In 2019, another spectroscopic Lamb shift experiment used a variation of Ramsey interferometry whose analysis does not require the Rydberg constant. Its result, 0.833 fm, agreed with the smaller 2010 value once more. More recent electron–proton scattering measurements Also in 2019, W. Xiong et al. reported a similar result using extremely low momentum transfer electron scattering. Their results support the smaller proton charge radius, but do not explain why the results before 2010 came out larger. It is likely that future experiments will be able to both explain and settle the proton radius puzzle. 2022 analysis A re-analysis of experimental data, published in February 2022, found a result consistent with the smaller value of approximately 0.84 fm. References 2010 in science 2019 in science Proton Unsolved problems in physics Radii
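As a quick check of the figures quoted above, the sketch below reproduces the roughly 4% difference between the two radii and a discrepancy of about five standard deviations; the individual uncertainties used here are illustrative assumptions rather than values taken from the article.

```python
from math import hypot

r_old, r_muonic = 0.877, 0.842           # femtometres, as quoted above
sigma_old, sigma_muonic = 0.007, 0.001   # assumed illustrative uncertainties (fm)

rel_diff = (r_old - r_muonic) / r_old
n_sigma = (r_old - r_muonic) / hypot(sigma_old, sigma_muonic)

print(f"relative difference : {rel_diff:.1%}")        # about 4%
print(f"discrepancy         : {n_sigma:.1f} sigma")   # roughly 5 standard deviations
```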
Proton radius puzzle
[ "Physics" ]
1,310
[ "Unsolved problems in physics" ]
55,742,877
https://en.wikipedia.org/wiki/Droplet%20cluster
Droplet cluster is a self-assembled levitating monolayer of microdroplets usually arranged into a hexagonally ordered structure over a locally heated thin (about 1 mm) layer of water. The droplet cluster is typologically similar to colloidal crystals. The phenomenon was observed for the first time in 2004, and it has been extensively studied since then. Growing condensing droplets with a typical diameter of 0.01 mm – 0.2 mm levitate at an equilibrium height, where their weight is equilibrated by the drag force of the ascending air-vapor jet rising over the heated spot. At the same time, the droplets are dragged towards the center of the heated spot; however, they do not merge, forming an ordered hexagonal (densest packed) pattern due to an aerodynamic repulsive pressure force from gas flow between the droplets. The spot is usually heated by a laser beam or another source of heat to 60 °C – 95 °C, although the phenomenon was observed also at temperatures slightly above 20 °C. The height of levitation and the distance between the droplets are of the same order as their diameters. Due to the complex nature of aerodynamic forces between the microdroplets in an ascending jet, the droplets do not coalesce but form a close-packed hexagonal structure showing similarity with various classical and newly discovered objects, where self-organization is prominent, including water breath figures, colloid and dust crystals, foams, Rayleigh–Bénard cells, and to some extent, ice crystals. The droplets pack near the center of the heated area where the temperature and the intensity of the ascending vapor jets are the highest. At the same time, there are repulsion forces of aerodynamic nature between the droplets. Consequently, the cluster packs itself in the densest packing shape (a hexagonal honeycomb structure) with a certain distance between the droplets dependent on the repulsion forces. By controlling the temperature and temperature gradient, one can control the number of droplets and their density and size. Using infrared irradiation, it is possible to suppress droplet growth and stabilize the droplets for extended periods of time. It has been suggested that the phenomenon, when combined with a spectrographic study of droplet content, can be used for rapid biochemical in situ analysis. Recent studies have shown that the cluster can exist at lower temperatures of about 20 °C, which makes it suitable for biochemical analysis of living objects. Clusters with an arbitrarily small number of droplets can be created. Unlike clusters with a large number of droplets, small clusters cannot always form a hexagonally symmetric structure. Instead, they produce various more or less symmetric configurations depending on the number of droplets. Tracing individual droplets in small clusters is crucial for potential applications. The symmetry, orderliness, and stability of these configurations can be studied with such a measure of self-organization as the Voronoi entropy. Since the most common hexagonal (honeycomb shaped) droplet cluster was observed for the first time in 2004, new types of levitating droplet clusters have been discovered. In a chain droplet cluster, rotating droplets may be very close to each other, but the viscosity of the thin gas layer between the droplets prevents them from coalescing. There is a reversible structural transition from the ordered hexagonal cluster to the chain-like structure.
In a hierarchical cluster, small groups of droplets whose interactions are controlled by the electrostatic force are combined into larger structures controlled by aerodynamic forces. Droplet aggregates keep continuously restructuring: the droplets permanently rearrange, so the phenomenon is similar to "deterministic chaos" (the Lorenz attractor). In the absence of a surfactant suppressing the thermocapillary (TC) flow at the surface of the water layer, a ring-shaped cluster is formed. Small clusters may demonstrate 4-fold, 5-fold, and 7-fold symmetry, which is absent from large droplet clusters and colloidal crystals. The symmetry properties of small cluster configurations are universal, i.e., they do not depend on the size of the droplets or on the details of the interactions between the droplets. It was hypothesized that the symmetries in small clusters may be related to the ADE classification or to the simply-laced Dynkin diagrams. The phenomenon of the droplet cluster is different from the Leidenfrost effect because the latter occurs at much higher temperatures over a solid surface, while the droplet cluster forms at lower temperatures over a liquid surface. The phenomenon has also been observed with liquids other than water. See also Leidenfrost effect Rayleigh–Bénard convection Self assembly References External links Video: Levitating clusters of droplets above heated water surface Droplet cluster Video: Droplet clusters Physical phenomena Heat transfer Microfluidics
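A minimal sketch of how the Voronoi entropy mentioned above can be evaluated: given the fraction P_n of Voronoi cells with n sides in the monolayer, the entropy is S = −Σ P_n ln P_n, which vanishes for a perfectly ordered, all-hexagon cluster. The cell counts in this example are hypothetical.

```python
from collections import Counter
from math import log

def voronoi_entropy(cell_sides):
    """S = -sum_n P_n ln P_n over the observed numbers of Voronoi cell sides."""
    counts = Counter(cell_sides)
    total = len(cell_sides)
    return -sum((c / total) * log(c / total) for c in counts.values())

# Hypothetical cluster: mostly hexagonal cells plus a few 5- and 7-sided defects
sides = [6] * 40 + [5] * 5 + [7] * 5
print(f"Voronoi entropy: {voronoi_entropy(sides):.3f}")   # 0 for a defect-free hexagonal cluster
```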
Droplet cluster
[ "Physics", "Chemistry", "Materials_science" ]
982
[ "Transport phenomena", "Physical phenomena", "Heat transfer", "Microfluidics", "Microtechnology", "Thermodynamics" ]
48,972,532
https://en.wikipedia.org/wiki/Sodium%20bismuth%20titanate
Sodium bismuth titanate or bismuth sodium titanium oxide (NBT or BNT) is a solid inorganic compound of sodium, bismuth, titanium and oxygen with the chemical formula of Na0.5Bi0.5TiO3 or Bi0.5Na0.5TiO3. This compound adopts the perovskite structure. Synthesis Na0.5Bi0.5TiO3 is not a naturally occurring mineral and several synthesis routes to obtain the compound have been developed. It can be easily prepared by solid state reaction between Na2CO3, Bi2O3 and TiO2 at temperatures around 850 °C. Structure The exact room-temperature crystal structure of sodium bismuth titanate has been a matter of debate for several years. Early studies in the 1960s using X-ray diffraction suggested Na0.5Bi0.5TiO3 to adopt either a pseudo-cubic or a rhombohedral crystal structure. In 2010, based on the high-resolution single-crystal X-ray diffraction data, a monoclinic structure (space group Cc) was proposed. On heating, Na0.5Bi0.5TiO3 transforms at 533 ± 5 K to a tetragonal structure (space group P4bm) and above 793 ± 5 K to cubic structure (space group Pmm). Physical properties Na0.5Bi0.5TiO3 is a relaxor ferroelectric. Its optical band gap was reported to be in the 3.0–3.5 eV. Applications Various solid solutions with tetragonal ferroelectric perovskites including BaTiO3, Bi0.5K0.5TiO3 have been developed to obtain morphotropic phase boundaries to enhance the piezoelectric properties of Na0.5Bi0.5TiO3. The extraordinarily large strain generated by a field-induced phase transition in sodium bismuth titanate-based solid solutions prompted researchers to investigate its potential as an alternative to lead zirconate titanate for actuator applications. References Further reading Lead-Free Piezoelectrics, Ed. Shashank Priya and Sahn Nahm,(2012), Springer-Verlag, New York. . Titanates Bismuth compounds Ceramic materials Piezoelectric materials Ferroelectric materials Perovskites Sodium compounds
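A small worked conversion of the optical band gap quoted above (using the standard constant hc ≈ 1239.84 eV·nm): a gap of 3.0–3.5 eV places the absorption edge in the near-ultraviolet, at roughly 350–410 nm.

```python
HC_EV_NM = 1239.84   # h*c in eV*nm

for gap_ev in (3.0, 3.5):              # band-gap range quoted above
    wavelength = HC_EV_NM / gap_ev     # lambda = h*c / E_g
    print(f"E_g = {gap_ev} eV  ->  absorption edge ~ {wavelength:.0f} nm")
```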
Sodium bismuth titanate
[ "Physics", "Materials_science", "Engineering" ]
502
[ "Physical phenomena", "Ferroelectric materials", "Materials", "Electrical phenomena", "Ceramic materials", "Ceramic engineering", "Piezoelectric materials", "Hysteresis", "Matter" ]
48,981,762
https://en.wikipedia.org/wiki/Attainable%20region%20theory
Attainable region (AR) theory is a branch of chemical engineering, specifically chemical reaction engineering, that uses geometric and mathematical optimization concepts to assist in the design of networks of chemical reactors. AR theory is a method to help define the best reactor flowsheet, using graphical techniques, for a desired duty or objective function. Origin of AR theory The initial concept of an attainable region for chemical processes was proposed by Fritz Horn in 1964, who advocated geometric methods as a way to improve process design. These ideas were later refined and made specific to chemical reactors by co-developers David Glasser, Diane Hildebrandt, and Martin Feinberg. Overview The AR is defined as the collection of all possible outcomes for all conceivable reactor combinations. Geometrically, the AR may (for instance) be represented as a convex region in state space representing all possible outlet compositions for all reactor combinations. A combination of reactors is often termed a reactor structure. Examples of the reactors considered in this theory are the continuous flow stirred-tank reactor (CSTR) and the plug flow reactor (PFR). Knowledge of the AR helps to address two areas in chemical reactor design: The reactor network synthesis problem: Given a system of reactions and a feed point, construction of the AR assists with determining an optimal reactor structure that achieves a desired duty or objective function. That is, AR theory assists with understanding specifically what types and combinations of chemical reactors are best suited for a particular system and duty. Performance targeting: Given an existing reactor design, knowledge of the AR assists with understanding whether there are other reactor structures that could achieve superior performance, by comparison to its location in the AR. Since the AR by definition represents all reactor designs, any proposed reactor design must lie as a point in or on the AR in state space. The effectiveness of each design may then be assessed by comparing its location in the AR with the objective function, if any. Applications of theory Examples of where AR theory can be applied include: The design of batch reactor networks; and Comminution (Milling). See also Chemical reactors Chemical reaction engineering References External links Official web site Other official web site Chemical engineering Chemical reactors Chemical reaction engineering
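A minimal sketch of the kind of geometric construction AR theory relies on, using an assumed first-order series reaction A → B → C with illustrative rate constants (none of this is taken from the references above): the PFR trajectory and the CSTR locus are traced in (cA, cB) concentration space from the feed point, and a first candidate attainable region is the convex hull of these profiles together with the straight mixing chords between them.

```python
import numpy as np

k1, k2 = 1.0, 0.5      # illustrative rate constants for A -> B -> C
cA0, cB0 = 1.0, 0.0    # feed point in concentration space

def pfr_trajectory(n_steps=10000, dtau=0.002):
    """PFR (equivalently batch) concentration profile by explicit Euler integration."""
    cA, cB, points = cA0, cB0, [(cA0, cB0)]
    for _ in range(n_steps):
        dA = -k1 * cA
        dB = k1 * cA - k2 * cB
        cA, cB = cA + dA * dtau, cB + dB * dtau
        points.append((cA, cB))
    return np.array(points)

def cstr_locus(taus):
    """Steady-state CSTR outlet compositions for a range of residence times."""
    cA = cA0 / (1 + k1 * taus)
    cB = k1 * taus * cA / (1 + k2 * taus)
    return np.column_stack([cA, cB])

pfr = pfr_trajectory()
cstr = cstr_locus(np.linspace(0.0, 50.0, 500))
print("max cB along the PFR trajectory :", round(pfr[:, 1].max(), 3))
print("max cB along the CSTR locus     :", round(cstr[:, 1].max(), 3))
```

For these assumed kinetics the PFR reaches a higher intermediate concentration than any single CSTR, which is the sort of comparison that the performance-targeting use of the AR formalizes.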
Attainable region theory
[ "Chemistry", "Engineering" ]
440
[ "Chemical reaction engineering", "Chemical reactors", "Chemical engineering", "Chemical equipment", "nan" ]
44,135,401
https://en.wikipedia.org/wiki/Derivative%20of%20the%20exponential%20map
In the theory of Lie groups, the exponential map is a map from the Lie algebra of a Lie group into . In case is a matrix Lie group, the exponential map reduces to the matrix exponential. The exponential map, denoted , is analytic and has as such a derivative , where is a path in the Lie algebra, and a closely related differential . The formula for was first proved by Friedrich Schur (1891). It was later elaborated by Henri Poincaré (1899) in the context of the problem of expressing Lie group multiplication using Lie algebraic terms. It is also sometimes known as Duhamel's formula. The formula is important both in pure and applied mathematics. It enters into proofs of theorems such as the Baker–Campbell–Hausdorff formula, and it is used frequently in physics for example in quantum field theory, as in the Magnus expansion in perturbation theory, and in lattice gauge theory. Throughout, the notations and will be used interchangeably to denote the exponential given an argument, except when, where as noted, the notations have dedicated distinct meanings. The calculus-style notation is preferred here for better readability in equations. On the other hand, the -style is sometimes more convenient for inline equations, and is necessary on the rare occasions when there is a real distinction to be made. Statement The derivative of the exponential map is given by Explanation To compute the differential of at , , the standard recipe is employed. With the result follows immediately from . In particular, is the identity because (since is a vector space) and . Proof The proof given below assumes a matrix Lie group. This means that the exponential mapping from the Lie algebra to the matrix Lie group is given by the usual power series, i.e. matrix exponentiation. The conclusion of the proof still holds in the general case, provided each occurrence of is correctly interpreted. See comments on the general case below. The outline of proof makes use of the technique of differentiation with respect to of the parametrized expression to obtain a first order differential equation for which can then be solved by direct integration in . The solution is then . Lemma Let denote the adjoint action of the group on its Lie algebra. The action is given by for . A frequently useful relationship between and is given by Proof Using the product rule twice one finds, Then one observes that by above. Integration yields Using the formal power series to expand the exponential, integrating term by term, and finally recognizing (), and the result follows. The proof, as presented here, is essentially the one given in . A proof with a more algebraic touch can be found in . Comments on the general case The formula in the general case is given by where which formally reduces to Here the -notation is used for the exponential mapping of the Lie algebra and the calculus-style notation in the fraction indicates the usual formal series expansion. For more information and two full proofs in the general case, see the freely available reference. A direct formal argument An immediate way to see what the answer must be, provided it exists is the following. Existence needs to be proved separately in each case. By direct differentiation of the standard limit definition of the exponential, and exchanging the order of differentiation and limit, where each factor owes its place to the non-commutativity of and . 
Dividing the unit interval into sections ( since the sum indices are integers) and letting → ∞, , yields Applications Local behavior of the exponential map The inverse function theorem together with the derivative of the exponential map provides information about the local behavior of . Any map between vector spaces (here first considering matrix Lie groups) has a inverse such that is a bijection in an open set around a point in the domain provided is invertible. From () it follows that this will happen precisely when is invertible. This, in turn, happens when the eigenvalues of this operator are all nonzero. The eigenvalues of are related to those of as follows. If is an analytic function of a complex variable expressed in a power series such that for a matrix converges, then the eigenvalues of will be , where are the eigenvalues of , the double subscript is made clear below. In the present case with and , the eigenvalues of are where the are the eigenvalues of . Putting one sees that is invertible precisely when The eigenvalues of are, in turn, related to those of . Let the eigenvalues of be . Fix an ordered basis of the underlying vector space such that is lower triangular. Then with the remaining terms multiples of with . Let be the corresponding basis for matrix space, i.e. . Order this basis such that if . One checks that the action of is given by with the remaining terms multiples of . This means that is lower triangular with its eigenvalues on the diagonal. The conclusion is that is invertible, hence is a local bianalytical bijection around , when the eigenvalues of satisfy In particular, in the case of matrix Lie groups, it follows, since is invertible, by the inverse function theorem that is a bi-analytic bijection in a neighborhood of in matrix space. Furthermore, , is a bi-analytic bijection from a neighborhood of in to a neighborhood of . The same conclusion holds for general Lie groups using the manifold version of the inverse function theorem. It also follows from the implicit function theorem that itself is invertible for sufficiently small. Derivation of a Baker–Campbell–Hausdorff formula If is defined such that an expression for , the Baker–Campbell–Hausdorff formula, can be derived from the above formula, Its left-hand side is easy to see to equal Y. Thus, and hence, formally, However, using the relationship between and given by , it is straightforward to further see that and hence Putting this into the form of an integral in t from 0 to 1 yields, an integral formula for that is more tractable in practice than the explicit Dynkin's series formula due to the simplicity of the series expansion of . Note this expression consists of and nested commutators thereof with or . A textbook proof along these lines can be found in and . Derivation of Dynkin's series formula Dynkin's formula mentioned may also be derived analogously, starting from the parametric extension whence so that, using the above general formula, Since, however, the last step by virtue of the Mercator series expansion, it follows that and, thus, integrating, It is at this point evident that the qualitative statement of the BCH formula holds, namely lies in the Lie algebra generated by and is expressible as a series in repeated brackets . For each , terms for each partition thereof are organized inside the integral . The resulting Dynkin's formula is then For a similar proof with detailed series expansions, see . Combinatoric details Change the summation index in () to and expand in a power series. 
To handle the series expansions simply, consider first . The -series and the -series are given by respectively. Combining these one obtains This becomes where is the set of all sequences of length subject to the conditions in . Now substitute for in the LHS of (). Equation then gives or, with a switch of notation, see An explicit Baker–Campbell–Hausdorff formula, Note that the summation index for the rightmost in the second term in () is denoted , but is not an element of a sequence . Now integrate , using , Write this as This amounts to where using the simple observation that for all . That is, in (), the leading term vanishes unless equals or , corresponding to the first and second terms in the equation before it. In case , must equal , else the term vanishes for the same reason ( is not allowed). Finally, shift the index, , This is Dynkin's formula. The striking similarity with (99) is not accidental: It reflects the Dynkin–Specht–Wever map, underpinning the original, different, derivation of the formula. Namely, if is expressible as a bracket series, then necessarily Putting observation and theorem () together yields a concise proof of the explicit BCH formula. See also Matrix logarithm Remarks Notes References ; translation from Google books. Veltman, M, 't Hooft, G & de Wit, B (2007). "Lie Groups in Physics", online lectures. External links Mathematical physics Matrix theory Lie groups Exponentials
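In one common convention, the derivative of the exponential map along a path X(t) in the Lie algebra can be written explicitly as follows (this is the standard textbook form; conventions placing the group element on the right of the bracket-series factor also appear in the literature):

```latex
\frac{d}{dt}\, e^{X(t)}
  = e^{X(t)} \, \frac{1 - e^{-\operatorname{ad}_{X(t)}}}{\operatorname{ad}_{X(t)}} \, \frac{dX(t)}{dt},
\qquad
\frac{1 - e^{-\operatorname{ad}_{X}}}{\operatorname{ad}_{X}}
  = \sum_{k=0}^{\infty} \frac{(-1)^{k}}{(k+1)!} \, \bigl(\operatorname{ad}_{X}\bigr)^{k}
```

where ad_X(Y) = [X, Y] denotes the adjoint action of the Lie algebra on itself.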
Derivative of the exponential map
[ "Physics", "Mathematics" ]
1,761
[ "Lie groups", "Mathematical structures", "Applied mathematics", "Theoretical physics", "E (mathematical constant)", "Algebraic structures", "Exponentials", "Mathematical physics" ]
44,140,597
https://en.wikipedia.org/wiki/Industrial%20corridor
An industrial corridor is a package of infrastructure spending allocated to a specific geographical area, with the intent to stimulate industrial development. An industrial corridor aims to create an area with a cluster of manufacturing or another industry. Such corridors are often created in areas that have pre-existing infrastructure, such as ports, highways and railroads. These modalities are arranged such that an "arterial" modality, such as a highway or railroad, receives "feeder" roads or railways. Concerns when creating corridors include correctly assessing demand and viability, transport options for goods and workers, land values, and economic incentives for companies. Infrastructure corridors generally deliver services such as communications, transport, energy, water, waste management. The development of infrastructure corridors is often a link between rural areas and urban growth. In the 21st century, industrial corridors are often viewed as opportunities for jobs and economic development in a region. Infrastructure can bring enhanced prospects to underdeveloped regions, longer-term economic growth, and international competition. There are infrastructure corridors in both developing world countries such as South Africa and Brazil in addition to advanced countries such as the United States and Canada. The increased movement from rural areas to metropolitan areas will advance industrial corridors in population centers. United States Chicago Southeast Chicago has historically been the location for significant and intensive manufacturing in the city, focusing on the production of steel. The Chicago region is the leading rail hub on the continent and has the largest inland intermodal port in the United States. The region also has a highly developed highway system, with access to more than ten interstate highways; a Port district and river system that connects to the Great Lakes, Mississippi River, and Atlantic Ocean. With nearly 250 million square feet of industrial space, the City of Chicago's industrial inventory accounts for more than 20 percent of the total industrial inventory in the region. Chicago's industrial corridors constitute the city's primary resource of space for industrial development and encompass about 12 percent of City land with over 16,935 acres zoned primarily for manufacturing. India There are 11 National Industrial Corridors (NIC) and numerous state level industrial corridors. The NIC are as follows: Note, East Coast Economic Corridor is the name for the combination of Coastal India NICs. Delhi–Mumbai Industrial Corridor (DMIC): with Delhi–Mumbai Expressway and Western Dedicated Freight Corridor as its backbone, is intended to increase economic efficiency in the region and increase international competition. It aims to create smart, sustainable industrial cities with high speed, high-capacity connectivity provided by the Western Dedicated Freight Corridor (DFC) to reduce logistic costs. The corridor will reduce the travel time for containers from 50 h, by the existing freight train, to 17 h by a proposed freight corridor and approximately 14 days by road to 14 hours by the proposed freight corridor. These corridors are expected to improve economic activities in the region and increase the national competitiveness overall. This project incorporates Nine Mega Industrial zones of about 200-250 sq. km., high speed freight line, three ports, and six airports, a six-lane intersection-free expressway connecting Mumbai to Delhi and a 4000 MW power plant. 
The Delhi-Mumbai Industrial Corridor is a mega infrastructure project of USD 90 billion. Funds for the projects are from the Indian government, Japanese loans, investment by Japanese firms and through Japan depository receipts issued by Indian companies. Delhi–Nagpur Industrial Corridor (DNIC) Amritsar–Kolkata Industrial Corridor (AKIC) Chennai Bangalore (Bengluru) Industrial Corridor (CBIC) Extension of CBIC to Kochi via Coimbatore Vizag–Chennai Industrial Corridor (VCIC) Bengaluru–Mumbai Industrial Corridor (BMIC) Odisha Economic Corridor (OEC) Hyderabad Nagpur Industrial Corridor (HNIC) Hyderabad Warangal Industrial Corridor (HWIC) Hyderabad Bengaluru Industrial Corridor (HBIC) Some of the state industrial corridors are: Gujarat: Udhna–Palsana Industrial Corridor Haryana Anupgarh-Hisar-Pithoragarh Industrial Corridor (AHPIC): via Anupgarh, Pipran, Nohar, Bhadra, Hisar, Madha (Narnaund), Gatoli (Julana), Butana (Gohana), Patti Kalyana (Samalkha), Chapprauli, Sardhana, Hastinapur, Noorpur, Kashipur, Bazpur, Haldwani, Khashu, Khetikhan, Lohaghat, Pithoragarh, with Kanra-Lwali-Kutoli-SidhiaKhet backup spur. Bathinda–Hisar–Alwar–Korba–Raigarh Industrial Corridor (BHAKRIC): via Bhatinda, Raman, Kalanwali, Sahuwala, Sirsa, Dhabi Kalan, Adampur, Balsamand, Chaudhariwas, Harita (with Harita–Kaimri–Hisar spur), Patodi, Kairu, Jui Khurd (with Bahal–Jui–Ateli spur), Badhra (with Koharu–Kadma–Kosli–Patli spur), Madhogarh, Nangal Sirohi (Mahendragarh), Bachhod (with Bachhod–Neemrana–Ateli–Uttawar–Kashipur–Sherpur–Tappal (Jewar) spur), Alwar, Sirmathura, Mohana, Karera, Pichhore, Lalitpur, Shahgarh, Katni, Korba, Raigarh. Ludhiana–Hisar–Jaipur-Kota Industrial Corridor (LHJKIC) Trans-Haryana Industrial Corridor (THIC) Uttar Pradesh Delhi–Dehradun Industrial Corridor Africa Africa, having long been an underinvested continent is now home to some of the world’s fastest growing economies. The urban population is forecast to grow by over 60% by 2060. Africa was home to 17 percent of the world population in 2020, and is expected to have 26 percent of the global population in 2050. Likewise, Africa's demand for electricity will quadruple from 2010 to 2040. Across Africa, regional development banks invested the most in development corridors (30.8%), with the African Development Bank funding the majority (24.3%) of all projects. Outside of Africa, the regional development banks that invested in the most projects are the Export-Import Bank of China (3.8%), the European Investment Bank (2.8%) and the Arab Bank for Economic Development in Africa (1.2% ea.). National governments funded about 29.8% of all projects. Development corridors can widen inequalities between stakeholders who are not party to the planning process but affected by it. The high financing costs for industrial corridors can also leave an unsustainable burden of debt, particularly for many of the African countries with high debt service costs. Environmental effects Industrial zone development corridors can lead to significant biodiversity loss, habitat fragmentation, pollution, spread invasive species, increase illegal logging, poaching and fires, severely affect river deltas and coastal and marine ecosystems, and consume large volumes of greenhouse gas intensive products such as steel and cement. Air pollution and health effects Mexico The population in this region is exposed to a multipollutant environment, including high levels of sulfur dioxide, submicrometric particles, and black carbon. 
Additionally, frequent adverse meteorological conditions in the morning may exacerbate acute and chronic exposure to these pollutants. Korea A study based in five Korean cities found that the incidence of lung cancer increased by approximately three times among residents living within 2 km of a petrochemical plant. Additionally, the risk of lung cancer was significantly higher among residents living in industrial complexes than among those in the control area, even after adjusting for age, sex, smoking, occupational exposure, education, and BMI. Other health concerns were found to include a 40% increased risk of acute eye disorder in the industrial area compared with the control area. The prevalence of the risks of lung and uterine cancers in the industrial area was statistically significantly higher, at 3.45 and 1.88 times, respectively. Challenges Challenges with planning and implementation, together with a lack of clarity and consistency in national objectives and standards, lead to industrial corridors varying in characteristics between countries and jurisdictions. Moreover, general challenges may include: mixed access to designations, complex and inflexible approval processes, the need for robust and integrated decision-making, the efficiency and adequacy of the land acquisition process, financing infrastructure development, and accurately forecasting usage (esp. infrastructure). Additional challenges within a region can include regional instability and geopolitical shifts, isolation of the corridor from existing economic activities, topographic challenges, lack of skilled labor, inconsistent quality of work, and high maintenance costs. See also Economic corridor Industrial park References Transport infrastructure
Industrial corridor
[ "Physics" ]
1,813
[ "Physical systems", "Transport", "Transport infrastructure" ]
65,552,296
https://en.wikipedia.org/wiki/Electricity%20and%20Magnetism%20%28book%29
Electricity and Magnetism is a standard textbook in electromagnetism originally written by Nobel laureate Edward Mills Purcell in 1963. Along with David Griffiths' Introduction to Electrodynamics, this book is one of the most widely adopted undergraduate textbooks in electromagnetism. A Sputnik-era project funded by a National Science Foundation grant, the book is influential for its use of relativity in the presentation of the subject at the undergraduate level. In 1999, Norman Foster Ramsey Jr. noted that the book had been widely adopted and had many foreign translations. The 1965 edition, now supposed to be freely available due to a condition of the federal grant, was originally published as a volume of the Berkeley Physics Course (see below for more on the legal status). The third edition, released in 2013, was written by David J. Morin for Cambridge University Press and included the adoption of SI units. Background The Berkeley Series was influenced by MIT's Physical Science Study Committee, which was formed shortly before Sputnik was launched in 1956. The satellite could be seen from rooftops at MIT, with viewing times published in the local Boston newspapers. The space race caused a shake-up in the US scientific establishment and led to new approaches to science education in the US. Contents (3rd edition) Electrostatics: charges and fields The electric potential Electric fields around conductors Electric currents The fields of moving charges The magnetic field Electromagnetic induction Alternating-current circuits Maxwell's equations and electromagnetic waves Electric fields in matter Magnetic fields in matter Reception In 1966, Benjamin F. Bayman reviewed the first edition. Bayman both commended and criticized the book. He questioned whether the book is appropriate for college sophomores to read, and commended the book, calling it a "beautiful book on electricity and magnetism". Bayman highlighted the chapters that deal with magnetic and electric fields in matter. According to a 1998 review of the second edition, the first edition "has not aged" and was "the best introductory textbook I have seen". The reviewer points out that the limitations of the Berkeley Physics Series and the book's dearth of references to wave phenomena are its two biggest issues. The review states that the "results are spectacular" and that problems were resolved in the latest edition. The main criticism of the book, according to a 2012 review of a second edition, is that it does not provide answers for the problems that are presented at the conclusion of each chapter. The reviewer notes that the lack of many calculation examples in the text made this issue worse. Another issue raised was the book's usage of cgs units rather than SI units. The review continues, stating that "despite the criticism, this text is very beautifully written and gives a well-structured and clear insight into the topic" and "can be recommended to any student" for use in an introductory course on electromagnetism. Norman Foster Ramsey Jr. called it an "excellent introductory textbook" in his 1999 obituary for Purcell. Roy Schwitters writes in a Physics Today review of Andrew Zangwill's Electrodynamics that he advises undergraduates to pick up the third edition of this book. Jermey N. A. Mathews listed it as one of the five books that stood out in Physics Today in 2013, acknowledging that there were issues with the previous writings; however, the publication noted that "clearly, Purcell's E&M matures slowly."
In 2013, Michael Belsley noted that the third edition of the textbook was a significant improvement, especially appreciating its treatment of magnetism as a relativistic phenomenon. In 2013, Conquering the Physics GRE described the third edition as an elegant introduction that emphasizes physical concepts over mathematical formalism. In 2013, Sam Nolan praised it as an excellent updated introduction to the classic 50-year-old text. Another review referred to the third edition as a welcome update to the original work. Legal status Because it was funded by the National Science Foundation, the original editions of the Berkeley Physics Series contained notices on their copyright pages stating that the books were to be available royalty-free in five years. The copyright page of the original 1965 edition of Electricity and Magnetism includes a notice stating that it is available for use by authors and publishers on a royalty-free basis after 1970. The authors got lump-sum payments but did not receive royalties. The copyright page of the 1965 edition says to obtain a royalty-free license from Education Development Center. Copyright © 1963, 1964, 1965 by Education Development Center, Inc. (successor by merger to Education Services Incorporated). ... Education Development Center, Inc., Newton, Massachusetts ... The copyright owner will give permission for the use of the original work in the English language after January 1, 1975. For conditions of use, permission to use, and for other permissions, apply to the copyright owner. — Tata McGraw-Hill edition Education Development Center's copyright to the 1965 edition now belongs to Edward Mills Purcell's sons, Dennis W. Purcell (Harvard 1962) and Frank B. Purcell (Harvard 1965). Benjamin Crowell, a retired Fullerton College physics teacher, wrote that Cambridge University Press refused to provide him the contact information for the copyright owner, but instead forwarded the request to the copyright owner. Crowell wrote that this made it effectively impossible to obtain the royalty-free license promised under the original government contract and that this uncertainty places an open-source version of the first edition in legal limbo. The reporting of the Electricity and Magnetism Open Access book project refers to electronic versions of the royalty-free first edition currently available on the internet. Original publication history International editions See also List of textbooks in electromagnetism References External links Purcell, Electricity and Magnetism, 1st edition - an unfinished, due to legal status, Open Access book project, in LaTeX format, depending on the royalty-free license 1965 non-fiction books 1985 non-fiction books 2013 non-fiction books Electromagnetism Physics textbooks Undergraduate education
Electricity and Magnetism (book)
[ "Physics" ]
1,230
[ "Electromagnetism", "Physical phenomena", "Fundamental interactions" ]
41,273,889
https://en.wikipedia.org/wiki/Infinite%20loop%20space%20machine
In topology, a branch of mathematics, given a topological monoid X up to homotopy (in a nice way), an infinite loop space machine produces a group completion of X together with infinite loop space structure. For example, one can take X to be the classifying space of a symmetric monoidal category S; that is, . Then the machine produces the group completion . The space may be described by the K-theory spectrum of S. In 1977 Robert Thomason proved the equivalence of all infinite loop space machines (he was just 25 years old at the moment.) He published this result next year in a joint paper with John Peter May. References J. P. May and R. Thomason The uniqueness of infinite loop space machines Homotopy theory Topological spaces Topology
Infinite loop space machine
[ "Physics", "Mathematics" ]
160
[ "Mathematical structures", "Space (mathematics)", "Topological spaces", "Topology stubs", "Topology", "Space", "Geometry", "Spacetime" ]
41,275,177
https://en.wikipedia.org/wiki/Ante%20Graovac
Ante Graovac (15 July 1945 in Split – 13 November 2012 in Zagreb) was a Croatian scientist known for his contribution to chemical graph theory. He was director of 26 successful annual meetings MATH/CHEM/COMP held in Dubrovnik. He was secretary of the International Academy of Mathematical Chemistry. Selected publications . References Croatian scientists Croatian chemists 1945 births 2012 deaths Mathematical chemistry
Ante Graovac
[ "Chemistry", "Mathematics" ]
80
[ "Drug discovery", "Applied mathematics", "Theoretical chemistry", "Mathematical chemistry", "Molecular modelling" ]
41,278,027
https://en.wikipedia.org/wiki/Photoredox%20catalysis
Photoredox catalysis is a branch of photochemistry that uses single-electron transfer. Photoredox catalysts are generally drawn from three classes of materials: transition-metal complexes, organic dyes, and semiconductors. While organic photoredox catalysts were dominant throughout the 1990s and early 2000s, soluble transition-metal complexes are more commonly used today. Photochemistry of transition metal sensitizers Sensitizers absorb light to give redox-active excited states. For many metal-based sensitizers, excitation is realized as a metal-to-ligand charge transfer, whereby an electron moves from the metal (e.g., a d orbital) to an orbital localized on the ligands (e.g. the π* orbital of an aromatic ligand). This initial excited electronic state relaxes to a singlet excited state through internal conversion, a process where energy is dissipated as vibrational energy (heat) rather than as electromagnetic radiation. This singlet excited state can relax further by two distinct processes: the catalyst may fluoresce, radiating a photon and returning to the original singlet ground state, or it can move to the lowest energy triplet excited state (a state where two unpaired electrons have the same spin) by a second non-radiative process termed intersystem crossing. Direct relaxation of the excited triplet to the ground state, termed phosphorescence, requires both emission of a photon and inversion of the spin of the excited electron. This pathway is slow because it is spin-forbidden so the triplet excited state has a substantial average lifetime. For the common photosensitizer, tris-(2,2’-bipyridyl)ruthenium (abbreviated as [Ru(bipy)3]2+ or [Ru(bpy)3]2+), the lifetime of the triplet excited state is approximately 1100 ns. This lifetime is sufficient for other relaxation pathways (specifically, electron-transfer pathways) to occur before decay of the catalyst to its ground state. The long-lived triplet excited state accessible by photoexcitation is both a more potent reducing agent and a more potent oxidizing agent than the ground state of the catalyst. Since sensitizer is coordinatively saturated, electron transfer must occur by an outer sphere process, where the electron tunnels between the catalyst and the substrate. Outer sphere electron transfer Marcus' theory of outer sphere electron transfer predicts that such a tunneling process will occur most quickly in systems where the electron transfer is thermodynamically favorable (i.e. between strong reductants and oxidants) and where the electron transfer has a low intrinsic barrier. The intrinsic barrier of electron transfer derives from the Franck–Condon principle, stating that electronic transition takes place more quickly given greater overlap between the initial and final electronic states. Interpreted loosely, this principle suggests that the barrier of an electronic transition is related to the degree to which the system seeks to reorganize. For an electronic transition with a system, the barrier is related to the "overlap" between the initial and final wave functions of the excited electron–i.e. the degree to which the electron needs to "move" in the transition. In an intermolecular electron transfer, a similar role is played by the degree to which the nuclei seek to move in response to the change in their new electronic environment. Immediately after electron transfer, the nuclear arrangement of the molecule, previously an equilibrium, now represents a vibrationally excited state and must relax to its new equilibrium geometry. 
Rigid systems, whose geometry is not greatly dependent on oxidation state, therefore experience less vibrational excitation during electron transfer, and have a lower intrinsic barrier. Photocatalysts such as [Ru(bipy)3]2+, are held in a rigid arrangement by flat, bidentate ligands arranged in an octahedral geometry around the metal center. Therefore, the complex does not undergo much reorganization during electron transfer. Since electron transfer of these complexes is fast, it is likely to take place within the duration of the catalyst's active state, i.e. during the lifetime of the triplet excited state. Catalyst regeneration To regenerate the ground state, the catalyst must participate in a second outer-sphere electron transfer. In many cases, this electron transfer takes place with a stoichiometric two-electron reductant or oxidant, although in some cases this step involves a second reagent. Since the electron transfer step of the catalytic cycle takes place from the triplet excited state, it competes with phosphorescence as a relaxation pathway. Stern–Volmer experiments measure the intensity of phosphorescence while varying the concentration of each possible quenching agent. When the concentration of the actual quenching agent is varied, the rate of electron transfer and the degree of phosphorescence is affected. This relationship is modeled by the equation: Here, I and I0 denote the emission intensity with and without quenching agent present, kq the rate constant of the quenching process, τ0 the excited-state lifetime in the absence of quenching agent and [Q] the concentration of quenching agent. Thus, if the excited-state lifetime of the photoredox catalyst is known from other experiments, the rate constant of quenching in the presence of a single reaction component can be determined by measuring the change in emission intensity as the concentration of quenching agent changes. Photophysical properties Redox potentials The redox potentials of photoredox catalysts must be matched to the reaction's other components. While ground state redox potentials are easily measured by cyclic voltammetry or other electrochemical methods, measuring the redox potential of an electronically excited state cannot be accomplished directly by these methods. However, two methods exist that allow estimation of the excited-state redox potentials and one method exists for the direct measurement of these potentials. To estimate the excited-state redox potentials, one method is to compare the rates of electron transfer from the excited state to a series of ground-state reactants whose redox potentials are known. A more common method to estimate these potentials is to use an equation developed by Rehm and Weller that describes the excited-state potentials as a correction of the ground-state potentials: In these formulas, E*1/2 represents the reduction or oxidation potential of the excited state, E1/2 represents the reduction or oxidation potential of the ground state, E0,0 represents the difference in energy between the zeroth vibrational states of the ground and excited states and wr represents the work function, an electrostatic interaction that arises due to the separation of charges that occurs during electron-transfer between two chemical species. The zero-zero excitation energy, E0,0 is usually approximated by the corresponding transition in the fluorescence spectrum. 
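In their standard forms, the Stern–Volmer relation discussed above and the commonly quoted approximations for the excited-state potentials (with the small electrostatic work term wr neglected) read:

```latex
\frac{I_0}{I} = 1 + k_q \, \tau_0 \, [Q],
\qquad
E^{*}_{1/2}(\mathrm{C}^{+}/\mathrm{C}^{*}) \approx E_{1/2}(\mathrm{C}^{+}/\mathrm{C}) - E_{0,0},
\qquad
E^{*}_{1/2}(\mathrm{C}^{*}/\mathrm{C}^{-}) \approx E_{1/2}(\mathrm{C}/\mathrm{C}^{-}) + E_{0,0}
```

Here C denotes the photocatalyst; a more negative value of E*1/2(C+/C*) corresponds to a more strongly reducing excited state, and a more positive value of E*1/2(C*/C−) to a more strongly oxidizing one.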
This method allows calculation of approximate excited-state redox potentials from more easily measured ground-state redox potentials and spectroscopic data. Direct measurement of the excited-state redox potentials is possible by applying a method known as phase-modulated voltammetry. This method works by shining light onto an electrochemical cell in order to generate the desired excited-state species, but to modulate the intensity of the light sinusoidally, so that the concentration of the excited-state species is not constant. In fact, the concentration of excited-state species in the cell should change exactly in phase with the intensity of light incident on the electrochemical cell. If the potential applied to the cell is strong enough for electron transfer to occur, the change in concentration of the redox-competent excited state can be measured as an alternating current (AC). Furthermore, the phase shift of the AC current relative to the intensity of the incident light corresponds to the average lifetime of an excited-state species before it engages in electron transfer. Charts of redox potentials for the most common photoredox catalysts are available for quick access. Ligand electronegativity The relative reducing and oxidizing natures of these photocatalysts can be understood by considering the ligands' electronegativity and the catalyst complex's metal center. More electronegative metals and ligands can stabilize electrons better than their less electronegative counterparts. Therefore, complexes with more electronegative ligands are more oxidizing than less electronegative ligand complexes. For example, the ligands 2,2'-bipyridine and 2,2'-phenylpyridine are isoelectronic structures, containing the same number and arrangement of electrons. Phenylpyridine replaces one of the nitrogen atoms in bipyridine with a carbon atom. Carbon is less electronegative than nitrogen is, so it holds electrons less tightly. Since the remainder of the ligand molecule is identical and phenylpyridine holds electrons less tightly than bipyridine, it is more strongly electron-donating and less electronegative as a ligand. Hence, complexes with phenylpyridine ligands are more strongly reducing and less strongly oxidizing than equivalent complexes with bipyridine ligands. Similarly, a fluorinated phenylpyridine ligand is more electronegative than phenylpyridine so complexes with fluorine-containing ligands are more strongly oxidizing and less strongly reducing than equivalent unsubstituted phenylpyridine complexes. The metal center's electronic influence on the complex is more complex than the ligand effect. According to the Pauling scale of electronegativity, both ruthenium and iridium have an electronegativity of 2.2. If this was the sole factor relevant to redox potentials, then complexes of ruthenium and iridium with the same ligands should be equally powerful photoredox catalysts. However, considering the Rehm-Weller equation, the spectroscopic properties of the metal play a role in determining the redox properties of the excited state. In particular, the parameter E0,0 is related to the emission wavelength of the complex and therefore, to the size of the Stokes shift - the difference in energy between the maximum absorption and emission of a molecule. Typically, ruthenium complexes have large Stokes shifts and hence, low energy emission wavelengths and small zero-zero excitation energies when compared to iridium complexes. 
In effect, while ground-state ruthenium complexes can be potent reductants, the excited-state complex is a far less potent reductant or oxidant than its equivalent iridium complex. This makes iridium preferred for the development of general organic transformations because the stronger redox potentials of the excited catalyst allow the use of weaker stoichiometric reductants and oxidants or the use of less reactive substrates. Counter-ion identity It is often the case that these photocatalysts are balanced with a counter-ion, as is the case with the example complex tris-(2,2’-bipyridyl)ruthenium, which is accompanied by two anions to balance the overall charge of the ion pair to zero. However, there are transition metal photoredox catalysts that exist without a counter-ion, such as tris(2-phenylpyridine)iridium (often abbreviated Ir(ppy)3). The significance of these counter-ions depends on the ion association between the photoredox catalyst and its counter-ion(s), which in turn depends on the solvent used for the reaction. Although photophysical properties such as redox potential, excitation energy, and ligand electronegativity have often been considered the key parameters governing the use and reactivity of these complexes, counter-ion identity has been shown to play a significant role in low-polarity solvents. In particular, it has been shown that a tightly associated counter-ion increases the rate of electron transfer when reducing a substrate but significantly reduces the rate of electron transfer when oxidizing a substrate. This is believed to occur because the counter-ion essentially "blocks" electron transfer into the photoredox complex by shielding the more positively charged region of the complex; conversely, tight counter-ion association pushes the electron density further from the photoredox catalyst's metal center, making it easier for an electron to be transferred from the catalyst (this of course applies only to the case where the photoredox catalyst is a cation and the counter-ion is an anion). Counter-ion identity is thus an additional parameter to consider when developing new photoredox reactions. Applications Reductive dehalogenation The earliest applications of photoredox catalysis to reductive dehalogenation were limited by narrow substrate scope or competing reductive coupling. Unactivated carbon-iodine bonds can be reduced using the strongly reducing photocatalyst tris(2-phenylpyridine)iridium (Ir(ppy)3). The greater reducing power of Ir(ppy)3 compared to [Ru(bipy)3]2+ allows direct reduction of the carbon-iodine bond without relying on a stoichiometric reductant. Thus, the iridium complex transfers an electron to the substrate, causing fragmentation of the substrate and oxidizing the catalyst to the Ir(IV) oxidation state. The oxidized photocatalyst is returned to its original oxidation state by oxidizing a reaction additive. Like tin-mediated radical dehalogenation reactions, photocatalytic reductive dehalogenation can be used to initiate cascade cyclizations. Oxidative generation of iminium ions Iminium ions are potent electrophiles useful for generating C-C bonds in complex molecules. However, the condensation of amines with carbonyl compounds to form iminium ions is often unfavorable, sometimes requiring harsh dehydrating conditions. Thus, alternative methods for iminium ion generation, particularly by oxidation of the corresponding amine, are a valuable synthetic tool.
Iminium ions can be generated from activated amines using Ir(dtbbpy)(ppy)2PF6 as a photoredox catalyst. This transformation is proposed to occur by oxidation of the amine to the aminium radical cation by the excited photocatalyst. This is followed by hydrogen atom transfer to a superstoichiometric oxidant, such as the trichloromethyl radical (CCl3), to form the iminium ion. The iminium ion is then quenched by reaction with a nucleophile. Related transformations of amines with a wide variety of other nucleophiles have been investigated, such as cyanide (Strecker reaction), silyl enol ethers (Mannich reaction), dialkyl phosphates, allyl silanes (aza-Sakurai reaction), indoles (Friedel-Crafts reaction), and copper acetylides. Similar photoredox generation of iminium ions has furthermore been achieved using purely organic photoredox catalysts, such as Rose Bengal and Eosin Y. An asymmetric variant of this reaction utilizes acyl nucleophile equivalents generated by N-heterocyclic carbene catalysis. This reaction method sidesteps the problem of poor enantioinduction from chiral photoredox catalysts by moving the source of enantioselectivity to the N-heterocyclic carbene. Oxidative generation of oxocarbenium ions The development of orthogonal protecting groups is an important problem in organic synthesis because these protecting groups allow each instance of a common functional group, such as the hydroxyl group, to be distinguished during the synthesis of a complex molecule. A very common protecting group for the hydroxyl functional group is the para-methoxybenzyl (PMB) ether. This protecting group is chemically similar to the less electron-rich benzyl ether. Typically, selective cleavage of a PMB ether in the presence of a benzyl ether uses strong stoichiometric oxidants such as 2,3-dichloro-5,6-dicyano-1,4-benzoquinone (DDQ) or ceric ammonium nitrate (CAN). PMB ethers are far more susceptible to oxidation than benzyl ethers since they are more electron-rich. The selective deprotection of PMB ethers can be achieved through the use of bis-(2-(2',4'-difluorophenyl)-5-trifluoromethylpyridine)-(4,4'-ditertbutylbipyridine)iridium(III) hexafluorophosphate (Ir[dF(CF3)ppy]2(dtbbpy)PF6) and a mild stoichiometric oxidant such as bromotrichloromethane (BrCCl3). The photoexcited iridium catalyst is reducing enough to fragment the bromotrichloromethane to form a trichloromethyl radical, bromide anion, and the Ir(IV) complex. The electron-poor fluorinated ligands make the iridium complex oxidizing enough to accept an electron from an electron-rich arene such as a PMB ether. After the arene is oxidized, it readily participates in hydrogen atom transfer with the trichloromethyl radical to form chloroform and an oxocarbenium ion, which is readily hydrolyzed to reveal the free alcohol. This reaction was demonstrated to be orthogonal to many common protecting groups when a base was added to neutralize the HBr produced. Cycloadditions Cycloadditions and other pericyclic reactions are powerful transforms in organic synthesis because of their potential to rapidly generate complex molecular architectures and particularly because of their capacity to set multiple adjacent stereocenters in a highly controlled manner. However, only certain cycloadditions are allowed under thermal conditions according to the Woodward–Hoffmann rules of orbital symmetry, or other equivalent models such as frontier molecular orbital theory (FMO) or the Dewar–Zimmerman model.
Cycloadditions that are not thermally allowed, such as the [2+2] cycloaddition, can be enabled by photochemical activation of the reaction. Under uncatalyzed conditions, this activation requires the use of high energy ultraviolet light capable of altering the orbital populations of the reactive compounds. Alternatively, metal catalysts such as cobalt and copper have been reported to catalyze thermally-forbidden [2+2] cycloadditions via single electron transfer. The required change in orbital populations can be achieved by electron transfer with a photocatalyst sensitive to lower energy visible light. Yoon demonstrated the efficient intra- and intermolecular [2+2] cycloadditions of activated olefins: particularly enones and styrenes. Enones, or electron-poor olefins, were discovered to react via a radical-anion pathway, utilizing diisopropylethylamine as a transient source of electrons. For this electron-transfer, [Ru(bipy)3]2+ was discovered to be an efficient photocatalyst. The anionic nature of the cyclization proved to be crucial: performing the reaction in acid rather than with a lithium counterion favored a non-cycloaddition pathway. Zhao et al. likewise discovered that a still different cyclization pathway is available to chalcones with a samarium counterion. Conversely, electron-rich styrenes were found to react via a radical-cation mechanism, utilizing methyl viologen or molecular oxygen as a transient electron sink. While [Ru(bipy)3]2+ proved to be a competent catalyst for intramolecular cyclizations using methyl viologen, it could not be used with molecular oxygen as an electron sink or for intermolecular cyclizations. For intermolecular cyclizations, Yoon et al. discovered that the more strongly oxidizing photocatalyst [Ru(bpm)3]2+ and molecular oxygen provided a catalytic system better suited to access the radical cation necessary for the cycloaddition to occur. [Ru(bpz)3]2+, a still more strongly oxidizing photocatalyst, proved to be problematic because although it could catalyze the desired [2+2] cycloaddition, it was also strong enough to oxidize the cycloadduct and catalyze the retro-[2+2] reaction. This comparison of photocatalysts highlights the importance of tuning the redox properties of a photocatalyst to the reaction system as well as demonstrating the value of polypyridyl compounds as ligands, due to the ease with which they can be modified to adjust the redox properties of their complexes. Photoredox-catalyzed [2+2] cycloadditions can also be effected with a triphenylpyrylium organic photoredox catalyst. In addition to the thermally-forbidden [2+2] cycloaddition, photoredox catalysis can be applied to the [4+2] cyclization (Diels–Alder reaction). Bis-enones, similar to the substrates used for the photoredox [2+2] cyclization, but with a longer linker joining the two enone functional groups, undergo intramolecular radical-anion hetero-Diels–Alder reactions more rapidly than [2+2] cycloaddition. Similarly, electron-rich styrenes participate in intra- or intermolecular Diels–Alder cyclizations via a radical cation mechanism. [Ru(bipy)3]2+ was a competent catalyst for intermolecular, but not intramolecular, Diels–Alder cyclizations. This photoredox-catalyzed Diels–Alder reaction allows cycloaddition between two electronically mismatched substrates. 
The normal electronic demand for the Diels–Alder reaction calls for an electron-rich diene to react with an electron-poor olefin (or "dienophile"), while the inverse electron-demand Diels–Alder reaction takes place between the opposite case of an electron-poor diene and a very electron-rich dienophile. The photoredox case, since it takes place by a different mechanism than the thermal Diels–Alder reaction, allows cycloaddition between an electron-rich diene and an electron-rich dienophile, allowing access to new classes of Diels–Alder adducts. The synthetic value of Yoon's photoredox-catalyzed styrene Diels–Alder reaction was demonstrated via the total synthesis of the natural product Heitziamide A. This synthesis demonstrates that the thermal Diels–Alder reaction favors the undesired regioisomer, but the photoredox-catalyzed reaction gives the desired regioisomer in improved yield. Photoredox organocatalysis Organocatalysis is a subfield of catalysis that explores the potential of organic small molecules as catalysts, particularly for the enantioselective creation of chiral molecules. One strategy in this subfield is the use of chiral secondary amines to activate carbonyl compounds. In this case, amine condensation with the carbonyl compound generates a nucleophilic enamine. The chiral amine is designed so that one face of the enamine is sterically shielded and so that only the unshielded face is free to react. Despite the power of this approach to catalyze the enantioselective functionalization of carbonyl compounds, certain valuable transformations, such as the catalytic enantioselective α-alkylation of aldehydes, remained elusive. The combination of organocatalysis and photoredox methods provides a catalytic solution to this problem. In this approach for the α-alkylation of aldehydes, [Ru(bipy)3]2+ reductively fragments an activated alkyl halide, such as bromomalonate or phenacyl bromide, which can then add to catalytically-generated enamine in an enantioselective manner. The oxidized photocatalyst then oxidatively quenches the resulting α-amino radical to form an iminium ion, which hydrolyzes to give the functionalized carbonyl compound. This photoredox transformation was shown to be mechanistically distinct from another organocatalytic radical process termed singly-occupied molecular orbital (SOMO) catalysis. SOMO catalysis employs superstoichiometric ceric ammonium nitrate (CAN) to oxidize the catalytically-generated enamine to the corresponding radical cation, which can then add to a suitable coupling partner such as allyl silane. This type of mechanism is excluded for the photocatalytic alkylation reaction because whereas enamine radical cation was observed to cyclize onto pendant olefins and open cyclopropane radical clocks in SOMO catalysis, these structures were unreactive in the photoredox reaction. This transformation include alkylations with other classes of activated alkyl halides of synthetic interest. In particular, the use of the photocatalyst Ir(dtbbpy)(ppy)2+ allows the enantioselective α-trifluoromethylation of aldehydes while the use of Ir(ppy)3 allowed the enantioselective coupling of aldehydes with electron-poor benzylic bromides. Zeitler et al. also investigated the productive merger of photoredox and organocatalytic methods to achieve enantioselective alkylation of aldehydes. The same chiral imidazolidinone organocatalyst was used to form enamine and introduce chirality. 
However, the organic photoredox catalyst Eosin Y was used rather than a ruthenium or iridium complex. Direct β-arylation of saturated aldehydes and ketones can be effected through the combination of photoredox and organocatalytic methods. The previous method to accomplish direct β-functionalization of a saturated carbonyl consisted of a one-pot, two-step process, both steps catalyzed by a secondary amine organocatalyst: stoichiometric oxidation of an aldehyde with IBX followed by addition of an activated alkyl nucleophile to the beta-position of the resulting enal. This transformation, which like other photoredox processes takes place by a radical mechanism, is limited to the addition of highly electrophilic arenes to the beta position. The severe limitations on the arene component scope in this reaction are due primarily to the need for an arene radical anion that is stable enough not to react directly with the enamine or the enamine radical cation. In the proposed mechanism, the activated photoredox catalyst is quenched oxidatively by an electron-deficient arene, such as 1,4-dicyanobenzene. The photocatalyst then oxidizes an enamine species, transiently generated by the condensation of an aldehyde with a secondary amine cocatalyst, such as the optimal isopropyl benzylamine. The resulting enamine radical cation usually reacts as a 3 π-electron system, but due to the stability of the radical coupling partners, deprotonation of the β-methylene position gives rise to a 5 π-electron system with strong radical character at the newly accessed β-carbon. Although this reaction relies on the use of a secondary amine organocatalyst to generate the enamine species which is oxidized in the proposed mechanism, no enantioselective variant of this reaction exists. The development of this direct β-arylation of aldehydes led to related reactions for the β-functionalization of cyclic ketones. In particular, β-arylation of cyclic ketones has been achieved under similar reaction conditions, but using azepane as the secondary amine cocatalyst. A photocatalytic "homo-aldol" reaction works for cyclic ketones, allowing the coupling of the beta-position of the ketone to the ipso carbon of aryl ketones, such as benzophenone and acetophenone. In addition to the azepane cocatalyst, this reaction requires the use of the more strongly reducing photoredox catalyst Ir(ppy)3 and the addition of lithium hexafluoroarsenate (LiAsF6) to promote single-electron reduction of the aryl ketone. Additions to olefins The use of photoredox catalysis to generate reactive heteroatom-centered radicals was first explored in the 1990s. [Ru(bipy)3]2+ was found to catalyze the fragmentation of tosylphenylselenide into phenylselenolate anion and tosyl radical, and a radical chain propagation mechanism then allowed the addition of the tosyl and phenylseleno radicals across the double bond of electron-rich alkyl vinyl ethers. Since phenylselenolate anion is readily oxidized to diphenyldiselenide, the low quantities of diphenyldiselenide observed were taken as an indication that photoredox-catalyzed fragmentation of tosylphenylselenide was only important as an initiation step, and that most of the reactivity was due to a radical chain process. Heteroatom additions to olefins include multicomponent oxy- and aminotrifluoromethylation reactions. These reactions use Umemoto's reagent, a sulfonium salt that serves as an electrophilic source of the trifluoromethyl group and that is precedented to react via a single-electron transfer pathway.
Thus, single-electron reduction of Umemoto's reagent releases trifluoromethyl radical, which adds to the reactive olefin. Subsequently, single-electron oxidation of the alkyl radical generated by this addition produces a cation which can be trapped by water, an alcohol, or a nitrile. In order to achieve high levels of regioselectivity, this reactivity has been explored mainly for styrenes, which are biased towards formation of the benzylic radical intermediate. Hydrotrifluoromethylation of styrenes and aliphatic alkenes can be effected with a mesityl acridinium organic photoredox catalyst and Langlois' reagent as the source of CF3 radical. In this reaction, it was found that trifluoroethanol and substoichiometric amounts of an aromatic thiol, such as methyl thiosalicylate, employed in tandem served as the best source of hydrogen radical to complete the catalytic cycle. Intramolecular hydroetherifications and hydroaminations proceed with anti-Markovnikov selectivity. One mechanism invokes the single-electron oxidation of the olefin, trapping the radical cation by a pendant hydroxyl or amine functional group, and quenching the resulting alkyl radical by H-atom transfer from a highly labile donor species. Extensions of this reactivity to intermolecular systems have resulted in i) a new synthetic route to complex tetrahydrofurans by a "polar-radical-crossover cycloaddition" (PRCC reaction) of an allylic alcohol with an olefin, and ii) the anti-Markovnikov addition of carboxylic acids to olefins. Sulfoximidation Sulfoximidation of electron-rich arenes is enabled by photoredox catalysis. See also Photosensitizer References Catalysis Photochemistry
Photoredox catalysis
[ "Chemistry" ]
6,743
[ "Catalysis", "Chemical kinetics", "nan" ]
41,279,835
https://en.wikipedia.org/wiki/Technetium%20%2899mTc%29%20etarfolatide
{{DISPLAYTITLE:Technetium (99mTc) etarfolatide}} Technetium (99mTc) etarfolatide is an investigational non-invasive, folate receptor-targeting companion imaging agent that is being developed by Endocyte. Etarfolatide consists of a small molecule targeting the folate receptor and an imaging agent, which is based on technetium-99m. This companion imaging agent identifies cells expressing the folate receptor, including cancer and inflammatory cells. Etarfolatide is currently being investigated together with the corresponding small molecule drug conjugate (SMDC) vintafolide in a Phase 3 study in platinum-resistant ovarian cancer and in a Phase 2b study in non-small cell lung cancer. It identifies patients with metastases that are positive for the folate receptor and therefore more likely to respond to treatment with vintafolide. Other folate receptor targeting SMDCs for the treatment of cancer, inflammatory diseases and kidney disease are in preclinical development and will also utilize etarfolatide as companion imaging agent. The European Medicines Agency (EMA) is reviewing the Marketing Authorization Application (MAA) filings for both vintafolide and etarfolatide, for the treatment of patients with folate receptor-positive platinum-resistant ovarian cancer in combination with pegylated liposomal doxorubicin (PLD). References Technetium compounds Folates Technetium-99m Radiopharmaceuticals
Technetium (99mTc) etarfolatide
[ "Chemistry" ]
325
[ "Pharmacology", "Medicinal radiochemistry", "Medicinal chemistry stubs", "Chemicals in medicine", "Radiopharmaceuticals", "Pharmacology stubs" ]
41,280,550
https://en.wikipedia.org/wiki/FlAsH-EDT2
{{DISPLAYTITLE:FlAsH-EDT2}} FlAsH-EDT2 is an organoarsenic compound with molecular formula C24H18As2O5S4. Its structure is based around a fluorescein core with two 1,3,2-dithiarsolane substituents. It is used in bioanalytical research as a fluorescent label for visualising proteins in living cells. FlAsH-EDT2 is an abbreviation for fluorescein arsenical hairpin binder-ethanedithiol, and is a pale yellow or pinkish fluorogenic solid. It has a semi-structural formula (C2H4AsS2)2-(C13H5O3)-C6H4COOH, representing the dithiarsolane substituents bound to the hydroxyxanthone core, attached to an o-substituted molecule of benzoic acid. FlAsH-EDT2 is used for site-specific labelling, selectively binding to proteins containing the tetracysteine (TC) motif Cys-Cys-Xxx-Xxx-Cys-Cys and becoming fluorescent when bound. It displays non-specific binding to endogenous cysteine-rich proteins, meaning it binds to sites other than the one of interest (CCXXCC). Further optimization of the TC motif has revealed improved FlAsH binding affinity for a CCPGCC motif, and higher quantum yield when the tetracysteine motif is flanked with specific residues (HRWCCPGCCKTF or FLNCCPGCCMEP). Preparation FlAsH-EDT2 can be prepared in three steps from fluorescein (see figure). Formation of FlAsH-TC adduct Many studies show that trivalent arsenic compounds bind to pairs of cysteine residues. This binding is responsible for the toxicity of many arsenic compounds. Binding is reversed by 1,2-ethanedithiol, which binds tightly to arsenic compounds, as shown by the stability of FlAsH-EDT2. Such a strong sulfur-arsenic bond can, in turn, be regulated by designing a peptide domain that exhibits higher affinity toward the arsenic, such as the tetracysteine motif. By modulating the distance between the two pairs of cysteine residues and the space between the arsenic centers of FlAsH-EDT2, a cooperative and entropically favored dithiol-arsenic bond can be achieved. The binding of FlAsH-EDT2 is thus subject to equilibration. Formation of the FlAsH-peptide adduct is favored at low concentrations of EDT (below 10 μM) and reversed at high concentrations of EDT (above 1 mM). Properties FlAsH becomes fluorescent upon binding of the tetracysteine motif. It is excited at 508 nm and emits at 528 nm, in the green-yellow region, similar to free fluorescein. The quantum yield is 0.49 when 250 nM FlAsH is bound to a model tetracysteine-containing peptide in phosphate-buffered saline at pH 7.4. Generally, FlAsH-EDT2 exhibits fluorescence quantum efficiencies of 0.1-0.6, detection limits of several μM for a diffuse cytosolic tag, and extinction coefficients of 30-80 L mmol−1 cm−1. The FlAsH-peptide complex has also demonstrated fluorescence resonance energy transfer (FRET) from fluorescent proteins, such as from enhanced cyan fluorescent protein (ECFP), a variant of green fluorescent protein (GFP). Application FlAsH-EDT2 enables less toxic and more specific fluorescent labeling that is membrane permeable. The modification of the fluorescein moiety also allows multicolor analysis. It has been proven to be a good alternative to green fluorescent proteins (GFP), with the advantage that FlAsH-EDT2 is much smaller (molar mass < 1 kDa) compared to GFPs (~30 kDa), thereby minimizing perturbation of the activity of the protein under study. Use In the past, FlAsH-EDT2 has been widely used to study a number of in vivo cellular events and subcellular structures in animal cells, Ebola virus matrix protein, and protein misfolding.
In combination with electron microscopic imaging, FlAsH-EDT2 is also used to study the processes of protein trafficking in situ. More recently, it was used in an extended study of plant cells, such as those of Arabidopsis and tobacco. References Biochemistry detection reactions Fluorescent dyes Hydroxyarenes Benzoic acids Triarylmethane dyes Organoarsenic dithiolates Arsenic heterocycles
FlAsH-EDT2
[ "Chemistry", "Biology" ]
948
[ "Microbiology techniques", "Biochemistry detection reactions", "Biochemical reactions" ]
41,280,949
https://en.wikipedia.org/wiki/Drop%20impact
In fluid dynamics, drop impact occurs when a drop of liquid strikes a solid or liquid surface. The resulting outcome depends on the properties of the drop, the surface, and the surrounding fluid, which is most commonly a gas. On a dry solid surface When a liquid drop strikes a dry solid surface, it generally spreads on the surface, and then will retract if the impact is energetic enough to cause the drop to spread out more than it would generally spread due to its static receding contact angle. The specific outcome of the impact depends mostly upon the drop size, velocity, surface tension, viscosity, and also upon the surface roughness and the contact angle between the drop and the surface. Droplet impact parameters such as contact time and impact regime can be modified and controlled by different passive and active methods. Summary of possible outcomes "Deposition" is said to occur when the drop spreads on the surface at impact and remains attached to the surface during the entire impact process without breaking up. This outcome is representative of impact of small, low-velocity drops onto smooth wetting surfaces. The "prompt splash" outcome occurs when the drop strikes a rough surface, and is characterized by the generation of droplets at the contact line (where solid, gas, and liquid meet) at the beginning of the process of spreading of the drop on the surface, when the liquid has a high outward velocity. At reduced surface tension, the liquid layer can detach from the wall, resulting in a "corona splash". On a wetting surface, "receding breakup" can occur as the liquid retracts from its maximum spreading radius, due to the fact that the contact angle decreases during retraction, causing some drops to be left behind by the receding drop. On superhydrophobic surfaces, the retracting drop can break up into a number of fingers which are each capable of further breakup, likely due to capillary instability. Such satellite droplets have been observed to break off from the impacting drop both during the spreading and retracting phases. "Rebound" and "partial rebound" outcomes can occur when a drop recedes after impact. As the drop recedes to the impact point, the kinetic energy of the collapsing drop causes the liquid to squeeze upward, forming a vertical liquid column. The case where drop stays partially on the surface but launches one or more drops at its top is known as partial rebound, whereas the case where the entire drop leaves the solid surface due to this upward motion is known as complete rebound. The difference between rebound and partial rebound is caused by the receding contact angle of the drop on the surface. For low values a partial rebound occurs, while for high values a complete rebound occurs (assuming that the drop recedes with enough kinetic energy). Addition of polymers like xanthan into water alters its rheological properties, transitioning it from a Newtonian fluid to a viscoelastic one. Consequently, this modification affects the shape of the droplet upon rebounding from a solid surface. On superhydrophobic surfaces Small drop deformation On superhydrophobic surfaces, liquid drops are observed to bounce off of the solid surface. Richard and Quéré showed that a small liquid drop was able to bounce off of a solid surface over 20 times before coming to rest. Of particular interest is the length of time that the drop remains in contact with the solid surface. This is important in applications such as heat transfer and aircraft icing. 
To find a relationship between drop size and contact time for low Weber number impacts (We << 1) on superhydrophobic surfaces (which experience little deformation), a simple balance between inertia (ρR/τ²) and capillarity (σ/R²) can be used, as follows: ρR/τ² ~ σ/R², where ρ is the drop density, R is the drop radius, τ is the characteristic time scale, and σ is the drop surface tension. This yields τ ~ (ρR³/σ)^(1/2). The contact time is independent of velocity in this regime. The minimum contact time for a low-deformation drop (We << 1) is approximated by the lowest-order oscillation period for a spherical drop, giving the characteristic time a prefactor of approximately 2.2. For large-deformation drops (We > 1), similar contact times are seen even though the dynamics of impact are different, as discussed below. If the droplet is split into multiple droplets, the contact time is reduced. By creating tapered surfaces with large spacing, the impacting droplet will exhibit counterintuitive pancake bouncing, characterized by the droplet bouncing off at the end of spreading without retraction, resulting in an ~80% reduction in contact time. Significant drop deformation As the Weber number increases, the drop deformation upon impact also increases. The drop deformation pattern can be split up into regimes based on the Weber number. At We << 1, there is no significant deformation. For We on the order of 1, the drop experiences significant deformation, and flattens somewhat on the surface. When We ~ 4, waves form on the drop. When We ~ 18, satellite droplet(s) break off of the drop, which is now an elongated vertical column. For large We (for which the magnitude depends on the specific surface structure), many satellite drops break off during spreading and/or retraction of the drop. On a wet solid surface When a liquid drop strikes a wet solid surface (a surface covered with a thin layer of liquid that exceeds the height of surface roughness), either spreading or splashing will occur. If the velocity is below a critical value, the liquid will spread on the surface, similar to the deposition described above. If the velocity exceeds the critical velocity, splashing will occur and a shock wave can be generated. Splashing on thin fluid films occurs in the form of a corona, similar to that seen for dry solid surfaces. Under proper conditions, a droplet hitting a liquid interface can also display superhydrophobic-like bouncing, characterized by a contact time, spreading dynamics and restitution coefficient that are independent of the underlying liquid properties. On a liquid surface When a liquid drop strikes the surface of a liquid reservoir, it will either float, bounce, coalesce with the reservoir, or splash. In the case of floating, a drop will float on the surface for several seconds. Cleanliness of the liquid surface is reportedly very important in the ability of drops to float. Drop bouncing can occur on perturbed liquid surfaces. If the drop is able to rupture a thin film of gas separating it from the liquid reservoir, it can coalesce. Finally, higher Weber number drop impacts (with greater energy) produce splashing. In the splashing regime, the striking drop creates a crater in the fluid surface, followed by a crown around the crater. Additionally, a central jet, called the Rayleigh jet or Worthington jet, protrudes from the center of the crater. If the impact energy is high enough, the jet rises to the point where it pinches off, sending one or more droplets upward out of the surface. See also Splash (fluid mechanics) References Fluid dynamics
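As a concrete check of the contact-time scaling discussed above, the Python sketch below evaluates the inertio-capillary time τ ~ (ρR³/σ)^(1/2), the ~2.2-prefactor contact-time estimate, and a Weber number for a millimetre-sized water drop. The impact velocity is an arbitrary illustrative value, and the Weber number is written here with the drop diameter as the length scale (conventions vary).

import math

# Approximate properties of water at room temperature.
rho = 1000.0    # density, kg/m^3
sigma = 0.072   # surface tension, N/m

R = 1.0e-3      # drop radius, m (illustrative)
v = 0.2         # impact velocity, m/s (arbitrary illustrative value)

# Inertio-capillary time scale from the balance rho*R/tau^2 ~ sigma/R^2.
tau = math.sqrt(rho * R**3 / sigma)
contact_time = 2.2 * tau   # prefactor from the lowest-order drop oscillation period

# Weber number, here defined with the drop diameter as the length scale.
We = rho * v**2 * (2 * R) / sigma

print(f"inertio-capillary time tau ~ {tau*1e3:.1f} ms")
print(f"estimated contact time     ~ {contact_time*1e3:.1f} ms")
print(f"Weber number We            ~ {We:.2f}")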
Drop impact
[ "Chemistry", "Engineering" ]
1,443
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
41,281,214
https://en.wikipedia.org/wiki/Strontian%20process
The strontian process is an obsolete chemical method to recover sugar from molasses. Its use in Europe peaked in the middle of the 19th century. The name strontian comes from the Scottish village of Strontian, where the source mineral strontianite (strontium carbonate) was first found. Chemistry Strontium carbonate is a recycled coreactant in this process. Strontium carbonate is calcined with carbon in the presence of steam to form strontium hydroxide. The strontium and the carbon dioxide formed are rejoined later in the process, forming strontium carbonate once again. SrCO3 + C + H2O + O2 = Sr(OH)2 + 2 CO2 In a molasses solution kept near 100 °C, the hydroxide reacts with soluble sugars to form water and the poorly soluble strontium saccharate, which is filtered out but kept awash in near-boiling water. Sr(OH)2 + 2C12H22O11 = SrO(C12H22O11)2 + H2O The saccharate liquid is cooled to 10 °C, cracking off one of the sugars: SrO(C12H22O11)2 = SrO(C12H22O11) + C12H22O11 The carbon dioxide (from the calcination) is bubbled through the saccharate solution, cracking off the second sugar and reforming the strontium carbonate, which is filtered off. SrO(C12H22O11) + CO2 = SrCO3 + C12H22O11 The sugar is then extracted by evaporating the remaining solution. There are two types of strontium saccharate: one formed at low temperature, the strontium monosaccharate, and the second at high temperature, the strontium disaccharate. History Molasses is the first-stage output of several different sugar production processes, and contains more than 50% sugar. The French chemists Hippolyte Leplay and Augustin-Pierre Dubrunfaut developed a process for extracting sugar from molasses by reacting it with barium oxide to give the insoluble barium saccharates. In 1849, they expanded their patent to include strontium salts. Apparently, the sole purpose of this patent application was to legally secure the so-called baryte process, since the strontian process as described by Leplay and Dubrunfaut probably would not have worked. Only later, through the work of Carl Scheibler (patents dated 1881, 1882, and 1883), was it possible to apply the strontian process on an industrial basis. According to Scheibler, the procedure must be carried out at boiling temperatures. Repercussion in Germany The Scheibler procedure came into use in the Dessauer Sugar Refinery (in Dessau) through Emil Fleischer. In the Münsterland region, its arrival caused a "gold fever" of strontianite mining. One of the biggest mines, at Drensteinfurt, was named after Dr. Reichardt, the director of the Dessauer Sugar Refinery. Another place where the strontian process came to be used was the Sugar Factory Rositz (in Rositz). Yet by 1883, the demand for strontianite had begun to shrink. First, it was replaced by another strontium mineral (celestine), which could be imported more cheaply from England. Second, sugar prices decreased so much that production from molasses was no longer worthwhile. Literature (further reading) Börnchen, Martin: Strontianit, Exhibition guide from the University Library of the Free University of Berlin, 2005 (PDF; 6,5 MB). In German. Heriot, T. H. P.: The Manufacture of Sugar from the Cane and Beet, Green and Company, 1920, pp. 341–342 (archive online). Krause, G.: Der Schiedsspruch in Sachen des Scheibler'schen Monostrontiumsaccharat-Patentes, Chemiker Zeitung, nr. 32, 19th April, 1885, (PDF; 4,94 MB).
In German. References Chemical processes Industrial processes Catalysis Strontium Strontium minerals History of sugar Sugar production
Strontian process
[ "Chemistry" ]
916
[ "Catalysis", "Chemical processes", "nan", "Chemical process engineering", "Chemical kinetics" ]
41,282,920
https://en.wikipedia.org/wiki/Flow%20cytometry%20bioinformatics
Flow cytometry bioinformatics is the application of bioinformatics to flow cytometry data, which involves storing, retrieving, organizing and analyzing flow cytometry data using extensive computational resources and tools. Flow cytometry bioinformatics requires extensive use of and contributes to the development of techniques from computational statistics and machine learning. Flow cytometry and related methods allow the quantification of multiple independent biomarkers on large numbers of single cells. The rapid growth in the multidimensionality and throughput of flow cytometry data, particularly in the 2000s, has led to the creation of a variety of computational analysis methods, data standards, and public databases for the sharing of results. Computational methods exist to assist in the preprocessing of flow cytometry data, identifying cell populations within it, matching those cell populations across samples, and performing diagnosis and discovery using the results of previous steps. For preprocessing, this includes compensating for spectral overlap, transforming data onto scales conducive to visualization and analysis, assessing data for quality, and normalizing data across samples and experiments. For population identification, tools are available to aid traditional manual identification of populations in two-dimensional scatter plots (gating), to use dimensionality reduction to aid gating, and to find populations automatically in higher-dimensional space in a variety of ways. It is also possible to characterize data in more comprehensive ways, such as the density-guided binary space partitioning technique known as probability binning, or by combinatorial gating. Finally, diagnosis using flow cytometry data can be aided by supervised learning techniques, and discovery of new cell types of biological importance by high-throughput statistical methods, as part of pipelines incorporating all of the aforementioned methods. Open standards, data and software are also key parts of flow cytometry bioinformatics. Data standards include the widely adopted Flow Cytometry Standard (FCS) defining how data from cytometers should be stored, but also several new standards under development by the International Society for Advancement of Cytometry (ISAC) to aid in storing more detailed information about experimental design and analytical steps. Open data is slowly growing with the opening of the CytoBank database in 2010, and FlowRepository in 2012, both of which allow users to freely distribute their data, and the latter of which has been recommended as the preferred repository for MIFlowCyt-compliant data by ISAC. Open software is most widely available in the form of a suite of Bioconductor packages, but is also available for web execution on the GenePattern platform. Data collection Flow cytometers operate by hydrodynamically focusing suspended cells so that they separate from each other within a fluid stream. The stream is interrogated by one or more lasers, and the resulting fluorescent and scattered light is detected by photomultipliers. By using optical filters, particular fluorophores on or within the cells can be quantified by peaks in their emission spectra. These may be endogenous fluorophores such as chlorophyll or transgenic green fluorescent protein, or they may be artificial fluorophores covalently bonded to detection molecules such as antibodies for detecting proteins, or hybridization probes for detecting DNA or RNA. 
The ability to quantify these has led to flow cytometry being used in a wide range of applications, including but not limited to: Monitoring of CD4 count in HIV Diagnosis of various cancers Analysis of aquatic microbiomes Sperm sorting Measuring telomere length Until the early 2000s, flow cytometry could only measure a few fluorescent markers at a time. Through the late 1990s into the mid-2000s, however, rapid development of new fluorophores resulted in modern instruments capable of quantifying up to 18 markers per cell. More recently, the new technology of mass cytometry replaces fluorophores with rare-earth elements detected by time of flight mass spectrometry, achieving the ability to measure the expression of 34 or more markers. At the same time, microfluidic qPCR methods are providing a flow cytometry-like method of quantifying 48 or more RNA molecules per cell. The rapid increase in the dimensionality of flow cytometry data, coupled with the development of high-throughput robotic platforms capable of assaying hundreds to thousands of samples automatically have created a need for improved computational analysis methods. Data Flow cytometry data is in the form of a large matrix of intensities over M wavelengths by N events. Most events will be a particular cell, although some may be doublets (pairs of cells which pass the laser closely together). For each event, the measured fluorescence intensity over a particular wavelength range is recorded. The measured fluorescence intensity indicates the amount of that fluorophore in the cell, which indicates the amount that has bound to detector molecules such as antibodies. Therefore, fluorescence intensity can be considered a proxy for the amount of detector molecules present on the cell. A simplified, if not strictly accurate, way of considering flow cytometry data is as a matrix of M measurements times N cells where each element corresponds to the amounts of molecules. Steps in computational flow cytometry data analysis The process of moving from primary FCM data to disease diagnosis and biomarker discovery involves four major steps: Data pre-processing (including compensation, transformation and normalization) Cell population identification (a.k.a. gating) Cell population matching for cross sample comparison Relating cell populations to external variables (diagnosis and discovery) Saving of the steps taken in a particular flow cytometry workflow is supported by some flow cytometry software, and is important for the reproducibility of flow cytometry experiments. However, saved workspace files are rarely interchangeable between software. An attempt to solve this problem is the development of the Gating-ML XML-based data standard (discussed in more detail under the standards section), which is slowly being adopted in both commercial and open source flow cytometry software. The CytoML R package is also filling the gap by importing/exporting the Gating-ML that is compatible with FlowJo, CytoBank and FACS Diva softwares. Data pre-processing Prior to analysis, flow cytometry data must typically undergo pre-processing to remove artifacts and poor quality data, and to be transformed onto an optimal scale for identifying cell populations of interest. Below are various steps in a typical flow cytometry preprocessing pipeline. Compensation When more than one fluorochrome is used with the same laser, their emission spectra frequently overlap. 
Each particular fluorochrome is typically measured using a bandpass optical filter set to a narrow band at or near the fluorochrome's emission intensity peak. The result is that the reading for any given fluorochrome is actually the sum of that fluorochrome's peak emission intensity, and the intensity of all other fluorochromes' spectra where they overlap with that frequency band. This overlap is termed spillover, and the process of removing spillover from flow cytometry data is called compensation. Compensation is typically accomplished by running a series of representative samples each stained for only one fluorochrome, to give measurements of the contribution of each fluorochrome to each channel. The total signal to remove from each channel can be computed by solving a system of linear equations based on this data to produce a spillover matrix, which when inverted and multiplied with the raw data from the cytometer produces the compensated data. The processes of computing the spillover matrix, or applying a precomputed spillover matrix to compensate flow cytometry data, are standard features of flow cytometry software. Transformation Cell populations detected by flow cytometry are often described as having approximately log-normal expression. As such, they have traditionally been transformed to a logarithmic scale. In early cytometers, this was often accomplished even before data acquisition by use of a log amplifier. On modern instruments, data is usually stored in linear form, and transformed digitally prior to analysis. However, compensated flow cytometry data frequently contains negative values due to compensation, and cell populations do occur which have low means and normal distributions. Logarithmic transformations cannot properly handle negative values, and poorly display normally distributed cell types. Alternative transformations which address this issue include the log-linear hybrid transformations Logicle and Hyperlog, as well as the hyperbolic arcsine and the Box–Cox. A comparison of commonly used transformations concluded that the biexponential and Box–Cox transformations, when optimally parameterized, provided the clearest visualization and least variance of cell populations across samples. However, a later comparison of the flowTrans package used in that comparison indicated that it did not parameterize the Logicle transformation in a manner consistent with other implementations, potentially calling those results into question. Quality control Particularly in newer, high-throughput experiments, there is a need for visualization methods to help detect technical errors in individual samples. One approach is to visualize summary statistics, such as the empirical distribution functions of single dimensions of technical or biological replicates to ensure they are the similar. For more rigor, the Kolmogorov–Smirnov test can be used to determine if individual samples deviate from the norm. The Grubbs's test for outliers may be used to detect samples deviating from the group. A method for quality control in higher-dimensional space is to use probability binning with bins fit to the whole data set pooled together. Then the standard deviation of the number of cells falling in the bins within each sample can be taken as a measure of multidimensional similarity, with samples that are closer to the norm having a smaller standard deviation. 
With this method, higher standard deviation can indicate outliers, although this is a relative measure as the absolute value depends partly on the number of bins. With all of these methods, the cross-sample variation is being measured. However, this is the combination of technical variations introduced by the instruments and handling, and actual biological information that is desired to be measured. Disambiguating the technical and the biological contributions to between-sample variation can be a difficult to impossible task. Normalization Particularly in multi-centre studies, technical variation can make biologically equivalent populations of cells difficult to match across samples. Normalization methods to remove technical variance, frequently derived from image registration techniques, are thus a critical step in many flow cytometry analyses. Single-marker normalization can be performed using landmark registration, in which peaks in a kernel density estimate of each sample are identified and aligned across samples. Identifying cell populations The complexity of raw flow cytometry data (dozens of measurements for thousands to millions of cells) makes answering questions directly using statistical tests or supervised learning difficult. Thus, a critical step in the analysis of flow cytometric data is to reduce this complexity to something more tractable while establishing common features across samples. This usually involves identifying multidimensional regions that contain functionally and phenotypically homogeneous groups of cells. This is a form of cluster analysis. There are a range of methods by which this can be achieved, detailed below. Gating The data generated by flow-cytometers can be plotted in one or two dimensions to produce a histogram or scatter plot. The regions on these plots can be sequentially separated, based on fluorescence intensity, by creating a series of subset extractions, termed "gates". These gates can be produced using software, e.g. FlowJo, FCS Express, WinMDI, CytoPaint (aka Paint-A-Gate), VenturiOne, Cellcion, CellQuest Pro, Cytospec, Kaluza. or flowCore. In datasets with a low number of dimensions and limited cross-sample technical and biological variability (e.g., clinical laboratories), manual analysis of specific cell populations can produce effective and reproducible results. However, exploratory analysis of a large number of cell populations in a high-dimensional dataset is not feasible. In addition, manual analysis in less controlled settings (e.g., cross-laboratory studies) can increase the overall error rate of the study. In one study, several computational gating algorithms performed better than manual analysis in the presence of some variation. However, despite the considerable advances in computational analysis, manual gating remains the main solution for the identification of specific rare cell populations that are not well-separated from other cell types. Gating guided by dimension reduction The number of scatter plots that need to be investigated increases with the square of the number of markers measured (or faster since some markers need to be investigated several times for each group of cells to resolve high-dimensional differences between cell types that appear to be similar in most markers). To address this issue, principal component analysis has been used to summarize the high-dimensional datasets using a combination of markers that maximizes the variance of all data points. 
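The pre-processing and gating ideas described above can be strung together in a few lines of array code. The Python/NumPy sketch below applies a hypothetical spillover matrix and its inverse (compensation), moves the data onto a hyperbolic arcsine scale, projects the events onto two principal components, and counts the events falling inside a simple rectangular gate. Every number in it (the spillover fractions, the cofactor of 150, the gate boundaries) is an illustrative assumption rather than a value from any real instrument or published workflow.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic "true" signals for four markers: a large dim population and a smaller bright one.
dim_cells = np.abs(rng.normal(loc=[300.0, 200.0, 250.0, 150.0], scale=50.0, size=(800, 4)))
bright_cells = np.abs(rng.normal(loc=[4000.0, 2500.0, 3000.0, 2000.0], scale=400.0, size=(200, 4)))
true_signal = np.vstack([dim_cells, bright_cells])

# Hypothetical spillover matrix (rows: stains, columns: detectors); this is what the instrument records.
spillover = np.array([
    [1.00, 0.10, 0.02, 0.00],
    [0.05, 1.00, 0.08, 0.01],
    [0.00, 0.06, 1.00, 0.10],
    [0.00, 0.00, 0.04, 1.00],
])
raw = true_signal @ spillover

# Compensation: multiply by the inverse of the spillover matrix.
compensated = raw @ np.linalg.inv(spillover)

# Transformation: hyperbolic arcsine with an illustrative cofactor (log-like, tolerates negatives).
transformed = np.arcsinh(compensated / 150.0)

# Dimensionality reduction: project onto the two leading principal components.
centered = transformed - transformed.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(centered, rowvar=False))
pcs = eigvecs[:, ::-1][:, :2].copy()
if pcs[:, 0].sum() < 0:              # fix the arbitrary sign of the first component
    pcs[:, 0] = -pcs[:, 0]
projected = centered @ pcs

# Gating: a rectangular region in the reduced space, expressed as a boolean mask over events.
gate = (projected[:, 0] > 1.0) & (np.abs(projected[:, 1]) < 2.0)
print(f"{gate.sum()} of {len(true_signal)} events fall inside the gate")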
However, PCA is a linear method and is not able to preserve complex and non-linear relationships. More recently, two dimensional minimum spanning tree layouts have been used to guide the manual gating process. Density-based down-sampling and clustering was used to better represent rare populations and control the time and memory complexity of the minimum spanning tree construction process. More sophisticated dimension reduction algorithms are yet to be investigated. Automated gating Developing computational tools for identification of cell populations has been an area of active research only since 2008. Many individual clustering approaches have recently been developed, including model-based algorithms (e.g., flowClust and FLAME), density based algorithms (e.g. FLOCK and SWIFT, graph-based approaches (e.g. SamSPECTRAL) and most recently, hybrids of several approaches (flowMeans and flowPeaks). These algorithms are different in terms of memory and time complexity, their software requirements, their ability to automatically determine the required number of cell populations, and their sensitivity and specificity. The FlowCAP (Flow Cytometry: Critical Assessment of Population Identification Methods) project, with active participation from most academic groups with research efforts in the area, is providing a way to objectively cross-compare state-of-the-art automated analysis approaches. Other surveys have also compared automated gating tools on several datasets. Probability binning methods Probability binning is a non-gating analysis method in which flow cytometry data is split into quantiles on a univariate basis. The locations of the quantiles can then be used to test for differences between samples (in the variables not being split) using the chi-squared test. This was later extended into multiple dimensions in the form of frequency difference gating, a binary space partitioning technique where data is iteratively partitioned along the median. These partitions (or bins) are fit to a control sample. Then the proportion of cells falling within each bin in test samples can be compared to the control sample by the chi squared test. Finally, cytometric fingerprinting uses a variant of frequency difference gating to set bins and measure for a series of samples how many cells fall within each bin. These bins can be used as gates and used for subsequent analysis similarly to automated gating methods. Combinatorial gating High-dimensional clustering algorithms are often unable to identify rare cell types that are not well separated from other major populations. Matching these small cell populations across multiple samples is even more challenging. In manual analysis, prior biological knowledge (e.g., biological controls) provides guidance to reasonably identify these populations. However, integrating this information into the exploratory clustering process (e.g., as in semi-supervised learning) has not been successful. An alternative to high-dimensional clustering is to identify cell populations using one marker at a time and then combine them to produce higher-dimensional clusters. This functionality was first implemented in FlowJo. The flowType algorithm builds on this framework by allowing the exclusion of the markers. This enables the development of statistical tools (e.g. RchyOptimyx) that can investigate the importance of each marker and exclude high-dimensional redundancies. 
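In the spirit of the model-based approaches mentioned above (tools such as flowClust fit mixture models to the event cloud), the sketch below uses scikit-learn's Gaussian mixture implementation on synthetic four-marker data and selects the number of populations by the Bayesian information criterion. It is a generic illustration of the idea, not the actual algorithm of any of the named tools, and the simulated populations are arbitrary.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)

# Synthetic four-marker data drawn from three "cell populations".
populations = [
    rng.normal(loc=[1.0, 1.0, 4.0, 1.0], scale=0.3, size=(500, 4)),
    rng.normal(loc=[4.0, 1.0, 1.0, 1.0], scale=0.3, size=(300, 4)),
    rng.normal(loc=[4.0, 4.0, 4.0, 4.0], scale=0.3, size=(200, 4)),
]
events = np.vstack(populations)

# Model-based automated gating: fit Gaussian mixtures and pick the number of
# components by the Bayesian information criterion (BIC).
best_model, best_bic = None, np.inf
for k in range(1, 7):
    gmm = GaussianMixture(n_components=k, covariance_type="full", random_state=0).fit(events)
    bic = gmm.bic(events)
    if bic < best_bic:
        best_model, best_bic = gmm, bic

labels = best_model.predict(events)
print("selected number of populations:", best_model.n_components)
print("events per population:", np.bincount(labels))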
Diagnosis and discovery After identification of the cell population of interest, a cross sample analysis can be performed to identify phenotypical or functional variations that are correlated with an external variable (e.g., a clinical outcome). These studies can be partitioned into two main groups: Diagnosis In these studies, the goal usually is to diagnose a disease (or a sub-class of a disease) using variations in one or more cell populations. For example, one can use multidimensional clustering to identify a set of clusters, match them across all samples, and then use supervised learning to construct a classifier for prediction of the classes of interest (e.g., this approach can be used to improve the accuracy of the classification of specific lymphoma subtypes). Alternatively, all the cells from the entire cohort can be pooled into a single multidimensional space for clustering before classification. This approach is particularly suitable for datasets with a high amount of biological variation (in which cross-sample matching is challenging) but requires technical variations to be carefully controlled. Discovery In a discovery setting, the goal is to identify and describe cell populations correlated with an external variable (as opposed to the diagnosis setting in which the goal is to combine the predictive power of multiple cell types to maximize the accuracy of the results). Similar to the diagnosis use-case, cluster matching in high-dimensional space can be used for exploratory analysis but the descriptive power of this approach is very limited, as it is hard to characterize and visualize a cell population in a high-dimensional space without first reducing the dimensionality. Finally, combinatorial gating approaches have been particularly successful in exploratory analysis of FCM data. Simplified Presentation of Incredibly Complex Evaluations (SPICE) is a software package that can use the gating functionality of FlowJo to statistically evaluate a wide range of different cell populations and visualize those that are correlated with the external outcome. flowType and RchyOptimyx (as discussed above) expand this technique by adding the ability of exploring the impact of independent markers on the overall correlation with the external outcome. This enables the removal of unnecessary markers and provides a simple visualization of all identified cell types. In a recent analysis of a large (n=466) cohort of HIV+ patients, this pipeline identified three correlates of protection against HIV, only one of which had been previously identified through extensive manual analysis of the same dataset. Data formats and interchange Flow Cytometry Standard Flow Cytometry Standard (FCS) was developed in 1984 to allow recording and sharing of flow cytometry data. Since then, FCS became the standard file format supported by all flow cytometry software and hardware vendors. The FCS specification has traditionally been developed and maintained by the International Society for Advancement of Cytometry (ISAC). Over the years, updates were incorporated to adapt to technological advancements in both flow cytometry and computing technologies with FCS 2.0 introduced in 1990, FCS 3.0 in 1997, and the most current specification FCS 3.1 in 2010. FCS used to be the only widely adopted file format in flow cytometry. Recently, additional standard file formats have been developed by ISAC. netCDF ISAC is considering replacing FCS with a flow cytometry specific version of the Network Common Data Form (netCDF) file format. 
netCDF is a set of freely available software libraries and machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data. In 2008, ISAC drafted the first version of netCDF conventions for storage of raw flow cytometry data. Archival Cytometry Standard (ACS) The Archival Cytometry Standard (ACS) is being developed to bundle data with different components describing cytometry experiments. It captures relations among data, metadata, analysis files and other components, and includes support for audit trails, versioning and digital signatures. The ACS container is based on the ZIP file format with an XML-based table of contents specifying relations among files in the container. The XML Signature W3C Recommendation has been adopted to allow for digital signatures of components within the ACS container. An initial draft of ACS was designed in 2007 and finalized in 2010. Since then, ACS support has been introduced in several software tools including FlowJo and Cytobank. Gating-ML The lack of gating interoperability has traditionally been a bottleneck preventing reproducibility of flow cytometry data analysis and the usage of multiple analytical tools. To address this shortcoming, ISAC developed Gating-ML, an XML-based mechanism to formally describe gates and related data (scale) transformations. The draft recommendation version of Gating-ML was approved by ISAC in 2008, and it is partially supported by tools like FlowJo, the flowUtils and CytoML libraries in R/BioConductor, and FlowRepository. It supports rectangular gates, polygon gates, convex polytopes, ellipsoids, decision trees and Boolean collections of any of the other types of gates. In addition, it includes dozens of built-in public transformations that have been shown to be potentially useful for display or analysis of cytometry data. In 2013, Gating-ML version 2.0 was approved by ISAC's Data Standards Task Force as a Recommendation. This new version offers slightly less flexibility in terms of the power of gating description; however, it is also significantly easier to implement in software tools. Classification Results (CLR) The Classification Results (CLR) File Format has been developed to exchange the results of manual gating and algorithmic classification approaches in a standard way in order to be able to report and process the classification. CLR is based on the commonly supported CSV file format, with columns corresponding to different classes and cell values containing the probability of an event being a member of a particular class. These are captured as values between 0 and 1. Simplicity of the format and its compatibility with common spreadsheet tools have been the major requirements driving the design of the specification. Although it was originally designed for the field of flow cytometry, it is applicable in any domain that needs to capture either fuzzy or unambiguous classifications of virtually any kind of object. Public data and software As in other bioinformatics fields, development of new methods has primarily taken the form of free open source software, and several databases have been created for depositing open data. AutoGate AutoGate performs compensation, gating, cluster preview, exhaustive projection pursuit (EPP), multidimensional scaling and phenogram construction, and produces a visual dendrogram to express high-dimensional (HiD) readiness. It is free to researchers and clinicians at academic, government, and non-profit institutions.
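Referring back to the CLR format described above, its verbal specification (a CSV table with one column per class and per-event membership probabilities between 0 and 1) is simple enough to emit with the Python standard library. The class names and probabilities below are purely illustrative, and this sketch is not an official reference implementation of the standard.

import csv

# Per-event class-membership probabilities (illustrative values between 0 and 1).
events = [
    {"T cells": 0.97, "B cells": 0.02, "Monocytes": 0.01},
    {"T cells": 0.10, "B cells": 0.85, "Monocytes": 0.05},
    {"T cells": 0.00, "B cells": 0.00, "Monocytes": 1.00},
]

# Write a CLR-style CSV: one column per class, one row per event.
with open("classification_results.csv", "w", newline="") as handle:
    writer = csv.DictWriter(handle, fieldnames=["T cells", "B cells", "Monocytes"])
    writer.writeheader()
    writer.writerows(events)

# The same file can be read back into a spreadsheet or analysis environment for reporting.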
Bioconductor The Bioconductor project is a repository of free open source software, mostly written in the R programming language. As of July 2013, Bioconductor contained 21 software packages for processing flow cytometry data. These packages cover most of the range of functionality described earlier in this article. GenePattern GenePattern is a predominantly genomic analysis platform with over 200 tools for analysis of gene expression, proteomics, and other data. A web-based interface provides easy access to these tools and allows the creation of automated analysis pipelines enabling reproducible research. Recently, a GenePattern Flow Cytometry Suite has been developed in order to bring advanced flow cytometry data analysis tools to experimentalists without programmatic skills. It contains close to 40 open source GenePattern flow cytometry modules covering methods from basic processing of flow cytometry standard (i.e., FCS) files to advanced algorithms for automated identification of cell populations, normalization and quality assessment. Internally, most of these modules leverage functionality developed in BioConductor. Much of the functionality of the Bioconductor packages for flow cytometry analysis has been packaged up for use with the GenePattern workflow system, in the form of the GenePattern Flow Cytometry Suite. FACSanadu FACSanadu is an open source portable application for visualization and analysis of FCS data. Unlike Bioconductor, it is an interactive program aimed at non-programmers for routine analysis. It supports standard FCS files as well as COPAS profile data. hema.to hema.to is a web service for the classification of flow cytometry data of patients suspected to have lymphoma. The artificial intelligence within the tool uses a deep convolutional neural network to recognize patterns of distinct subtypes. All data and code is open access. It processes raw data, which makes gating unnecessary. For best performance on new data, fine tuning by knowledge transfer is required. Public databases The Minimum Information about a Flow Cytometry Experiment (MIFlowCyt), requires that any flow cytometry data used in a publication be available, although this does not include a requirement that it be deposited in a public database. Thus, although the journals Cytometry Part A and B, as well as all journals from the Nature Publishing Group require MIFlowCyt compliance, there is still relatively little publicly available flow cytometry data. Some efforts have been made towards creating public databases, however. Firstly, CytoBank, which is a complete web-based flow cytometry data storage and analysis platform, has been made available to the public in a limited form. Using the CytoBank code base, FlowRepository was developed in 2012 with the support of ISAC to be a public repository of flow cytometry data. FlowRepository facilitates MIFlowCyt compliance, and as of July 2013 contained 65 public data sets. Datasets In 2012, the flow cytometry community has started to release a set of publicly available datasets. A subset of these datasets representing the existing data analysis challenges is described below. For comparison against manual gating, the FlowCAP-I project has released five datasets, manually gated by human analysts, and two of them gated by eight independent analysts. The FlowCAP-II project included three datasets for binary classification and also reported several algorithms that were able to classify these samples perfectly. 
FlowCAP-III included two larger datasets for comparison against manual gates as well as one more challenging sample classification dataset. As of March 2013, public release of FlowCAP-III was still in progress. The datasets used in FlowCAP-I, II, and III either have a low number of subjects or parameters. However, recently several more complex clinical datasets have been released including a dataset of 466 HIV-infected subjects, which provides both 14 parameter assays and sufficient clinical information for survival analysis. Another class of datasets are higher-dimensional mass cytometry assays. A representative of this class of datasets is a study which includes analysis of two bone marrow samples using more than 30 surface or intracellular markers under a wide range of different stimulations. The raw data for this dataset is publicly available as described in the manuscript, and manual analyses of the surface markers are available upon request from the authors. Open problems Despite rapid development in the field of flow cytometry bioinformatics, several problems remain to be addressed. Variability across flow cytometry experiments arises from biological variation among samples, technical variations across instruments used, as well as methods of analysis. In 2010, a group of researchers from Stanford University and the National Institutes of Health pointed out that while technical variation can be ameliorated by standardizing sample handling, instrument setup and choice of reagents, solving variation in analysis methods will require similar standardization and computational automation of gating methods. They further opined that centralization of both data and analysis could aid in decreasing variability between experiments and in comparing results. This was echoed by another group of Pacific Biosciences and Stanford University researchers, who suggested that cloud computing could enable centralized, standardized, high-throughput analysis of flow cytometry experiments. They also emphasised that ongoing development and adoption of standard data formats could continue to aid in reducing variability across experiments. They also proposed that new methods will be needed to model and summarize results of high-throughput analysis in ways that can be interpreted by biologists, as well as ways of integrating large-scale flow cytometry data with other high-throughput biological information, such as gene expression, genetic variation, metabolite levels and disease states. See also Flow cytometry Bioinformatics Proteomics Flow Cytometry Standard References Flow cytometry Bioinformatics
Flow cytometry bioinformatics
[ "Chemistry", "Engineering", "Biology" ]
6,060
[ "Bioinformatics", "Biological engineering", "Flow cytometry" ]
47,243,902
https://en.wikipedia.org/wiki/Franklin%27s%20electrostatic%20machine
Franklin's electrostatic machine is a high-voltage static electricity-generating device used by Benjamin Franklin in the mid-18th century for research into electrical phenomena. Its key components are a glass globe which turned on an axis via a crank, a cloth pad in contact with the spinning globe, a set of metal needles to conduct away the charge developed on the globe by its friction with the pad, and a Leyden jar (a high-voltage capacitor) to accumulate the charge. Franklin's experiments with the machine eventually led to new theories about electricity and to the invention of the lightning rod. Background Franklin was not the first to build an electrostatic generator. European scientists developed machines to generate static electricity decades earlier. In 1663, Otto von Guericke generated static electricity with a device that used a sphere of sulfur. Francis Hauksbee developed a more advanced electrostatic generator around 1704 using a glass bulb that contained a vacuum. He later replaced the globe with a glass tube emptied of air. The glass tube was a less effective static generator than the globe, but it became more popular because it was easier to use. Machines that generated static electricity with a glass disc were popular and widespread in Europe by 1740. In 1745, German cleric Ewald Georg von Kleist and Dutch scientist Pieter van Musschenbroek discovered independently that the electric charge from these machines could be stored in a Leyden jar, named after the city of Leiden in the Netherlands. In 1745, Peter Collinson, a businessman from London who corresponded with American and European scientists, donated a German "glass tube", along with instructions on how to make static electricity, to Franklin's Library Company of Philadelphia. Collinson was the library's London agent and provided the latest technology news from Europe. Franklin wrote a letter to Collinson on March 28, 1747, thanking him, and saying the tube and instructions had motivated several colleagues and him to begin serious experiments with electricity. In 1746, Franklin began working on electrical experiments with Ebenezer Kinnersley after buying all of the electrical equipment Archibald Spencer had used in his lectures. Later, he was also associated with Thomas Hopkinson and Philip Syng in experimentation with electricity. In the summer of 1747 they had received an electrical system from Thomas Penn. While no records exist to tell exactly what parts were included in the system, historian J. A. Leo LeMay believes it was a combination of an electricity generating machine, a Leyden jar, a glass tube, and a stool that was electrically insulated from the ground. This gave Franklin a complete system to experiment with generating and storing electricity. When amber, sulfur, or glass is rubbed with certain materials, it produces electrical effects. Franklin theorized that this "electrical fire" was somehow collected from the other material, and not produced by the friction on the object. He decided to retire early from his printing business, still in his early forties, to spend more time studying electricity. In 1748, Franklin turned over his entire printing business to his partner David Hall. He moved into a new Philadelphia home with his wife, where he built a laboratory to conduct experiments and research new electrical theories. Franklin experimented not only with the electrostatic machine with the glass globe, but also with the Leyden jar.
He kept a detailed journal of his research in a diary called "Electrical Minutes" that has since been lost. Franklin's machine was given to the Library Company of Philadelphia by Franklin's grandson in 1792, and is currently on display at the Franklin Institute. Description Franklin's machine used a belt and pulley system that could be operated by one person turning a crank. A large pulley was attached to the crank handle, and a much smaller pulley was attached to a large glass globe. An iron axle passed through the globe. This allowed the globe to be rotated at high speed. When the crank was turned, the glass globe rubbed against a leather pad, which generated a large static charge, similar to the electrical charge that could be created by rubbing a glass tube with wool cloth by hand. The machine was a unique improvement over others made in Europe at the time, as the glass globe could be spun faster with much less labor. A few revolutions of the handle were all that were needed to charge a Leyden jar. The electricity produced by the machine, in the form of sparks, passed through a set of metal needles positioned close to the spinning globe. The electric charge continued passing through a beaded iron chain, which acted as a conductor, to a Leyden jar that received the electricity. Franklin called the sparks produced by the machine "electrical fire". The glass globes, known as "electerizing globes", were made of glass that was scientifically designed to produce static electricity effectively. Franklin specified the materials to be used in the glass formula, and the globes were manufactured by Caspar Wistar, a close associate of Franklin. Wistarburgh Glass Works also made scientific glass for the Leyden jars Franklin used in the 1750s. Electrical principles Franklin's experiments with Leyden jars progressed to connecting several Leyden jars together in a series, with "one hanging on the tail of the other". All of the jars in the series could be charged simultaneously, which multiplied the electrical effect. A similar apparatus had been created earlier by Daniel Gralath. Franklin called this device an "electrical battery", but that term later came to have a different meaning, referring instead to a set of one or more galvanic cells. At that time, the word "battery" was a military term for a group of cannons. Franklin was the first to apply the terms "positive" and "negative" to electricity. Through his research, Franklin was among the first to demonstrate the electrical principle of conservation of charge in 1747; a similar discovery was made independently in 1746 by William Watson. Franklin wrote detailed letters and documents about his experiments with the electrostatic machine and Leyden jars. In 1749, Franklin made a list of several ways in which lightning was similar to electricity. He concluded that lightning was essentially nothing more than giant electric sparks, similar to the sparks from the static charges produced by his electrostatic machine. He referred to static electricity as "electric fire", "electric matter", or "electric fluid". The term "electric fluid" was based on the idea that a jar could be filled and refilled when it became empty. That led to the revolutionary idea of "electrical fire" as a type of motion or current flow rather than a type of explosion. Several 18th-century electric terms were derived from his name.
For example, static electricity was known as "Franklin current", and "Franklinization" was a form of electrotherapy in which Franklin shocked patients with strong static charges to treat various illnesses. Lightning rod invention Franklin invented the lightning rod based on what he learned from experiments with his electrostatic machine. Franklin and his associates observed that pointed objects were more effective than blunt objects at "drawing off" and "throwing off" sparks from static electricity. This discovery was first reported by Hopkinson. Franklin wondered if this discovery could be used in a practical invention. He thought something could be made to attract the electricity out of storm clouds, but first he had to verify that lightning bolts really are giant electric sparks. He wrote Collinson and Cadwallader Colden letters about this theory, and he described the kite experiment in the October 19, 1752 issue of the Pennsylvania Gazette. (Tom Tucker of Isothermal Community College doubts the account, however, because of ambiguities in it, and points that out in his book Bolt of Fate: Benjamin Franklin and his Electric Kite Hoax. Others disagree with this view, arguing that Franklin would not make up such a fake story because he valued the integrity of the scientific community.) To test his theory, Franklin proposed a potentially deadly experiment, to be performed during an electrical storm, where a person would stand on an insulated stool inside a sentry box, and hold out a long, pointed iron rod to attract a lightning bolt. A similar but less dangerous version of this experiment was first performed successfully in France on May 10, 1752, and later repeated several more times throughout Europe, though after a fatality in 1753 it was less frequently tried. Franklin declared that this "sentry-box experiment" showed that lightning and electricity were one and the same. Franklin realized that wooden buildings could be protected from lightning strikes, and the deadly fires that often resulted, by placing a pointed iron rod on a rooftop, with the other end of the rod placed deep into the ground. The sharp point of the lightning rod would attract the electrical discharge from the cloud, and the lightning bolt would hit the iron rod instead of the wooden building. The electric charge from the lightning would flow through the rod directly into the earth, bypassing the structure, and preventing a fire. Franklin's friend Kinnersley traveled throughout the eastern United States in the 1750s demonstrating man-made "lightning" on model thunder houses to show how an iron rod placed into the ground would protect a wooden structure. He explained that lightning followed the same principles as the sparks from Franklin's electrostatic machine. These lectures by Kinnersley were widely advertised, and were one of the ways Franklin's lightning rod was demonstrated to the general public. Legacy Franklin distributed copies of the electrostatic machine to many of his close associates to encourage them to study electricity. Between 1747 and 1750, Franklin sent many letters to his friend Collinson in London about his experiments with the electrostatic machine and the Leyden jar, including his observations and theories on the principles of electricity. These letters were collected and published in 1751 in a book entitled Experiments and Observations on Electricity.
While Joseph Priestley was writing about the history of electricity, Franklin encouraged him to use an electrostatic machine to perform the experiments he was writing about. Priestley designed and used his own variations of Franklin's machine. While replicating the electrical experiments, Priestley encountered unanswered questions that prompted him to design additional experiments, leading to further discoveries. In 1767, he published a 700-page book on his findings called The History and Present State of Electricity. Eighteenth-century scientific laboratories usually contained some form of hand-operated electrostatic machine. Italian scientist Luigi Galvani had an electrostatic generator in his laboratory, where experiments with frog legs led him to conclude that animals generated a vital force, an animal electricity. Another Italian scientist, Alessandro Volta, disagreed with Galvani's claim that the electrical effects were due to something peculiar to living matter, and he demonstrated that electricity can be generated merely by placing wet, salty material in between two different metals. This led directly to the invention of the first practical electric battery, the voltaic pile. After Franklin's death, two iconic artifacts from his research, the original "battery" of Leyden jars, and the "glass tube" that was a gift from Collinson in 1747, were given to the Royal Society in 1836 by Thomas Hopkinson's grandson Joseph Hopkinson, in accordance with Franklin's will. See also Wistarburgh Glass Works Corbett's electrostatic machine Van de Graaff generator References Citations Sources External links Benjamin Franklin's electrical apparatus (electrostatic machine) at Smithsonian National Museum of American History The Amazing Adventures of Ben Franklin – Scientist & Inventor / Opposites Attract with picture of glass globe on top Franklin's Electrostatic Generator information and picture from University of Maryland Electrical and Computer Engineering Dept. Electrical generators Electrostatics Historical scientific instruments
Franklin's electrostatic machine
[ "Physics", "Technology" ]
2,334
[ "Physical systems", "Electrical generators", "Machines" ]
47,246,468
https://en.wikipedia.org/wiki/C11H18N4O3
The molecular formula C11H18N4O3 may refer to: Argpyrimidine, an advanced glycation end-product Imuracetam, a drug of the racetam family Molecular formulas
C11H18N4O3
[ "Physics", "Chemistry" ]
61
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
57,497,987
https://en.wikipedia.org/wiki/%2B%20h.c.
+ h.c. is an abbreviation for "plus the Hermitian conjugate"; it means that there are additional terms which are the Hermitian conjugates of all of the preceding terms, and is a convenient shorthand to omit half the terms actually present. Context and use The notation convention "+ h.c." is common in quantum mechanics in the context of writing out formulas for Lagrangians and Hamiltonians, which conventionally are both required to be Hermitian operators. The expression $X + \text{h.c.}$ means $X + X^{\dagger}$. The mathematics of quantum mechanics is based on complex numbers, whereas almost all observations (measurements) are only real numbers. Adding its own conjugate to an operator guarantees that the combination is Hermitian, which in turn guarantees that the combined operator's eigenvalues will be real numbers, suitable for predicting values of observations / measurements. Dagger and asterisk notation In the expressions above, $X^{\dagger}$ is used as the symbol for the Hermitian conjugate (also called the conjugate transpose) of $X$, defined as applying both the complex conjugate and the transpose transformations to the operator $X$, in any order. The dagger ($\dagger$) is an old notation in mathematics, but is still widespread in quantum mechanics. In mathematics (particularly linear algebra) the Hermitian conjugate of $X$ is commonly written as $X^{*}$, but in quantum mechanics the asterisk ($*$) notation is sometimes used for the complex conjugate only, and not the combined conjugate transpose (Hermitian conjugate). References Hamiltonian mechanics Quantum mechanics Lagrangian mechanics Abbreviations Operator theory
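A minimal worked example of how the shorthand expands (the coupling constant g and the operators a and b below are illustrative, not taken from the article):

```latex
% "+ h.c." replaces the Hermitian conjugates of all preceding terms.
% With an illustrative complex coupling g and operators a, b:
H = g\, a^{\dagger} b + \text{h.c.}
  \;\equiv\; g\, a^{\dagger} b + g^{*}\, b^{\dagger} a
% so that H^{\dagger} = H, i.e. the full expression is Hermitian and its
% eigenvalues are real.
```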
+ h.c.
[ "Physics", "Mathematics" ]
330
[ "Theoretical physics", "Classical mechanics", "Lagrangian mechanics", "Quantum mechanics", "Hamiltonian mechanics", "Dynamical systems" ]
57,498,426
https://en.wikipedia.org/wiki/Periodic%20table%20of%20topological%20insulators%20and%20topological%20superconductors
The periodic table of topological insulators and topological superconductors, also called the tenfold classification of topological insulators and superconductors, is an application of topology to condensed matter physics. It indicates the mathematical group for the topological invariant of the topological insulators and topological superconductors, given a dimension and discrete symmetry class. The ten possible discrete symmetry families are classified according to three main symmetries: particle-hole symmetry, time-reversal symmetry and chiral symmetry. The table was developed between 2008 and 2010 through the collaboration of Andreas P. Schnyder, Shinsei Ryu, Akira Furusaki and Andreas W. W. Ludwig, and independently by Alexei Kitaev. Overview This table applies to topological insulators and topological superconductors with an energy gap, when particle-particle interactions are excluded. The table is no longer valid when interactions are included. The topological insulators and superconductors are classified here in ten symmetry classes (A, AIII, AI, BDI, D, DIII, AII, CII, C, CI) named after the Altland–Zirnbauer classification, defined here by the properties of the system with respect to three operators: the time-reversal operator $T$, charge conjugation $C$ and chiral symmetry $S$. The symmetry classes are ordered according to the Bott clock (see below) so that the same values repeat in the diagonals. An X in the table of "Symmetries" indicates that the symmetry is broken, i.e. the Hamiltonian is not invariant under the given operator. A value of ±1 indicates the value of the operator squared for that system. The dimension indicates the dimensionality of the system: 1D (chain), 2D (plane) and 3D lattices. It can be extended to any positive integer dimension. Below, there can be four possible group values that are tabulated for a given class and dimension: A value of 0 indicates that there is no topological phase for that class and dimension. The group $\mathbb{Z}$ indicates that the topological invariant can take integer values (e.g. 0, ±1, ±2, ...). The group $2\mathbb{Z}$ indicates that the topological invariant can take even integer values (e.g. 0, ±2, ±4, ...). The group $\mathbb{Z}_2$ indicates that the topological invariant can take only two values (e.g. ±1). Physical examples The non-chiral Su–Schrieffer–Heeger model (in $d=1$) can be associated with symmetry class BDI, with an integer $\mathbb{Z}$ topological invariant due to gauge invariance. The problem is similar to the integer quantum Hall effect and the quantum anomalous Hall effect (both in $d=2$), which are in class A, with an integer Chern number. Contrarily, the Kitaev chain (in $d=1$) is an example of symmetry class D, with a binary $\mathbb{Z}_2$ topological invariant. Similarly, the $p_x + ip_y$ superconductors (in $d=2$) are also in class D, but with a $\mathbb{Z}$ topological invariant. The quantum spin Hall effect (in $d=2$) described by the Kane–Mele model is an example of the AII class, with a $\mathbb{Z}_2$ topological invariant. Construction Discrete symmetry classes There are ten discrete symmetry classes of topological insulators and superconductors, corresponding to the ten Altland–Zirnbauer classes of random matrices. They are defined by three symmetries of the Hamiltonian $H = \sum_{ij} c_i^{\dagger} H_{ij} c_j$ (where $c_i$ and $c_i^{\dagger}$ are the annihilation and creation operators of mode $i$, in some arbitrary spatial basis): time-reversal symmetry, particle-hole (or charge conjugation) symmetry, and chiral (or sublattice) symmetry. Chiral symmetry is a unitary operator $S$ that acts on $c_i$ as a unitary rotation ($S c_i S^{-1} = \sum_j S_{ij} c_j$) and satisfies $S^2 = 1$.
A Hamiltonian possesses chiral symmetry when , for some choice of (on the level of first-quantised Hamiltonians, this means and are anticommuting matrices). Time-reversal symmetry (TRS) is an antiunitary operator , that acts on , (where , is an arbitrary complex coefficient, and , denotes complex conjugation) as . It can be written as where is the complex conjugation operator and is a unitary matrix. Either or . A Hamiltonian with time reversal symmetry satisfies , or on the level of first-quantised matrices, , for some choice of . Charge conjugation or particle-hole symmetry (PHS) is also an antiunitary operator which acts on as , and can be written as where is unitary. Again either or depending on what is. A Hamiltonian with particle hole symmetry satisfies , or on the level of first-quantised Hamiltonian matrices, , for some choice of . In the Bloch Hamiltonian formalism for crystal structures, where the Hamiltonian acts on modes of crystal momentum , the chiral symmetry, TRS, and PHS conditions become (chiral symmetry) (time-reversal symmetry), (particle-hole symmetry). It is evident that if two of these three symmetries are present, then the third is also present, due to the relation . The aforementioned discrete symmetries label 10 distinct discrete symmetry classes, which coincide with the Altland–Zirnbauer classes of random matrices. Equivalence classes of Hamiltonians A bulk Hamiltonian in a particular symmetry group is restricted to be a Hermitian matrix with no zero-energy eigenvalues (i.e. so that the spectrum is "gapped" and the system is a bulk insulator) satisfying the symmetry constraints of the group. In the case of dimensions, this Hamiltonian is a continuous function of the parameters in the Bloch momentum vector in the Brillouin zone; then the symmetry constraints must hold for all . Given two Hamiltonians and , it may be possible to continuously deform into while maintaining the symmetry constraint and gap (that is, there exists continuous function such that for all the Hamiltonian has no zero eigenvalue and symmetry condition is maintained, and and ). Then we say that and are equivalent. However, it may also turn out that there is no such continuous deformation. in this case, physically if two materials with bulk Hamiltonians and , respectively, neighbor each other with an edge between them, when one continuously moves across the edge one must encounter a zero eigenvalue (as there is no continuous transformation that avoids this). This may manifest as a gapless zero energy edge mode or an electric current that only flows along the edge. An interesting question is to ask, given a symmetry class and a dimension of the Brillouin zone, what are all the equivalence classes of Hamiltonians. Each equivalence class can be labeled by a topological invariant; two Hamiltonians whose topological invariant are different cannot be deformed into each other and belong to different equivalence classes. Classifying spaces of Hamiltonians For each of the symmetry classes, the question can be simplified by deforming the Hamiltonian into a "projective" Hamiltonian, and considering the symmetric space in which such Hamiltonians live. 
These classifying spaces are shown for each symmetry class: For example, a (real symmetric) Hamiltonian in symmetry class AI can have its positive eigenvalues deformed to +1 and its negative eigenvalues deformed to -1; the resulting such matrices are described by the union of real Grassmannians Classification of invariants The strong topological invariants of a many-band system in dimensions can be labeled by the elements of the -th homotopy group of the symmetric space. These groups are displayed in this table, called the periodic table of topological insulators: There may also exist weak topological invariants (associated to the fact that the suspension of the Brillouin zone is in fact equivalent to a sphere wedged with lower-dimensional spheres), which are not included in this table. Furthermore, the table assumes the limit of an infinite number of bands, i.e. involves Hamiltonians for . The table also is periodic in the sense that the group of invariants in dimensions is the same as the group of invariants in dimensions. In the case of no ant-iunitary symmetries, the invariant groups are periodic in dimension by 2. For nontrivial symmetry classes, the actual invariant can be defined by one of the following integrals over all or part of the Brillouin zone: the Chern number, the Wess-Zumino winding number, the Chern–Simons invariant, the Fu–Kane invariant. Dimensional reduction and Bott clock The periodic table also displays a peculiar property: the invariant groups in dimensions are identical to those in dimensions but in a different symmetry class. Among the complex symmetry classes, the invariant group for A in dimensions is the same as that for AIII in dimensions, and vice versa. One can also imagine arranging each of the eight real symmetry classes on the Cartesian plane such that the coordinate is if time reversal symmetry is present and if it is absent, and the coordinate is if particle hole symmetry is present and if it is absent. Then the invariant group in dimensions for a certain real symmetry class is the same as the invariant group in dimensions for the symmetry class directly one space clockwise. This phenomenon was termed the Bott clock by Alexei Kitaev, in reference to the Bott periodicity theorem. The Bott clock can be understood by considering the problem of Clifford algebra extensions. Near an interface between two inequivalent bulk materials, the Hamiltonian approaches a gap closing. To lowest order expansion in momentum slightly away from the gap closing, the Hamiltonian takes the form of a Dirac Hamiltonian . Here, are a representation of the Clifford Algebra , while is an added "mass term" that and anticommutes with the rest of the Hamiltonian and vanishes at the interface (thus giving the interface a gapless edge mode at ). The term for the Hamiltonian on one side of the interface cannot be continuously deformed into the term for the Hamiltonian on the other side of the interface. Thus (letting be an arbitrary positive scalar) the problem of classifying topological invariants reduces to the problem of classifying all possible inequivalent choices of to extend the Clifford algebra to one higher dimension, while maintaining the symmetry constraints. See also Symmetry-protected topological order References External links Insulators Superconductors Topology Condensed matter physics
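The symmetry conditions on a Bloch Hamiltonian can be checked numerically. The following is a small illustrative sketch, not part of the original article, using the Su–Schrieffer–Heeger model mentioned in the physical examples; the hopping parameters are arbitrary, and the operators (complex conjugation for time reversal, the Pauli matrix sigma_z for the chiral and particle-hole operations) are the conventional textbook choices for this model.

```python
import numpy as np

# Pauli matrices used as building blocks of the two-band Bloch Hamiltonian.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def ssh_bloch(k, t1=1.0, t2=0.5):
    """SSH Bloch Hamiltonian H(k) with intra- and inter-cell hoppings t1, t2."""
    return (t1 + t2 * np.cos(k)) * sx + t2 * np.sin(k) * sy

def check_symmetries(hk, n=201):
    """Check TRS, PHS and chiral symmetry of H(k) over the Brillouin zone."""
    trs = phs = chiral = True
    for k in np.linspace(-np.pi, np.pi, n):
        H, Hm = hk(k), hk(-k)
        # Time reversal (T = K, T^2 = +1):        H(k)^* = H(-k)
        trs = trs and np.allclose(H.conj(), Hm)
        # Particle-hole (C = sz K, C^2 = +1):     sz H(k)^* sz = -H(-k)
        phs = phs and np.allclose(sz @ H.conj() @ sz, -Hm)
        # Chiral (S = sz):                        sz H(k) sz = -H(k)
        chiral = chiral and np.allclose(sz @ H @ sz, -H)
    return bool(trs), bool(phs), bool(chiral)

# All three symmetries present with T^2 = C^2 = +1, consistent with class BDI.
print(check_symmetries(ssh_bloch))  # expected output: (True, True, True)
```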
Periodic table of topological insulators and topological superconductors
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
2,124
[ "Superconductivity", "Phases of matter", "Materials science", "Topology", "Space", "Condensed matter physics", "Geometry", "Superconductors", "Spacetime", "Matter" ]
57,504,451
https://en.wikipedia.org/wiki/Algorithms%20and%20Combinatorics
Algorithms and Combinatorics () is a book series in mathematics, and particularly in combinatorics and the design and analysis of algorithms. It is published by Springer Science+Business Media, and was founded in 1987. Books The books published in this series include: The Simplex Method: A Probabilistic Analysis (Karl Heinz Borgwardt, 1987, vol. 1) Geometric Algorithms and Combinatorial Optimization (Martin Grötschel, László Lovász, and Alexander Schrijver, 1988, vol. 2; 2nd ed., 1993) Systems Analysis by Graphs and Matroids (Kazuo Murota, 1987, vol. 3) Greedoids (Bernhard Korte, László Lovász, and Rainer Schrader, 1991, vol. 4) Mathematics of Ramsey Theory (Jaroslav Nešetřil and Vojtěch Rödl, eds., 1990, vol. 5) Matroid Theory and its Applications in Electric Network Theory and in Statics (Andras Recszki, 1989, vol. 6) Irregularities of Partitions: Papers from the meeting held in Fertőd, July 7–11, 1986 (Gábor Halász and Vera T. Sós, eds., 1989, vol. 8) Paths, Flows, and VLSI-Layout: Papers from the meeting held at the University of Bonn, Bonn, June 20–July 1, 1988 (Bernhard Korte, László Lovász, Hans Jürgen Prömel, and Alexander Schrijver, eds., 1990, vol. 9) New Trends in Discrete and Computational Geometry (János Pach, ed., 1993, vol. 10) Discrete Images, Objects, and Functions in (Klaus Voss, 1993, vol. 11) Linear Optimization and Extensions (Manfred Padberg, 1999, vol. 12) The Mathematics of Paul Erdős I (Ronald Graham and Jaroslav Nešetřil, eds., 1997, vol. 13) The Mathematics of Paul Erdős II (Ronald Graham and Jaroslav Nešetřil, eds., 1997, vol. 14) Geometry of Cuts and Metrics (Michel Deza and Monique Laurent, 1997, vol. 15) Probabilistic Methods for Algorithmic Discrete Mathematics (M. Habib, C. McDiarmid, J. Ramirez-Alfonsin, and B. Reed, 1998, vol. 16) Modern Cryptography, Probabilistic Proofs and Pseudorandomness (Oded Goldreich, 1999, vol. 17) Geometric Discrepancy: An Illustrated Guide (Jiří Matoušek, 1999, vol. 18) Applied Finite Group Actions (Adalbert Kerber, 1999, vol. 19) Matrices and Matroids for Systems Analysis (Kazuo Murota, 2000, vol. 20; corrected ed., 2010) Combinatorial Optimization (Bernhard Korte and Jens Vygen, 2000, vol. 21; 5th ed., 2012) The Strange Logic of Random Graphs (Joel Spencer, 2001, vol. 22) Graph Colouring and the Probabilistic Method (Michael Molloy and Bruce Reed, 2002, Vol. 23) Combinatorial Optimization: Polyhedra and Efficiency (Alexander Schrijver, 2003, vol. 24. In three volumes: A. Paths, flows, matchings; B. Matroids, trees, stable sets; C. Disjoint paths, hypergraphs) Discrete and Computational Geometry: The Goodman-Pollack Festschrift (B. Aronov, S. Basu, J. Pach, and M. Sharir, eds., 2003, vol. 25) Topics in Discrete Mathematics: Dedicated to Jarik Nešetril on the Occasion of his 60th birthday (M. Klazar, J. Kratochvíl, M. Loebl, J. Matoušek, R. Thomas, and P. Valtr, eds., 2006, vol. 26) Boolean Function Complexity: Advances and Frontiers (Stasys Jukna, 2012, Vol. 27) Sparsity: Graphs, Structures, and Algorithms (Jaroslav Nešetřil and Patrice Ossona de Mendez, 2012, vol. 28) Optimal Interconnection Trees in the Plane (Marcus Brazil and Martin Zachariasen, 2015, vol. 29) Combinatorics and Complexity of Partition Functions (Alexander Barvinok, 2016, vol. 30) References Publications established in 1987 Series of mathematics books Springer Science+Business Media books Algorithms Combinatorics
Algorithms and Combinatorics
[ "Mathematics" ]
930
[ "Discrete mathematics", "Algorithms", "Mathematical logic", "Applied mathematics", "Combinatorics" ]
57,506,816
https://en.wikipedia.org/wiki/Hall%20circles
Hall circles (also known as M-circles and N-circles) are a graphical tool in control theory used to obtain values of a closed-loop transfer function from the Nyquist plot (or the Nichols plot) of the associated open-loop transfer function. Hall circles were introduced into control theory by Albert C. Hall in his thesis. Construction Consider a closed-loop linear control system with open-loop transfer function $G(s)$ and with unit gain in the feedback loop. The closed-loop transfer function is given by $T(s) = \frac{G(s)}{1 + G(s)}$. To check the stability of T(s), it is possible to use the Nyquist stability criterion with the Nyquist plot of the open-loop transfer function G(s). Note, however, that the Nyquist plot of G(s) alone does not give the actual values of T(s). To get this information from the G(s)-plane, Hall proposed to construct the locus of points in the G(s)-plane such that T(s) has constant magnitude and also the locus of points in the G(s)-plane such that T(s) has constant phase angle. Given a positive real value M representing a fixed magnitude, and denoting G(s) by z, the points satisfying $|T(s)| = \left|\frac{z}{1+z}\right| = M$ are the points z in the G(s)-plane such that the ratio of the distance between z and 0 to the distance between z and -1 is equal to M. The points z satisfying this locus condition form circles of Apollonius, and this locus is known in the context of control systems as the M-circles. Given a value N representing a constant phase angle, the points satisfying $\arg T(s) = \arg\left(\frac{z}{1+z}\right) = N$ are the points z in the G(s)-plane such that the difference between the angle of z measured from 0 and the angle of z measured from -1 is constant. In other words, the angle subtended at z by the line segment between -1 and 0 must be constant. This implies that the points z satisfying this locus condition lie on arcs of circles, and this locus is known in the context of control systems as the N-circles. Usage To use the Hall circles, the M- and N-circles are plotted over the Nyquist plot of the open-loop transfer function. The points of intersection between these curves give the corresponding values of the closed-loop transfer function. Hall circles are also used with the Nichols plot, and in this setting the combination is known as the Nichols chart. Rather than overlaying the Hall circles directly over the Nichols plot, the points of the circles are transferred to a new coordinate system where the ordinate is given by the open-loop gain in decibels, $20\log_{10}|G(s)|$, and the abscissa is given by the open-loop phase, $\arg G(s)$. The advantage of using the Nichols chart is that adjusting the gain of the open-loop transfer function translates the Nichols plot directly up and down in the chart. See also Nyquist plot Nichols plot Notes References Control theory Algorithms Control engineering
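A small numerical sketch, not taken from the article, that constructs an M-circle from the standard Apollonius-circle formula (center -M^2/(M^2 - 1), radius M/|M^2 - 1| for M not equal to 1) and verifies the defining property that every point of the locus yields a closed-loop magnitude of exactly M:

```python
import numpy as np

def m_circle(M, n=360):
    """Points z in the G(s)-plane on the M-circle |z / (1 + z)| = M (M != 1)."""
    center = -M**2 / (M**2 - 1)        # real-axis center of the Apollonius circle
    radius = M / abs(M**2 - 1)
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    return center + radius * np.exp(1j * theta)

for M in (0.5, 1.3, 2.0):
    z = m_circle(M)
    closed_loop_mag = np.abs(z / (1 + z))    # |T| = |G / (1 + G)| at each point
    assert np.allclose(closed_loop_mag, M)
    print(f"M = {M}: max deviation of |T| from M = "
          f"{np.max(np.abs(closed_loop_mag - M)):.2e}")
```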
Hall circles
[ "Mathematics", "Engineering" ]
586
[ "Applied mathematics", "Algorithms", "Mathematical logic", "Control theory", "Control engineering", "Dynamical systems" ]
50,503,497
https://en.wikipedia.org/wiki/Dwell%20time%20%28GNSS%29
The dwell time in GNSS is the time required to test for the presence of a satellite signal for a certain combination of parameters. A search process detects whether a GNSS satellite is present or not in an area of the sky, based on correlation of a received signal with a reference signal stored in the receiver. The dwell times are associated with the performance of a certain detector. They can be classified into single dwell times, when the decision is taken in one step, and multiple dwell times, when the decision is taken in two or more steps. References Works cited N. Couronneau, P.J. Duffett-Smith, and A. Mitelman, Calculating Time-to-First-Fix, GPS World, Nov 2011 (GPS World 2011) E.S. Lohan, A. Lakhzouri, and M. Renfors, Selection of the multiple-dwell hybrid-search strategy for the acquisition of Galileo signals in fading channels, in CDROM Proc. of IEEE PIMRC, Sep 2004. (Lohan2004) Global Positioning System Time
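A minimal illustrative sketch of a single-dwell test as described above: the received samples are correlated with a stored reference code over one dwell and the result is compared against a threshold. The code length, signal amplitude and threshold below are arbitrary illustration values and do not correspond to any real GNSS signal or receiver.

```python
import numpy as np

rng = np.random.default_rng(0)

code = rng.choice([-1.0, 1.0], size=1023)   # illustrative pseudo-random reference code
dwell_samples = code.size                    # one code period per dwell (assumption)

def single_dwell_detect(received, threshold):
    """Declare the satellite present if the correlation energy exceeds the threshold."""
    corr = np.dot(received[:dwell_samples], code) / dwell_samples
    return abs(corr) ** 2 > threshold

# Satellite present: attenuated replica of the code plus noise; absent: noise only.
present = 0.5 * code + rng.normal(0, 1, dwell_samples)
absent = rng.normal(0, 1, dwell_samples)
print(single_dwell_detect(present, threshold=0.05))  # typically True
print(single_dwell_detect(absent, threshold=0.05))   # typically False
```

A multiple-dwell strategy would repeat such a test over two or more dwells before committing to a decision, as described above.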
Dwell time (GNSS)
[ "Physics", "Mathematics", "Technology", "Engineering" ]
224
[ "Physical quantities", "Time", "Wireless locating", "Quantity", "Aircraft instruments", "Aerospace engineering", "Global Positioning System", "Spacetime", "Wikipedia categories named after physical quantities" ]
50,504,058
https://en.wikipedia.org/wiki/Multipath%20mitigation
Multipath mitigation is a term typically used in Code Division Multiple Access (CDMA) communications and in GNSS navigation to describe the methods that try to compensate for or cancel the effects of non-line-of-sight (NLOS) propagation. The multipath effect occurs when a signal is received not only through a line-of-sight (LOS) path, but also through one or several NLOS paths. Multipath, if not addressed or compensated for, can significantly reduce the performance of communication and navigation receivers. Various multipath mitigation methods can be used to estimate and remove the undesired NLOS components. Chip manufacturers of CDMA and GNSS receivers, such as Qualcomm, Leica, NovAtel, and Septentrio, typically have multipath mitigation algorithms supported by their chipsets. One of the first works in the field of GPS multipath mitigation is (Brekel1992). References M. Z. H. Bhuiyan, J. Zhang, E.S. Lohan, W. Wang, and S. Sand, “Analysis of multipath mitigation techniques with land mobile satellite channel model” in Radioengineering journal, ISSN 1210-2512, vol. 21(4), Dec 2012, pp. 1067-1078 [Bhuiyan2012] B. J. H. van den Brekel and D. J. R. van Nee, "GPS multipath mitigation by antenna movements," in Electronics Letters, vol. 28, no. 25, pp. 2286-2288, 3 Dec. 1992. (Brekel1992) Radio technology
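A minimal numerical illustration, not a mitigation algorithm from the cited works, of why NLOS components are harmful: a delayed, attenuated reflection adds a spurious component to the correlation function that a CDMA or GNSS receiver uses for synchronization and ranging. The ranging code, the reflection delay and its amplitude are arbitrary illustration values.

```python
import numpy as np

rng = np.random.default_rng(1)
code = rng.choice([-1.0, 1.0], size=1023)   # illustrative ranging code

def correlation(received, max_lag=10):
    """Correlate the received signal with shifted replicas of the local code."""
    lags = list(range(-max_lag, max_lag + 1))
    values = [np.dot(received, np.roll(code, lag)) / code.size for lag in lags]
    return lags, values

los_only = code                               # line-of-sight component only
nlos = 0.5 * np.roll(code, 3)                 # reflection delayed by 3 chips, half amplitude
lags, clean = correlation(los_only)
_, multipath = correlation(los_only + nlos)

# The reflection creates an extra correlation component at its own delay; when the
# paths are separated by less than a chip this distorts the main peak and biases
# the delay (range) estimate unless it is mitigated.
for lag, c0, c1 in zip(lags, clean, multipath):
    if abs(c0) > 0.1 or abs(c1) > 0.1:
        print(f"lag {lag:+d}: LOS only {c0:+.2f}, LOS+NLOS {c1:+.2f}")
```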
Multipath mitigation
[ "Technology", "Engineering" ]
343
[ "Information and communications technology", "Telecommunications engineering", "Radio technology" ]
54,186,007
https://en.wikipedia.org/wiki/Engineering%20controls%20for%20nanomaterials
Engineering controls for nanomaterials are a set of hazard control methods and equipment for workers who interact with nanomaterials. Engineering controls are physical changes to the workplace that isolate workers from hazards, and are considered the most important set of methods for controlling the health and safety hazards of nanomaterials after systems and facilities have been designed. The primary hazard of nanomaterials is health effects from inhalation of aerosols containing nanoparticles. Many engineering controls developed for other industries can be used or adapted for protecting workers from exposure to nanomaterials, including ventilation and filtering using laboratory fixtures such as fume hoods, containment using gloveboxes, and other non-ventilation controls such as sticky mats. Research is ongoing as to what engineering controls are most effective for nanomaterials. Background Engineering controls Controlling exposures to occupational hazards is considered the fundamental method of protecting workers. Traditionally, a hierarchy of controls has been used as a means of determining how to implement feasible and effective controls, which typically include elimination, substitution, engineering controls, administrative controls, and personal protective equipment. Methods earlier in the list are considered generally more effective in reducing the risk associated with a hazard, with process changes and engineering controls recommended as the primary means for reducing exposures, and personal protective equipment being the approach of last resort. Following the hierarchy is intended to lead to the implementation of inherently safer systems, ones where the risk of illness or injury has been substantially reduced. Engineering controls are physical changes to the workplace that isolate workers from hazards by containing them in an enclosure, or removing contaminated air from the workplace through ventilation and filtering. Well-designed engineering controls are typically passive, in the sense of being independent of worker interactions, which reduces the potential for worker behavior to impact exposure levels. They also ideally do not interfere with productivity and ease of processing for the worker, because otherwise the operator may be motivated to circumvent the controls. The initial cost of engineering controls can be higher than that of administrative controls or personal protective equipment, but the long-term operating costs are frequently lower, and engineering controls can sometimes provide cost savings in other areas of the process. Nanomaterials Nanomaterials have at least one primary dimension of less than 100 nanometers, and often have properties different from those of their bulk components that are technologically useful. Because nanotechnology is a recent development, the health and safety effects of exposures to nanomaterials, and what levels of exposure may be acceptable, are not yet fully understood. Processing and manufacturing of nanomaterials involve a wide range of hazards. The types of engineering controls optimal for each situation are influenced by the quantity and dustiness of the material as well as the duration of the task. For example, stronger engineering controls are indicated if dry nanomaterials cannot be substituted with a suspension, or if procedures such as sonication or cutting of a solid matrix containing nanomaterials cannot be eliminated.
As with any new technology, the earliest exposures are expected to occur among workers conducting research in laboratories and pilot plants. It is recommended that researchers handling engineered nanomaterials in these contexts perform that work in a manner that is protective of their safety and health. Control measures for nanoparticles, dusts, and other hazards are most effective when implemented within the context of a comprehensive occupational safety and health management system, the critical elements of which include management commitment and employee involvement, worksite analysis, hazard prevention and control, and sufficient training for employees, supervisors, and managers. Ventilation Ventilation systems are distinguished as being either local or general. Local exhaust ventilation operates at or near the source of contamination, often in conjunction with an enclosure, while general exhaust ventilation operates on an entire room through a building's HVAC system. Local exhaust ventilation Local exhaust ventilation (LEV) is the application of an exhaust system at or near the source of contamination. If properly designed, it will be much more efficient at removing contaminants than dilution ventilation, requiring lower exhaust volumes, less make-up air, and, in many cases, lower costs. By applying exhaust at the source, contaminants are removed before they get into the general work environment. Examples of local exhaust systems include fume hoods, vented balance enclosures, and biosafety cabinets. Exhaust hoods lacking an enclosure are less preferable, and laminar flow hoods are not recommended because they direct air outwards towards the worker. In 2006, a survey was conducted of international nanotechnology firms and research laboratories that reported manufacturing, handling, researching, or using nanomaterials. All organizations participating in the survey reported using some type of engineering control. The most common exposure control used was the traditional laboratory fume hood, with two-thirds of firms reporting its use. Fume hoods Fume hoods are recommended to have an average inward velocity of 80–100 feet per minute (fpm) at the face of the hood. For higher toxicity materials, a higher face velocity of 100–120 fpm is recommended in order to provide better protection. However, face velocities exceeding 150 fpm are not believed to improve performance, and could increase hood leakage. New fume hoods specifically designed for nanotechnology are being developed primarily based on low-turbulence balance enclosures, which were initially developed for the weighing of pharmaceutical powders; these nanomaterial handling enclosures provide adequate containment at lower face velocities, typically operating at 65–85 fpm. They are useful for weighing operations, which disturb the nanomaterial and increase its aerosolization. It is recommended that air exiting a fume hood should be passed through a HEPA filter and exhausted outside the work environment, with used filters being handled as hazardous waste. Turbulence can cause nanomaterials to exit the front of the hood, and can be avoided by keeping the sash in the proper position, keeping the interior of the hood uncluttered with equipment, and not making fast movements while working. High face velocities can result in loss of powdered nanomaterials; while as of 2012 there was little research on the effectiveness for low-flow fume hoods, there was evidence that air curtain hoods were effective at containing nanoparticles. 
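A small worked calculation of the exhaust airflow implied by the face-velocity figures quoted above. Only the face-velocity values come from the text; the sash opening dimensions are illustrative assumptions, since required flow simply equals face velocity multiplied by the open face area.

```python
# Required exhaust volumetric flow for a hood: Q = face velocity x open face area.
sash_width_ft = 4.0      # assumed sash width (ft)
sash_height_ft = 1.5     # assumed sash opening height (ft)
face_area_ft2 = sash_width_ft * sash_height_ft

cases = [
    ("standard fume hood", 100),          # 80-100 fpm range, upper value
    ("higher-toxicity materials", 120),   # 100-120 fpm range, upper value
    ("nanomaterial handling enclosure", 85),  # 65-85 fpm range, upper value
]

for label, velocity_fpm in cases:
    flow_cfm = velocity_fpm * face_area_ft2   # cubic feet per minute
    print(f"{label}: {velocity_fpm} fpm x {face_area_ft2:.1f} ft^2 = {flow_cfm:.0f} cfm")
```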
Other enclosures Biosafety cabinets are designed to contain bioaerosols, which have a similar size to engineered nanoparticles, and are believed to be effective with nanoparticles. However, common biosafety cabinets are more prone to turbulence. As with fume hoods, they are recommended to be exhausted outside the facility. Dedicated large-scale ventilated enclosures for large pieces of equipment can also be used. General exhaust ventilation General exhaust ventilation (GEV), also called dilution ventilation, is different from local exhaust ventilation because instead of capturing emissions at their source and removing them from the air, general exhaust ventilation allows the contaminant to be emitted into the workplace air and then dilutes the concentration of the contaminant to an acceptable level. GEV is inefficient and costly as compared to local exhaust ventilation, and given the lack of established exposure limits for most nanomaterials, they are not recommended to be relied upon for controlling exposure. However, GEV can provide negative room pressure to prevent contaminants from exiting the room. The use of supply and exhaust air throughout the facility can provide pressurization schemes that reduce the number of workers exposed to potentially hazardous materials, for example keeping production areas at a negative pressure with respect to nearby areas. For general exhaust ventilation in laboratories, a nonrecirculating system is used with 4–12 air changes per hour when used in tandem with local exhaust ventilation, and sources of contamination are placed close to the air exhaust and downwind of workers, and away from windows or doors that may cause air drafts. Control verification Several control verification techniques can be used to assess room airflow patterns and verify the proper operation of LEV systems. It is considered important to confirm that an LEV system is operating as designed by regularly measuring exhaust airflows. A standard measurement, hood static pressure, provides information on airflow changes that affect hood performance. For hoods designed to prevent exposures to hazardous airborne contaminants, the American Conference of Governmental Industrial Hygienists recommends the installation of a fixed hood static pressure gauge. Additionally, Pitot tubes, hot-wire anemometers, smoke generators, and dry ice tests can be used to qualitatively measure hood slot/face and duct air velocity, while tracer-gas leak testing is a quantitative method. Standardized testing and certification procedures such as ANSI Z9.5 and ASHRAE 110 can be used, as can qualitative indicators of proper installation and functionality such as inspection of gaskets and hoses. Containment Containment refers to the physical isolation of a process or a piece of equipment to prevent the release of the hazardous material into the workplace. It can be used in conjunction with ventilation measures to provide an enhanced level of protection for nanomaterial workers. Examples include placing equipment that may release nanomaterials in a separate room. Standard dust control methods such as enclosures for conveyor systems or using a sealed system for bag filling are effective at reducing respirable dust concentrations. Non-ventilation engineering controls can also include devices developed for the pharmaceutical industry, including isolation containment systems. 
One of the most common flexible isolation systems is glovebox containment, which can be used as an enclosure around small-scale powder processes, such as mixing and drying. Rigid glovebox isolation units also provide a method for isolating the worker from the process and are often used for medium-scale operations involving transfer of powders. Glovebags are similar to rigid gloveboxes, but they are flexible and disposable. They are used for small operations for containment or protection from contamination. Gloveboxes are sealed systems that provide a high degree of operator protection, but are more difficult to use due to limited mobility and size of operation. Transferring materials into and out of the enclosure also is an exposure risk. In addition, some gloveboxes are configured to use positive pressure, which can increase the risk of leaks. Another non-ventilation control used in this industry is the continuous liner system, which allows the filling of product containers while enclosing the material in a polypropylene bag. This system is often used for off-loading materials when the powders are to be packed into drums. Other engineering controls Other non-ventilation engineering controls in general cover a range of control measures, such as guards and barricades, material treatment, or additives. One example is placing walk-off sticky mats at room exits. Antistatic devices can be used when handling nanomaterials to reduce their electrostatic charge, making them less likely to disperse or adhere to clothing. Water spray application is also an effective method for reducing respirable dust concentrations. References External links Effective workplace safety and health management systems from the U.S. Occupational Safety and Health Administration Nanomaterials Industrial hygiene Safety engineering Occupational hazards Chemical safety Industrial safety devices
Engineering controls for nanomaterials
[ "Chemistry", "Materials_science", "Engineering" ]
2,271
[ "Systems engineering", "Chemical accident", "Safety engineering", "nan", "Nanotechnology", "Industrial safety devices", "Chemical safety", "Nanomaterials" ]
54,186,204
https://en.wikipedia.org/wiki/Normustine
Normustine, also known as bis(2-chloroethyl)carbamic acid, is a nitrogen mustard and alkylating antineoplastic agent (i.e., chemotherapy agent). It is a metabolite of a number of antineoplastic agents that have been developed for the treatment of tumors, including estramustine phosphate, alestramustine, cytestrol acetate, and ICI-85966 (stilbostat), of which only the first has actually been marketed. References Alkylating antineoplastic agents Carbamates Human drug metabolites Nitrogen mustards Organochlorides Chloroethyl compounds
Normustine
[ "Chemistry" ]
144
[ "Chemicals in medicine", "Human drug metabolites" ]
54,191,159
https://en.wikipedia.org/wiki/Conservative%20temperature
Conservative temperature is a thermodynamic property of seawater. It is derived from the potential enthalpy and is recommended under the TEOS-10 standard (Thermodynamic Equation of Seawater - 2010) as a replacement for potential temperature, because it more accurately represents the heat content of the ocean. Motivation Conservative temperature was initially proposed by Trevor McDougall in 2003. The motivation was to find an oceanic variable representing the heat content that is conserved during both pressure changes and turbulent mixing. In-situ temperature is not sufficient for this purpose, as the compression of a water parcel with depth causes an increase in temperature despite the absence of any external heating. Potential temperature can be used to combat this issue, as it is referenced to a specific pressure and so ignores these compressive effects. In fact, potential temperature is a conservative variable in the atmosphere for air parcels in dry adiabatic conditions, and has been used in ocean models for many years. However, turbulent mixing processes in the ocean destroy potential temperature, sometimes leading to large errors when it is assumed to be conservative. By contrast, the enthalpy of the parcel is conserved during turbulent mixing. However, it suffers from a similar problem to the in-situ temperature in that it also has a strong pressure dependence. Instead, potential enthalpy is proposed to remove this pressure dependence. Conservative temperature is then proportional to the potential enthalpy. Derivation Potential enthalpy The fundamental thermodynamic relation is given by $\mathrm{d}h = T\,\mathrm{d}\eta + \mu\,\mathrm{d}S_A + \frac{1}{\rho}\,\mathrm{d}P$, where $h$ is the specific enthalpy, $P$ is the pressure, $\rho$ is the density, $T$ is the temperature, $\eta$ is the specific entropy, $S_A$ is the salinity and $\mu$ is the relative chemical potential of salt in seawater. During a process that does not lead to the exchange of heat or salt, entropy and salinity can be assumed constant. Therefore, taking the partial derivative of this relation with respect to pressure yields $\left(\frac{\partial h}{\partial P}\right)_{\eta, S_A} = \frac{1}{\rho}$. By integrating this equation, the potential enthalpy $h^{0}$ is defined as the enthalpy at a reference pressure $P_r$: $h^{0} = h(S_A, \theta, P_r) = h(S_A, \theta, P) - \int_{P_r}^{P} \frac{1}{\rho}\,\mathrm{d}P'$. Here the enthalpy and density are defined in terms of the three state variables: salinity, potential temperature and pressure. Conversion to conservative temperature Conservative temperature is defined to be directly proportional to potential enthalpy. It is rescaled to have the same units (Kelvin) as the in-situ temperature: $\Theta = \frac{h^{0}}{c_p^{0}}$, where $c_p^{0}$ = 3989.24495292815 J kg−1 K−1 is a reference value of the specific heat capacity, chosen to be as close as possible to the spatial average of the heat capacity over the entire ocean surface. Conservative properties of potential enthalpy Conservation form The first law of thermodynamics can be written in the form $\rho\left(\frac{\mathrm{d}u}{\mathrm{d}t} + P\,\frac{\mathrm{d}(1/\rho)}{\mathrm{d}t}\right) = -\nabla\cdot\mathbf{F}^{Q} + \rho\varepsilon$, or equivalently $\rho\left(\frac{\mathrm{d}h}{\mathrm{d}t} - \frac{1}{\rho}\frac{\mathrm{d}P}{\mathrm{d}t}\right) = -\nabla\cdot\mathbf{F}^{Q} + \rho\varepsilon$, where $u$ denotes the internal energy, $\mathbf{F}^{Q}$ represents the flux of heat and $\varepsilon$ is the rate of dissipation, which is small compared to the other terms and can therefore be neglected. The operator $\mathrm{d}/\mathrm{d}t$ is the material derivative with respect to the fluid flow $\mathbf{u}$, and $\nabla$ is the nabla operator. In order to show that potential enthalpy is conservative in the ocean, it must be shown that the first law of thermodynamics can be rewritten in conservation form. Taking the material derivative of the equation for potential enthalpy yields $\frac{\mathrm{d}h^{0}}{\mathrm{d}t} = \frac{\mathrm{d}h}{\mathrm{d}t} - \frac{1}{\rho}\frac{\mathrm{d}P}{\mathrm{d}t} + \left(\frac{\partial h^{0}}{\partial S_A} - \frac{\partial h}{\partial S_A}\right)\frac{\mathrm{d}S_A}{\mathrm{d}t} + \left(\frac{\partial h^{0}}{\partial \theta} - \frac{\partial h}{\partial \theta}\right)\frac{\mathrm{d}\theta}{\mathrm{d}t}$, where $\theta$ is the potential temperature and the partial derivatives are taken with the other state variables held fixed.
It can be shown that the final two terms on the right-hand side of this equation are as small or even smaller than the dissipation rate discarded earlier and the equation can therefore be approximated as: Combining this with the first law of thermodynamics yields the equation: which is in the desired conservation form. Comparison to potential temperature Given that conservative temperature was initially introduced to correct errors in the oceanic heat content, it is important to compare the relative errors made by assuming that conservative temperature is conserved to those originally made by assuming that potential temperature is conserved. These errors occur from non-conservation effects that are due to entirely different processes; for conservative temperature heat is lost due to work done by compression, whereas for potential temperature this is due to surface fluxes of heat and freshwater. It can be shown that these errors are approximately 120 times smaller for conservative temperature than for potential temperature, making it far more accurate as a representation of the conservation of heat in the ocean. Usage TEOS-10 framework Conservative temperature is recommended under the TEOS-10 framework as the replacement for potential temperature in ocean models. Other developments in TEOS-10 include: Replacement of practical salinity with the absolute salinity as the primary salinity variable, Introduction of preformed salinity as a conservative variable under biogeochemical processes, Defining all oceanic variables with respect to the Gibbs function. Models Conservative temperature has been implemented in several ocean general circulation models such as those involved in the Coupled Model Intercomparison Project Phase 6 (CMIP6). However, as these models have predominantly used potential temperature in previous generations, not all models have decided to switch to conservative temperature. References Physical oceanography Enthalpy
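To make the relation between in-situ, potential and conservative temperature concrete, the sketch below uses the Python gsw package (the community implementation of TEOS-10); the sample salinity, temperature and pressure values are illustrative only and are not taken from the text.

```python
# A minimal sketch using the Python gsw package (TEOS-10 Gibbs SeaWater toolbox).
import gsw

SA = 35.0    # absolute salinity [g/kg] (illustrative)
t  = 10.0    # in-situ temperature [deg C] (illustrative)
p  = 1000.0  # sea pressure [dbar] (illustrative)

pt = gsw.pt0_from_t(SA, t, p)   # potential temperature referenced to 0 dbar
CT = gsw.CT_from_t(SA, t, p)    # conservative temperature

print(f"potential temperature:    {pt:.4f} deg C")
print(f"conservative temperature: {CT:.4f} deg C")
```

The two outputs differ only slightly for typical ocean values, but the conservative temperature is the quantity that tracks potential enthalpy, and hence heat content, under mixing.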
Conservative temperature
[ "Physics", "Chemistry", "Mathematics" ]
1,006
[ "Thermodynamic properties", "Applied and interdisciplinary physics", "Physical quantities", "Quantity", "Enthalpy", "Physical oceanography" ]
54,192,420
https://en.wikipedia.org/wiki/Geometrically%20necessary%20dislocations
Geometrically necessary dislocations are like-signed dislocations needed to accommodate for plastic bending in a crystalline material. They are present when a material's plastic deformation is accompanied by internal plastic strain gradients. They are in contrast to statistically stored dislocations, with statistics of equal positive and negative signs, which arise during plastic flow from multiplication processes like the Frank-Read source. Dislocations in crystalline materials Statistically stored dislocations As straining progresses, the dislocation density increases and the dislocation mobility decreases during plastic flow. There are different ways through which dislocations can accumulate. Many of the dislocations are accumulated by multiplication, where dislocations encounters each other by chance. Dislocations stored in such progresses are called statistically stored dislocations, with corresponding density . In other words, they are dislocations evolved from random trapping processes during plastic deformation. Geometrically necessary dislocations In addition to statistically stored dislocation, geometrically necessary dislocations are accumulated in strain gradient fields caused by geometrical constraints of the crystal lattice. In this case, the plastic deformation is accompanied by internal plastic strain gradients. The theory of geometrically necessary dislocations was first introduced by Nye in 1953. Since geometrically necessary dislocations are present in addition to statistically stored dislocations, the total density is the accumulation of two densities, e.g. , where is the density of geometrically necessary dislocations. Concept Single crystal The plastic bending of a single crystal can be used to illustrate the concept of geometrically necessary dislocation, where the slip planes and crystal orientations are parallel to the direction of bending. The perfect (non-deformed) crystal has a length and thickness . When the crystal bar is bent to a radius of curvature , a strain gradient forms where a tensile strain occurs in the upper portion of the crystal bar, increasing the length of upper surface from to . Here is positive and its magnitude is assumed to be . Similarly, the length of the opposite inner surface is decreased from to due to the compression strain caused by bending. Thus, the strain gradient is the strain difference between the outer and inner crystal surfaces divided by the distance over which the gradient exists . Since , . The surface length divided by the interatomic spacing is the number of crystal planes on this surface. The interatomic spacing is equal to the magnitude of Burgers vector . Thus the numbers of crystal planes on the outer (tension) surface and inner (compression) surface are and , respectively. Therefore, the concept of geometrically necessary dislocations is introduced that the same sign edge dislocations compensate the difference in the number of atomic planes between surfaces. The density of geometrically necessary dislocations is this difference divided by the crystal surface area . More precisely, the orientation of the slip plane and direction with respect to the bending should be considered when calculating the density of geometrically necessary dislocations. In a special case when the slip plane normals are parallel to the bending axis and the slip directions are perpendicular to this axis, ordinary dislocation glide instead of geometrically necessary dislocation occurs during bending process. 
Thus, a constant of order unity is included in the expression for the density of geometrically necessary dislocations . Polycrystalline material Between the adjacent grains of a polycrystalline material, geometrically necessary dislocations can provide displacement compatibility by accommodating each crystal's strain gradient. Empirically, it can be inferred that such dislocations regions exist because crystallites in a polycrystalline material do not have voids or overlapping segments between them. In such a system, the density of geometrically necessary dislocations can be estimated by considering an average grain. Overlap between two adjacent grains is proportional to where is average strain and is the diameter of the grain. The displacement is proportional to multiplied by the gage length, which is taken as for a polycrystal. This divided by the Burgers vector, b, yields the number of dislocations, and dividing by the area () yields the density which, with further geometrical considerations, can be refined to . Nye's tensor Nye has introduced a set of tensor (so-called Nye's tensor) to calculate the geometrically necessary dislocation density. For a three dimension dislocations in a crystal, considering a region where the effects of dislocations is averaged (i.e. the crystal is large enough). The dislocations can be determined by Burgers vectors. If a Burgers circuit of the unit area normal to the unit vector has a Burgers vector () where the coefficient is Nye's tensor relating the unit vector and Burgers vector . This second-rank tensor determines the dislocation state of a special region. Assume , where is the unit vector parallel to the dislocations and is the Burgers vector, n is the number of dislocations crossing unit area normal to . Thus, . The total is the sum of all different values of . Assume a second-rank tensor to describe the curvature of the lattice, , where is the small lattice rotations about the three axes and is the displacement vector. It can be proved that where for , and for . The equation of equilibrium yields . Since , thus . By substituting for , . Due to the zero solution for equations with are zero and the symmetry of and , only nine independent equations remain of all twenty-seven possible permutations of . The Nye's tensor can be determined by these nine differential equations. Thus the dislocation potential can be written as , where . Measurement The uniaxial tensile test has largely been performed to obtain the stress-strain relations and related mechanical properties of bulk specimens. However, there is an extra storage of defects associated with non-uniform plastic deformation in geometrically necessary dislocations, and ordinary macroscopic test alone, e.g. uniaxial tensile test, is not enough to capture the effects of such defects, e.g. plastic strain gradient. Besides, geometrically necessary dislocations are in the micron scale, where a normal bending test performed at millimeter-scale fails to detect these dislocations. Only after the invention of spatially and angularly resolved methods to measure lattice distortion via electron backscattered diffraction by Adams et al. in 1997, experimental measurements of geometrically necessary dislocations became possible. For example, Sun et al. in 2000 studied the pattern of lattice curvature near the interface of deformed aluminum bicrystals using diffraction-based orientation imaging microscopy. Thus the observation of geometrically necessary dislocations was realized using the curvature data. 
But due to experimental limitations, the density of geometrically necessary dislocation for a general deformation state was hard to measure until a lower bound method was introduced by Kysar et al. at 2010. They studied wedge indentation with a 90 degree included angle into a single nickel crystal (and later the included angles of 60 degree and 120 degree were also available by Dahlberg et al.). By comparing the orientation of the crystal lattice in the after-deformed configuration to the undeformed homogeneous sample, they were able to determine the in-plane lattice rotation and found it an order of magnitude larger than the out-of-plane lattice rotations, thus demonstrating the plane strain assumption. The Nye dislocation density tensor has only two non-zero components due to two-dimensional deformation state and they can be derived from the lattice rotation measurements. Since the linear relationship between two Nye tensor components and densities of geometrically necessary dislocations is usually under-determined, the total density of geometrically necessary dislocations is minimized subject to this relationship. This lower bound solution represents the minimum geometrically necessary dislocation density in the deformed crystal consistent with the measured lattice geometry. And in regions where only one or two effective slip systems are known to be active, the lower bound solution reduces to the exact solution for geometrically necessary dislocation densities. Application Because is in addition to the density of statistically stored dislocations , the increase in dislocation density due to accommodated polycrystals leads to a grain size effect during strain hardening; that is, polycrystals of finer grain size will tend to work-harden more rapidly. Geometrically necessary dislocations can provide strengthening, where two mechanisms exists in different cases. The first mechanism provides macroscopic isotropic hardening via local dislocation interaction, e.g. jog formation when an existing geometrically necessary dislocation is cut through by a moving dislocation. The second mechanism is kinematic hardening via the accumulation of long range back stresses. Geometrically necessary dislocations can lower their free energy by stacking one atop another (see Peach-Koehler formula for dislocation-dislocation stresses) and form low-angle tilt boundaries. This movement often requires the dislocations to climb to different glide planes, so an annealing at elevated temperature is often necessary. The result is an arc that transforms from being continuously bent to discretely bent with kinks at the low-angle tilt boundaries. References Crystallographic defects
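As a rough illustration of the estimates discussed above, the sketch below evaluates the order-of-magnitude GND density for a bent crystal, rho ≈ 1/(r·b), together with the equivalent estimate from a measured lattice-rotation gradient, rho ≈ |dθ/dx|/b, as used in EBSD-based lower-bound measurements; the orientation-dependent constant of order unity mentioned in the text is omitted and the numerical inputs are assumed for illustration.

```python
# Order-of-magnitude estimates of geometrically necessary dislocation (GND) density.
b = 2.86e-10        # Burgers vector magnitude for aluminium [m] (assumed)
r = 1.0e-3          # radius of curvature of the bent crystal [m] (assumed)

rho_bending = 1.0 / (r * b)          # GND density implied by plastic bending [m^-2]

# Equivalent estimate from a lattice-rotation gradient; for pure bending the
# rotation gradient is 1/r by construction.
dtheta_dx = 1.0 / r                  # lattice rotation gradient [rad/m]
rho_from_rotation = abs(dtheta_dx) / b

print(f"GND density from bending:           {rho_bending:.2e} m^-2")
print(f"GND density from rotation gradient: {rho_from_rotation:.2e} m^-2")
```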
Geometrically necessary dislocations
[ "Chemistry", "Materials_science", "Engineering" ]
1,949
[ "Crystallographic defects", "Crystallography", "Materials degradation", "Materials science" ]
54,194,309
https://en.wikipedia.org/wiki/Density%20ratio
The density ratio of a column of seawater is a measure of the relative contributions of temperature and salinity in determining the density gradient. At a density ratio of 1, temperature and salinity are said to be compensated: their density signatures cancel, leaving a density gradient of zero. The formula for the density ratio, , is: where θ is the potential temperature S is the salinity z is the vertical coordinate (with subscript denoting differentiation by z) ρ is the density α = −ρ−1∂ρ/∂θ is the thermal expansion coefficient β = ρ−1∂ρ/∂S is the haline contraction coefficient When a water column is "doubly stable"—both temperature and salinity contribute to the stable density gradient—the density ratio is negative (a doubly unstable water column would also have a negative density ratio but does not commonly occur). When either the temperature- or salinity-induced stratification is statically unstable, while the overall density stratification is statically stable, double-diffusive instability exists in the water column. Double-diffusive instability can be separated into two different regimes of statically stable density stratification: a salt fingering regime (warm salty overlying cool fresh) when the density ratio is greater than 1, and a diffusive convection regime (cool fresh overlying warm salty) when the density ratio is between 0 and 1. Density ratio may also be used to describe thermohaline variability over a non-vertical spatial interval, such as across a front in the mixed layer. Diffusive density ratio In place of the density ratio, sometimes the diffusive density ratio is used, which is defined as Turner Angle If the signs of both the numerator and denominator are reversed, the density ratio remains unchanged. A related quantity which avoids this ambiguity as well as the infinite values possible when the denominator vanishes is the Turner angle, , which was introduced by Barry Ruddick and named after Stewart Turner. It is defined by The Turner angle is related to the density ratio by See also Spice (oceanography) Double diffusive convection References Physical oceanography
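A minimal numerical sketch of the definition above, with the regime classification quoted from this article; the expansion and contraction coefficients are illustrative constants rather than values from a full equation of state.

```python
def density_ratio(alpha, beta, dtheta_dz, dS_dz):
    """R_rho = (alpha * dtheta/dz) / (beta * dS/dz)."""
    return (alpha * dtheta_dz) / (beta * dS_dz)

alpha = 2.0e-4   # thermal expansion coefficient [1/K] (illustrative)
beta = 7.5e-4    # haline contraction coefficient [kg/g] (illustrative)

# Warm, salty water overlying cooler, fresher water (illustrative gradients).
R_rho = density_ratio(alpha, beta, dtheta_dz=0.01, dS_dz=0.002)

if R_rho < 0:
    regime = "doubly stable (or doubly unstable)"
elif R_rho > 1:
    regime = "salt fingering (warm salty over cool fresh)"
elif R_rho < 1:
    regime = "diffusive convection (cool fresh over warm salty)"
else:
    regime = "compensated"
print(f"R_rho = {R_rho:.2f}: {regime}")
```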
Density ratio
[ "Physics" ]
453
[ "Applied and interdisciplinary physics", "Physical oceanography" ]
54,194,735
https://en.wikipedia.org/wiki/Cenegermin
Cenegermin, sold under the brand name Oxervate, also known as recombinant human nerve growth factor, is a recombinant form of human nerve growth factor. Cenegermin is a peripherally selective agonist of the tropomyosin receptor kinase A (TrkA) and low-affinity nerve growth factor receptor (p75NTR). The most common side effects include eye pain and inflammation, increased lacrimation (watery eyes), pain in the eyelid and sensation of a foreign body in the eye. It was approved for medical use in the European Union in July 2017, and in the United States in 2018. The US Food and Drug Administration (FDA) considers it to be a first-in-class medication. Medical uses Cenegermin is indicated for the treatment of neurotrophic keratitis. Society and culture Names Cenegermin is the international nonproprietary name. It is also known as human beta-nerve growth factor (beta-NGF)-(1-118) peptide (non-covalent dimer) produced in Escherichia coli. Cenegermin is sold under the brand name Oxervate. References Further reading Neurotrophic factors Ophthalmology drugs Orphan drugs Recombinant proteins
Cenegermin
[ "Chemistry", "Biology" ]
271
[ "Biotechnology products", "Recombinant proteins", "Signal transduction", "Neurotrophic factors", "Neurochemistry" ]
54,195,920
https://en.wikipedia.org/wiki/Nanostructured%20film
A nanostructured film is a film resulting from engineering of nanoscale features, such as dislocations, grain boundaries, defects, or twinning. In contrast to other nanostructures, such as nanoparticles, the film itself may be up to several microns thick, but possesses a large concentration of nanoscale features homogeneously distributed throughout the film. Like other nanomaterials, nanostructured films have sparked much interest as they possess unique properties not found in bulk, non-nanostructured material of the same composition. In particular, nanostructured films have been the subject of recent research due to their superior mechanical properties, including strength, hardness, and corrosion resistance compared to regular films of the same material. Examples of nanostructured films include those produced by grain boundary engineering, such as nano-twinned ultra-fine grain copper, or dual phase nanostructuring, such as crystalline metal and amorphous metallic glass nanocomposites. Synthesis and characterization Nanostructured films are commonly created using magnetron sputtering from an appropriate target material. Films can be elemental in nature, formed by sputtering from a pure metal target such as copper, or composed of compound materials. Varying parameters such as the sputtering rate, substrate temperature, and sputtering interrupts allow the creation of films with a variety of different nanostructured elements. Control over nano-twinning, tailoring of specific types of grain boundaries, and restricting the movement and propagation of dislocations have been demonstrated using films produced via magnetron sputtering. Methods used to characterize nanostructured films include transmission electron microscopy, scanning electron microscopy, electron backscatter diffraction, focused ion beam milling, and nanoindentation. These techniques are used as they allow imaging of nanoscale structures, including dislocations, twinning, grain boundaries, film morphology, and atomic structure. Material properties Nanostructured films are of interest due to their superior mechanical and physical properties compared to their normal equivalent. Elemental nanostructured films composed of pure copper were found to possess good thermal stability due to the nano-twinned film possessing a higher fraction of grain boundaries. In addition to possessing higher thermal stability, copper films that were highly nano-twinned were found to have a better corrosion resistance than copper films with a low concentration of nano-twins. Control of the fraction of grains in a material with nano-twins present has great potential for less expensive alloys and coatings with a good degree of corrosion resistance. Compound nanostructured films composed of crystalline MgCu2 cores encapsulated by amorphous glassy shells of the same material were shown to possess a near-ideal mechanical strength. The crystalline MgCu2 cores, typically less than 10 nm in size, were found to substantially strengthen the material by restricting the movement of dislocations and grains. The cores were also found to contribute to overall material strength by restricting the movement of shear bands in the material. This nanostructured film differs from both crystalline metals and amorphous metallic glasses, both of which exhibit behaviors such as the reverse Hall-Petch and shear-band softening effects that prevent them from reaching ideal strength values. 
Applications Nanostructured films with superior mechanical properties allow previously unusable materials to be used in new applications, enabling advances in fields where coatings are heavily utilized, such as aerospace, energy, and other engineering fields. Production scalability of nanostructured films has already been demonstrated, and the ubiquity of sputtering techniques in industry is predicted to facilitate the incorporation of nanostructured films into existing applications. See also Nanomaterials Nanostructure Sputter deposition References Nanomaterials
Nanostructured film
[ "Materials_science" ]
760
[ "Nanotechnology", "Nanomaterials" ]
54,195,933
https://en.wikipedia.org/wiki/Static%20fatigue
Static fatigue, sometimes referred to as delayed fracture, describes the progressive cracking and eventual failure of materials under a constant, sustained stress. (It is different from fatigue, which refers to the deformation and eventual failure of materials subjected to cyclical stresses.) With static fatigue, materials experience damage or failure at stress levels that are lower than their normal ultimate tensile strengths. The exact details vary with the material type and environmental factors, such as the presence of moisture and the temperature. This phenomenon is closely related to stress corrosion cracking. Typical occurrence Stress corrosion cracking Stress corrosion cracking (SCC) happens when a stressed material is in a corrosive (chemically destructive) environment. One example of SCC embrittlement is when moisture increases static fatigue degradation of glass. SCC is also seen in hydrogen embrittlement and embrittlement of some polymers. Plastic deformation (plastic flow) Plastic deformation happens when stresses flatten, bend, or twist a material until it cannot return to its original shape. This can create cracks in the material and decrease its lifetime. Testing Static fatigue tests can be used to determine the lifespan of a material under different loads and environmental conditions. However, accurately assessing a material's true static fatigue life presents challenges, as these tests often require an extended duration and there is significant variability in the results. Examples of static fatigue and stresses on materials Plastic pipes carrying water or other fluids experience hydrodynamic forces that can result in fatigue. The pipes reach failure sooner as temperatures and exposure to aggressive substances increase. References Continuum mechanics Corrosion Deformation (mechanics) Fracture mechanics Materials degradation
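As a hedged illustration of how static-fatigue (delayed-fracture) test results are often condensed, the sketch below assumes an empirical power-law relation between applied stress and time to failure; the functional form and all constants are assumptions made for illustration, not values taken from the article.

```python
# Hypothetical power-law fit t_f = A * sigma**(-n) to static-fatigue test data.
A = 3.0e31   # illustrative prefactor [s * MPa^n] (assumed)
n = 15.0     # illustrative static-fatigue (slow crack growth) exponent (assumed)

def time_to_failure(sigma_mpa):
    """Predicted lifetime [s] under a constant applied stress [MPa]."""
    return A * sigma_mpa ** (-n)

for sigma in (40.0, 50.0, 60.0):
    print(f"sigma = {sigma:4.0f} MPa -> t_f ~ {time_to_failure(sigma):.2e} s")
```

With such a steep exponent a modest increase in sustained stress shortens the predicted lifetime by orders of magnitude, which is one reason long-duration tests show large scatter.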
Static fatigue
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
319
[ "Structural engineering", "Fracture mechanics", "Continuum mechanics", "Deformation (mechanics)", "Metallurgy", "Classical mechanics", "Corrosion", "Materials science", "Electrochemistry", "Materials degradation" ]
60,573,633
https://en.wikipedia.org/wiki/Stem%20Cells%20and%20Development
Stem Cells and Development is a biweekly peer-reviewed scientific journal covering cell biology, with a specific focus on biomedical applications of stem cells. It was established in 1992 as the Journal of Hematotherapy, and was renamed the Journal of Hematotherapy & Stem Cell Research in 1999. The journal obtained its current name in 2004. It is published by Mary Ann Liebert, Inc. and the editor-in-chief is Graham C. Parker (Wayne State University School of Medicine). According to the Journal Citation Reports, the journal has a 2018 impact factor of 3.147. References External links Academic journals established in 1992 Biweekly journals Stem cell research Molecular and cellular biology journals Mary Ann Liebert academic journals English-language journals
Stem Cells and Development
[ "Chemistry", "Biology" ]
150
[ "Stem cell research", "Translational medicine", "Molecular and cellular biology journals", "Tissue engineering", "Molecular biology" ]
60,574,037
https://en.wikipedia.org/wiki/Aligner%20%28semiconductor%29
An aligner, or mask aligner, is a system that produces integrated circuits (IC) using the photolithography process. It holds the photomask over the silicon wafer while a bright light is shone through the mask and onto the photoresist. The "alignment" refers to the ability to place the mask over precisely the same location repeatedly as the chip goes through multiple rounds of lithography. Aligners were a major part of IC manufacture from the 1960s into the late 1970s, when they began to be replaced by the stepper. Currently, mask aligners are still used in academia and research, as projects often involve devices made using photolithography in smaller batches. In a mask aligner, there is a one-to-one correspondence between the mask pattern and the wafer pattern. The mask covers the entire surface of the wafer which is exposed in its entirety in one shot. This was the standard for the 1:1 mask aligners that were succeeded by steppers and scanners with reduction optics. There are several distinct generations of aligner technology. The early contact aligners placed the mask in direct contact with the top surface of the wafer, which often damaged the pattern when the mask was lifted off again. Used only briefly, proximity aligners held the mask slightly above the surface to avoid this problem, but were difficult to work with and required considerable manual adjustment. Finally, the Micralign projection aligner, introduced by Perkin-Elmer in 1973, held the mask entirely separate from the chip and made the adjustment of the image much simpler. Through these stages of development, yields improved from perhaps 10% to about 70%, leading to a corresponding reduction in chip prices. Components A typical mask aligner consists of the following parts: Microscope, used to see the position of the wafer substrate and mask, and their relative alignment Wafer holder (or chuck), used to immobilize the wafer, often using a vacuum line below. Mask holder, placing the mask immediately above the wafer. Relative position between the wafer and the mask can be adjusted using a typical microscope stage mechanism. UV light source, which illuminates through the photomask to project its shadow onto the wafer below. Comparison with stepper The projection aligner is similar to the wafer stepper in concept, but with one key difference. The aligner uses a mask that holds the pattern for the entire wafer, which may require large masks. The stepper uses a smaller mask on the wafer repeatedly, and steps across the surface to repeat the pattern of the chip layer. This reduces mask costs dramatically and allows a single wafer to be used for different integrated circuit layouts or mask designs in a single run. More importantly, by focussing the light source onto a single area of the wafer, the stepper can produce much higher resolutions, thus allowing for smaller features on chips (minimum feature size). The disadvantage to the stepper is that each chip on the wafer has to be individually imaged, and thus the process of exposing the wafer as a whole is much slower. References External links Lithography (microfabrication)
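To make the resolution comparison concrete, the sketch below uses two standard rules of thumb that are not stated in the article: a proximity aligner resolves features of roughly sqrt(lambda * gap), while a projection stepper follows the Rayleigh criterion k1 * lambda / NA; all numbers are illustrative.

```python
# Back-of-the-envelope comparison of printable feature sizes.
import math

wavelength = 0.365   # i-line mercury lamp [um] (assumed source)

# Proximity aligner: resolution scales roughly with sqrt(wavelength * gap).
gap = 20.0           # mask-to-wafer gap [um] (assumed)
proximity_feature = math.sqrt(wavelength * gap)

# Projection system (stepper): Rayleigh criterion CD = k1 * lambda / NA.
k1, NA = 0.8, 0.35   # illustrative process factor and numerical aperture
stepper_feature = k1 * wavelength / NA

print(f"proximity aligner ~ {proximity_feature:.1f} um minimum feature")
print(f"reduction stepper ~ {stepper_feature:.2f} um minimum feature")
```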
Aligner (semiconductor)
[ "Materials_science", "Technology", "Engineering" ]
652
[ "Computer engineering", "Microtechnology", "Computer engineering stubs", "Computing stubs", "Nanotechnology", "Lithography (microfabrication)" ]
60,574,741
https://en.wikipedia.org/wiki/Striation%20%28fatigue%29
Striations are marks produced on the fracture surface that show the incremental growth of a fatigue crack. A striation marks the position of the crack tip at the time it was made. The term striation generally refers to ductile striations which are rounded bands on the fracture surface separated by depressions or fissures and can have the same appearance on both sides of the mating surfaces of the fatigue crack. Although some research has suggested that many loading cycles are required to form a single striation, it is now generally thought that each striation is the result of a single loading cycle. The presence of striations is used in failure analysis as an indication that a fatigue crack has been growing. Striations are generally not seen when a crack is small even though it is growing by fatigue, but will begin to appear as the crack becomes larger. Not all periodic marks on the fracture surface are striations. The size of a striation for a particular material is typically related to the magnitude of the loading characterised by stress intensity factor range, the mean stress and the environment. The width of a striation is indicative of the overall crack growth rate but can be locally faster or slower on the fracture surface. Striation features The study of the fracture surface is known as fractography. Images of the crack can be used to reveal features and understand the mechanisms of crack growth. While striations are fairly straight, they tend to curve at the ends allowing the direction of crack growth to be determined from an image. Striations generally form at different levels in metals and are separated by a tear band between them. Tear bands are approximately parallel to the direction of crack growth and produce what is known as a river pattern, so called, because it looks like the diverging pattern seen with river flows. The source of the river pattern converges to a single point that is typically the origin of the fatigue failure. Striations can appear on both sides of the mating fracture surface. There is some dispute as to whether striations produced on both sides of the fracture surface match peak-to-peak or peak-to-valley. The shape of striations may also be different on each side of the fracture surface. Striations do not occur uniformly over all of the fracture surface and many areas of a fatigue crack may be devoid of striations. Striations are most often observed in metals but also occur in plastics such as Poly(methyl_methacrylate). Small striations can be seen with the aid of a scanning electron microscope. Once the size of a striation is over 500 nm (resolving wavelength of light), they can be seen with an optical microscope. The first image of striations was taken by Zapffe and Worden in 1951 using an optical microscope. The width of a striation indicates the local rate of crack growth and is typical of the overall rate of growth over the fracture surface. The rate of growth can be predicted with a crack growth equation such as the Paris-Erdogan equation. Defects such as inclusions and grain boundaries may locally slow down the rate of growth. Variable amplitude loads produce striations of different widths and the study of these striation patterns has been used to understand fatigue. Although various cycle counting methods can be used to extract the equivalent constant amplitude cycles from a variable amplitude sequence, the striation pattern differs from the cycles extracted using the rainflow counting method. 
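As a hedged illustration of the link between striation width and crack growth rate, the sketch below evaluates the Paris–Erdogan law, da/dN = C * (delta_K)**m, on the assumption that one striation forms per loading cycle; the coefficient and exponent are invented for illustration.

```python
# Estimating striation spacing from the Paris-Erdogan crack growth law.
C = 1.0e-11   # Paris coefficient [m/cycle per (MPa*sqrt(m))**m] (assumed)
m = 3.0       # Paris exponent (assumed)

def striation_spacing(delta_K):
    """Crack growth per cycle (~ striation width) [m] for a stress intensity
    factor range delta_K [MPa*sqrt(m)]."""
    return C * delta_K ** m

for dK in (5.0, 10.0, 20.0):
    print(f"delta_K = {dK:4.1f} MPa*sqrt(m) -> spacing ~ {striation_spacing(dK) * 1e9:.1f} nm")
```

The smallest spacings in this range fall below the resolving wavelength of light, consistent with the need for electron microscopy to observe fine striations.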
The height of a striation has been related to the stress ratio of the applied loading cycle, where and is thus a function of the minimum and maximum stress intensity of the applied loading cycle. The striation profile depends on the degree of loading and unloading in each cycle. The unloading part of the cycle causing plastic deformation on the surface of the striation. Crack extension only occurs from the rising part of the load cycle. Striation-like features Other periodic marks on the fracture surface can be mistaken for striations. Marker bands Variable amplitude loading causes cracks to change the plane of growth and this effect can be used to create marker bands on the fracture surface. When a number of constant amplitude cycles are applied they may produce a plateau of growth on the fracture surface. Marker bands (also known as progression marks or beach marks) may be produced and readily identified on the fracture surface even though the magnitude of the loads may too small to produce individual striations. In addition, marker bands may also be produced by large loads (also known as overloads) producing a region of fast fracture on the crack surface. Fast fracture can produce a region of rapid extension before blunting of the crack tip stops the growth and further growth occurs during fatigue. Fast fracture occurs through a process of microvoid coalescence where failures initiate around inter-metallic particles. The F111 aircraft was subjected to periodic proof testing to ensure any cracks present were smaller than a certain critical size. These loads left marks on the fracture surface that could be identified, allowing the rate of intermediate growth occurring in service to be measured. Marks also occur from a change in the environment where oil or corrosive environments can deposit or from excessive heat exposure and colour the fracture surface up to the current position of the crack tip. Marker bands may be used to measure the instantaneous rate of growth of the applied loading cycles. By applying a repeated sequence separated by loads that produce a distinctive pattern the growth from each segment of loading can be measured using a microscope in a technique called quantitative fractography, the rate of growth for loading segments of constant amplitude or variable amplitude loading can be directly measured from the fracture surface. Tyre tracks Tyre tracks are the marks on the fracture surface produced by something making an impression onto the surface from the repeated opening and closing of the crack faces. This can be produced by either a particle that becomes trapped between the crack faces or the faces themselves shifting and directly contacting the opposite surface. Coarse striations Coarse striations are a general rumpling of the fracture surface and do not correspond to a single loading cycle and are therefore not considered to be true striations. They are produced instead of regular striations when there is insufficient atmospheric moisture to form hydrogen on the surface of the crack tip in aluminium alloys, thereby preventing the slip planes activation. The wrinkles in the surface cross over and so do not represent the position of the crack tip. Striation formation in aluminium Environmental influence Striations are often produced in high strength aluminium alloys. In these alloys, the presence of water vapour is necessary to produce ductile striations, although too much water vapour will produce brittle striations also known as cleavage striations. 
Brittle striations are flatter and larger than ductile striations produced with the same load. There is sufficient water vapour present in the atmosphere to generate ductile striations. Cracks growing internally are isolated from the atmosphere and grow in a vacuum. When water vapour deposits onto the freshly exposed aluminium fracture surface, it dissociates into hydroxides and atomic hydrogen. Hydrogen interacts with the crack tip affecting the appearance and size of the striations. The growth rate increases typically by an order of magnitude, with the presence of water vapour. The mechanism is thought to be hydrogen embrittlement as a result of hydrogen being absorbed into the plastic zone at the crack tip. When an internal crack breaks through to the surface, the rate of crack growth and the fracture surface appearance will change due to the presence of water vapour. Coarse striations occur when a fatigue crack grows in a vacuum such as when growing from an internal flaw. Cracking plane In aluminium (a face-centred cubic material), cracks grow close to low index planes such as the {100} and the {110} planes (see Miller Index). Both of these planes bisect a pair of slip planes. Crack growth involving a single slip plane is term Stage I growth and crack growth involving two slip planes is termed Stage II growth. Striations are typically only observed in Stage II growth. Brittle striations are typically formed on {100} planes. Models of striation formation There have been many models developed to explain the process of how a striation is formed and their resultant shape. Some of the significant models are: Plastic blunting model of Laird Saw-tooth model of McMillan and Pelloux Coarse slip model of Neumman Shear band model by Zhang References External links Characteristics of a fatigue failure in metals Materials science Reliability engineering Fracture mechanics Materials degradation Mechanical failure modes
Striation (fatigue)
[ "Physics", "Materials_science", "Technology", "Engineering" ]
1,729
[ "Structural engineering", "Systems engineering", "Mechanical failure modes", "Applied and interdisciplinary physics", "Fracture mechanics", "Reliability engineering", "Technological failures", "Materials science", "nan", "Materials degradation", "Mechanical failure" ]
52,846,723
https://en.wikipedia.org/wiki/Evolution%20of%20molecular%20chaperones
Chaperones, also called molecular chaperones, are proteins that assist other proteins in assuming their three-dimensional fold, which is necessary for protein function. However, the fold of a protein is sensitive to environmental conditions, such as temperature and pH, and thus chaperones are needed to keep proteins in their functional fold across various environmental conditions. Chaperones are an integral part of a cell's protein quality control network by assisting in protein folding and are ubiquitous across diverse biological taxa. Since protein folding, and therefore protein function, is susceptible to environmental conditions, chaperones could represent an important cellular aspect of biodiversity and environmental tolerance by organisms living in hazardous conditions. Chaperones also affect the evolution of proteins in general, as many proteins fundamentally require chaperones to fold or are naturally prone to misfolding, and therefore mitigates protein aggregation. Evolution of chaperones The evolutionary development of chaperones is highly linked to the evolution of proteins in general, as their primary function is dependent on the presence of proteins. Proteins were selected as the main biological catalysts over ribozymes, RNA molecules capable of catalyzing biological reactions, early in cellular evolution. Diversity of monomers (4 nucleotides versus 20 amino acids), interactions during folding, and consequences of changes in sequence are some of the hypotheses that attempt to explain why proteins were selected over ribozymes. Small proteins fold spontaneously, but the development of increasingly larger proteins, which have more complex folding patterns and intramolecular interactions, would have required chaperones to prevent protein aggregation due to misfolding. Folding of early proteins would have been error-prone in ancient cell cytosol and chaperones would have been needed to assist in unfolding and re-folding. Heat shock proteins Heat shock proteins (HSPs) are a diverse class of molecular chaperones that assist in folding under stress. While originally identified in heat stress response (hence the name “heat shock”), inducible HSP expression is a consequence of all known stressors (pH, osmotic, temperature, energy depletion, ion concentration, etc.). Genetic stress, a result of deleterious mutations, also increases HSP expression. HSPs are ubiquitous across all domains of life (Bacteria, Archaea, and Eukarya) and have been found in every species for which they have been tested. HSPs are divided into families, based on sequence homology and molecular weight (hsp110, hsp100, hsp90, hsp70, hsp60, hsp40, hsp10, and small hsp families). Proteins are highly susceptible to denaturation due to environmental conditions and organisms that live in hazardous conditions should have a basal level of HSP expression. However, other adaptations, such as colonizing less hazardous microhabitats or other behavioral adaptations, could also contribute to acclimation in stressful habitats. Additionally, “normal” environments can also place stress on inhabitants (drought or seasonal changes, for example). These factors muddy the relationship between HSP expression and environmental stress resistance and HSP expression in nature is not well characterized. Elevated expression of heat shock proteins is not correlated with chronic environmental stress and is thought to be due to the costs of HSP expression. 
High levels of hsp70 are known to accompany deficits in cell division, reproduction, and reproductive success. Intracellularly, HSP expression shuts down normal cell functions and diverts a large amount of energy for stress resistance. Additionally, high levels of HSP is hypothesized to be toxic due to disruption of cell functions, possibly by excessive binding of client proteins. These results suggest that the costs of HSP expression are more suited to temporary stressors. Chaperone buffering Chaperones have also been implicated in the understanding the relationship between genotype and phenotype. Protein folding in itself transitions from genotype to phenotype: primary structure/amino acid sequence reflects genotype while the final, functional fold, either tertiary or quaternary structure, represents phenotype. Since chaperones are mediators of this transition by assisting in the fold of the client protein, chaperone activity is thought to modulate the adaptive evolution of the proteome. One observation in line with this hypothesis is chaperone buffering, where the activity of a chaperone masks or “buffers” deleterious or destabilizing mutations in a client protein. In Drosophila melanogaster, reduced activity of hsp90 resulted in deficient phenotypes caused by mutations in developmental pathways. Hsp70 in Drosophila was also shown to buffer deleterious mutations. Similar results have been shown in Saccharomyces cerevisiae and Arabidopsis thaliana. Work in Escherichia coli showed that the GroES/GroEL system (aka hsp10 and hsp60 respectively) similarly buffered the effect of destabilizing mutations in a phosphotriesterase. The mutation disrupted the fold of the protein, but conferred an increase in efficiency upon chaperone-assisted folding. These results illustrate a model in which evolution can act on the phenotype of a protein while the deleterious effect of the genotype is mitigated by chaperones. Chaperones and the endosymbiosis theory Chaperones are ancient proteins that have been evolutionarily conserved across all domains of life and are ubiquitous across all biological taxa. Since they are so widespread and ancient, they can be used as molecular markers in studies of ancient cellular evolution. Phylogenetic analysis using two families of HSPs (hsp10 and hsp60, also called chaperonins) support the current endosymbiosis model of the origin of mitochondria and chloroplasts. Hsp10 and hsp60 are present in all eubacteria and organelles of eukaryotes (mitochondria and chloroplasts), but not in eukaryotic cell cytosol and archaebacteria. Phylogenetic trees were generated using 56 total amino acid sequences from Gram positive and Gram negative bacteria; mitochondria from plants, animals, fungi, and protists; cyanobacteria; and chloroplasts. Any two hsp60 amino acid sequences share at least 40% similarity, with 18-20% of differences coming from conservative changes (uncharged amino acid to another uncharged amino acid). Any two hsp10 amino acid sequences share at least 30% similarity, with 15-20% conservative changes. Phylogenetic analysis using hsp10 and hsp60 yield similar results to that of rRNA and other genes. Mitochondria were found to be most closely related to the α-purple subdivision of Gram negative bacteria and chloroplasts were most similar to cyanobacteria, similar to other data supporting the endosymbiosis theory. Gram positive bacteria were found to be the most ancestral, which is also supported by other studies. 
References Evolutionary biology concepts Homeostasis Molecular evolution Protein biosynthesis Protein folding Molecular chaperones, evolution Proteomics
Evolution of molecular chaperones
[ "Chemistry", "Biology" ]
1,485
[ "Evolutionary processes", "Protein biosynthesis", "Molecular evolution", "Gene expression", "Evolutionary biology concepts", "Biosynthesis", "Molecular biology", "Homeostasis" ]
52,847,079
https://en.wikipedia.org/wiki/Synchronverter
Synchronverters or virtual synchronous generators are inverters which mimic synchronous generators (SG) to provide "synthetic inertia" for ancillary services in electric power systems. Inertia is a property of standard synchronous generators associated with the rotating physical mass of the system spinning at a frequency proportional to the electricity being generated. Inertia has implications towards grid stability as work is required to alter the kinetic energy of the spinning physical mass and therefore opposes changes in grid frequency. Inverter-based generation inherently lacks this property as the waveform is being created artificially via power electronics. Background Standard inverters are very low inertia elements. During transient periods, which are mostly because of faults or sudden changes in load, they follow changes rapidly and may cause a worse condition, but synchronous generators have a notable inertia that can maintain their stability. The grid is designed to operate at a specific frequency. When electric power supply and demand is perfectly balanced the grid frequency will remain at its nominal frequency. However, any imbalance in supply and demand will lead to a deviation from this nominal frequency. It is standard for electricity generation and demand to not be perfectly balanced, but the imbalance is tightly controlled such that the grid frequency remains within a small band of ±0.05Hz. A synchronous generator’s rotating mass acts as a bank of kinetic energy for the grid to counteract changes in frequency – it can either provide or absorb power from the grid – caused by an imbalance of electric power supply and demand – in the form of kinetic energy by speeding up or slowing down. The change in kinetic energy is proportional to the change in frequency. Because it takes work to speed up or slow down rotating mass, this inertia dampens the effects of active power imbalances and therefore stabilizes the frequency. Because inverter-based generation inherently lacks inertia, increasing penetration of inverter-based renewable energy generation could endanger power system reliability. Further, the variability of renewable energy sources (RES), primarily concerning photovoltaics (PV) and wind power, could amplify this issue by creating more frequent transient periods of power imbalance. Theoretically, inverter-based generation could be controlled to respond to frequency imbalances by altering its electric torque (active power output). Synthetic inertia is defined as the “controlled contribution of electrical torque from a unit that is proportional to the rate of change of frequency (RoCoF) at the terminals of the unit.” However, in order to have capacity to react to this RoCoF, the participating generators would be required to operate at levels below their maximum output, so that a portion of their output is reserved for this particular response. Further, the inherent variability of production limits the generators' capacity to provide synthetic inertia. This requirement for a reliable and fast-acting power supply makes inverter-based energy storage a better candidate for providing synthetic inertia. History Hydro-Québec began requiring synthetic inertia in 2005 as the first grid operator. To counter frequency drop, the grid operator demands a temporary 6% power boost by combining the power electronics with the rotational inertia of a wind turbine rotor. Similar requirements came into effect in Europe in 2016, and Australia in 2020. 
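A minimal sketch of the synthetic-inertia idea defined above, in which an inverter's active power is adjusted in proportion to the measured rate of change of frequency (RoCoF); the gain and the example RoCoF value are assumptions made for illustration.

```python
# RoCoF-proportional active-power correction (synthetic inertia).
def synthetic_inertia_power(rocof_hz_per_s, k_inertia=2.0e6):
    """Active-power correction [W] proportional to RoCoF [Hz/s].
    A falling frequency (negative RoCoF) calls for extra injected power."""
    return -k_inertia * rocof_hz_per_s

# Example: frequency dropping at 0.2 Hz/s after a loss of generation.
print(f"power boost: {synthetic_inertia_power(-0.2) / 1e3:.0f} kW")
```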
Synchronverter model A synchronverter's structure can be divided into two parts: a power part (see figure 2) and an electronic part. The power part is the energy transformation and transfer path, including the bridge, filter circuit, power line, etc. The electronic part refers to the measurement and control units, including sensors and a digital signal processor (DSP). The important point in modeling a synchronverter is to ensure that it has dynamic behavior similar to a synchronous generator (see figure 3). Such models range from 2nd-order up to 7th-order, depending on their complexity; however, the 3rd-order model is widely used because it offers a proper compromise between accuracy and complexity. where and are the dq-axes components of the terminal voltage. As long as the synchronverter terminal voltage and current satisfy these equations, the synchronverter can be regarded as a synchronous generator. This makes it possible to replace it by a synchronous generator model and solve the problems easily. Control strategy As shown in figure 3, when the inverter is controlled as a voltage source, it consists of a synchronization unit to synchronize with the grid and a power loop to regulate the real and reactive power exchanged with the grid. The synchronization unit often needs to provide frequency and amplitude. But when the inverter is controlled as a current source, the synchronization unit is often required to provide only the phase of the grid, so it is much easier to control it as a current source. Since a synchronous generator is inherently synchronized with the grid, it is possible to integrate the synchronization function into the power controller without a dedicated synchronization unit. This results in a compact control unit, as shown in figure 4. Applications PV As mentioned before, synchronverters can be treated like synchronous generators, which makes the source easier to control, so they are well suited to PV primary energy sources (PES). HVDC Wind turbine DC microgrid Synchronverters are also suggested for use in DC microgrids because DC sources can be coordinated together through the frequency of the AC voltage, without any communication network. Battery reserve As demonstrated by the Hornsdale Power Reserve in Australia. See also Intelligent hybrid inverter References Power electronics Electric power systems components Inverters
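The sketch below shows only the virtual swing-equation part that gives a synchronverter its synchronous-generator-like dynamics; the inertia, damping and torque values are assumed for illustration, and the voltage-synthesis and reactive-power loops of a full synchronverter controller are omitted.

```python
# Idealized virtual swing equation: J * d(omega)/dt = T_m - T_e - D_p*(omega - omega_ref).
import math

J, D_p = 0.2, 10.0            # virtual inertia and damping (assumed values)
omega_ref = 2 * math.pi * 50  # nominal angular frequency [rad/s]
dt = 1e-4                     # integration step [s]

omega, theta = omega_ref, 0.0
T_m = 100.0                   # virtual mechanical torque set by the power reference (assumed)

for step in range(5000):      # 0.5 s of simulated time
    T_e = 95.0                # electrical torque fed back from measured currents (held constant here)
    domega = (T_m - T_e - D_p * (omega - omega_ref)) / J
    omega += domega * dt      # virtual rotor speed
    theta += omega * dt       # virtual rotor angle used to synthesize the inverter voltage

print(f"steady-state frequency offset: {(omega - omega_ref) / (2 * math.pi):.3f} Hz")
```

The torque imbalance settles into a small frequency offset set by the damping term, mimicking the droop-like behavior of a real synchronous generator.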
Synchronverter
[ "Engineering" ]
1,171
[ "Electronic engineering", "Power electronics" ]
52,848,120
https://en.wikipedia.org/wiki/Clebsch%20representation
In physics and mathematics, the Clebsch representation of an arbitrary three-dimensional vector field $\boldsymbol{u}(\boldsymbol{x})$ is: $\boldsymbol{u} = \boldsymbol{\nabla}\varphi + \psi\,\boldsymbol{\nabla}\chi,$ where the scalar fields $\varphi(\boldsymbol{x})$, $\psi(\boldsymbol{x})$ and $\chi(\boldsymbol{x})$ are known as Clebsch potentials or Monge potentials, named after Alfred Clebsch (1833–1872) and Gaspard Monge (1746–1818), and $\boldsymbol{\nabla}$ is the gradient operator. Background In fluid dynamics and plasma physics, the Clebsch representation provides a means to overcome the difficulties of describing an inviscid flow with non-zero vorticity – in the Eulerian reference frame – using Lagrangian mechanics and Hamiltonian mechanics. At the critical point of the associated functionals the result is the Euler equations, a set of equations describing the fluid flow. Note that the mentioned difficulties do not arise when describing the flow through a variational principle in the Lagrangian reference frame. In the case of surface gravity waves, the Clebsch representation leads to a rotational-flow form of Luke's variational principle. For the Clebsch representation to be possible, the vector field $\boldsymbol{u}$ has (locally) to be bounded, continuous and sufficiently smooth. For global applicability $\boldsymbol{u}$ has to decay fast enough towards infinity. The Clebsch decomposition is not unique, and (two) additional constraints are necessary to uniquely define the Clebsch potentials. Since the term $\psi\,\boldsymbol{\nabla}\chi$ is in general not solenoidal, the Clebsch representation does not in general coincide with the Helmholtz decomposition. Vorticity The vorticity $\boldsymbol{\omega}$ is equal to $\boldsymbol{\omega} = \boldsymbol{\nabla}\times\boldsymbol{u} = \boldsymbol{\nabla}\times\left(\psi\,\boldsymbol{\nabla}\chi\right) = \boldsymbol{\nabla}\psi\times\boldsymbol{\nabla}\chi,$ with the last step due to the vector calculus identity $\boldsymbol{\nabla}\times\left(\psi\,\boldsymbol{\nabla}\chi\right) = \boldsymbol{\nabla}\psi\times\boldsymbol{\nabla}\chi + \psi\,\boldsymbol{\nabla}\times\boldsymbol{\nabla}\chi$ together with $\boldsymbol{\nabla}\times\boldsymbol{\nabla}\chi = \boldsymbol{0}.$ So the vorticity $\boldsymbol{\omega}$ is perpendicular to both $\boldsymbol{\nabla}\psi$ and $\boldsymbol{\nabla}\chi,$ while further the vorticity does not depend on $\varphi.$ Notes References Vector calculus Fluid dynamics Plasma theory and modeling
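As a quick consistency check of the vorticity expression, the sketch below uses SymPy's vector module to verify that the curl of a Clebsch-form field reduces to the cross product of the gradients of the two potentials; the particular potentials are arbitrary polynomial choices made for illustration.

```python
# Symbolic check that curl(grad(phi) + psi*grad(chi)) equals grad(psi) x grad(chi).
from sympy.vector import CoordSys3D, gradient, curl

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

# Arbitrary (hypothetical) Clebsch potentials chosen only for demonstration.
phi = x**2 * y
psi = y * z
chi = x + z**2

u = gradient(phi) + psi * gradient(chi)        # Clebsch form of the field
omega = curl(u)                                # vorticity of the field
identity_rhs = gradient(psi).cross(gradient(chi))

print(omega - identity_rhs)                    # expected: 0 (the zero vector)
```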
Clebsch representation
[ "Physics", "Chemistry", "Engineering" ]
344
[ "Plasma physics", "Chemical engineering", "Plasma theory and modeling", "Piping", "Fluid dynamics" ]
42,691,680
https://en.wikipedia.org/wiki/CeCoIn5
{{DISPLAYTITLE:CeCoIn5}} CeCoIn5 ("Cerium-Cobalt-Indium 5") is a heavy-fermion superconductor with a layered crystal structure, with somewhat two-dimensional electronic transport properties. The critical temperature of 2.3 K is the highest among all of the Ce-based heavy-fermion superconductors. Material system CeCoIn5 is a member of a rich family of heavy-fermion compounds. CeIn3 is heavy-fermion metal with cubic crystal structure that orders antiferromagnetically below 10K. With applying external pressure, antiferromagnetism in CeIn3 is continuously suppressed, and a superconducting dome emerges in the phase diagram near the antiferromagnetic quantum critical point. CeCoIn5 has a tetragonal crystal structure, and the unit cell of CeCoIn5 can be considered as 'CeIn3 with an additional CoIn2 layer per unit cell'. Closely related to CeCoIn5 is the heavy-fermion material CeRhIn5, which has the same crystal structure and which orders antiferromagnetically below 4K, but does not become superconducting at ambient pressure. At high pressure CeRhIn5 becomes superconducting with a maximum Tc slightly above 2 K at a pressure around 2 GPa, and at the same pressure the Fermi surface of CeRhIn5 changes suggesting so-called local quantum criticality. Also the compound PuCoGa5, which is a superconductor with Tc approximately 18.5 K and which can be considered an intermediate between heavy-fermion and cuprate superconductors, has the same crystal structure. Growth of single-crystalline CeCoIn5 has been very successful soon after the discovery of the material, and large single crystals of CeCoIn5, such as required for inelastic neutron scattering, have been prepared. (In contrast to some other heavy-fermion compounds where single-crystal growth is more challenging.) Superconducting properties The upper critical magnetic field Hc2 of the superconducting state of CeCoIn5 is anisotropic, in accordance with the crystal structure and other physical properties. For magnetic fields applied along the [100] direction, Hc2 amounts to approximately 11.6 T, and Hc2 for fields along the [001] directions to 4.95 T. The superconducting order parameter has d-wave symmetry, as established by several experiments, such as scanning tunneling microscopy (STM) and spectroscopy (STS). Detailed studies close to the critical field have been performed on CeCoIn5, and indications were found that certain regimes in the phase diagram of this material should be interpreted in terms of the Fulde–Ferrell–Larkin–Ovchinnikov (FFLO) phase. Subsequently, the neutron-diffraction experiments showed that this regime features a more complex phase that also exhibits incommensurate antiferromagnetic order, a so-called 'Q phase'. Evidence for a delocalization quantum phase transition without symmetry breaking is presented. References Superconductors Correlated electrons Cerium compounds Cobalt compounds Indium compounds Intermetallics
CeCoIn5
[ "Physics", "Chemistry", "Materials_science" ]
674
[ "Inorganic compounds", "Metallurgy", "Superconductivity", "Correlated electrons", "Alloys", "Intermetallics", "Condensed matter physics", "Superconductors" ]
42,693,900
https://en.wikipedia.org/wiki/Abraham%20Neyman
Abraham Neyman (born June 14, 1949, Israel) is an Israeli mathematician and game theorist, Professor of Mathematics at the Federmann Center for the Study of Rationality and the Einstein Institute of Mathematics at the Hebrew University of Jerusalem in Israel. He served as president of the Israeli Chapter of the Game Theory Society (2014–2018). Biography Neyman received his BSc in mathematics in 1970 and his MSc in mathematics in 1972 from the Hebrew University. His MSc thesis was on the subject of “The Range of a Vector Measure” and was supervised by Joram Lindenstrauss. His PhD thesis, "Values of Games with a Continuum of Players," was completed under Robert Aumann in 1977. Neyman has been professor of mathematics at the Hebrew University since 1982, including serving as the chairman of the institute of mathematics 1992–1994, as well as holding a professorship in economics, 1982–1990. He has been a member of the Center for the Study of Rationality at the Hebrew University since its inception in 1991. He held various positions at Stony Brook University of New York, 1985–2001. He has also held positions and has been visiting scholar at Cornell University, University of California at Berkeley, Stanford University, the Graduate School of Business Administration at Harvard University, and Ohio State University. Neyman has had 12 graduate students complete Ph.D. theses under his supervision, five at Stony Brook University and seven at the Hebrew University. Neyman has also served as the Game Theory Area Editor for the journal Mathematics of Operations Research (1987–1993) and on the editorial board for Games and Economic Behavior (1993–2001) and the (2001–2007). Awards and honors Neyman has been a fellow of the Econometric Society since 1989. The Game Theory Society released, in March 2016, a special issue of the in honour of Neyman, "in recognition of his important contributions to game theory". A Festschrift conference in Neyman's honour was held at Hebrew University in June 2015, on the occasion of Neyman's 66th birthday. He gave the inaugural von-Neumann lecture at the 2008 Congress of the Game Theory Society as well as delivering it at the 2012 World Congress on behalf of the recently deceased Jean-Francois Mertens. His Ph.D. thesis won two prizes from the Hebrew University: the 1977 Abraham Urbach prize for distinguished thesis in mathematics and the 1979 Aharon Katzir prize (for the best Ph. D. thesis in the Faculties of Exact Science, Mathematics, Agriculture and Medicine). In addition, Neyman won the Israeli under 20 chess championship in 1966. Research contributions Neyman has made numerous contributions to game theory, including to stochastic games, the Shapley value, and repeated games. Stochastic games Together with Jean-Francois Mertens, he proved the existence of the uniform value of zero-sum undiscounted stochastic games. This work is considered one of the most important works in the theory of stochastic games, solving a problem that had been open for over 20 years. Together with Elon Kohlberg, he applied operator techniques to study convergence properties of the discounted and finite stage values. Recently, he has pioneered a model of stochastic games in continuous time and derived uniform equilibrium existence results. He also co-edited, together with Sylvain Sorin, a comprehensive collection of works in the field of stochastic games. Repeated games Neyman has made many contributions to the theory of repeated games. 
One idea that appears, in different contexts, in some of his papers, is that the model of an infinitely repeated game serves also as a powerful paradigm for a long finitely repeated game. A related insight appears in a 1999 paper, where he showed that in a long finitely repeated game, an exponentially small deviation from common knowledge of the number of repetitions is enough to dramatically alter the equilibrium analysis, producing a folk-theorem-like result. Neyman is one of the pioneers and a most notable leader of the study of repeated games under complexity constraints. In his seminal paper he showed that bounded memory can justify cooperation in a finitely repeated prisoner's dilemma game. His paper was followed by many others who started working on bounded memory games. Most notable was Neyman's M.Sc. student Elchanan Ben-Porath who was the first to shed light on the strategic value of bounded complexity. The two main models of bounded complexity, automaton size and recall capacity, continued to pose intriguing open problems in the following decades. A major breakthrough was achieved when Neyman and his Ph.D. student Daijiro Okada proposed a new approach to these problems, based on information theoretic techniques, introducing the notion of strategic entropy. His students continued to employ Neyman's entropy technique to achieve a better understanding of repeated games under complexity constraints. Neyman's information theoretic approach opened new research areas beyond bounded complexity. A classic example is the communication game he introduced jointly with Olivier Gossner and Penelope Hernandez. The Shapley value Neyman has made numerous fundamental contributions to the theory of the value. In a "remarkable tour-de-force of combinatorial reasoning", he proved the existence of an asymptotic value for weighted majority games. The proof was facilitated by his fundamental contribution to renewal theory. In subsequent work Neyman proved that many of the assumptions made in these works can be relaxed, while showing that others are essential. Neyman proved the diagonality of continuous values, which had many implications on further developments of the theory. Together with Pradeep Dubey and Robert James Weber he studied the theory of semivalues, and separately demonstrated its importance in political economy. Together with Pradeep Dubey he characterized the well-known phenomenon of value correspondence, a fundamental notion in economics, originating already in Edgeworth's work and Adam Smith before him. In loose terms, it essentially states that in a large economy consisting of many economically insignificant agents, the core of the economy coincides with the perfectly competitive outcomes, which in the case of differentiable preferences is a unique element that is the Aumann–Shapley value. Another major contribution of Neyman was the introduction of the Neyman value, a far-reaching generalization of the Aumann–Shapley value to the case of non-differentiable vector measure games. Other Neyman has made contributions to other fields of mathematics, usually motivated by problems in game theory. Among these contributions are a renewal theorem for sampling without replacement (mentioned above as applied to the theory of the value), contributions to embeddings of Lp spaces, contributions to the theory of vector measures, and to the theory of non-expansive mappings. Business involvements Neyman previously served (2005–8) as director at Tradus (previously named QXL). 
He also held a directorship (2004–5) at Gilat Satellite Networks. In 1999, Neyman co-founded Bidorbuy, the first online auction company to operate in India and in South Africa, and serves as the chairman of the board. Since 2013, he has held a directorship at the Israeli bank Bank Mizrahi-Tefahot. References External links Neyman’s homepage Full publication list 1949 births Living people Israeli mathematicians Jewish scientists Israeli economists Game theorists Fellows of the Econometric Society Academic staff of the Hebrew University of Jerusalem Israeli Jews
Abraham Neyman
[ "Mathematics" ]
1,520
[ "Game theorists", "Game theory" ]
42,699,154
https://en.wikipedia.org/wiki/Magnetorheological%20elastomer
Magnetorheological elastomers (MREs) (also called magnetosensitive elastomers) are a class of solids that consist of a polymeric matrix with embedded micro- or nano-sized ferromagnetic particles such as carbonyl iron. As a result of this composite microstructure, the mechanical properties of these materials can be controlled by the application of a magnetic field. Fabrication MREs are typically prepared by a polymer curing process. The polymeric material (e.g. silicone rubber) in its liquid state is mixed with iron powder and several other additives that enhance the mechanical properties. The entire mixture is then cured at high temperature. Curing in the presence of a magnetic field causes the iron particles to arrange into chain-like structures, resulting in an anisotropic material. If no magnetic field is applied, the iron particles remain randomly distributed in the solid, resulting in an isotropic material. More recently (2017), 3D printing has also been used to configure the magnetic particles inside the polymer matrix. Classification MREs can be classified according to several parameters, such as particle type, matrix structure, matrix electrical properties and the distribution of particles: Particles magnetic properties Soft magnetic particles Hard magnetic particles Magnetostrictive particles Magnetic shape-memory particles Matrix structure Solid matrix Porous matrix Matrix electrical properties Isolating matrix Conductive matrix Distribution of particles Isotropic Anisotropic Theoretical studies To understand the magneto-mechanical behaviour of MREs, theoretical studies are needed that couple the theories of electromagnetism and mechanics. Such theories are called theories of magneto-mechanics. Programmable magnetopolymers Magnetopolymers with large remanence are typically formed by combining hard-magnetic particles with a polymer matrix. The orientation of the magnetic particles is typically controlled with an external magnetic field during the polymerization process, and then mechanically fixed after the material is synthesized. Because the Curie temperature of these magnetopolymers exceeds the temperature at which the polymer matrix would break down, they must be degaussed in order to be remagnetized. This means that the functionality of these magnetopolymers is limited and they can only be permanently programmed during manufacturing. Programmable magnetopolymers instead embed athermal ferromagnetic particles in droplets of low-melting-point materials in polymer matrices. Above the droplet melting point, the particles have rotational freedom. The uniqueness of these composites lies in their easily reprogrammable magnetization profiles. This behaviour follows from the fact that the particles (1) are athermal, (2) have Curie temperatures above the droplet melting point, and (3) are fixed in solid droplets while possessing full rotational freedom in molten droplets. This easy reprogramming is a critical characteristic for such materials to be used in a wide range of applications. Applications MREs have been used for vibration isolation applications, since their stiffness changes within a magnetic field. References Further reading See also Conductive elastomer Elastomers Materials science Polymer physics
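The vibration-isolation application can be illustrated numerically. The sketch below is not taken from the literature: the saturation law used for the field-dependent stiffness is a hypothetical placeholder (a real MRE stiffness-field curve has to be measured or derived from a magneto-mechanical model), and the payload mass is arbitrary; only the natural-frequency formula for a single-degree-of-freedom isolator is standard.

# Illustrative sketch: how a field-dependent MRE stiffness could shift the natural
# frequency of a simple vibration isolator. The stiffness law below is a made-up
# placeholder, not a measured MRE characteristic.
import math

def mre_stiffness(B, k0=2.0e5, dk_max=1.0e5, B_sat=0.4):
    """Hypothetical field-dependent stiffness in N/m.
    k0: zero-field stiffness; dk_max: maximum field-induced increase;
    B_sat: flux density (tesla) characterising saturation of the particle chains."""
    return k0 + dk_max * B**2 / (B**2 + B_sat**2)

def natural_frequency(k, m):
    """Undamped natural frequency in Hz of a mass m (kg) on a spring of stiffness k (N/m)."""
    return math.sqrt(k / m) / (2.0 * math.pi)

mass = 50.0  # kg, isolated payload (assumed)
for B in (0.0, 0.2, 0.4, 0.8):  # applied flux density in tesla
    k = mre_stiffness(B)
    print(f"B = {B:.1f} T -> k = {k:.0f} N/m, f_n = {natural_frequency(k, mass):.2f} Hz")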
Magnetorheological elastomer
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
628
[ "Polymer physics", "Applied and interdisciplinary physics", "Synthetic materials", "Materials science", "Elastomers", "Polymer chemistry", "nan" ]
42,699,853
https://en.wikipedia.org/wiki/Rod%20and%20frame%20test
The rod and frame test is a psychophysical method of testing perception. It relies on the use of a rod and frame apparatus which uses a rotating rod set inside an individually rotatable drum, allowing an experimenter to vary the participant's frame of reference and thus test for their perception of vertical. Rod and frame illusion The rod and frame illusion occurs because of the effect of the orientation of the frame on the rod. In the simplest example of the rod and frame illusion, the illusion will cause the participant to perceive the rod to be oriented congruent with the orientation of the frame. When the participant is viewing the rod and frame that are both positioned at 0 degrees (or vertical), they perceive the rod as vertical with perfect accuracy. However, when the frame is tilted away from vertical, the participant's perception of vertical is affected. The participant tends to perceive the rod to be tilted in the same direction as the frame is oriented (e.g., if the frame is tilted in the counterclockwise direction, the rod will also be perceived as being tilted counterclockwise). As the tilt of the frame increases, the participants' perceived vertical increasingly deviates from true vertical. Rod and frame test To perform the rod and frame task, an apparatus consisting of a rod in a square frame is used. An example commercial apparatus can be seen in picture 1. When the participant is being tested using the apparatus, their head is fastened firmly in the chin rest to prevent the participant from collecting visual cues from outside of the apparatus. The rod and frame are shown in the center of the far end of the apparatus, which provides a frame of reference to the participant. Both the participant and the experimenter are able to adjust the orientation of the rod, while only the experimenter can adjust the frame orientation by using the appropriate knobs on the apparatus, as seen in picture 2. The experimenter is able to see the exact degree measurement of the rod and frame from vertical, while the participant sees the physical rod and frame inside the apparatus. The methods of constant stimuli, limits, and adjustment can be used to test the participants, but method of limits is most commonly used in research conducted using the rod and frame task. When using the method of limits, the experimenter sets the orientation of the rod and frame separately and then the participant is asked to adjust the rod orientation until they perceive it to be vertical. Deviation from true vertical can then be determined. Based on which way the frame is tilted, the rod can be viewed as either being tilted in the same direction as the frame (direct effect), or in the opposite direction of the frame (indirect effect). Evidence The frame of reference with respect to studies of the visual system refers to perceived reference axes. In the rod and frame illusion, there are a number of things that can influence one's frame of reference. Past research has found that one reason people experience the rod and frame illusion is due to visual-vestibular interactions. For instance, when a participant is viewing the rod and frame task while physically tilted, the participant acts as though they are tilted opposite of the orientation of the frame. This suggests that the illusion, in part, is due to the person compensating for their perceived vertical in the direction that is opposite of the frame. 
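As a concrete illustration of how settings from the adjustment-based procedure described above can be reduced to a measure of perceived vertical, the following sketch averages a participant's rod settings for each frame tilt. The data and sign conventions are invented for illustration (positive values mean the rod was set tilted in the same direction as the frame).

# Illustrative only: invented rod settings (degrees from true vertical) for three frame tilts.
from statistics import mean, stdev

settings = {
    -18: [-3.1, -2.4, -2.9, -3.5],
      0: [ 0.2, -0.1,  0.3,  0.0],
     18: [ 2.8,  3.3,  2.6,  3.1],
}

for tilt, rods in sorted(settings.items()):
    bias = mean(rods)      # mean signed deviation of the "vertical" settings from true vertical
    spread = stdev(rods)   # variability of the settings
    print(f"frame tilt {tilt:+4d} deg: mean bias {bias:+.2f} deg (sd {spread:.2f})")

# Settings that deviate in the direction of the frame tilt (as in this invented data set)
# correspond to the direct effect described above.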
Other evidence proposed by researchers that is consistent with this is that, when participants are put on their sides to view the rod and frame task, they rely on their vision when their vestibular and proprioceptive senses are incongruent with those of their visual senses. These findings suggest that the rod and frame illusion is processed in a type of hierarchy, where visual input is at the top, then vestibular cues, and finally proprioceptive cues. In 2010, Lipshits found that, along with this hierarchy of processing, proprioceptive information, as opposed to gravity, is used by the body to determine which way is vertical. Lipshits says that, when we are not able to use vision to determine which way is vertical, we use other cues based on the axis of our head and body. See also Visual perception Field dependence References Frames of reference Psychophysics Perception Visual perception
Rod and frame test
[ "Physics", "Mathematics" ]
854
[ "Applied and interdisciplinary physics", "Coordinate systems", "Frames of reference", "Psychophysics", "Classical mechanics", "Theory of relativity" ]
38,487,110
https://en.wikipedia.org/wiki/Reduced%20frequency
Reduced frequency is a dimensionless number used in the study of unsteady aerodynamics and aeroelasticity. It is one of the parameters that define the degree of unsteadiness of the problem. In flutter analysis, the lift history obtained from the Wagner analysis (Herbert A. Wagner) for a motion with varying frequency of oscillation shows that the magnitude of the lift decreases and a phase lag develops between the aircraft motion and the unsteady aerodynamic forces. Reduced frequency can be used to explain this amplitude attenuation and phase lag of the unsteady aerodynamic forces compared to the quasi-steady analysis (which in theory assumes no phase lag). Reduced frequency is denoted by the letter "k" and given by the expression k = ωb / V where: ω = circular frequency of oscillation b = airfoil semi-chord V = flow velocity The semi-chord is used instead of the chord because of its use in the derivation of unsteady lift based on thin airfoil theory. Based on the value of the reduced frequency "k", the flow can be roughly divided into: Steady-state aerodynamics – k = 0 Quasi-steady aerodynamics – 0 ≤ k ≤ 0.05 Unsteady aerodynamics – k > 0.05 [k > 0.2 is considered highly unsteady] References Dimensionless numbers of fluid mechanics Fluid dynamics
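A minimal sketch of the definition and the regime classification above; the example flight condition is arbitrary.

# Compute the reduced frequency k = omega*b/V and label the flow regime with the
# rough thresholds quoted above. The example numbers are arbitrary.
import math

def reduced_frequency(omega, semi_chord, velocity):
    """omega: circular frequency (rad/s); semi_chord: b (m); velocity: V (m/s)."""
    return omega * semi_chord / velocity

def regime(k):
    if k == 0:
        return "steady"
    if k <= 0.05:
        return "quasi-steady"
    if k <= 0.2:
        return "unsteady"
    return "highly unsteady"

# Example: 4 Hz pitching oscillation, 1.5 m chord (so b = 0.75 m), 60 m/s airspeed.
omega = 2 * math.pi * 4.0
k = reduced_frequency(omega, 0.75, 60.0)
print(f"k = {k:.3f} -> {regime(k)}")   # k is about 0.314 -> highly unsteady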
Reduced frequency
[ "Chemistry", "Engineering" ]
274
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
38,489,856
https://en.wikipedia.org/wiki/Ladyzhenskaya%27s%20inequality
In mathematics, Ladyzhenskaya's inequality is any of a number of related functional inequalities named after the Soviet Russian mathematician Olga Aleksandrovna Ladyzhenskaya. The original such inequality, for functions of two real variables, was introduced by Ladyzhenskaya in 1958 to prove the existence and uniqueness of long-time solutions to the Navier–Stokes equations in two spatial dimensions (for smooth enough initial data). There is an analogous inequality for functions of three real variables, but the exponents are slightly different; much of the difficulty in establishing existence and uniqueness of solutions to the three-dimensional Navier–Stokes equations stems from these different exponents. Ladyzhenskaya's inequality is one member of a broad class of inequalities known as interpolation inequalities. Let \Omega be a Lipschitz domain in \mathbb{R}^n for n = 2 or 3, and let u : \Omega \to \mathbb{R} be a weakly differentiable function that vanishes on the boundary of \Omega in the sense of trace (that is, u is a limit in the Sobolev space H^1(\Omega) of a sequence of smooth functions that are compactly supported in \Omega). Then there exists a constant C depending only on \Omega such that, in the case n = 2: \| u \|_{L^4(\Omega)} \le C \| u \|_{L^2(\Omega)}^{1/2} \| \nabla u \|_{L^2(\Omega)}^{1/2}, and in the case n = 3: \| u \|_{L^4(\Omega)} \le C \| u \|_{L^2(\Omega)}^{1/4} \| \nabla u \|_{L^2(\Omega)}^{3/4}. Generalizations Both the two- and three-dimensional versions of Ladyzhenskaya's inequality are special cases of the Gagliardo–Nirenberg interpolation inequality; Ladyzhenskaya's inequalities correspond to particular choices of the exponents in that family. A simple modification of the argument used by Ladyzhenskaya in her 1958 paper (see e.g. Constantin & Seregin 2010) yields the following inequality for n = 2, valid for all 2 \le p < +\infty: \| u \|_{L^p(\Omega)} \le C \, p^{1/2} \| u \|_{L^2(\Omega)}^{2/p} \| \nabla u \|_{L^2(\Omega)}^{1 - 2/p}. The usual Ladyzhenskaya inequality on \mathbb{R}^2 can be generalized (see McCormick et al. 2013) to use the weak L^{2,\infty} norm of u in place of the usual L^2 norm. See also Agmon's inequality References Inequalities Fluid dynamics Sobolev spaces
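A small numerical sanity check of the two-dimensional inequality, written in its squared form \| u \|_{L^4}^2 \le C \| u \|_{L^2} \| \nabla u \|_{L^2}, on a Gaussian test function that decays fast enough to be effectively zero at the boundary of a large square domain. This is an illustration only, not a proof; the grid size and test function are arbitrary choices.

# Numerical check of ||u||_{L^4}^2 <= C ||u||_{L^2} ||grad u||_{L^2} on a Gaussian.
import numpy as np

n, L = 512, 10.0                      # grid points per axis, half-width of the square domain
x = np.linspace(-L, L, n)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.exp(-(X**2 + Y**2) / 2.0)      # smooth, essentially zero at the boundary

ux, uy = np.gradient(u, dx, dx)       # finite-difference gradient
l2 = np.sqrt(np.sum(u**2) * dx * dx)
l4_sq = np.sqrt(np.sum(u**4) * dx * dx)          # this equals ||u||_{L^4}^2
grad_l2 = np.sqrt(np.sum(ux**2 + uy**2) * dx * dx)

ratio = l4_sq / (l2 * grad_l2)
print(f"||u||_L4^2 = {l4_sq:.4f}, ||u||_L2 * ||grad u||_L2 = {l2 * grad_l2:.4f}, ratio = {ratio:.3f}")
# For this u the exact values are sqrt(pi/2) and pi, so the ratio is about 0.40,
# comfortably below 1, consistent with the inequality for a modest constant C.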
Ladyzhenskaya's inequality
[ "Chemistry", "Mathematics", "Engineering" ]
397
[ "Mathematical theorems", "Chemical engineering", "Binary relations", "Mathematical relations", "Inequalities (mathematics)", "Piping", "Mathematical problems", "Fluid dynamics" ]
38,492,648
https://en.wikipedia.org/wiki/Weighted%20median
In statistics, a weighted median of a sample is the 50% weighted percentile. It was first proposed by F. Y. Edgeworth in 1888. Like the median, it is useful as an estimator of central tendency, robust against outliers. It allows for non-uniform statistical weights related to, e.g., varying precision measurements in the sample. Definition General case For n distinct ordered elements x_1 < x_2 < ... < x_n with positive weights w_1, w_2, ..., w_n such that w_1 + w_2 + ... + w_n = 1, the weighted median is the element x_k satisfying w_1 + ... + w_{k-1} ≤ 1/2 and w_{k+1} + ... + w_n ≤ 1/2. Special case Consider a set of elements in which two of the elements satisfy the general case. This occurs when both elements' respective weights border the midpoint of the set of weights without encapsulating it; rather, each element defines a partition equal to 1/2. These elements are referred to as the lower weighted median and upper weighted median. Their conditions are satisfied as follows: Lower weighted median: w_1 + ... + w_{k-1} < 1/2 and w_{k+1} + ... + w_n = 1/2. Upper weighted median: w_1 + ... + w_{k-1} = 1/2 and w_{k+1} + ... + w_n < 1/2. Ideally, a new element would be created using the mean of the upper and lower weighted medians and assigned a weight of zero. This method is similar to finding the median of an even set. The new element would be a true median since the sum of the weights to either side of this partition point would be equal. Depending on the application, it may not be possible or wise to create new data. In this case, the weighted median should be chosen based on which element keeps the partitions most equal. This will always be the weighted median with the lowest weight. In the event that the upper and lower weighted medians are equal, the lower weighted median is generally accepted, as originally proposed by Edgeworth. Properties The sum of weights in each of the two partitions should be as equal as possible. If the weights of all numbers in the set are equal, then the weighted median reduces down to the median. Examples For simplicity, consider a set of five ordered numbers in which the fourth element, the number 4, has weight 0.3, the first three elements have weights summing to 0.45, and the fifth element has weight 0.25. The median is 3 and the weighted median is the element corresponding to the weight 0.3, which is 4. The weights on each side of the pivot add up to 0.45 and 0.25, satisfying the general condition that each side be as even as possible. Choosing any other element as the pivot would result in a greater difference between the two sides. Consider a set of four numbers with each number having the uniform weight 0.25. Equal weights should result in a weighted median equal to the median. This median is 2.5 since it is an even set. The lower weighted median is 2 with partition sums of 0.25 and 0.5, and the upper weighted median is 3 with partition sums of 0.5 and 0.25. These partitions each satisfy their respective special condition and the general condition. It is ideal to introduce a new pivot by taking the mean of the upper and lower weighted medians when they exist. With this, the set gains a fifth element, 2.5, with weight zero, so that the weights become {0.25, 0.25, 0, 0.25, 0.25} in order. This creates partitions that both sum to 0.5. It can easily be seen that the weighted median and median are the same for any size set with equal weights. Similarly, consider a set of four numbers whose two middle elements are 2 and 3 and whose weights are {0.49, 0.01, 0.25, 0.25} respectively. The lower weighted median is 2 with partition sums of 0.49 and 0.5, and the upper weighted median is 3 with partition sums of 0.5 and 0.25. In the case of working with integers or non-interval measures, the lower weighted median would be accepted since it is the lower weight of the pair and therefore keeps the partitions most equal. However, it is more ideal to take the mean of these weighted medians when it makes sense instead.
Coincidentally, both the weighted median and median are equal to 2.5, but this will not always hold true for larger sets, depending on the weight distribution. Algorithm The weighted median can be computed by sorting the numbers and finding the smallest set of lowest numbers whose weights sum to half of the total weight. This algorithm takes O(n log n) time. There is a better approach that finds the weighted median in expected linear time using a modified selection algorithm.

// Main call is WeightedMedian(a, 1, n)
// Returns lower median
WeightedMedian(a[1..n], p, r)
    // Base case for single element
    if r = p then
        return a[p]
    // Base case for two elements
    // Make sure we return the mean in the case that the two candidates have equal weight
    if r - p = 1 then
        if a[p].w == a[r].w
            return (a[p] + a[r]) / 2
        if a[p].w > a[r].w
            return a[p]
        else
            return a[r]
    // Partition around pivot r
    q = partition(a, p, r)
    wl, wg = sum of weights of partitions (p, q-1), (q+1, r)
    // If partitions are balanced then we are done
    if wl and wg both < 1/2 then
        return a[q]
    else
        // Increase pivot weight by the amount of the partition we eliminate
        if wl > wg then
            a[q].w += wg
            // Recurse on pivot inclusively
            return WeightedMedian(a, p, q)
        else
            a[q].w += wl
            return WeightedMedian(a, q, r)

Software/source code A fast weighted median algorithm is implemented in a C extension for Python in the Robustats Python package. R has many implementations, including matrixStats::weightedMedian(), spatstat::weighted.median(), and others. See also Weighted arithmetic mean Least absolute deviations Median filter Quickselect References Means Robust statistics
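A minimal Python sketch of the sorting-based approach (returning the lower weighted median). This is not the modified selection algorithm above, and the example data are only illustrative, chosen to be consistent with the cases discussed in the examples.

# O(n log n) sorting approach: return the first element (in sorted order) whose
# cumulative weight reaches half of the total weight, i.e. the lower weighted median.
def weighted_median(values, weights):
    pairs = sorted(zip(values, weights))          # sort by value
    total = sum(weights)
    cumulative = 0.0
    for value, weight in pairs:
        cumulative += weight
        if cumulative >= total / 2.0:
            return value
    raise ValueError("weights must contain at least one positive entry")

print(weighted_median([1, 2, 3, 4], [0.25, 0.25, 0.25, 0.25]))   # 2 (lower weighted median)
print(weighted_median([1, 2, 3, 4], [0.49, 0.01, 0.25, 0.25]))   # 2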
Weighted median
[ "Physics", "Mathematics" ]
1,165
[ "Means", "Mathematical analysis", "Point (geometry)", "Geometric centers", "Symmetry" ]
38,495,380
https://en.wikipedia.org/wiki/Mechanical%2C%20electrical%2C%20and%20plumbing
Mechanical, Electrical, and Plumbing (MEP) refers to the installation of services which provide a functional and comfortable space for the building occupants. In residential and commercial buildings, these elements are often designed by specialized MEP engineers. MEP's design is important for planning, decision-making, accurate documentation, performance- and cost-estimation, construction, and operating/maintaining the resulting facilities. MEP specifically encompasses the in-depth design and selection of these systems, as opposed to a tradesperson simply installing equipment. For example, a plumber may select and install a commercial hot water system based on common practice and regulatory codes. A team of MEP engineers will research the best design according to the principles of engineering, and supply installers with the specifications they develop. As a result, engineers working in the MEP field must understand a broad range of disciplines, including dynamics, mechanics, fluids, thermodynamics, heat transfer, chemistry, electricity, and computers. Design and documentation As with other aspect of buildings, MEP drafting, design and documentation were traditionally done manually. Computer-aided design has some advantages over this, and often incorporates 3D modeling which is otherwise impractical. Building information modeling provides holistic design and parametric change management of the MEP design. Maintaining documentation of MEP services may also require the use of a geographical information system or asset management system. Components of MEP Mechanical The mechanical component of MEP is an important superset of HVAC services. Thus, it incorporates the control of environmental factors (psychrometrics), either for human comfort or for the operation of machines. Heating, cooling, ventilation and exhaustion are all key areas to consider in the mechanical planning of a building. In special cases, water cooling/heating, humidity control or air filtration may also be incorporated. For example, Google's data centres make extensive use of heat exchangers to cool their servers. This system creates an additional overhead of 12% of initial energy consumption. This is a vast improvement from traditional active cooling units which have an overhead of 30-70%. However, this novel and complicated method requires careful and expensive planning from mechanical engineers, who must work closely with the engineers designing the electrical and plumbing systems for a building. A major concern for people designing HVAC systems is the efficiency, i.e., the consumption of electricity and water. Efficiency is optimised by changing the design of the system on both large and small scales. Heat pumps and evaporative cooling are efficient alternatives to traditional systems, however they may be more expensive or harder to implement. The job of an MEP engineer is to compare these requirements and choose the most suitable design for the task. Electricians and plumbers usually have little to do with each other, other than keeping services out of each other's way. The introduction of mechanical systems requires the integration of the two so that plumbing may be controlled by electrics and electrics may be serviced by plumbing. Thus, the mechanical component of MEP unites the three fields. Electrical Alternating current Virtually all modern buildings integrate some form of AC mains electricity for powering domestic and everyday appliances. 
Such systems typically run between 100 and 500 volts, however their classifications and specifications vary greatly by geographical area (see Mains electricity by country). Mains power is typically distributed through insulated copper wire concealed in the building's subfloor, wall cavities and ceiling cavity. These cables are terminated into sockets mounted to walls, floors or ceilings. Similar techniques are used for lights ("luminaires"), however the two services are usually separated into different circuits with different protection devices at the distribution board. Whilst the wiring for lighting is exclusively managed by electricians, the selection of luminaires or light fittings may be left to building owners or interior designers in some cases. Three-phase power is commonly used for industrial machines, particularly motors and high-load devices. Provision for three-phase power must be considered early in the design stage of a building because it has different regulations to domestic power supplies, and may affect aspects such as cable routes, switchboard location, large external transformers and connection from the street. Information technology Advances in technology and the advent of computer networking have led to the emergence of a new facet of electrical systems incorporating data and telecommunications wiring. As of 2019, several derivative acronyms have been suggested for this area, including MEPIT (mechanical, electrical, plumbing and information technology) and MEPI (an abbreviation of MEPIT). Equivalent names are "low voltage", "data", and "telecommunications" or "comms". A low voltage system used for telecommunications networking is not the same as a low voltage network. The information technology sector of electrical installations is used for computer networking, telephones, television, security systems, audio distribution, healthcare systems, robotics, and more. These services are typically installed by different tradespeople to the higher-voltage mains wiring and are often contracted out to very specific trades, e.g. security installers or audio integrators. Regulations on low voltage wiring are often less strict or less important to human safety. As a result, it is more common for this wiring to be installed or serviced by competent amateurs, despite constant attempts from the electrical industry to discourage this. Plumbing Competent design of plumbing systems is necessary to prevent conflicts with other trades, and to avoid expensive rework or surplus supplies. The scope of standard residential plumbing usually covers mains pressure potable water, heated water (in conjunction with mechanical and/or electrical engineers), sewerage, stormwater, natural gas, and sometimes rainwater collection and storage. In commercial environments, these distribution systems expand to accommodate many more users, as well as the addition of other plumbing services such as hydroponics, irrigation, fuels, oxygen, vacuum/compressed air, solids transfer, and more. Plumbing systems also service air distribution/control, and therefore contribute to the mechanical part of MEP. Plumbing for HVAC systems involves the transfer of coolant, pressurized air, water, and occasionally other substances. Ducting for air transfer may also be consider plumbing, but is generally installed by different tradespeople. 
See also Architectural engineering Drainage Electrical wiring Heating, ventilation, and air conditioning Plumbing Telecommunication Fire protection engineering References Building engineering Electrical engineering Mechanical engineering
Mechanical, electrical, and plumbing
[ "Physics", "Engineering" ]
1,292
[ "Applied and interdisciplinary physics", "Building engineering", "Civil engineering", "Mechanical engineering", "Electrical engineering", "Architecture" ]
48,986,530
https://en.wikipedia.org/wiki/Methylestradiol
Methylestradiol, sold under the brand names Ginecosid, Ginecoside, Mediol, and Renodiol, is an estrogen medication which is used in the treatment of menopausal symptoms. It is formulated in combination with normethandrone, a progestin and androgen/anabolic steroid medication. Methylestradiol is taken by mouth. Side effects of methylestradiol include nausea, breast tension, edema, and breakthrough bleeding among others. It is an estrogen, or an agonist of the estrogen receptors, the biological target of estrogens like estradiol. Methylestradiol is or has been marketed in Brazil, Venezuela, and Indonesia. In addition to its use as a medication, methylestradiol has been studied for use as a radiopharmaceutical for the estrogen receptor. Medical uses Methylestradiol is used in combination with the progestin and androgen/anabolic steroid normethandrone (methylestrenolone) in the treatment of menopausal symptoms. Available forms Methylestradiol is marketed in combination with normethandrone in the form of oral tablets containing 0.3 mg methylestradiol and 5 mg normethandrone. Side effects Side effects of methylestradiol include nausea, breast tension, edema, and breakthrough bleeding. Pharmacology Pharmacodynamics Methylestradiol is an estrogen, or an agonist of the estrogen receptor. It shows somewhat lower affinity for the estrogen receptor than estradiol or ethinylestradiol. Methylestradiol is an active metabolite of the androgens/anabolic steroids methyltestosterone (17α-methyltestosterone), metandienone (17α-methyl-δ1-testosterone), and normethandrone (17α-methyl-19-nortestosterone), and is responsible for their estrogenic side effects, such as gynecomastia and fluid retention. Pharmacokinetics Due to the presence of its C17α methyl group, methylestradiol cannot be deactivated by oxidation of the C17β hydroxyl group, resulting in improved metabolic stability and potency relative to estradiol. This is analogous to the case of ethinylestradiol and its C17α ethynyl group. Chemistry Methylestradiol, or 17α-methylestradiol (17α-ME), also known as 17α-methylestra-1,3,5(10)-triene-3,17β-diol, is a synthetic estrane steroid and a derivative of estradiol. It is specifically the derivative of estradiol with a methyl group at the C17α positions. Closely related steroids include ethinylestradiol (17α-ethynylestradiol) and ethylestradiol (17α-ethylestradiol). The C3 cyclopentyl ether of methylestradiol has been studied and shows greater oral potency than methylestradiol in animals, similarly to quinestrol (ethinylestradiol 3-cyclopentyl ether) and quinestradol (estriol 3-cyclopentyl ether). History Methylestradiol was first marketed, alone as Follikosid and in combination with methyltestosterone as Klimanosid, in 1955. Society and culture Generic names Methylestradiol has not been assigned an or other formal name designations. Its generic name in English and German is methylestradiol, in French is méthylestradiol, and in Spanish is metilestadiol. It is also known as 17α-methylestradiol. Brand names Methylestradiol is or has been marketed under the brand names Ginecosid, Ginecoside, Mediol, and Renodiol, all in combination with normethandrone. Availability Methylestradiol is or has been marketed in Brazil, Venezuela, and Indonesia. References Tertiary alcohols Estranes Human drug metabolites Synthetic estrogens
Methylestradiol
[ "Chemistry" ]
878
[ "Chemicals in medicine", "Human drug metabolites" ]
48,987,289
https://en.wikipedia.org/wiki/Free%20stationing
In surveying, free stationing (also known as resection) is a method of determining a location of one unknown point in relation to known points. There is a zero point of reference called a total station. The instrument can be freely positioned so that all survey points are at a suitable sight from the instrument. When setting up the total station on a known point, it is often not possible to see all survey points of interest. When performing a resection (free stationing) with the total station, bearings and distances are measured to at least two known points of a control network. With use of a handheld computer, recorded data can be related to local polar coordinates, defined by the horizontal circle of the total station. By a geometric transformation, these polar coordinates are transformed to the coordinate system of the control network. Error can be distributed by least squares adjustment. Upon completion of observations and calculations, a coordinate is produced, and the position and orientation of the total station in relation to where the control network is established. Comparison of methods Angular resection and triangulation: only bearings are measured to the known points. Trilateration: only distances are measured to the known points. Free stationing and triangulateration: both bearings and distances are measured to the known points. Naming Because bearings and distances are measured in a full resection (free stationing), the result may have a different mathematical solution. This method has different names in other languages, e.g. in German: Freie Standpunktwahl (free stationing). Naming is also regulated by the German Institute for Standardization DIN 18 709. Different mathematical solution By measuring bearings and distances, local polar coordinates are recorded. The orientation of this local polar coordinate system is defined by the 0° horizontal circle of the total station (polar axis L). The pole of this local polar coordinate system is the vertical axis (pole O) of the total stations. The polar coordinates (r,f) with the pole are transformed using surveying software on a data collector to the Cartesian coordinates (x,y) of the known points. The coordinates for the position of the total station are then calculated. In a resection (triangulation) measuring bearings only, there can be a problem with an infinite number of solutions known as a "danger circle", or "inscribed angle theorem". Back-sight points The back-sight points of the control network should cover and surround the stationing site. The position of the total station is not part of the area. This is the area where you want to measure with this station setup. Topographic points or stakeout points should not be measured outside this area. If measured outside this area, the errors in orientation will be extrapolated instead of being interpolated. While it is possible to use only two known control points in a resection (free stationing), it is recommended to use three control points. There is no redundancy for orientation, using two points only. When performing a resection (free-stationing) on more than 4 points, diminishing returns are achieved in the returned results. 
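A minimal sketch of the computation described above, with invented observations: bearings and distances to three control points are converted to local coordinates, and a rotation plus translation mapping them onto the network coordinates is found by least squares (a 2D Procrustes/Kabsch fit). For simplicity the bearings are treated as mathematical angles measured counter-clockwise from the instrument's local x-axis rather than as surveying azimuths, and no observation weighting or full least-squares adjustment is performed.

# Illustrative free-station computation with made-up observations.
import numpy as np

# Known control-network coordinates (E, N) of three back-sight points (assumed values).
control = np.array([[1000.0, 2000.0],
                    [1100.0, 2050.0],
                    [1040.0, 1930.0]])

# Observed horizontal-circle directions (degrees) and distances (m) from the total station.
bearings_deg = np.array([45.000, 32.735, -20.556])
distances = np.array([56.569, 166.433, 85.440])

# Local coordinates relative to the instrument.
theta = np.radians(bearings_deg)
local = np.column_stack((distances * np.cos(theta), distances * np.sin(theta)))

# Least-squares rigid transformation (2D Procrustes / Kabsch without scale).
lc, cc = local.mean(axis=0), control.mean(axis=0)
H = (local - lc).T @ (control - cc)
U, _, Vt = np.linalg.svd(H)
R = Vt.T @ U.T
if np.linalg.det(R) < 0:                 # guard against a reflection
    Vt[-1, :] *= -1
    R = Vt.T @ U.T
t = cc - R @ lc                          # station coordinates in the network system

residuals = control - (local @ R.T + t)
print("station position (E, N):", np.round(t, 3))       # roughly (960, 1960) for this data
print("orientation (deg):", round(float(np.degrees(np.arctan2(R[1, 0], R[0, 0]))), 3))
print("residuals (m):", np.round(residuals, 3))          # near zero for this made-up data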
Advantages The surveyor may freely set a station point: Where there is best visibility to all points that must be staked out or recorded Where there are no obstructions or traffic Where there is the highest safety for the operator and the instrument Because of the range and accuracy of total stations, the method of a resection (free stationing) permits a great freedom of positioning the total station. For this reason, this method is one of the most used station set ups. Application With the calculated coordinates and orientation of the total station, it can be used to set out points in construction surveying, machine guidance, site plan or other types of surveys. References External links Topcon Magnet Field 1.0 Help Leica SmartWorx Viva Field Software Datasheet CarlsonSurvCE Reference Manual 12d Field – Helmert Resection Trimble: Advantages and Disadvantages of the Stationing Programs Trimble: Design of the Backsight Point Configuration Trimble: Problems in Resection Without Redundancy Trimble: The Influence of Weights in Resection Trimble: Neighborhood Adjustment Surveying Surveying instruments Geodesy Civil engineering
Free stationing
[ "Mathematics", "Engineering" ]
858
[ "Applied mathematics", "Construction", "Surveying", "Civil engineering", "Geodesy" ]
48,987,892
https://en.wikipedia.org/wiki/Isotropic%20position
In the fields of machine learning, the theory of computation, and random matrix theory, a probability distribution over vectors is said to be in isotropic position if its covariance matrix is equal to the identity matrix. Formal definitions Let be a distribution over vectors in the vector space . Then is in isotropic position if, for vector sampled from the distribution, A set of vectors is said to be in isotropic position if the uniform distribution over that set is in isotropic position. In particular, every orthonormal set of vectors is isotropic. As a related definition, a convex body in is called isotropic if it has volume , center of mass at the origin, and there is a constant such that for all vectors in ; here stands for the standard Euclidean norm. See also Whitening transformation References Machine learning Random matrices
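A minimal sketch of putting a finite set of vectors into isotropic position by a whitening transformation, so that the (uncentred) second-moment matrix becomes the identity; the random data used here are only an example.

# Whitening a point set into isotropic position.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3)) @ np.array([[2.0, 0.0, 0.0],
                                           [0.5, 1.0, 0.0],
                                           [0.0, 0.3, 0.5]])   # an anisotropic sample

# Second-moment matrix of the uniform distribution over the rows of X.
M = X.T @ X / X.shape[0]

# Whitening map T = M^{-1/2}; the transformed vectors y = T x have second moment I.
eigvals, eigvecs = np.linalg.eigh(M)
T = eigvecs @ np.diag(eigvals ** -0.5) @ eigvecs.T
Y = X @ T.T

print(np.round(Y.T @ Y / Y.shape[0], 6))   # approximately the identity matrix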
Isotropic position
[ "Physics", "Mathematics", "Engineering" ]
175
[ "Random matrices", "Machine learning", "Mathematical objects", "Matrices (mathematics)", "Matrix stubs", "Artificial intelligence engineering", "Statistical mechanics" ]
59,095,024
https://en.wikipedia.org/wiki/List%20of%20viscosities
Dynamic viscosity is a material property which describes the resistance of a fluid to shearing flows. It corresponds roughly to the intuitive notion of a fluid's 'thickness'. For instance, honey has a much higher viscosity than water. Viscosity is measured using a viscometer. Measured values span several orders of magnitude. Of all fluids, gases have the lowest viscosities, and thick liquids have the highest. The values listed in this article are representative estimates only, as they do not account for measurement uncertainties, variability in material definitions, or non-Newtonian behavior. Kinematic viscosity is dynamic viscosity divided by fluid density. This page lists only dynamic viscosity. Units and conversion factors For dynamic viscosity, the SI unit is Pascal-second. In engineering, the unit is usually Poise or centiPoise, with 1 Poise = 0.1 Pascal-second, and 1 centiPoise = 0.01 Poise. For kinematic viscosity, the SI unit is m^2/s. In engineering, the unit is usually Stoke or centiStoke, with 1 Stoke = 0.0001 m^2/s, and 1 centiStoke = 0.01 Stoke. For liquid, the dynamic viscosity is usually in the range of 0.001 to 1 Pascal-second, or 1 to 1000 centiPoise. The density is usually on the order of 1000 kg/m^3, i.e. that of water. Consequently, if a liquid has dynamic viscosity of n centiPoise, and its density is not too different from that of water, then its kinematic viscosity is around n centiStokes. For gas, the dynamic viscosity is usually in the range of 10 to 20 microPascal-seconds, or 0.01 to 0.02 centiPoise. The density is usually on the order of 0.5 to 5 kg/m^3. Consequently, its kinematic viscosity is around 2 to 40 centiStokes. Viscosities at or near standard conditions Here "standard conditions" refers to temperatures of 25 °C and pressures of 1 atmosphere. Where data points are unavailable for 25 °C or 1 atmosphere, values are given at a nearby temperature/pressure. The temperatures corresponding to each data point are stated explicitly. By contrast, pressure is omitted since gaseous viscosity depends only weakly on it. Gases Noble gases The simple structure of noble gas molecules makes them amenable to accurate theoretical treatment. For this reason, measured viscosities of the noble gases serve as important tests of the kinetic-molecular theory of transport processes in gases (see Chapman–Enskog theory). One of the key predictions of the theory is the following relationship between viscosity , thermal conductivity , and specific heat : where is a constant which in general depends on the details of intermolecular interactions, but for spherically symmetric molecules is very close to . This prediction is reasonably well-verified by experiments, as the following table shows. Indeed, the relation provides a viable means for obtaining thermal conductivities of gases since these are more difficult to measure directly than viscosity. Diatomic elements Hydrocarbons Organohalides Other gases Liquids n-Alkanes Substances composed of longer molecules tend to have larger viscosities due to the increased contact of molecules across layers of flow. This effect can be observed for the n-alkanes and 1-chloroalkanes tabulated below. More dramatically, a long-chain hydrocarbon like squalene (C30H62) has a viscosity an order of magnitude larger than the shorter n-alkanes (roughly 31 mPa·s at 25 °C). This is also the reason oils tend to be highly viscous, since they are usually composed of long-chain hydrocarbons. 
1-Chloroalkanes Other halocarbons Alkenes Other liquids Aqueous solutions The viscosity of an aqueous solution can either increase or decrease with concentration depending on the solute and the range of concentration. For instance, the table below shows that viscosity increases monotonically with concentration for sodium chloride and calcium chloride, but decreases for potassium iodide and cesium chloride (the latter up to 30% mass percentage, after which viscosity increases). The increase in viscosity for sucrose solutions is particularly dramatic, and explains in part the common experience of sugar water being "sticky". Substances of variable composition Viscosities under nonstandard conditions Gases All values are given at 1 bar (approximately equal to atmospheric pressure). Liquids (including liquid metals) In the following table, the temperature is given in kelvins. Solids References Viscosity
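The viscosity–conductivity relation referred to in the noble-gas section above can be written out explicitly; in its standard Chapman–Enskog form it reads

\kappa = f \, \mu \, c_v ,

where \kappa is the thermal conductivity, \mu the dynamic viscosity and c_v the specific heat capacity at constant volume, and the dimensionless factor f is very close to 5/2 for spherically symmetric (monatomic) molecules.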
List of viscosities
[ "Physics", "Chemistry" ]
988
[ "Physical phenomena", "Physical quantities", "nan", "Wikipedia categories named after physical quantities", "Viscosity", "Physical properties" ]
59,096,122
https://en.wikipedia.org/wiki/Ocean%20acidification%20in%20the%20Arctic%20Ocean
The Arctic Ocean covers an area of 14,056,000 square kilometers, and supports a diverse and important socioeconomic food web of organisms, despite its average water temperature being 32 degrees Fahrenheit. Over the last three decades, the Arctic Ocean has experienced drastic changes due to climate change. One of the changes is in the acidity levels of the ocean, which have been consistently increasing at twice the rate of the Pacific and Atlantic oceans. Arctic Ocean acidification is a result of feedback from climate system mechanisms, and is having negative impacts on Arctic Ocean ecosystems and the organisms that live within them. Process Ocean acidification is caused by the equilibration of the atmosphere with the ocean, a process that occurs worldwide. Carbon dioxide in the atmosphere equilibrates and dissolves into the ocean. During this reaction, carbon dioxide reacts with water to form carbonic acid. The carbonic acid then dissociates into bicarbonate ions and hydrogen ions. This reaction causes the pH of the water to lower, effectively acidifying it. Ocean acidification is occurring in every ocean across the world. Since the beginning of the Industrial Revolution, the World's oceans have absorbed approximately 525 billion tons of carbon dioxide. During this time, world ocean pH has collectively decreased from 8.2 to 8.1, with climatic modeling predicting a further decrease of pH by 0.3 units by 2100. However, the Arctic Ocean has been affected more due to the cold water temperatures and increased solubility of gases as water temperature decreases. The cold Arctic water is able to absorb higher amounts of carbon dioxide compared to the warmer Pacific and Atlantic Oceans. The chemical changes caused by the acidification of the Arctic Ocean are having negative ecological and socioeconomic repercussions. With the changes in the chemistry of their environment, arctic organisms are challenged with new stressors. These stressors can have damaging effects on these organisms, with some being affected more than others. Calcifying organisms specifically appear to be the most impacted by this changing water composition, as they rely on carbonate availability to survive. Dissolved carbonate concentrations decrease with increasing carbon dioxide and lowered pH in the water. Ecological food webs are also altered by the acidification. Acidification lowers the ability of many fish to grow, which not only impacts food webs but humans that rely on these fisheries as well. Economic effects are resulting from shifting food webs that decrease popular fish populations. These fish populations provide jobs to people who work in the fisheries industry. As is apparent, ocean acidification lacks any positive benefits, and as a result has been placed high on a priority list within the United States and other organizations such as the Scientific Committee on Oceanic Research, UNESCO's Intergovernmental Oceanographic Commission, the Ocean Carbon and Biogeochemistry Program, the Integrated Marine Biogeochemistry and Ecosystem Research Project, and the Consortium for Ocean Leadership. Causes Decreased sea ice Arctic sea ice has experienced an extreme reduction over the past few decades, with the minimum area of sea ice being 4.32 million km2 in 2019, a sharp 38% decrease from 1980, when the minimum area was 7.01 million km2. Sea ice plays an important role in the health of the Arctic Ocean, and its decline has had detrimental effects on Arctic Ocean chemistry. 
All oceans equilibrate with the atmosphere by pulling carbon dioxide out of the atmosphere and into the ocean, which lowers the pH of the water. Sea ice limits the air-sea gas exchange with carbon dioxide by protecting the water from being completely exposed to the atmosphere. Low carbon dioxide levels are important to the Arctic Ocean due to intense cooling, fresh water runoff, and photosynthesis from marine organisms. Reductions in sea ice have allowed more carbon dioxide to equilibrate with the arctic water, resulting in increased acidification. The decrease in sea ice has also allowed more Pacific Ocean water to flow into in the Arctic Ocean during the winter, called Pacific winter water. Pacific Ocean water is high in carbon dioxide, and with decreased amounts of sea ice, more Pacific Ocean water has been able to enter the Arctic Ocean, carrying carbon dioxide with it. This Pacific winter water has further acidified the Arctic Ocean, as well as increased the depth of acidified water. Melting methane hydrates Climate change is causing destabilization of multiple climate systems within the Arctic Ocean. One system that climate change is impacting is methane hydrates. Methane hydrates are located along the continental margins, and are stabilized by high pressure, as well as uniformly low temperatures. Climate change has begun to destabilize these methane hydrates within the Arctic Ocean by decreasing pressure and increasing temperatures, allowing methane hydrates to melt and release methane into the arctic waters. When methane is released into the water, it can either be used via anaerobic metabolism or aerobic metabolism by microorganisms in the ocean sediment, or be released from sea into the atmosphere. Most impactful to ocean acidification is aerobic oxidation by microorganisms in the water column. Carbon dioxide is produced by the reaction of methane and oxygen in water. Carbon dioxide then equilibrates with water, producing carbonic acid, which then equilibrates to release hydrogen ions and bicarbonate and further contributes to ocean acidification. Effects on Arctic organisms Organisms in Arctic waters are under high environmental stress such as extremely cold water. It is believed that this high stress environment will cause ocean acidification factors to have a stronger effect on these organisms. It could also cause these effects to appear in the Arctic before it appears in other parts of the ocean. There is a significant variation in the sensitivity of marine organisms to increased ocean acidification. Calcifying organisms generally exhibit larger negative responses from ocean acidification than non-calcifying organisms across numerous response variables, with the exception of crustaceans, which calcify but don't seem to be negatively affected. This is due, mainly, to the process of marine biogenic calcification, that calcifying organisms utilize. Calcifying organisms Carbonate ions (CO32-) are essential in marine calcifying organisms, like plankton and shellfish, as they are required to produce their calcium carbonate () shells and skeletons. As the ocean acidifies, the increased uptake of CO2 by seawater increases the concentration of hydrogen ions, which lowers the pH of the water. This change in the chemical equilibrium of the inorganic carbon system reduces the concentration of these carbonate ions. This reduces the ability of these organisms to create their shells and skeletons. 
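For reference, the reactions described above can be written out; these are the standard textbook forms of the carbonate system, aerobic methane oxidation, and the calcium carbonate saturation state (the article gives them only in words):

CO2(atm) ⇌ CO2(aq)
CO2(aq) + H2O ⇌ H2CO3 ⇌ H+ + HCO3−
HCO3− ⇌ H+ + CO32−
CH4 + 2 O2 → CO2 + 2 H2O        (aerobic microbial oxidation of methane)
Ω = [Ca2+][CO32−] / Ksp          (saturation state; Ω > 1 favours CaCO3 precipitation, Ω < 1 is corrosive to shells)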
The two polymorphs of calcium carbonate that are produced by marine organisms are aragonite and calcite. These are the materials that makes up most of the shells and skeletons of these calcifying organisms. Aragonite, for example, makes up nearly all mollusc shells, as well as the exoskeleton of corals. The formation of these materials is dependent on the saturation state of CaCO3 in ocean water. Waters which are saturated in are favorable to precipitation and formation of shells and skeletons, but waters which are undersaturated are corrosive to shells. In the absence of protective mechanisms, dissolution of calcium carbonate will occur. As colder arctic water absorbs more , the concentration of CO32- is reduced, therefore the saturation of calcium carbonate is lower in high-latitude oceans than it is in tropical or temperate oceans. The undersaturation of CaCO3 causes the shells of calcifying organisms to dissolve, which can have devastating consequences to the ecosystem. As the shells dissolve, the organisms struggle to maintain proper health, which can lead to mass mortality. The loss of many of these species can lead to intense consequences on the marine food web in the Arctic Ocean, as many of these marine calcifying organisms are keystone species. Laboratory experiments on various marine biota in an elevated environment show that changes in aragonite saturation cause substantial changes in overall calcification rates for many species of marine organisms, including coccolithophore, foraminifera, pteropods, mussels, and clams. Although the undersaturation of arctic water has been proven to have an effect on the ability of organisms to precipitate their shells, recent studies have shown that the calcification rate of calcifiers, such as corals, coccolithophores, foraminiferans and bivalves, decrease with increasing p, even in seawater supersaturated with respect to . Additionally, increased p has been found to have complex effects on the physiology, growth and reproductive success of various marine calcifiers. Life cycle tolerance seems to differ between various marine organisms, as well as tolerance at different life cycle stages (e.g. larva and adult). The first stage in the life cycle of marine calcifiers at serious risk from high content is the planktonic larval stage. The larval development of several marine species, primarily sea urchins and bivalves, are highly affected by elevations of seawater p. In laboratory tests, numerous sea urchin embryos were reared under different concentrations until they developed to the larval stage. It was found that once they reached this stage, larval and arm sizes were significantly smaller, as well as abnormal skeleton morphology was noted with increasing p. Similar findings have been found in treated-mussel larvae, which showed a larval size decrease of about 20% and showed morphological abnormalities such as convex hinges, weaker and thinner shells and protrusion of mantle. The larval body size also impacts the encounter and clearance rates of food particles, and if larval shells are smaller or deformed, these larvae are more prone to starvation. structures also serve vital functions for calcified larvae, such as defense against predation, as well as roles in feeding, buoyancy control and pH regulation. Another example of a species which may be seriously impacted by ocean acidification is Pteropods, which are shelled pelagic molluscs which play an important role in the food-web of various ecosystems. 
Since they harbour an aragonitic shell, they could be very sensitive to ocean acidification driven by the increase of anthropogenic emissions. Laboratory tests showed that calcification exhibits a 28% decrease of the pH value of the Arctic ocean expected for the year 2100, compared to the present pH value. This 28% decline of calcification in the lower pH condition is within the range reported for other calcifying organisms such as corals. In contrast with sea urchin and bivalve larvae, corals and marine shrimps are more severely impacted by ocean acidification after settlement, while they developed into the polyp stage. From laboratory tests, the morphology of the -treated polyp endoskeleton of corals was disturbed and malformed compared to the radial pattern of control polyps. This variability in the impact of ocean acidification on different life cycle stages of different organisms can be partially explained by the fact that most echinoderms and mollusks start shell and skeleton synthesis at their larval stage, while corals start at the settlement stage. Hence, these stages are highly susceptible to the potential effects of ocean acidification. Most calcifiers, such as corals, echinoderms, bivalves and crustaceans, play important roles in coastal ecosystems as keystone species, bioturbators and ecosystem engineers. The food web in the arctic ocean is somewhat truncated, meaning it is short and simple. Any impacts to key species in the food web can cause exponentially devastating effects on the rest of the food chain as a whole, as they will no longer have a reliable food source. If these larger organisms no longer have any source of nutrients, they too will eventually die off, and the entire Arctic ocean ecosystem will be affected. This would have a huge impact on the arctic people who catch arctic fish for a living, as well as the economic repercussions which would follow such a major shortage of food and living income for these families. Effects on Local Communities Ocean acidification not only has impacts on aquatic life, but also on human communities and the overall livelihood of people living near these waters. For example, as a result of crustaceans being unable to produce their shells and skeletons due to reduced amounts of carbonate ions, populations such as crabs have significantly decreased in some areas in the Northern hemisphere. This has resulted in numerous fisheries in these areas to close down as a result of multi-million dollar losses. In addition, increased temperatures have caused a swift increase in toxic algal blooms, which are known to produce a neurotoxin called domoic acid that can accumulate inside the bodies of certain shellfish. If ingested by humans this toxin can cause severe health issues, which has forced many additional fisheries to close down. Methods to Reduce Acidification Since the carbon cycle is tightly connected to the issue of ocean acidification, the most effective method for minimizing the effects of ocean acidification is to slow climate change. Anthropogenic inputs of CO2 can be reduced through methods such as limiting the use of fossil fuels and employing renewable energies. This will ultimately lower the amount of CO2 in the atmosphere and reduce the amount dissolved into the oceans. More intrusive methods to mitigate acidification involve a technique called enhanced weathering where powdered minerals like silicate are applied to the land or ocean surface. 
The powdered minerals enable accelerated dissolution, releasing cations, converting CO2 to bicarbonate and increasing the pH of the oceans. Other mitigation methods, like ocean iron fertilization, still need more experimentation and evaluation in order to be deemed effective. Ocean iron fertilization in particular has been shown to increase acidification in the deep ocean while only slightly reducing acidification at the surface. References Arctic Ocean Effects of climate change Biological oceanography Chemical oceanography Geochemistry Aquatic ecology
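As an illustration of the enhanced-weathering chemistry described above, the dissolution of a silicate mineral consumes CO2 and converts it into dissolved bicarbonate; olivine is a commonly cited example mineral, although the article does not name a specific one:

Mg2SiO4 + 4 CO2 + 4 H2O → 2 Mg2+ + 4 HCO3− + H4SiO4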
Ocean acidification in the Arctic Ocean
[ "Chemistry", "Biology" ]
2,832
[ "Chemical oceanography", "Aquatic ecology", "Ecosystems", "nan" ]
59,097,586
https://en.wikipedia.org/wiki/Waffle%20slab
A waffle slab or two-way joist slab is a concrete slab made of reinforced concrete with concrete ribs running in two directions on its underside. The name waffle comes from the grid pattern created by the reinforcing ribs. Waffle slabs are preferred for spans greater than , because, for a given mass of concrete, they are much stronger than flat slabs, flat slabs with drop panels, two-way slabs, one-way slabs, and one-way joist slabs. Description A waffle slab is flat on top, while joists create a grid-like surface on the bottom. The grid is formed by the removal of molds after the concrete sets. This structure was designed to be more solid when used on longer spans and with heavier loads. This type of structure, because of its rigidity, is recommended for buildings that require minimal vibration, like laboratories and manufacturing facilities. It is also used in buildings that require big open spaces, like theatres or train stations. Waffle slabs require intricate formwork and may be more expensive than other types of slabs, but depending on the project and the quantity of concrete needed they may be cheaper to build. There are two types of waffle slab system: One way waffle slab system Two way waffle slab system Construction process A waffle slab can be made in different ways, but generic forms are needed to give the waffle shape to the slab. The formwork is made up of many elements: waffle pods, horizontal supports, vertical supports, cube junctions, hole plates, cleats and steel bars. First the supports are built, then the pods are arranged in place, and finally the concrete is poured. This process may follow three different approaches; however, the basic method is the same in each: In situ: Formwork construction and pouring of concrete occur on site, then the slab is assembled (if required). Precast: The slabs are made somewhere else and then brought to the site and assembled. Pre-fabricated: The reinforcements are integrated into the slab while being manufactured, without needing to reinforce the assembly on site. This is the most expensive option. Waffle slab design Different guides have been made for architects and engineers to determine various parameters of waffle slabs, primarily the overall thickness and rib dimensions. The following are rules of thumb (a parametric sketch of how they can be applied follows the Advantages list below): Slab depth is typically to thick. As a rule of thumb, the depth should be of the span. The width of the ribs is typically to , and ribs usually have steel rod reinforcements. The distance between ribs is typically . The height of the ribs and beams should be of the span between columns. The width of the solid area around the column should be of the span between columns. Its height should be the same as the ribs. Advantages The waffle slab floor system has several advantages: Better for buildings that require less vibration; this is managed by the two-way joist reinforcements that form the grid. Bigger spans can be achieved with less material, making the system more economical and environmentally friendly. Some people find the waffle pattern aesthetically pleasing. Greater load capacity than traditional one-way slabs. Forms can be implemented with wood, concrete or steel. If holes are provided between the ribs, building services can be run through them. One proprietary implementation of this system is called Holedeck. 
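The design rules above are given as ratios of the span between columns, but the specific values did not survive the conversion of this article to plain text. The following is a minimal Python sketch of how such rules of thumb could be applied parametrically; the function name is hypothetical and the ratio values in the example are placeholders chosen only for illustration, not figures from the original design guides.

```python
# Hypothetical helper for the "Waffle slab design" rules of thumb above.
# The real ratios were lost in text conversion, so they are explicit arguments
# rather than hard-coded constants.

def waffle_slab_estimates(span_m,
                          depth_ratio,        # slab depth as a fraction of the span (assumed value)
                          rib_height_ratio,   # rib/beam height as a fraction of the span (assumed value)
                          solid_head_ratio):  # solid-area width around a column as a fraction of the span (assumed value)
    """Return rough waffle-slab dimensions, in metres, for a given column span."""
    return {
        "slab_depth": span_m * depth_ratio,
        "rib_and_beam_height": span_m * rib_height_ratio,
        "solid_width_at_column": span_m * solid_head_ratio,
    }

# Placeholder ratios only -- not design guidance.
print(waffle_slab_estimates(9.0, depth_ratio=1 / 20, rib_height_ratio=1 / 25, solid_head_ratio=1 / 8))
```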
Disadvantages Greater quantities of formwork materials are needed, which can be very costly. Waffle slabs are thicker than flat slabs, so the height between each floor must be greater to have enough space for the slab system and other building services. Waffle slabs are preferred for flat topographical areas, not sloped sites. Examples Royal National Theatre, London, United Kingdom Washington Metro Building Logistic and Telecommunication SL, Madrid, Spain Barangaroo House, Sydney, Australia GS1 Portugal, Lisboa, Portugal Galbraith Hall, UC San Diego, California odD House, Quito, Ecuador Centro de Bellas Artes de Caguas Parking Garage, Caguas, Puerto Rico See also Waffle slab foundation Reinforced concrete Concrete slab Formwork References Floors Concrete buildings and structures Concrete Reinforced concrete Structural system
Waffle slab
[ "Technology", "Engineering" ]
828
[ "Structural engineering", "Building engineering", "Floors", "Structural system", "Concrete" ]
41,286,954
https://en.wikipedia.org/wiki/Scaling%20dimension
In theoretical physics, the scaling dimension, or simply dimension, of a local operator in a quantum field theory characterizes the rescaling properties of the operator under spacetime dilations x → λx. If the quantum field theory is scale invariant, scaling dimensions of operators are fixed numbers; otherwise they are functions of the distance scale. Scale-invariant quantum field theory In a scale invariant quantum field theory, by definition each operator O acquires under a dilation x → λx a factor λ^(-Δ), where the number Δ is called the scaling dimension of O. This implies in particular that the two-point correlation function ⟨O(x)O(0)⟩ depends on the distance as |x|^(-2Δ). More generally, correlation functions of several local operators must depend on the distances in such a way that ⟨O1(λx1)O2(λx2)…⟩ = λ^(-Δ1-Δ2-…)⟨O1(x1)O2(x2)…⟩. Most scale invariant theories are also conformally invariant, which imposes further constraints on correlation functions of local operators. Free field theories Free theories are the simplest scale-invariant quantum field theories. In free theories, one makes a distinction between the elementary operators, which are the fields appearing in the Lagrangian, and the composite operators which are products of the elementary ones. The scaling dimension of an elementary operator is determined by dimensional analysis from the Lagrangian (in four spacetime dimensions, it is 1 for elementary bosonic fields including the vector potentials, 3/2 for elementary fermionic fields etc.). This scaling dimension is called the classical dimension (the terms canonical dimension and engineering dimension are also used). A composite operator obtained by taking a product of two operators of dimensions Δ1 and Δ2 is a new operator whose dimension is the sum Δ1 + Δ2. When interactions are turned on, the scaling dimension receives a correction called the anomalous dimension (see below). Interacting field theories There are many scale invariant quantum field theories which are not free theories; these are called interacting. Scaling dimensions of operators in such theories may not be read off from a Lagrangian; they are also not necessarily (half)integer. For example, in the scale (and conformally) invariant theory describing the critical points of the two-dimensional Ising model there is an operator σ whose dimension is 1/8. Operator multiplication is subtle in interacting theories compared to free theories. The operator product expansion of two operators with dimensions Δ1 and Δ2 will generally give not a unique operator but infinitely many operators, and their dimension will not generally be equal to Δ1 + Δ2. In the above two-dimensional Ising model example, the operator product σ × σ gives an operator whose dimension is 1 and not twice the dimension of σ. Non scale-invariant quantum field theory There are many quantum field theories which, while not being exactly scale invariant, remain approximately scale invariant over a long range of distances. Such quantum field theories can be obtained by adding to free field theories interaction terms with small dimensionless couplings. For example, in four spacetime dimensions one can add quartic scalar couplings, Yukawa couplings, or gauge couplings. Scaling dimensions of operators in such theories can be expressed schematically as Δ = Δ0 + γ(g), where Δ0 is the dimension when all couplings are set to zero (i.e. the classical dimension), while γ(g) is called the anomalous dimension and is expressed as a power series in the couplings, collectively denoted as g. 
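As a small illustration of the definitions above, the sketch below numerically checks the dilation covariance of a power-law two-point function and evaluates the schematic weak-coupling form Δ(g) = Δ0 + γ(g). It is a toy example: Δ = 1/8 matches the two-dimensional Ising operator mentioned above, but the anomalous-dimension coefficient used in dimension() is an arbitrary placeholder rather than a result from any particular theory.

```python
import numpy as np

# Scale-invariant two-point function: G(x) = |x|**(-2*Delta).
# Under a dilation x -> lam*x it must satisfy G(lam*x) = lam**(-2*Delta) * G(x).

def two_point(x, delta):
    return np.abs(x) ** (-2.0 * delta)

delta = 0.125                      # e.g. the sigma operator of the critical 2D Ising model
x, lam = 3.7, 2.5
assert np.isclose(two_point(lam * x, delta),
                  lam ** (-2.0 * delta) * two_point(x, delta))

# Weak-coupling form Delta(g) = Delta0 + gamma(g), truncated at leading order.
def dimension(g, delta0=1.0, gamma1=0.1):   # gamma1 is an illustrative made-up coefficient
    return delta0 + gamma1 * g

print(dimension(0.0), dimension(0.3))       # classical value, then a small anomalous correction
```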
Such a separation of scaling dimensions into the classical and anomalous part is only meaningful when the couplings are small, so that γ(g) is a small correction. Generally, due to quantum mechanical effects, the couplings do not remain constant, but vary (in the jargon of quantum field theory, run) with the distance scale according to their beta-function. Therefore, the anomalous dimension also depends on the distance scale in such theories. In particular, correlation functions of local operators are no longer simple powers but have a more complicated dependence on the distances, generally with logarithmic corrections. It may happen that the evolution of the couplings leads to a value where the beta-function vanishes. Then at long distances the theory becomes scale invariant, and the anomalous dimensions stop running. Such a behavior is called an infrared fixed point. In very special cases, it may happen that the couplings and the anomalous dimensions do not run at all, so that the theory is scale invariant at all distances and for any value of the coupling. For example, this occurs in the N=4 supersymmetric Yang–Mills theory. References Conformal field theory Quantum field theory
Scaling dimension
[ "Physics" ]
873
[ "Quantum field theory", "Quantum mechanics", "Quantum physics stubs" ]
41,287,104
https://en.wikipedia.org/wiki/Aperiodic%20set%20of%20prototiles
A set of prototiles is aperiodic if copies of the prototiles can be assembled to create tilings, such that all possible tessellation patterns are non-periodic. The aperiodicity referred to is a property of the particular set of prototiles; the various resulting tilings themselves are just non-periodic. A given set of tiles, in the Euclidean plane or some other geometric setting, admits a tiling if non-overlapping copies of the tiles in the set can be fitted together to cover the entire space. A given set of tiles might admit periodic tilings — that is, tilings that remain invariant after being shifted by a translation (for example, a lattice of square tiles is periodic). It is not difficult to design a set of tiles that admits non-periodic tilings as well as periodic tilings. (For example, randomly arranged tilings using a 2×2 square and 2×1 rectangle are typically non-periodic.) However, an aperiodic set of tiles can only produce non-periodic tilings. Infinitely many distinct tilings may be obtained from a single aperiodic set of tiles. The best-known examples of an aperiodic set of tiles are the various Penrose tiles. The known aperiodic sets of prototiles are seen on the list of aperiodic sets of tiles. The underlying undecidability of the domino problem implies that there exists no systematic procedure for deciding whether a given set of tiles can tile the plane. History Polygons are plane figures bounded by straight line segments. Regular polygons have all sides of equal length as well as all angles of equal measure. As early as AD 325, Pappus of Alexandria knew that only 3 types of regular polygons (the square, equilateral triangle, and hexagon) can fit perfectly together in repeating tessellations on a Euclidean plane. Within that plane, every triangle, irrespective of regularity, will tessellate. In contrast, regular pentagons do not tessellate. However, irregular pentagons, with different sides and angles, can tessellate. There are 15 types of irregular convex pentagons that tile the plane. Polyhedra are the three-dimensional analogues of polygons. They are built from flat faces and straight edges and have sharp corners at the vertices. Although the cube is the only regular polyhedron that admits a tessellation of space, many non-regular 3-dimensional shapes can tessellate, such as the truncated octahedron. The second part of Hilbert's eighteenth problem asked for a single polyhedron tiling Euclidean 3-space, such that no tiling by it is isohedral (an anisohedral tile). The problem as stated was solved by Karl Reinhardt in 1928, but sets of aperiodic tiles have been considered as a natural extension. The specific question of aperiodic sets of tiles first arose in 1961, when logician Hao Wang tried to determine whether the Domino Problem is decidable — that is, whether there exists an algorithm for deciding if a given finite set of prototiles admits a tiling of the plane. Wang found algorithms to enumerate the tilesets that cannot tile the plane, and the tilesets that tile it periodically; by this he showed that such a decision algorithm exists if every finite set of prototiles that admits a tiling of the plane also admits a periodic tiling. Hence, when in 1966 Robert Berger found an aperiodic set of prototiles, this demonstrated that the tiling problem is in fact not decidable. (Thus Wang's procedures do not work on all tile sets, although that does not render them useless for practical purposes.) This first such set, used by Berger in his proof of undecidability, required 20,426 Wang tiles. 
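Wang's criterion can be made concrete with a small brute-force sketch: a finite set of Wang tiles admits a periodic tiling of the plane exactly when it tiles some n × m torus with matching colors on every shared edge. The Python code below is a naive, exponential-time illustration of that check for tiny inputs only; it is not Wang's procedure, and by the undecidability result just discussed no finite search of this kind can decide the general Domino Problem. The one-tile example set is a made-up toy, not one of the aperiodic sets named in this article.

```python
from itertools import product

# A Wang tile is a 4-tuple of edge colors: (north, east, south, west).

def tiles_torus(tiles, n, m):
    """Brute-force search for a valid n x m toroidal tiling (demo only; exponential)."""
    for choice in product(range(len(tiles)), repeat=n * m):
        grid = [[tiles[choice[i * m + j]] for j in range(m)] for i in range(n)]
        ok = True
        for i in range(n):
            for j in range(m):
                north, east, south, west = grid[i][j]
                if east != grid[i][(j + 1) % m][3]:    # east edge vs. right neighbour's west edge
                    ok = False
                if south != grid[(i + 1) % n][j][0]:   # south edge vs. lower neighbour's north edge
                    ok = False
        if ok:
            return grid
    return None

# Toy set: one tile whose opposite edges match, so it tiles a 1 x 1 torus
# and hence the plane periodically.
toy = [("a", "b", "a", "b")]
print(tiles_torus(toy, 1, 1) is not None)   # True
```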
Berger later reduced his set to 104, and Hans Läuchli subsequently found an aperiodic set requiring only 40 Wang tiles. A smaller aperiodic set of 13 Wang tiles was published by Karel Culik II in 1996. However, a smaller aperiodic set, of six non-Wang tiles, was discovered by Raphael M. Robinson in 1971. Roger Penrose discovered three more sets in 1973 and 1974, reducing the number of tiles needed to two, and Robert Ammann discovered several new sets in 1977. The question of whether an aperiodic set exists with only a single prototile is known as the einstein problem. Constructions There are few constructions of aperiodic tilings known, even forty years after Berger's groundbreaking construction. Some constructions are of infinite families of aperiodic sets of tiles. Those constructions that have been found are mostly constructed in one of a few ways, primarily by forcing some sort of non-periodic hierarchical structure. Despite this, the undecidability of the Domino Problem ensures that there must be infinitely many distinct principles of construction, and that in fact, there exist aperiodic sets of tiles for which there can be no proof of their aperiodicity. There can be no aperiodic set of tiles in one dimension: it is a simple exercise to show that any set of tiles in the line either cannot be used to form a complete tiling, or can be used to form a periodic tiling. Aperiodicity of prototiles requires two or more dimensions. References
Aperiodic set of prototiles
[ "Physics" ]
1,110
[ "Tessellation", "Aperiodic tilings", "Symmetry" ]
67,025,945
https://en.wikipedia.org/wiki/Water%20droplet%20erosion
Water droplet erosion (WDE) is "a form of materials wear that is caused by the impact of liquid droplets with sufficiently high speed." The phenomenon was previously known as liquid impingement erosion (LIE). Distinction from other phenomena The emphasis on discrete water droplets serves to distinguish the WDE problem from liquid jet erosion and cavitation. The impact pressures produced by discrete water droplet impacts span a range considerably higher than the stagnation pressure created by a liquid jet. The difference between WDE and cavitation erosion is that WDE usually involves a gaseous or vaporous phase containing discrete liquid droplets, while cavitation erosion is observed when a continuous liquid phase carries separate gaseous bubbles or cavities inside it. Recently, Ibrahim & Medraj developed an analytical model to predict the threshold speed of water droplet erosion and verified it experimentally, a challenge that had been attempted without success since the 1950s. Consequences For an extended period of time, many industries have encountered the problem of erosion due to water droplet impact, and it continues to reappear wherever a component rotates or moves at high speed in a wet environment. Recently, with the use of larger wind turbine blades, the issue of leading-edge erosion due to rain droplets has become more serious. The aerodynamic efficiency of turbine blades is severely diminished by leading-edge erosion, resulting in a considerable decrease in annual energy production. References Materials degradation
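To give a rough sense of why discrete droplet impacts load a surface far more severely than a steady jet at the same speed, the sketch below compares the textbook water-hammer (Joukowsky-type) estimate p ≈ ρcV with the stagnation pressure p = ½ρV². This is generic first-order fluid mechanics rather than the Ibrahim & Medraj threshold model cited above, and the impact speed used in the example is an assumed illustrative value.

```python
RHO_WATER = 1000.0   # kg/m^3
C_WATER = 1480.0     # m/s, approximate speed of sound in water

def water_hammer_pressure(v):
    """Joukowsky-type estimate of the transient pressure at droplet impact."""
    return RHO_WATER * C_WATER * v

def stagnation_pressure(v):
    """Steady stagnation pressure of a liquid jet at speed v."""
    return 0.5 * RHO_WATER * v ** 2

v = 300.0  # m/s, assumed representative impact speed
print(f"water hammer: {water_hammer_pressure(v) / 1e6:.0f} MPa, "
      f"stagnation: {stagnation_pressure(v) / 1e6:.0f} MPa")
```

At this assumed speed the water-hammer estimate is roughly an order of magnitude larger than the stagnation pressure, consistent with the qualitative comparison made above.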
Water droplet erosion
[ "Materials_science", "Engineering" ]
299
[ "Materials degradation", "Materials science" ]
67,027,040
https://en.wikipedia.org/wiki/Danavorexton
Danavorexton (developmental code name TAK-925) is a selective orexin 2 receptor agonist. It is a small-molecule compound and is administered intravenously. The compound was found to dose-dependently produce wakefulness to a similar degree as modafinil in a phase 1 clinical trial. As of March 2021, danavorexton is under development for the treatment of narcolepsy, idiopathic hypersomnia, and sleep apnea. It is related to another orexin receptor agonist, firazorexton (TAK-994), the development of which was discontinued for safety reasons in October 2021. See also Orexin receptor § Agonists List of investigational sleep drugs § Orexin receptor agonists References External links Danavorexton - AdisInsight Carboxylic acids Cyclohexanes Esters Ethers Experimental drugs Orexin receptor agonists Piperidines Sulfonamides Wakefulness-promoting agents
Danavorexton
[ "Chemistry" ]
203
[ "Esters", "Carboxylic acids", "Functional groups", "Organic compounds", "Ethers" ]
67,030,271
https://en.wikipedia.org/wiki/Propionylation
Propionylation is a post-translational modification of proteins in which a propionyl group is added to a lysine amino acid of a protein. Propionylation participates in crucial biological processes, including metabolic processes and the cellular stress response. Lysine propionylation was first identified on histone proteins, and has since also been identified on other proteins. Histone propionylation is a mark of active chromatin. The substrate for protein propionylation is propionyl-CoA. Propionyl-CoA in the cell is metabolised by the enzyme propionyl-CoA carboxylase. Accumulation of propionyl-CoA leads to increased protein propionylation. In patients with propionic acidemia, a rare autosomal recessive metabolic disorder, propionyl-CoA levels are elevated and propionylation is increased, which might contribute to the pathology in these patients. References Post-translational modification
Propionylation
[ "Chemistry" ]
200
[ "Post-translational modification", "Gene expression", "Biochemical reactions" ]
55,752,776
https://en.wikipedia.org/wiki/Quadratic%20quadrilateral%20element
The quadratic quadrilateral element, also known as the Q8 element, is a type of element used in finite element analysis to approximate the exact solution of a given differential equation over a 2D domain. It is a two-dimensional finite element with both local and global coordinates. This element can be used for plane stress or plane strain problems in elasticity. The quadratic quadrilateral element has modulus of elasticity E, Poisson's ratio v, and thickness t. References FEM elements
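The article does not give the element's interpolation functions, but a common construction of the 8-node (serendipity) quadrilateral uses the standard shape functions sketched below in the element's local coordinates (ξ, η). This is a minimal illustration assuming the usual corner-then-midside node ordering; it only verifies the partition-of-unity and nodal (Kronecker-delta) properties and does not assemble any stiffness matrix.

```python
import numpy as np

# Local coordinates of the 8 nodes: four corners, then four midside nodes.
NODES = [(-1, -1), (1, -1), (1, 1), (-1, 1),
         (0, -1), (1, 0), (0, 1), (-1, 0)]

def q8_shape_functions(xi, eta):
    """Serendipity shape functions of the 8-node quadrilateral at (xi, eta)."""
    N = np.zeros(8)
    for a, (xi_a, eta_a) in enumerate(NODES):
        if xi_a != 0 and eta_a != 0:               # corner node
            N[a] = 0.25 * (1 + xi * xi_a) * (1 + eta * eta_a) * (xi * xi_a + eta * eta_a - 1)
        elif xi_a == 0:                            # midside node on an eta = +/-1 edge
            N[a] = 0.5 * (1 - xi ** 2) * (1 + eta * eta_a)
        else:                                      # midside node on a xi = +/-1 edge
            N[a] = 0.5 * (1 + xi * xi_a) * (1 - eta ** 2)
    return N

# Checks: the functions sum to 1 at a sample point, and N_a is 1 at node a, 0 at the others.
assert np.isclose(q8_shape_functions(0.3, -0.7).sum(), 1.0)
for a, node in enumerate(NODES):
    vals = q8_shape_functions(*node)
    assert np.isclose(vals[a], 1.0) and np.allclose(np.delete(vals, a), 0.0)
```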
Quadratic quadrilateral element
[ "Mathematics" ]
108
[ "Mathematical analysis", "Mathematical analysis stubs" ]
55,753,389
https://en.wikipedia.org/wiki/Iain%20McCulloch%20%28academic%29
Iain McCulloch is Professor of Polymer Chemistry in the Department of Chemistry at the University of Oxford, UK, a fellow and tutor in chemistry at Worcester College, an adjunct professor at King Abdullah University of Science and Technology (KAUST), Saudi Arabia, and a visiting professor in the Department of Chemistry at Imperial College London. Education McCulloch was born in Scotland. He studied chemistry at the University of Strathclyde. He obtained his Bachelor of Science with First Class Honors in 1986 and a Ph.D. in polymer chemistry in 1989. Research After graduating with a PhD in polymer chemistry from the University of Strathclyde, UK, McCulloch began his career at Hoechst Celanese Corporation in New Jersey, US, where he designed, developed and commercialized functional polymers for a range of optical, electronic, and drug-delivery applications, including a water-based antireflective polymer system for photoresist processes with AZ Clariant. He then moved to ISP Corporation, in New Jersey, US, to manage the polymer physics research group, working on developing methodology for rheological surface science and electronic products. In 2000, he returned to the UK as a research manager at Merck Chemicals in Southampton, where he was responsible for developing semiconducting polymers for organic electronic and solar-cell applications. A key aspect of his research was the exploitation of molecular alignment and organization of semiconducting polymers and small molecules in the liquid crystalline phase. At Merck, his group discovered a liquid crystalline thiophene polymer, pBTTT, which has underpinned many research advances in the charge transport of organic thin films since its publication in Nature Materials in 2006; the paper was named one of the ten most influential papers published in the journal's first five years. In 2007, McCulloch joined the faculty at Imperial College to continue research in organic semiconductor materials. At this time, along with his colleague Professor Martin Heeney, he co-founded the specialty chemical company Flexink Ltd, of which he is currently the managing director; the company has been in operation for the last 12 years, supplying a range of electronic materials to leading manufacturers across the world. At Imperial, he continued to explore new chemistries for organic solar cells and transistors, developing the polymer IDTBT, which exhibits disorder-free transport, and an early non-fullerene electron acceptor for solar cells, IDTBR. McCulloch joined KAUST in 2014 and became Director of the KAUST Solar Center in 2016. His work developing new solar cell materials led to the discovery that a ternary materials blend, with two non-fullerene acceptors, could outperform the equivalent binary devices, leading to high power conversion efficiencies that helped drive a resurgence in the field. He continues to expand his application focus for polymer materials to perform at the interface between biology and electronics, demonstrating, together with colleagues Jonathan Rivnay, George Malliaras and Sahika Inal, electron transport in an organic electrochemical transistor (OECT) operated in an aqueous electrolyte under ambient conditions. This discovery provided the impetus for a new class of polymer-based electrochemical transistor sensors for biological applications and improved the sophistication of bioelectronic devices. 
Further work with colleague Inal has led to the use of these devices in the detection of lactate and glucose, with potential societal impact in healthcare. More recently, his group has demonstrated the potential for hydrogen production arising from the photocatalysis of water using nanoparticle blends of organic semiconductors. Recognition McCulloch's scientific achievements were recognised by the 2011 analysis of the "Top 100 Materials Scientists, 2000-10, Ranked by Citation Impact", where he was ranked at number 35 globally and number 2 in the UK. He is among the top 100 most cited chemists in the world, and is included in the list of Highly Cited Researchers for materials science in 2014, 2015, 2016, 2017 and 2018, for chemistry in 2017 and 2018, and for cross-field in 2019 and 2020. McCulloch is a Fellow of the Royal Society, a Member of Academia Europaea, a Fellow of the European Academy of Sciences, a Fellow of the Royal Society of Chemistry and a member of the Advanced Materials (Wiley) Hall of Fame. McCulloch has received the Royal Society of Chemistry 2020 Interdisciplinary Prize, the 2014 Tilden Prize, and the 2009 Creativity in Industry Prize. In 2020, he was also awarded the European Academy of Sciences Blaise Pascal Medal for Materials Science. He was also recognized with a Royal Society Wolfson Merit Award in 2014. Research 446 peer-reviewed papers, 66 patents filed, 1 book edited, 6 book chapters co-authored. Google Scholar h-index: 107. >48,000 citations and >390 papers with at least 10 citations. Career Co-founder and Director, Flexink Ltd (since 2007), a specialty chemicals company. Co-founder of Solar Press, an organic solar cell start-up funded by the Carbon Trust (2009-2013). Partner in C-Change LLP (2008-2014), a technology consultancy partnership. McCulloch is Associate Editor of Science Advances, a member of the International Advisory Board of Advanced Materials, an Advisory Board Member of the (RSC) Journal of Materials Chemistry C and Materials Advances, an Editorial Advisory Board Member of Chemistry of Materials and an Associate Editor of Materials Science and Engineering R: Reports. References External links Professor of Polymer Chemistry, in the Department of Chemistry Fellow and Tutor in Chemistry at Worcester College Professor of Polymer Materials within the Program of Chemical Sciences at King Abdullah University of Science and Technology (KAUST), Year of birth missing (living people) Living people Polymer scientists and engineers Scottish chemists Alumni of the University of Strathclyde Academic staff of King Abdullah University of Science and Technology Fellows of Worcester College, Oxford
Iain McCulloch (academic)
[ "Chemistry", "Materials_science" ]
1,198
[ "Polymer scientists and engineers", "Physical chemists", "Polymer chemistry" ]
55,754,856
https://en.wikipedia.org/wiki/Pujiang%20line
The Pujiang line of Shanghai Metro () is an automated, driverless, rubber-tired Shanghai Metro line in the town of Pujiang in the Shanghainese district of Minhang. It was originally conceived as phase 3 of Shanghai Metro line 8, but was afterwards constructed as a separate line, connecting with line 8 at its southern terminus, Shendu Highway. The line opened for passenger trial operations on March 31, 2018. It is the first automated, driverless people mover line in the Shanghai Metro, and has 6 stations with a total length of . The people mover was expected to carry 73,000 passengers a day. The line is colored gray on system maps. The line is operated by Shanghai Keolis Public Transport Operation & Management Co. Ltd. (), a joint venture owned by Keolis and Shanghai Shentong Metro Group, for at least five years after opening. History Stations Service routes Important stations - Passengers can interchange to line 8. Future expansion There are no plans to extend the line. Station name change On June 9, 2013, the Aerospace Museum station was renamed (before the Pujiang line began serving the station). Headways Technology Signalling The entire operation of the line is remotely controlled from a central dispatch room. Trains operate using the Cityflo 650 communications-based train control (CBTC) system from CRRC Puzhen Bombardier Transportation Systems Limited, a joint venture between Bombardier and CRRC Nanjing Puzhen Co., Ltd. Initially, six staff members worked at each APM station, but the operator hopes to reduce that to one or two. Rolling stock The Pujiang line uses rubber-tyred Bombardier Innovia APM 300 trains. The trains have 4 cars each, totaling in length, with capacity for 566 passengers per train. There are large windows at each end of the train allowing passengers to look out of the front and rear. The small trains with rubber tires running on concrete tracks allow turning radii as tight as to be negotiated, compared to over for a typical metro on steel rails. On 13 January 2017, Bombardier delivered the first of 44 autonomous people movers to Shanghai. References 2018 establishments in Shanghai People mover systems in China Railway lines opened in 2018 Rubber-tyred metros Shanghai Metro lines Siemens Mobility projects
Pujiang line
[ "Technology", "Engineering" ]
491
[ "Siemens Mobility projects", "Transport systems" ]
55,762,582
https://en.wikipedia.org/wiki/Seljuk%20stucco%20figures
The Seljuk stucco figures are stucco (plaster) figures found in the region of the Seljuk Empire, from its "golden age" between the 11th and 13th centuries. They decorated the inner walls and friezes of Seljuk palaces, together with other stucco ornaments, concealing the wall behind them. The figures were painted in bright colors and often gilded. They represented royal figures and were symbols of power and authority. Islamic art of Seljuk The Seljuks were a Turkic dynasty of Central Asian nomadic origins, who became the new rulers of the eastern Islamic world after defeating the Ghaznavids in the Battle of Dandanaqan and, later, the Buyid dynasty. Following these victories, the Seljuks established themselves as the new patrons of the Abbasid Caliphate and Sunni Islam. In only half a century, the Seljuks managed to create a vast empire encompassing modern Iran, Iraq, and much of Anatolia. Under the Seljuks, Iran enjoyed a period of cultural prosperity. Many forms of architecture and art were developed during the period and influenced later artistic developments in the region and beyond. In ceramics, fine motifs were created in underglaze painting, luster decorations, and polychrome painting. Metal objects were decorated with inlays of silver and gold. The Seljuks developed many figurative motifs, with frequent depictions of animals, men, and women. Anthropomorphic representations of figures are not rare at all in Muslim culture: whereas iconic images in holy places such as mosques are strictly forbidden, in secular places depictions of figures are common. Other forms of Seljuk art are discussed in the page on the Seljuk Empire. Seljuk palaces All the Seljuk palaces are now in ruins. Excavations indicate that these palaces had once been decorated with tiles and with stucco wall reliefs of geometric patterns and figures. In Lashgari Bazar, the ruin of a former Ghaznavid-period palace, polychrome frescoes depicting 44 soldiers were found decorating the lower floor of the audience hall. They all have similar round faces and almond-shaped eyes, traditionally associated with the Turks of Central Asia. The stucco figures would have decorated similar royal palaces in the audience hall or the royal court. They were found decorating large palaces of the Seljuk sultans, or smaller royal courts of local vassals or successors. The stucco figures may be part of a larger stucco geometric ornamentation which conceals the base wall behind it. One example of stucco figures in complete form comes from late 12th-century Rey; it depicts the enthroned Seljuk Sultan Tughril II (1194) surrounded by his officers. Similar examples were found in Bast, Afghanistan, in Samarkand, and elsewhere in Uzbekistan. These were painted in bright colors of red, blue, and black, and gilded with gold. The darkness of the palace rooms where they were placed meant that these figures needed to stand out as much as possible. Form Stucco or plaster is a soft, cement-like water-based material that is easy to carve when dry and mold when still wet. Its lightness makes it easy to affix to walls. Many 12th-century stucco figures survived in pristine condition because of the preserving dryness of the desert where they were found. Seljuk stucco figures were painted in bright colors of blue (powdered lapis lazuli), red (powdered ruby), and black, and were gilded with gold. The figures were representations of power. In a royal palace setting, they represent figures related to the power of the empire, e.g. 
royal guards, royal viziers, courtiers or amirs. Warrior figures were depicted clutching swords. They wear richly colored caftans, trousers, tiraz bands, and long boots. Royal figures were depicted wearing crowns. The two figures in the Metropolitan Museum of Art in New York are both wearing crowns; one figure wears the winged crown, an ancient symbol of authority first recorded on 3rd-century Sasanian coins. All of the Seljuk stucco figures have round faces with typically high cheekbones and almond-shaped eyes, known as the Turkic moon face, reflecting the Turkic and Mongol ethnic type. The stucco figures were usually displayed in settings of pomp and circumstance, enhancing the actual ceremonies that took place in the rooms where the figures were set. References Cited works External links Art of the Seljuk Empire Seljuk architecture Plastering Persian art
Seljuk stucco figures
[ "Chemistry", "Engineering" ]
943
[ "Building engineering", "Coatings", "Plastering" ]
44,141,694
https://en.wikipedia.org/wiki/Hydrology%20of%20the%20Catawissa%20Tunnel
The Catawissa Tunnel is a mine drainage tunnel in Schuylkill County, Pennsylvania, in the United States. Its properties include the discharge, the pH, the chemical hydrology, and the water temperature. A total of 30 different metals and metalloids have been observed in the tunnel's waters. The hydrological data comes from a gauge on the tunnel at a location of 40°54'39" north and 76°03'59" west and an elevation of above sea level. Some of the most abundant metals in the waters of the tunnel include iron, aluminum, and manganese. These metals have concentrations on the order of several milligrams per liter. A number of other metals have concentrations on the order of micrograms per liter and some metals are found in even lower concentrations. Nonmetals such as nitrates, sulfates, fluorides, chlorides, and silica are also present in the tunnel. The concentrations of such nonmetals range between several micrograms per liter and several milligrams per liter. The discharge of the Catawissa Tunnel is similar to the discharges of other mine drainage tunnels in the watershed of Catawissa Creek, being on the order of several thousand gallons per minute. However, it can become significantly higher during times of heavy rainfall. Additionally, the tunnel is highly acidic, with a pH averaging slightly more than 4. Background Coal mining began in the South Green Mountain Coal Basin in the middle of the 1800s. The Catawissa Tunnel was constructed in the 1930s to drain the aforementioned coal basin via gravity. Metals and metalloids The concentrations of metals in the Catawissa Tunnel are lower than other mine drainage tunnels in the watershed of Catawissa Creek. The concentration of iron in the water discharged from the Catawissa Tunnel is 1.01 milligrams per liter and the daily load of iron is . The maximum allowable concentration of iron is 0.58 milligrams per liter and the maximum allowable load is . The iron load requires a 43 percent reduction to meet its total maximum daily load requirements. The concentration of manganese is 0.31 milligrams per liter and the load of manganese is per day. The maximum allowable manganese concentration is 0.31 milligrams per liter and the maximum allowable load is per day. The load of manganese requires no reduction to meet its total maximum daily load requirements. The concentration of aluminum in the tunnel's waters is 1.27 milligrams per liter and the daily load is . The maximum allowable concentration is 0.39 milligrams per liter and the maximum allowable daily load is . The aluminum load requires a 69 percent reduction to meet its total maximum daily load requirements. On April 15, 1975, the concentrations of magnesium and calcium were 3.70 and 3.50 milligrams per liter, respectively. The concentration of strontium was measured to be 20 micrograms per liter and the barium is 23 micrograms per liter. The concentrations of lithium, sodium and potassium were 0.10, 50, and 60 micrograms per liter, respectively. The concentrations of titanium and zirconium in the discharge of the Catawissa Tunnel are both less than 1 microgram per liter. The vanadium concentration is less than 0.5 milligrams per liter, the chromium and molybdenum concentrations are less than 1 milligram per liter. The concentrations of cobalt, nickel, and copper are 40 micrograms per liter, 100 micrograms per liter, and 30 micrograms per liter, respectively. The silver concentration is less than one microgram per liter. 
The concentrations of zinc and mercury are 200 and 0.5 micrograms per liter, respectively. The concentrations of gallium, germanium, and tin in the waters of the Catawissa Tunnel are each less than 1 microgram per liter. The bismuth concentration is less than 1 microgram per liter and the arsenic concentration is 1 milligram per liter. A number of other elements have been observed in the discharge of the Catawissa Tunnel, but their concentrations are unknown. These include beryllium, boron, cadmium, and lead. Nonmetals On April 15, 1975, the concentration of nitrogen in the form of nitrates was measured to be 0.08 milligrams per liter in the waters of the Catawissa Tunnel. The concentration of organic carbon was measured to be 13.0 milligrams per liter. The concentration of hydrogen ions in the tunnel's waters was measured to be 0.12689 milligrams per liter. The concentration of sulfates in the waters of the Catawissa Tunnel was measured to be 58.0 milligrams per liter on April 15, 1975. The chloride concentration was 1.4 milligrams per liter and the fluoride concentration was 0.1 milligrams per liter. Additionally, the mineral silica is present in the waters of the tunnel. Its concentration was measured to be 8 micrograms per liter. The concentration of total dissolved solids in the waters of the Catawissa Tunnel was measured to be 0.19 tons per day and 0.12 tons per acre-foot on April 15, 1975. Other hydrological information The discharge of the Catawissa Tunnel is 820,000 gallons per day. The discharge of the tunnel ranges from 4,000 to 10,000 gallons per minute, although it can reach 18,000 gallons per minute during rainfall. This is fairly close to the discharges of the other mine drainage tunnels in the watershed of Catawissa Creek. The pH of the water discharged from the Catawissa Tunnel ranges from 3.8 to 4.5, with an average of 4.17. However, it was measured to be 3.9 on April 15, 1975. The concentration of acidity in the tunnel's water is 18.44 milligrams per liter and the daily load of acidity is . The acidity load requires a 90 percent reduction to meet its total maximum daily load requirements. The concentration of alkalinity is 4.11 milligrams per liter and the daily load of alkalinity is per day. The concentration of water hardness was measured to be 24.0 milligrams per liter in 1975. The temperature of the water discharged from the Catawissa Tunnel was measured to be on April 15, 1975. The specific conductance at this time was 175 microsiemens per centimeter at . References Hydrology Schuylkill County, Pennsylvania Water in Pennsylvania
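The daily loads quoted throughout this article follow from multiplying a concentration by the tunnel's discharge. The sketch below shows that arithmetic with the unit conversions made explicit, using the iron concentration and the 820,000-gallon-per-day figure given above; the result is only an order-of-magnitude check, since the article's own load values were lost in conversion and may be expressed in different units.

```python
# Daily mass load from a concentration and a discharge: load = C * Q.

LITRES_PER_US_GALLON = 3.785   # approximate
MG_PER_KG = 1e6

def daily_load_kg(concentration_mg_per_l, discharge_gal_per_day):
    """Daily load in kilograms for a given concentration (mg/L) and discharge (US gal/day)."""
    litres_per_day = discharge_gal_per_day * LITRES_PER_US_GALLON
    return concentration_mg_per_l * litres_per_day / MG_PER_KG

# Iron: 1.01 mg/L at the quoted 820,000 gal/day discharge.
print(round(daily_load_kg(1.01, 820_000), 2), "kg/day of iron")
```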
Hydrology of the Catawissa Tunnel
[ "Chemistry", "Engineering", "Environmental_science" ]
1,362
[ "Hydrology", "Environmental engineering" ]
44,144,166
https://en.wikipedia.org/wiki/ERp27
ERp27 (Endoplasmic Reticulum protein 27.7 kDa) is a homologue of PDI (protein disulfide-isomerase), localised to the Endoplasmic Reticulum. The structure of ERp27 has been solved by both X-ray crystallography and NMR spectroscopy, showing it to be composed of two thioredoxin-like domains with homology to the non-catalytic b and b' domains of PDI. The function of ERp27 is unknown, but on the basis of its homology with PDI it is thought to possess chaperone activity. References Endoplasmic reticulum resident proteins
ERp27
[ "Chemistry" ]
143
[ "Biochemistry stubs", "Protein stubs" ]
44,146,963
https://en.wikipedia.org/wiki/Metallomesogen
Metallomesogens are metal complexes that exhibit liquid crystalline behavior. Thus, they adopt ordered structures in the molten phase, e.g. smectic and nematic phases. The dominant interactions responsible for their phase behavior are the nonbonding contacts between organic substituents. Two early classes of such materials are based on substituted ferrocenes and dithiolene complexes; more recent work shows that alkoxystilbazoles have similar utility. References Soft matter Optical materials Phase transitions Transition metals Coordination chemistry
Metallomesogen
[ "Physics", "Chemistry", "Materials_science" ]
108
[ "Physical phenomena", "Phase transitions", "Soft matter", "Phases of matter", "Critical phenomena", "Coordination chemistry", "Materials", "Optical materials", "Condensed matter physics", "Statistical mechanics", "Matter" ]
44,148,074
https://en.wikipedia.org/wiki/Phycosphere
The phycosphere is a microscale mucus region, rich in organic matter, that surrounds a phytoplankton cell. This area is high in nutrients due to extracellular waste from the phytoplankton cell, and it has been suggested that bacteria inhabit this area to feed on these nutrients. This high-nutrient environment creates a microbiome and a diverse food web for microbes such as bacteria and protists. It has also been suggested that the bacterial assemblages within the phycosphere are species-specific and can vary depending on different environmental factors. In terms of comparison, the phycosphere of phytoplankton has been suggested to be analogous to the rhizosphere of plants, the root zone important for nutrient recycling. Both plant roots and phytoplankton exude chemicals which drastically alter their immediate surroundings, including the pH and oxygen levels. In terms of community construction, chemotaxis is used in both environments to drive the recruitment of microbes. In the rhizosphere, chemotaxis is used by the host – the plant – to mediate the motility of soil microbes, which allows for microbial colonization. In the phycosphere, the release of specific chemical exudates by phytoplankton elicits a response from bacterial symbionts that exhibit chemotaxis signaling, thereby enabling the recruitment of microbes and subsequent colonization. The two interfaces also share a few similar microbes, chemicals, and metabolites involved in the host–symbiont interactions. These include microbes such as Rhizobium, which in the phycospheres of green algae was found to be the foremost microbe compared to other abundant community members. Chemicals such as dimethylsulfoniopropionate (DMSP) and 2,3-dihydroxypropane-1-sulfonate (DHPS) and metabolites such as sugars and amino acids are implicated in the mechanisms of action of both microbiomes. Phytoplankton-bacteria interactions The microscale interactions between phytoplankton and bacteria are complex. Phytoplankton-bacteria interactions have the potential to be parasitic, competitive, or mutualistic. Interactions between phytoplankton and bacteria in the phycosphere could be particularly important in low-nutrient regions of the ocean and can be an example of mutualism. In marine ecosystems that are low in nutrients (i.e. oligotrophic regions of the oceans), it could be beneficial for the phytoplankton to have remineralizing bacteria in the phycosphere for nutrient recycling. It has been suggested that while the bacterial activity may be low, the taxonomic diversity and nutritional diversity are high. This may suggest that phytoplankton species rely on a diverse array of bacterial interactions for recycled nutrients in these oligotrophic regions, while the bacteria rely on the organic matter surrounding the phycosphere as a source of food. However, bacterial-phytoplankton interactions in the phycosphere could also be parasitic. In the same low-nutrient oligotrophic regions of the ocean, phytoplankton that are nutrient-stressed may not be able to produce this protective mucus layer or its associated antibiotics. The bacteria, which are also food-stressed, could kill the phytoplankton and use it as a food substrate. Also, bacteria metabolize organic matter through aerobic respiration, which depletes oxygen from the water and can lower the pH of the water column. If enough organic matter is produced, the bacteria could potentially harm the phytoplankton by causing the water to become more acidic. 
(See also eutrophication.) Examples of bacteria associated with the phycosphere In reality, the bacterial diversity of the phycosphere is extremely high and depends on environmental factors, such as turbulence in the water (which determines whether the bacteria can attach to the mucus or the phytoplankton cell) or the concentrations of nutrients. Also, the bacteria tend to be highly specialized when associated with this region. Nevertheless, here are some examples of bacterial genera associated with the phycosphere. Pseudomonas Achromobacter Roseobacter Flavobacteriaceae Alteromonadaceae Arthrospira platensis Terrimonas rubra C. vulgaris Sediminibacterium Chryseobacterium See also Biomineralization Hologenome theory of evolution Marine microorganisms Microbial loop Microbiota References 5. Seymour, Justin R., et al. "Zooming in on the Phycosphere: the Ecological Interface for Phytoplankton–Bacteria Relationships." Nature Microbiology, vol. 2, no. 7, 2017, pp. 1–13, doi:10.1038/nmicrobiol.2017.65. 6. Kim, B.-H., Ramanan, R., Cho, D.-H., Oh, H.-M., & Kim, H.-S. (2014). Role of Rhizobium, a plant growth promoting bacterium, in enhancing algal biomass through mutualistic interaction. Biomass and Bioenergy, 69, 95–105, doi:10.1016/j.biombioe.2014.07.015. 7. Geng, H., & Belas, R. (2010). Molecular mechanisms underlying roseobacter–phytoplankton symbioses. Current Opinion in Biotechnology, 21(3), 332–338, doi:10.1016/j.copbio.2010.03.013. 8. Ramanan, R., Kang, Z., Kim, B.-H., Cho, D.-H., Jin, L., Oh, H.-M., & Kim, H.-S. (2015). Phycosphere bacterial diversity in green algae reveals an apparent similarity across habitats. Algal Research, 8, 140–144, doi:10.1016/j.algal.2015.02.003. 9. Scharf, B. E., Hynes, M. F., & Alexandre, G. M. (2016). Chemotaxis signaling systems in model beneficial plant–bacteria associations. Plant Molecular Biology, 90(6), 549–559, doi:10.1007/s11103-016-0432-4. Oceanography
Phycosphere
[ "Physics", "Environmental_science" ]
1,406
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics" ]