| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
12,219,400 | https://en.wikipedia.org/wiki/Glycerol%20ester%20of%20wood%20rosin | Glycerol ester of wood rosin (or gum rosin), also known as glyceryl abietate or ester gum, is an oil-soluble food additive (E number E445). The food-grade material is used in foods, beverages, and cosmetics to keep oils in suspension in water, and its name may be shortened in the ingredient list to glycerol ester of rosin. It is also used as an ingredient in the production of chewing gum and ice cream.
To make the glycerol ester of wood rosin, refined wood rosin is reacted with glycerin to produce the glycerol ester.
Glycerol ester of wood rosin is an alternative to brominated vegetable oil in citrus oil-flavored soft drinks. In some cases, both ingredients are used together.
References
External links
Standard Terminology Relating to Pine Chemicals, Including Tall Oil and Related Products
Resins
Food additives
Adhesives
E-number additives | Glycerol ester of wood rosin | Physics | 210 |
17,888,698 | https://en.wikipedia.org/wiki/Sugon | Sugon (), officially Dawning Information Industry Company Limited, is a supercomputer manufacturer based in the People's Republic of China. The company is a spin-off from research done at the Chinese Academy of Sciences (CAS), and still has close links to it.
History
The company is a development of work done at the Institute of Computer Science, CAS. Under the Chinese government's 863 Program, for the research and development of high technology products, the group launched their first supercomputer (Dawning No. 1) in 1993. In 1996 the group launched the Dawning Company to allow the transfer of research computers into the market.
The company was tasked with developing further supercomputers under the 863 program, which led to the Dawning 5000A and 6000 computers.
The company was listed on the Shanghai Stock Exchange in 2014. CAS still retains stock in the company.
U.S. sanctions
According to the United States Department of Defense the company has links to the People's Liberation Army and, in 2019, Sugon was added to the Bureau of Industry and Security's Entity List due to U.S. national security concerns. In November 2020, the then President of the United States Donald Trump issued an executive order prohibiting any American company or individual from owning shares in companies that the United States Department of Defense has listed as having links to the People's Liberation Army, which included Sugon.
In October 2022, the United States Department of Defense added Sugon to a list of "Chinese military companies" operating in the U.S.
Supercomputers
Dawning was the company's initial name; it was later changed to Sugon. The computers were originally known by the Dawning moniker, but Sugon names also appear in the literature. The model series are described below.
Dawning No.1
The first supercomputer created was Dawning No.1 (Shuguang Yihao, 曙光一号), which received state certification in October 1993. It achieved 640 million FLOPS using four Motorola 88100 CPUs and eight 88200 chips, and more than 20 units were built. The operating system was UNIX System V.
Dawning 1000
The Dawning 1000 was Sugon's second generation supercomputer, and was originally named Dawning No.2 (Shuguang Erhao, 曙光二号). Dawning 1000 was released in 1995, and received state certification on 11 May 1995. The family of supercomputers could achieve 2.5 GFLOPS. This series of the Dawning family consists of the Dawning 1000A and 1000L.
Dawning 2000
The Dawning 2000 was initially released in 1996, and could achieve a peak performance of 4 GFLOPS. A further variant, the Dawning 2000-I, was released in 1998 with a peak performance of 20 GFLOPS. The final model in the series, the Dawning 2000-II, was released in 1999 with a peak performance of 111.7 GFLOPS.
The Dawning 2000 passed state certification on 28 January 2000. The supercomputer was designed as a cluster to achieve over 100 GFLOPS peak performance. The number of CPUs was greatly increased to 164 in comparison to older models, and, like earlier models, the operating system is UNIX.
Dawning 3000
The Dawning 3000 passed state certification on 9 March 2001. Like the Dawning 2000, the system was designed as a cluster, and could achieve 400 GFLOPS peak performance. The number of CPUs increased to 280, and the system consists of ten 2-meter-tall racks weighing 5 tons in total. Power consumption is 25 kW, and one of its tasks was the part of human genome mapping that China was responsible for.
Dawning 4000A
The fifth member of the Dawning family, Dawning 4000A, debuted as one of the top 10 fastest supercomputers in the world on the TOP500 list, with a measured performance of 8.061 teraflops. The system, at the Shanghai Supercomputer Center, utilizes over 2,560 AMD Opteron processors.
Dawning 5000
The Dawning 5000 series was initially planned to use indigenous Loongson processors. However, the Shanghai Supercomputer Center required Microsoft Windows support, whereas Loongson processors only ran Linux.
The resulting Dawning 5000A uses 7,680 1.9 GHz AMD Opteron Quad-core processors, resulting in 30,720 cores, with an Infiniband interconnecting network. The computer occupies an area of 75 square meters and the power consumption is 700 kW. The supercomputer is capable of 180 teraflops and received state certification in June 2008.
The Dawning 5000A was ranked 10th in the November 2008 TOP500 list; at the time, it was also the largest system benchmarked with Windows HPC Server 2008. The system is installed at the Shanghai Supercomputer Center and also runs SUSE Linux Enterprise Server 10.
Dawning 6000
The Dawning 6000 was announced in 2011, at 300 TFLOPS, incorporating 3,000 eight-core Loongson 3B processors at 3.2 GFLOPS/W. It is the "first supercomputer made exclusively of Chinese components" and has a projected speed of over a petaflop (one quadrillion floating-point operations per second). For comparison, the fastest supercomputer as of June 2014 ran at 33 PFLOPS. The same announcement said that a petascale supercomputer was under development and that the launch was anticipated in 2012 or 2013.
See also
Shanghai Supercomputer Center
Nebulae - Dawning TC3600
References
External links
Supercomputers
Computer hardware companies
Supercomputing in China
Defence companies of the People's Republic of China
Manufacturing companies based in Beijing
Chinese entities subject to U.S. Department of the Treasury sanctions | Sugon | Technology | 1,231 |
19,067,071 | https://en.wikipedia.org/wiki/Topological%20pair | In mathematics, more specifically algebraic topology, a pair (X, A) is shorthand for an inclusion of topological spaces i : A → X. Sometimes i is assumed to be a cofibration. A morphism from (X, A) to (X′, A′) is given by two maps f : X → X′ and g : A → A′ such that f ∘ i = i′ ∘ g.
A pair of spaces is an ordered pair (X, A) where X is a topological space and A a subspace. The use of pairs of spaces is sometimes more convenient and technically superior to taking a quotient space of X by A. Pairs of spaces occur centrally in relative homology, homology theory and cohomology theory, where chains in A are made equivalent to 0 when considered as chains in X.
Heuristically, one often thinks of a pair (X, A) as being akin to the quotient space X/A.
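For example, in relative homology the pair (X, A) gives rise to the standard long exact sequence relating the homology of A, of X, and of the pair (a well-known fact, stated here for illustration):

```latex
\cdots \longrightarrow H_n(A) \longrightarrow H_n(X) \longrightarrow H_n(X,A)
\xrightarrow{\;\partial\;} H_{n-1}(A) \longrightarrow H_{n-1}(X) \longrightarrow \cdots
```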
There is a functor from the category of topological spaces to the category of pairs of spaces, which sends a space X to the pair (X, ∅).
A related concept is that of a triple (X, A, B), with B ⊆ A ⊆ X. Triples are used in homotopy theory. Often, for a pointed space with basepoint at x₀, one writes the triple as (X, A, B, x₀), where x₀ ∈ B ⊆ A ⊆ X.
References
Algebraic topology | Topological pair | Mathematics | 215 |
28,353,830 | https://en.wikipedia.org/wiki/C3H3NOS | The molecular formula C3H3NOS may refer to:
Isothiazolinone
Thiazolone | C3H3NOS | Chemistry | 36 |
14,353,229 | https://en.wikipedia.org/wiki/Viral%20shedding | Viral shedding is the expulsion and release of virus progeny following successful reproduction during a host cell infection. Once replication has been completed and the host cell is exhausted of all resources in making viral progeny, the viruses may begin to leave the cell by several methods.
The term is variously used to refer to viral particles shedding from a single cell, from one part of the body into another, and from a body into the environment, where the virus may infect another.
Vaccine shedding is a form of viral shedding which can occur in instances of infection caused by some attenuated (or "live virus") vaccines.
Means
Shedding from a cell into extracellular space
Budding (through cell envelope)
"Budding" through the cell envelope—in effect, borrowing from the cell membrane to create the virus' own viral envelope— into extracellular space is most effective for viruses that require their own envelope. These include such viruses as HIV, HSV, SARS or smallpox. When beginning the budding process, the viral nucleocapsid cooperates with a certain region of the host cell membrane. During this interaction, the glycosylated viral envelope protein inserts itself into the cell membrane. In order to successfully bud from the host cell, the nucleocapsid of the virus must form a connection with the cytoplasmic tails of envelope proteins. Though budding does not immediately destroy the host cell, this process will slowly use up the cell membrane and eventually lead to the cell's demise. This is also how antiviral responses are able to detect virus-infected cells. Budding has been most extensively studied for viruses of eukaryotes. However, it has been demonstrated that viruses infecting prokaryotes of the domain Archaea also employ this mechanism of virion release.
Apoptosis (cell destruction)
Animal cells are programmed to self-destruct when they are under viral attack or damaged in some other way. By forcing the cell to undergo apoptosis or cell suicide, release of progeny into the extracellular space is possible. However, apoptosis does not necessarily result in the cell simply popping open and spilling its contents into the extracellular space. Rather, apoptosis is usually controlled and results in the cell's genome being chopped up, before apoptotic bodies of dead cell material clump off the cell to be absorbed by macrophages. This is a good way for a virus to get into macrophages either to infect them or simply travel to other tissues in the body.
Although this process is primarily used by non-enveloped viruses, enveloped viruses may also use this. HIV is an example of an enveloped virus that exploits this process for the infection of macrophages.
Exocytosis (cell release)
Viruses that have envelopes that come from nuclear or endosomal membranes can leave the cell via exocytosis, in which the host cell is not destroyed. Viral progeny are synthesized within the cell, and the host cell's transport system is used to enclose them in vesicles; the vesicles of virus progeny are carried to the cell membrane and then released into the extracellular space. This is used primarily by non-enveloped viruses, although enveloped viruses display this too. An example is the use of recycling viral particle receptors in the enveloped varicella-zoster virus.
Shedding from one part of the body to another
Shedding from a body into the environment
Contagiousness
A human with a viral disease can be contagious if they are shedding virus particles, even if they are unaware of doing so. Some viruses such as HSV-2 (which produces genital herpes) can cause asymptomatic shedding and therefore spread undetected from person to person, as no fever or other hints reveal the contagious nature of the host.
See also
Vaccine shedding - a form of viral shedding following administration of an attenuated (or "live virus") vaccine
References
Virology
Viral life cycle | Viral shedding | Biology | 831 |
28,436,772 | https://en.wikipedia.org/wiki/College%20of%20Fisheries%20and%20Ocean%20Sciences | The College of Fisheries and Ocean Sciences, or CFOS, is part of the University of Alaska Fairbanks. CFOS offers a bachelor of arts and a bachelor of science in fisheries, master’s and doctoral degrees in oceanography, fisheries and marine biology, and a minor in marine science.
The college was established by the University of Alaska Board of Regents in 2016 from units at several campuses and placed under a single umbrella administered within the University of Alaska Fairbanks.
CFOS is headquartered in Fairbanks, Alaska, with major divisions in Seward, Anchorage, Juneau and Kodiak:
The Institute of Marine Science in Fairbanks is active in research and graduate training at the master's and doctoral levels. IMS conducts marine science studies in the world’s oceans, with special emphasis on arctic and Pacific subarctic waters.
The Kodiak Seafood and Marine Science Center in Kodiak works to increase the value of the Alaska fishing industry through academic and research programs in sustainable harvesting and seafood technology.
The NOAA Alaska Sea Grant College Program in Fairbanks funds marine research, offers education and advisory services, distributes information about Alaska’s seas and coasts and provides outreach to coastal communities.
The Seward Marine Center in Seward provides access to saltwater laboratories and the coastal environment with laboratories, constant temperature chambers and a running seawater system. The University of Alaska's new icebreaker, the RV Sikuliaq, will be based here.
The Fisheries Division in Juneau and Fairbanks collaborates with state, national and international organizations to study how to develop and maintain sustainable fisheries programs in Alaska and global waters. Faculty in the Fisheries Division teach in undergraduate and graduate programs in fisheries.
Research projects
The college is involved in the first-ever study of ocean acidification in the North Pacific and its effects on fish. It is also part of an ongoing project to use radar to map surface currents in the Arctic Ocean, and UAF CFOS researchers designed and deployed a solar- and wind-powered "remote power module" to provide energy for the program's equipment, which is often located in remote, unpopulated areas.
References
External links
About the School of Fisheries and Ocean Sciences
1987 establishments in Alaska
Education in Anchorage, Alaska
Education in Juneau, Alaska
Education in Kenai Peninsula Borough, Alaska
Education in Kodiak Island Borough, Alaska
Educational institutions established in 1987
Fisheries and aquaculture research institutes
Marine biology
Science and technology in Alaska
University of Alaska Fairbanks | College of Fisheries and Ocean Sciences | Biology | 485 |
73,233,518 | https://en.wikipedia.org/wiki/Algorithmic%20curation | Algorithmic curation is the selection of online media by recommendation algorithms and personalized searches. Examples include search engine and social media products such as the Twitter feed, Facebook's News Feed, and the Google Personalized Search.
Curation algorithms are typically proprietary or "black box", leading to concern about algorithmic bias and the creation of filter bubbles.
See also
Algorithmic radicalization
Ambient awareness
Influence-for-hire
Social bot
Social data revolution
Social influence bias
Social media bias
Social media intelligence
Social profiling
Virtual collective consciousness
References
Social media
Mass media monitoring
Social influence | Algorithmic curation | Technology | 114 |
39,566,174 | https://en.wikipedia.org/wiki/Arbaclofen%20placarbil | Arbaclofen placarbil (also known as XP19986) is a prodrug of R-baclofen. Arbaclofen placarbil possesses a more favorable pharmacokinetic profile than baclofen, with fewer fluctuations in plasma drug levels. It was being developed as a potential treatment for patients with GERD and spasticity due to multiple sclerosis; however, in May 2013 XenoPort announced the termination of development because of unsuccessful results in phase III clinical trials.
It has also been investigated as a medication to treat alcoholism and studied as a potential therapeutic for some autistic subjects.
See also
Gabapentin enacarbil
Lesogaberan
References
Abandoned drugs
Calcium channel blockers
4-Chlorophenyl compounds
GABA analogues
GABAB receptor agonists
Gamma-Amino acids
Prodrugs | Arbaclofen placarbil | Chemistry | 188 |
4,791,359 | https://en.wikipedia.org/wiki/Legendre%27s%20equation | In mathematics, Legendre's equation is a Diophantine equation of the form:
ax^2 + by^2 + cz^2 = 0
The equation is named for Adrien-Marie Legendre, who proved in 1785 that it is solvable in integers x, y, z, not all zero, if and only if −bc, −ca and −ab are quadratic residues modulo a, b and c, respectively, where a, b, c are nonzero, square-free, pairwise relatively prime integers that are not all positive and not all negative.
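As an illustrative check (a brute-force sketch, not an efficient solver; the helper names are made up here), the criterion can be tested directly for small coefficients:

```python
def is_qr(n, m):
    """Return True if n is a quadratic residue modulo m (m >= 1)."""
    n %= m
    return any((x * x) % m == n for x in range(m))

def legendre_solvable(a, b, c):
    """Legendre's criterion for ax^2 + by^2 + cz^2 = 0, assuming a, b, c
    are nonzero, square-free, pairwise relatively prime integers."""
    if (a > 0) == (b > 0) == (c > 0):
        return False  # all positive or all negative: only the trivial solution
    return (is_qr(-b * c, abs(a))
            and is_qr(-c * a, abs(b))
            and is_qr(-a * b, abs(c)))

# x^2 + y^2 - 2z^2 = 0 has the nontrivial solution (1, 1, 1)
print(legendre_solvable(1, 1, -2))  # → True
# x^2 + y^2 - 3z^2 = 0 has no nontrivial integer solution
print(legendre_solvable(1, 1, -3))  # → False
```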
References
L. E. Dickson, History of the Theory of Numbers. Vol.II: Diophantine Analysis, Chelsea Publishing, 1971, . Chap.XIII, p. 422.
J.E. Cremona and D. Rusin, "Efficient solution of rational conics", Math. Comp., 72 (2003) pp. 1417-1441.
Diophantine equations | Legendre's equation | Mathematics | 193 |
71,590,725 | https://en.wikipedia.org/wiki/Socolar%20tiling | A Socolar tiling is an example of an aperiodic tiling, developed in 1989 by Joshua Socolar in the exploration of quasicrystals. There are three tiles: a 30° rhombus, a square, and a regular hexagon. The 12-fold symmetric set exists in analogy with the 10-fold Penrose rhombic tilings and the 8-fold Ammann–Beenker tilings.
The 12-fold tiles easily tile periodically, so special rules are defined to limit their connections and force nonperiodic tilings: the rhombus and square are disallowed from touching another copy of themselves, while the hexagon can connect to both tiles as well as itself, but only on alternate edges.
Dodecagonal rhomb tiling
The dodecagonal rhomb tiling include three tiles, a 30° rhombus, a 60° rhombus, and a square. Another set includes a square, a 30° rhombus and an equilateral triangle.
See also
Pattern block - 6 tiles based on 12-fold symmetry, including the 3 Socolar tiles
Socolar–Taylor tile - A different tiling named after Socolar
References
Aperiodic tilings | Socolar tiling | Physics,Mathematics | 250 |
25,168,088 | https://en.wikipedia.org/wiki/Johnson%27s%20figure%20of%20merit | Johnson's figure of merit is a measure of suitability of a semiconductor material for high frequency power transistor applications and requirements. More specifically, it is the product of the charge carrier saturation velocity in the material and the electric breakdown field under same conditions, first proposed by Edward O. Johnson of RCA in 1965.
Note that this figure of merit (FoM) is applicable to both field-effect transistors (FETs), and with proper interpretation of the parameters, also to bipolar junction transistors (BJTs).
Example materials
Reported JFM values vary widely between sources; see the external links below.
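Johnson's original definition can be written as JFoM = (E_br · v_sat) / (2π). A minimal sketch of the calculation, using illustrative order-of-magnitude material parameters (not authoritative published values):

```python
from math import pi

def johnson_fom(e_breakdown_v_per_cm, v_sat_cm_per_s):
    """Johnson's figure of merit: (E_br * v_sat) / (2 * pi)."""
    return e_breakdown_v_per_cm * v_sat_cm_per_s / (2 * pi)

# Illustrative parameters (E_br in V/cm, v_sat in cm/s);
# published values differ from source to source:
materials = {
    "Si":  (3.0e5, 1.0e7),
    "GaN": (3.3e6, 2.5e7),
}

si = johnson_fom(*materials["Si"])
gan = johnson_fom(*materials["GaN"])
print(f"GaN/Si JFM ratio: {gan / si:.1f}")  # → 27.5 with these inputs
```

Note that in a ratio between two materials the 2π factor cancels, which is why relative JFM tables can be built directly from E_br and v_sat.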
External links
Gallium Nitride as an Electromechanical Material. R-Z. IEEE 2014. Table IV (p. 5) lists JFM relative to Si: Si = 1, GaAs = 2.7, SiC = 20, InP = 0.33, GaN = 27.5; it also shows v_sat and E_breakdown.
Why diamond? gives very different figures (but no references):
| Material | Si | GaAs | GaN | SiC | diamond |
|---|---|---|---|---|---|
| JFM (relative) | 1 | 11 | 790 | 410 | 5800 |
References
Semiconductors | Johnson's figure of merit | Physics,Chemistry,Materials_science,Engineering | 235 |
76,514,362 | https://en.wikipedia.org/wiki/Entoloma%20cuspidiferum | Entoloma cuspidiferum is a species of fungus in the family Entolomataceae, first described by Machiel Noordeloos.
Distribution and habitat
It appears in North America and Europe. It grows in spruce forests, on peaty ground, among Plagiothecium and Sphagnum mosses.
References
External links
Entolomataceae
Fungi of North America
Fungi of Europe
Fungi described in 1980
Fungus species | Entoloma cuspidiferum | Biology | 92 |
76,682,938 | https://en.wikipedia.org/wiki/UGC%204881 | UGC 4881 (also known as The Grasshopper) is a pair of interacting galaxies, UGC 4881A and UGC 4881B. They are located in the constellation Lynx, some 500 million light-years away. UGC 4881, the brighter, is a peculiar spiral galaxy. It has been heavily documented by the Hubble Space Telescope, and is cataloged in the Atlas of Peculiar Galaxies.
Etymology
UGC 4881 was first given the nickname "The Grasshopper" by astrophysicist Dr. Boris Vorontsov-Velyaminov in a 1977 paper due to its resemblance to a grasshopper larva. It has also been informally called the "Shrimp galaxy" due to the curvature of the arm resembling a shrimp.
Morphology
The two galaxy cores are the two brightest regions in the object. The cores of each merging galaxy are separated and distinct, but the disks of the galaxies have started to merge. Intense star formation is occurring, as seen by the bright blue line of clusters along the grasshopper's "tail". Three other faint galaxies are visible near UGC 4881 and form a group with it.
Formation
UGC 4881 is believed to be in the process of merging: the discs of the parent galaxies are overlapping while the cores remain separated. A supernova exploded inside UGC 4881 in 1999, and the galaxy is in an early stage of star formation.
See also
Halton Arp
Starburst galaxy
References
Interacting galaxies
Lynx (constellation)
04881
+08-17-065
026132
055 | UGC 4881 | Astronomy | 329 |
3,458,672 | https://en.wikipedia.org/wiki/Pole%E2%80%93zero%20plot | In mathematics, signal processing and control theory, a pole–zero plot is a graphical representation of a rational transfer function in the complex plane which helps to convey certain properties of the system such as:
Stability
Causal system / anticausal system
Region of convergence (ROC)
Minimum phase / non minimum phase
A pole-zero plot shows the location in the complex plane of the poles and zeros of the transfer function of a dynamic system, such as a controller, compensator, sensor, equalizer, filter, or communications channel. By convention, the poles of the system are indicated in the plot by an X while the zeros are indicated by a circle or O.
A pole-zero plot is plotted in the plane of a complex frequency domain, which can represent either a continuous-time or a discrete-time system:
Continuous-time systems use the Laplace transform and are plotted in the s-plane:
Real frequency components are along its vertical axis (the imaginary line where s = jω)
Discrete-time systems use the Z-transform and are plotted in the z-plane:
Real frequency components are along its unit circle (where z = e^(jω))
Continuous-time systems
In general, a rational transfer function for a continuous-time LTI system has the form:
H(s) = B(s)/A(s) = (b_M s^M + b_(M−1) s^(M−1) + ... + b_1 s + b_0) / (a_N s^N + a_(N−1) s^(N−1) + ... + a_1 s + a_0)
where
B(s) and A(s) are polynomials in s,
M is the order of the numerator polynomial,
b_m is the m-th coefficient of the numerator polynomial,
N is the order of the denominator polynomial, and
a_n is the n-th coefficient of the denominator polynomial.
Either M or N or both may be zero, but in real systems, it should be the case that N ≥ M; otherwise the gain would be unbounded at high frequencies.
Poles and zeros
the zeros of the system are roots of the numerator polynomial: values z such that B(z) = 0
the poles of the system are roots of the denominator polynomial: values p such that A(p) = 0
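As a concrete sketch (the transfer function here is an assumed example, not one from the text), the roots of low-order numerator and denominator polynomials can be found with the quadratic formula via the standard-library cmath module:

```python
import cmath

def quadratic_roots(a, b, c):
    """Both (possibly complex) roots of a*s^2 + b*s + c = 0, with a != 0."""
    d = cmath.sqrt(b * b - 4 * a * c)
    return ((-b + d) / (2 * a), (-b - d) / (2 * a))

# Assumed example: H(s) = (s + 3) / (s^2 + 2s + 2)
zeros = (-3.0,)                   # root of the numerator B(s)
poles = quadratic_roots(1, 2, 2)  # roots of the denominator A(s)
print(poles)  # → ((-1+1j), (-1-1j))

# For a causal continuous-time system, BIBO stability requires every
# pole to lie strictly in the left half of the s-plane (Re < 0):
stable = all(p.real < 0 for p in poles)
print(stable)  # → True
```

The complex-conjugate pole pair found here matches the text's remark: conjugate poles are exactly what real-valued coefficients produce.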
Region of convergence
The region of convergence (ROC) for a given continuous-time transfer function is a half-plane or vertical strip, either of which contains no poles. In general, the ROC is not unique, and the particular ROC in any given case depends on whether the system is causal or anti-causal.
If the ROC includes the imaginary axis, then the system is bounded-input, bounded-output (BIBO) stable.
If the ROC extends rightward from the pole with the largest real-part (but not at infinity), then the system is causal.
If the ROC extends leftward from the pole with the smallest real-part (but not at negative infinity), then the system is anti-causal.
The ROC is usually chosen to include the imaginary axis since it is important for most practical systems to have BIBO stability.
Example
This system has no (finite) zeros and two poles:
and
The pole-zero plot would be:
Notice that these two poles are complex conjugates, which is the necessary and sufficient condition to have real-valued coefficients in the differential equation representing the system.
Discrete-time systems
In general, a rational transfer function for a discrete-time LTI system has the form:
H(z) = B(z)/A(z) = (b_M z^M + b_(M−1) z^(M−1) + ... + b_1 z + b_0) / (a_N z^N + a_(N−1) z^(N−1) + ... + a_1 z + a_0)
where
M is the order of the numerator polynomial,
b_m is the m-th coefficient of the numerator polynomial,
N is the order of the denominator polynomial, and
a_n is the n-th coefficient of the denominator polynomial.
Either M or N or both may be zero.
Poles and zeros
values z such that B(z) = 0 are the zeros of the system
values z such that A(z) = 0 are the poles of the system.
Region of convergence
The region of convergence (ROC) for a given discrete-time transfer function is a disk or annulus which contains no uncancelled poles. In general, the ROC is not unique, and the particular ROC in any given case depends on whether the system is causal or anti-causal.
If the ROC includes the unit circle, then the system is bounded-input, bounded-output (BIBO) stable.
If the ROC extends outward from the pole with the largest (but not infinite) magnitude, then the system has a right-sided impulse response. If the ROC extends outward from the pole with the largest magnitude and there is no pole at infinity, then the system is causal.
If the ROC extends inward from the pole with the smallest (nonzero) magnitude, then the system is anti-causal.
The ROC is usually chosen to include the unit circle since it is important for most practical systems to have BIBO stability.
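A minimal sketch of the unit-circle criterion for a causal discrete-time system (the pole locations below are hypothetical):

```python
def is_bibo_stable_causal(poles):
    """A causal discrete-time LTI system is BIBO stable when every
    pole lies strictly inside the unit circle (|z| < 1)."""
    return all(abs(p) < 1 for p in poles)

# Poles at z = (1 ± j)/2 have magnitude sqrt(2)/2 ≈ 0.707 → stable
print(is_bibo_stable_causal([complex(0.5, 0.5), complex(0.5, -0.5)]))  # → True
# A pole on the unit circle (e.g. z = 1) is not strictly inside it
print(is_bibo_stable_causal([complex(1, 0), complex(0.5, -0.5)]))      # → False
```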
Example
If B(z) and A(z) are completely factored, their roots can be easily plotted in the z-plane. For example, given the following transfer function:
H(z) = z / ((z − (1 + j)/2)(z − (1 − j)/2))
The only (finite) zero is located at z = 0, and the two poles are located at z = (1 ± j)/2, where j is the imaginary unit.
The pole–zero plot would be:
See also
Root locus
Laplace transform
Z-transform
Rational function
Bibliography
Signal processing | Pole–zero plot | Technology,Engineering | 958 |
74,109,941 | https://en.wikipedia.org/wiki/ADB-5%27Br-PINACA | ADB-5'Br-PINACA (5'-Br-ADB-PINACA) is an indazole-3-carboxamide based synthetic cannabinoid receptor agonist that has been sold as a designer drug. It was first identified in Abu Dhabi in September 2022 but has subsequently been found in the US and Europe. While formal pharmacology studies have not yet been carried out, ADB-5'Br-PINACA is believed to be a highly potent synthetic cannabinoid with similar potency to compounds such as MDMB-FUBINACA and 5F-ADB, which have been responsible for numerous fatal and non-fatal drug overdoses. This is consistent with previously reported compounds from the patent literature, in which bromination of the indazole ring at the 5-, 6-, or 7-position increases potency over the unsubstituted analogues. ADB-5'Br-PINACA is the 5'-bromo analog of ADB-PINACA.
Synthesis
ADB-5'Br-PINACA can be synthesized from a "half finished" synthesis precursor known as ADB-5-Br-INACA, related to MDMB-5Br-INACA.
Legality
ADB-5'Br-PINACA is not specifically scheduled in the United States at the federal level as of October 20, 2023 but may be considered illegal under the federal analogue act if intended for consumption as a structural analog of the Schedule I cannabinoid ADB-PINACA.
See also
6-Bromopravadoline
ADB-PINACA
ADB-5'F-BUTINACA
ADMB-3TMS-PRINACA
ADSB-FUB-187
MDMB-5'Br-INACA
MDMB-5'Br-BUTINACA
References
Cannabinoids
Designer drugs
Bromoarenes
Amides
Tert-butyl compounds
Indazolecarboxamides | ADB-5'Br-PINACA | Chemistry | 407 |
2,906,587 | https://en.wikipedia.org/wiki/Dorsiventral | A dorsiventral (Lat. dorsum, "the back", venter, "the belly") organ is one that has two surfaces differing from each other in appearance and structure, as an ordinary leaf. This term has also been used as a synonym for dorsoventral organs, those that extend from a dorsal to a ventral surface.
This word is also used to describe the body structure of an organism; e.g., flatworms have dorsiventrally flattened bodies.
References
Anatomy | Dorsiventral | Biology | 108 |
2,376,380 | https://en.wikipedia.org/wiki/CP%20Puppis | CP Puppis (or Nova Puppis 1942) was a bright nova occurring in the constellation Puppis in 1942.
The nova was discovered on 9 November 1942 by Bernhard Dawson at La Plata, Argentina, when it had an apparent visual magnitude of about 2. It was independently discovered at 18:00 10 November 1942 (UT) by a 19-year-old Japanese schoolgirl, Kuniko Sofue, who looked at the sky after patching her socks and noticed the nova. For this discovery, asteroid 7189 Kuniko was named in her honor.
From a 17th magnitude star, it reached an apparent visual magnitude of –0.2 then began a rapid decline. It had dropped by three magnitudes in an interval of 6.5 days, one of the sharpest declines ever noted for a nova. About 14 years later, the shell ejected by the nova event was detected, which allowed the distance to be computed. In 2000, this distance was revised to after correcting for probable errors. The Gaia spacecraft later measured the parallax of the star leading to an accurate distance of parsecs.
The nova outburst can be explained by a white dwarf that is accreting matter from a companion; most likely a low-mass main sequence star. This close binary system has an orbital period of 1.47 hours, which is one of the shortest periods of the known classical nova. Unusually, the white dwarf may have a magnetic field. Other properties of the system remain uncertain, although observations of X-ray emission from the system suggest that the white dwarf has a mass of more than 1.1 times the mass of the Sun.
References
External links
S. Balman, M. Orio, H. Ögelman - copyright the American Astronomical Society retrieved 21/09/2011
B. Warner, Department of Astronomy, University of Cape Town retrieved 21/09/2011
https://web.archive.org/web/20051026122516/http://www.otticademaria.it/astro/Costellazioni/st_pup.html
Novae
Puppis
1942 in science
Puppis, CP | CP Puppis | Astronomy | 437 |
971,289 | https://en.wikipedia.org/wiki/Dropsonde | A dropsonde is an expendable weather reconnaissance device created by the National Center for Atmospheric Research (NCAR), designed to be dropped from an aircraft at altitude over water to measure (and therefore track) storm conditions as the device falls to the surface. The sonde contains a GPS receiver, along with pressure, temperature, and humidity (PTH) sensors to capture atmospheric profiles and thermodynamic data. It typically relays this data to a computer in the aircraft by radio transmission.
Usage
Dropsonde instruments are typically the only current method to measure the winds and barometric pressure through the atmosphere and down to the sea surface within the core of tropical cyclones far from land-based weather radar. The data obtained is usually fed via radio into supercomputers for numerical weather prediction, enabling forecasters to better predict the effects and intensity, based on computer-generated models using data gathered from previous storms under similar conditions. This helps meteorologists to more reliably establish a storm's potential damage, based on those factors.
Since the early 1970s, the United States Air Force Reserve Hurricane Hunters of the 53rd Weather Reconnaissance Squadron, based at Keesler Air Force Base in Biloxi, Mississippi, have employed dropsondes while flying over the ocean to obtain meteorological data on the structure of hurricanes deemed to be of possible concern to coastal and inland locations in the northern Atlantic Ocean, northeastern Pacific Ocean, and the Gulf of Mexico. During a typical hurricane season, the Hurricane Hunters deploy 1,000 to 1,500 sondes on training and storm missions.
Aircraft reconnaissance missions are also sometimes requested to investigate the broader atmospheric structure over the ocean when cyclones may pose a significant threat to the United States. These interests include not only potential hurricanes, but also possible snow events (like nor'easters) or significant tornado outbreaks. The dropsondes are used to supplement the large gaps over oceans within the global network of daily radiosonde launches. Typically satellite data provides an estimate of conditions in such areas, but the increased precision of sondes can improve forecasts, particularly of the storm path.
Dropsondes may also be employed during meteorological research projects.
Device and launch details
The sonde is a lightweight system designed to be operated by one person and is launched through a chute installed in the measuring aircraft. The device's descent is slowed and stabilized by a small square-cone parachute, allowing for more readings to be taken before it reaches the ocean surface. The parachute is designed to immediately deploy after release so as to reduce or eliminate any pendulum effect, and the device typically drops for three to five minutes. The sonde has a casing of stiff cardboard to protect electronics and form a more stable aerodynamic profile.
To obtain data in a tropical cyclone, an aircraft (in the US, operated either by NOAA or the U.S. Air Force) flies into the system. A series of dropsondes are typically released as the plane passes through the storm, typically launched with greatest frequency near the center of the storm, including into the eyewall and eye (center), if one exists. Most drops are performed at a flight level of around 10,000 feet (approx. 3,000 meters).
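The figures above (a drop from roughly 10,000 ft lasting three to five minutes) imply an average descent rate, sketched here as back-of-envelope arithmetic; the actual rate varies with air density and parachute performance:

```python
# Back-of-envelope descent rate implied by the figures in the text.
# Assumes a release altitude of ~3,000 m and a 3-5 minute fall; real
# descent rates vary with air density and parachute behavior.
RELEASE_ALTITUDE_M = 3000  # approx. 10,000 ft flight level

for minutes in (3, 5):
    speed = RELEASE_ALTITUDE_M / (minutes * 60)  # average m/s over the drop
    print(f"{minutes}-minute drop -> {speed:.1f} m/s average descent")
```

This puts the average descent speed in roughly the 10–17 m/s range.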
The dropsonde sends back coded data, which includes:
The date and time of the drop. Time is always in UTC.
Location of the drop, indicated by the latitude, longitude, and Marsden square.
The height, temperature, dewpoint depression, wind speed, and wind direction recorded at each of the standard isobaric surfaces encountered as the dropsonde descends (1000, 925, 850, 700, 500, 400, 300, and 250 hectopascals (hPa)), and at the sea surface.
The temperature and dewpoint depression at all other pressure levels deemed significant because of notable changes or values in the atmospheric conditions encountered.
The air pressure, temperature, dewpoint depression, wind speed, and wind direction at the tropopause.
Also included in the report is information on the aircraft, the mission, the dropsonde itself, and other remarks.
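The exact coded message format is not described here; as an illustration only, the decoded contents listed above could be modeled with a simple container type (all field and class names below are invented for this sketch, not taken from any real message specification):

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class LevelObservation:
    """One observed level; pressure surfaces like 1000, 925, ... 250 hPa."""
    pressure_hpa: float
    height_m: Optional[float]
    temperature_c: float
    dewpoint_depression_c: float
    wind_dir_deg: float
    wind_speed_kt: float

@dataclass
class DropsondeReport:
    """Hypothetical container mirroring the report contents listed above."""
    drop_time_utc: datetime            # time is always reported in UTC
    latitude: float
    longitude: float
    marsden_square: int
    standard_levels: List[LevelObservation] = field(default_factory=list)
    significant_levels: List[LevelObservation] = field(default_factory=list)
    tropopause: Optional[LevelObservation] = None
    remarks: str = ""

# Example: a report with one standard-level observation at 850 hPa.
report = DropsondeReport(datetime(2023, 9, 1, 18, 0), 26.5, -77.0, 80)
report.standard_levels.append(
    LevelObservation(850, 1457.0, 17.2, 3.5, 120, 55))
print(len(report.standard_levels))  # -> 1
```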
Driftsondes
A driftsonde is a high altitude, durable weather balloon holding a transmitter and a bank (35 in the first models) of miniature dropsonde capsules which can then be dropped at automatic intervals or remotely. The water-bottle-sized transmitters in the dropsondes have enough power to send information to the balloon during their parachute-controlled fall. The balloon carries a larger transmitter powerful enough to relay readings to a satellite. The single-use sensor packages cost US$300 to $400 each.
After their introduction in April 2007, around a thousand a year were expected to be used to track winds in hurricane breeding grounds off West Africa, which are outside the operating range of hurricane hunter planes.
See also
Atmospheric science
Radiosonde
References
External links
Data from Expendable Probes (NOAA site)
NCAR GPS Dropsonde System (AVAPS)
Vaisala Dropsonde RD94
Meteomodem Dropsonde
Meteorological instrumentation and equipment
Atmospheric sounding | Dropsonde | Technology,Engineering | 1,033 |
9,258,044 | https://en.wikipedia.org/wiki/MEDCIN | MEDCIN is a system of standardized medical terminology: a proprietary medical vocabulary developed by Medicomp Systems, Inc. MEDCIN is a point-of-care terminology intended for use in electronic health record (EHR) systems, and it includes over 280,000 clinical data elements encompassing symptoms, history, physical examination, tests, diagnoses and therapy. This clinical vocabulary reflects over 38 years of research and development, and it can be cross-mapped to leading codification systems such as SNOMED CT, CPT, ICD-9-CM/ICD-10-CM, DSM, LOINC, CDT, CVX, and the Clinical Care Classification (CCC) System for nursing and allied health.
The MEDCIN coding system is marketed for point-of-care documentation. Several Electronic Health Record (EHR) systems embed MEDCIN, which allows them to produce structured and numerically codified patient charts. Such structuring enables the aggregation, analysis, and mining of clinical and practice management data related to a disease, a patient or a population.
History
MEDCIN was initially developed by Peter S. Goltra, founder of Medicomp Systems, "as an intelligent clinical database for documentation at the time of care."
The first few years of the development were spent in designing the structure of a knowledge engine that would enable the population of relationships between clinical events.
Since 1978, the MEDCIN database engine has been continuously refined and expanded to include concepts from clinical histories, tests, physical examination, therapies and diagnoses, enabling the coding of complete patient encounters, with the collaboration of physicians and teaching institutions such as Cornell, Harvard, and Johns Hopkins.
Features
Multiple Hierarchical Structure
MEDCIN data elements are organized in multiple clinical hierarchies, in which users can easily navigate to a medical term by following the tree of clinical propositions downward. The clinical propositions define unique intellectual clinical content. For example, the similar propositions "wheezing which is worse during cold weather" and "wheezing which is worse with a cold" differ significantly in meaning to clinicians, and keeping them distinct enables the software to present relevant items to clinical users.
This hierarchy provides an inheritance of clinical properties between data elements, which greatly enhances the capabilities of EHR systems while also providing logical presentation structures for clinical users. Linking MEDCIN data elements to the many diagnoses described in the diagnostic index creates the multiple hierarchies. The MEDCIN engine uses Intelligent Prompting and navigation tools to let clinicians select the specific clinical terms they need for rapid documentation, rather than having to create new terms.
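MEDCIN's internal data structures are proprietary, but the idea of navigating a hierarchy of clinical propositions down to a specific finding can be sketched with a toy tree (all terms and labels here are invented for the illustration):

```python
# Toy hierarchy of clinical propositions; MEDCIN's real structures are
# proprietary and far larger (280,000+ elements in multiple hierarchies).
TREE = {
    "symptoms": {
        "respiratory": {
            "wheezing": {
                "wheezing which is worse during cold weather": {},
                "wheezing which is worse with a cold": {},
            },
        },
    },
}

def navigate(tree, path):
    """Follow a list of node labels down the hierarchy; return the subtree."""
    node = tree
    for label in path:
        node = node[label]
    return node

# The two similar-sounding propositions remain distinct findings:
leaf = navigate(TREE, ["symptoms", "respiratory", "wheezing"])
print(sorted(leaf))
```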
Enhances EHR usability
MEDCIN has been designed to work as an interface terminology, with components that make EHRs more usable when it is used in conjunction with proprietary physician and nursing documentation tools. According to Rosenbloom et al. (2006), investigators such as Chute et al., McDonald et al., Rose et al. and Campbell et al. have defined clinical interface terminologies as "a systematic collection of health care-related phrases (terms)" (p. 277) that supports the capture of patient-related clinical information entered by clinicians into software programs such as clinical note capture and decision support tools.
To be clinically usable, an interface terminology has to be able to describe any clinical presentation with the speed, ease of use, and accuracy clinicians need to accomplish their intended tasks (e.g. documenting patient care). In addition, the terms in a medical terminology must have medical relationships. MEDCIN's presentation engine meets these usability criteria by using its Intelligent Prompting capabilities to present a relevant list of MEDCIN clinical terms for rapid clinical documentation. Another usability feature of the presentation engine is its expression of the medical relationships of clinical terms through the multiple clinical hierarchies in which each MEDCIN term participates.
Support for ICD-10-CM coding
In August 2012, Medicomp Systems released an updated version of the software embedded with ICD-10-CM (International Classification of Diseases, 10th Revision, Clinical Modification) mappings and functionality to comply with the transition from ICD-9-CM to ICD-10-CM as mandated by the US Department of Health and Human Services. This new version is specially designed to make the ICD-10 more usable in the EHR systems by providing clinicians easier access to bi-directional mappings, accurate data and codes through their EHR products. The ICD-10 is published by the World Health Organization (WHO) to enable the systematic collection of morbidity and mortality data from different countries for statistical analysis.
Integration to most EHRs and Legacy systems
The MEDCIN terminology engine can be easily integrated into existing EHRs and legacy systems to enable mapping of existing terminologies and other coding systems such as ICD, DSM, CPT, LOINC, SNOMED CT and the Clinical Care Classification (CCC) System, generating seamless codified data at the point of care. MEDCIN's interoperability features enable easy access to and sharing of patient data between health care facilities.
Interface with Electronic Health Record (EHR) systems
MEDCIN has been implemented into several commercial EHR systems as an interface terminology to support integrated care, clinical documentation, health maintenance monitoring and disease management, and the care planning functions of physicians, nurses and allied health professionals.
Such commercial EHR systems include EHRs from Epic, Allscripts, Pulse, McKesson, and the United States Department of Defense's (DoD) EHR system, the Armed Forces Health Longitudinal Technology Application (AHLTA).
AHLTA
AHLTA is an EHR system developed for the US Department of Defense. This application uses Medicomp's MEDCIN terminology engine for clinical documentation purposes. Figure 1 shows an example of the MEDCIN terminology where the physician can search for the correct terms for input into the patient note.
MEDCIN Nursing Plan of Care
The Nursing Plan of Care (POC) was developed by Medicomp Systems for the Clinical Care Classification (CCC) System. The CCC System is a standardized, coded nursing terminology that provides a unique framework and coding structure for accessing, classifying and documenting patient care by nurses and other allied health professionals. The CCC is directly linked in the MEDCIN nursing POC to medical terminology, with the purpose of creating a patient plan of care by extracting a pool of documentation from the EHR history. The CCC nursing terminology is integrated into the MEDCIN clinical database through a contextual hierarchical tree, providing an array of terminology standards and concepts with the Intelligent Prompting capabilities of the MEDCIN engine.
See also
Clinical Care Classification System
Current Procedural Terminology (CPT)
International Classification of Diseases Revision 9 (ICD-9)
LOINC
National Drug Code (NDC)
SNOMED
Health Level 7 (HL7)
Health informatics
References
External links
Medicomp home page
Further reading
Goltra, Peter S. MEDCIN: A New Nomenclature for Clinical Medicine. Ann Arbor, MI: Springer-Verlag, 1997.
Medical classification
Electronic health records
Health informatics | MEDCIN | Technology,Biology | 1,463 |
1,272,439 | https://en.wikipedia.org/wiki/Shukhov%20Tower | The Shukhov Radio Tower (), also known as the Shabolovka Tower (), is a broadcasting tower deriving from the Russian avant-garde in Moscow designed by Vladimir Shukhov. The free-standing steel diagrid structure was built between 1920 and 1922, during the Russian Civil War.
History
Design
Vladimir Shukhov invented the world's first hyperboloid structure in 1890. He later wrote a book, Rafters, in which he showed that triangular trusses are 20–25% heavier than arched ones with a ray grating. Shukhov then filed a number of patents for the diagrid. He aimed not only for greater strength and rigidity of the structure, but also for lightness and simplicity, using as little building material as possible.
The first diagrid tower was built for the All-Russia Exhibition in Nizhny Novgorod in 1896 and was later bought by Yury Nechaev-Maltsov, a well-known manufacturer in the city. Shukhov went on to be responsible for the construction of new types of lighthouses, masts, water towers and transmission towers.
The broadcasting tower at Shabolovka is a diagrid structure in the form of a rotated hyperboloid. The Khodynka radio station, built in 1914, could no longer handle the increasing volume of radiograms. On July 30, 1919, Vladimir Lenin signed a decree of the Council of Workers' and Peasants' Defense demanding "to install in an extremely urgent manner a radio station equipped with the most advanced and powerful devices and machines" to ensure the security of the country and allow constant communication with other republics. Design of the tower began immediately across many bureaus, and later that year Shukhov's construction office won the competition.
The planned height of the new nine-sectioned hyperbolic tower was 350 metres (50 metres taller than the Eiffel Tower, which was taken into consideration when creating the plan), with an estimated mass of 2,200 tons (the Eiffel Tower weighs 7,300 tons). However, in the context of the Civil War and the lack of resources, the project had to be revised: the height was reduced to 148.3 metres, and the weight to 240 tons.
Installation
The tower was erected without cranes or scaffolding, using only winches. The 240 tons of metal required for construction were allocated from the stocks of the Military Department by Lenin's personal decree. Five wooden winches, relocated to the upper sections as work progressed, were used for lifting.
The tower is composed of six sections, one above the other. Each section is an independent hyperboloid based on a larger one.
The sixth section was installed and finally secured on February 14, 1922.
Structure
The Shukhov tower is a hyperboloid structure (hyperbolic steel gridshell) consisting of a series of hyperboloid sections stacked on one another to approximate an overall conical shape. The tower has a diagrid structure, and its steel shell experiences minimum wind load (a significant design factor for high-rising buildings). The tower sections are single-cavity hyperboloids of rotation made of straight beams, the ends of which rest against circular foundations.
Location
The tower is located a few kilometres south of the Moscow Kremlin, but is not accessible to tourists. The street address of the tower is "Shabolovka Street, 37".
Possible demolition
As of early 2014, the tower faced demolition by the Russian State Committee for Television and Radio Broadcasting, after having been allowed to deteriorate for years despite popular calls for its restoration. Following a concerted campaign calling for the preservation of the tower, on July 3 the Ministry of Culture of Russia announced that the tower would not be demolished, and in September 2014 that Moscow City Council had placed a preservation order on the tower to safeguard it.
In January 2017 the RTRS placed a request for tender for a plan to renovate and preserve the monument.
Models
There is a model of Shukhov's Shabolovka Tower at the Information Age gallery at the Science Museum in London. The model is at 1:30 scale and was installed in October 2014.
In popular culture
In 1922, Vladimir Krinsky, a member of ASNOVA, made a "Radio Speaker of the Revolution" mural featuring the tower.
The science fiction novel The Garin Death Ray by Aleksey Nikolayevich Tolstoy was inspired by the public reaction to the construction of the tower.
The Shukhov Tower served as the logo of the "L'art de l'ingénieur" exhibition at the Centre Georges Pompidou.
In the novel A Gentleman in Moscow by Amor Towles, set in 1922, the character Mikhail Fyodorovich Mindich declares the Shukhov tower a thing of beauty, "a two hundred foot structure of spiraling steel from which we can broadcast the latest news and intelligence - and, yes, the sentimental strains of your Tchaikovsky ..." (Windmill Books, p.85)
Gallery
See also
Constructivist architecture
Lattice tower
List of hyperboloid structures
Shukhov tower on the Oka River
References
Literature
P. Gössel, G. Leuthäuser, E. Schickler; "Architecture in the 20th Century"; Taschen Verlag; 1990.
Elizabeth C. English, "'Arkhitektura i mnimosti': The origins of Soviet avant-garde rationalist architecture in the Russian mystical-philosophical and mathematical intellectual tradition", a dissertation in architecture, 264 p., University of Pennsylvania, 2000.
"Vladimir G. Suchov 1853–1939. Die Kunst der sparsamen Konstruktion", Rainer Graefe et al., 192 p., Deutsche Verlags-Anstalt, Stuttgart, 1990.
Jesberg, Paulgerd. Die Geschichte der Bauingenieurkunst, Deutsche Verlags-Anstalt, Stuttgart (Germany), 1996; pp. 198–199.
Ricken, Herbert. Der Bauingenieur, Verlag für Bauwesen, Berlin (Germany), 1994; p. 230.
Picon, Antoine (dir.), "L'art de l'ingenieur : constructeur, entrepreneur, inventeur", Éditions du Centre Georges Pompidou, Paris, 1997.
Fausto Giovannardi, "Vladimir Shukhov e la leggerezza dell'acciaio", at giovannardierontini.it
External links
The Shukhov's Radio Tower
International campaign to save the Shukhov Tower in Moscow
Shukhov Towers in Google Maps
3D model of the Shukhov Tower
Views of the hyperboloid tower
Invention of Hyperboloid Structures
Shukhov Tower Foundation
Shukhov's Towers
Constructivist architecture
Lattice shell structures by Vladimir Shukhov
Towers completed in 1922
Tourist attractions in Moscow
Russian avant-garde
High-tech architecture
Hyperboloid structures
Towers in Moscow
Cultural heritage monuments of regional significance in Moscow | Shukhov Tower | Technology | 1,416 |
50,986,802 | https://en.wikipedia.org/wiki/Poxytrin | Poxytrins or dihydroxy-E,Z,E-polyunsaturated fatty acids (dihydroxy-E,Z,E-PUFAs) are PUFA metabolites that possess two hydroxyl residues and three in-series conjugated double bonds in an E,Z,E cis–trans configuration. Poxytrins have platelet-inhibiting properties that are not found in isomers with three conjugated double bonds presenting in a different geometry. The unique E,Z,E configuration in poxytrins may prove to be relevant in treating human conditions and diseases that involve pathological platelet activation.
Types
Poxytrins are metabolites of docosahexaenoic acid (DHA), arachidonic acid (AA), and α-linolenic acid (ALA). Poxytrins derived from ALA are termed linotrins.
PDX
Protectin DX (PDX) is perhaps the most prominent poxytrin. It is not to be confused with its isomer protectin D1 (PD1). PD1 is structurally identical to PDX except that its three conjugated double bonds 11E,13E,15Z have the E,E,Z configuration. PDX and PD1 both possess potent specialized pro-resolving mediator (SPM) anti-inflammatory activity, but only PDX inhibits human platelet aggregation responses. PDX's anti-platelet activity is shared with various other dihydroxy-E,Z,E-PUFAs, but not with dihydroxy-PUFAs that have an E,E,E or E,E,Z configuration.
Cells make PDX by metabolizing DHA through double oxygenation by a 15-lipoxygenase to form a 10S,17S-dihydroperoxy intermediate, which is reduced to its 10S,17S-dihydroxy product, PDX, probably by cytosolic glutathione peroxidase 1 (GPX1). Serial metabolism by two different lipoxygenases, or by a lipoxygenase and a cytochrome P450, acting on a 1Z,4Z,7Z-PUFA may also make a 1,7-dihydroxy-2E,4Z,6E product.
Other poxytrins
10R,17S-diHDHA is the 10R diastereomer of PDX, with the 10R hydroxyl residue being formed by aspirin-treated COX-2 or a cytochrome P450.
8S,15S-diHETE has been observed in guinea pig tissues, probably made through double oxygenation of AA by a 15-lipoxygenase (probably ALOX15) or serial metabolism by two enzymes.
10S,17S-diHDHA is the 13Z cis–trans isomer of 10-epi-protectin D (which has a 13E double bond instead). 10S,17S-diHDHA is formed in vitro by stimulated human leukocytes and possesses SPM anti-inflammatory activity.
7-epi-MaR1 is an isomer of maresin 1 (MaR1), and likewise possesses SPM activity.
Linotrins
Linotrin-1 and linotrin-2 are among the four isomeric metabolites produced by incubating ALA with ALOX15B. The extent to which the linotrins form in cells or in vitro is not clear.
Activity
Stimulating agents such as collagen depend on platelets to make and release thromboxane A2 (TXA2) to mediate and/or enhance their aggregating activity. 10R,17S-diHDHA, and PDX to a slightly lesser degree, inhibit the human platelet aggregation response to collagen at ≥ 100–200 nanomolar concentrations. This appears to reflect the ability of poxytrins to inhibit the activities of COX-1 and COX-2, thereby blocking the production of TXA2 and thus interfering with the activation of the thromboxane receptor by TXA2. The linotrins appear to use a similar mechanism, and to have similar or slightly lower potencies. However, the linotrins are 20- to 100-fold stronger in inhibiting human platelet aggregation than 5-HETE and 12-HETE, two mono-hydroxyl-containing eicosanoids that contain an E,Z conjugated double bond configuration. Other biologically active poxytrins have yet to be tested, but are projected to possess anti-platelet activity.
References
Cell biology
Cell communication
Cell signaling
Immunology
Fatty acids | Poxytrin | Biology | 987 |
15,891,439 | https://en.wikipedia.org/wiki/Gigabit%20wireless | Gigabit wireless is the name given to wireless communication systems whose data transfer speeds reach or exceed one gigabit (one billion bits) per second. Such speeds are achieved with complex modulation of the signal, such as quadrature amplitude modulation (QAM), or with signals spanning many frequencies. When a signal spans many frequencies, physicists refer to it as a wide-bandwidth signal. In the communication industry, many wireless internet service providers and cell phone companies deploy wireless radio-frequency antennas to backhaul core networks, connect businesses, and even serve individual residential homes.
Common frequencies and bands
In general, indoor protocols follow a cross-vendor standard and communicate in the unlicensed 2.4 GHz, 5 GHz, and (soon) 60 GHz bands.
The outdoor carrier-link protocols vary widely and are not compatible across vendors (and often not across models from the same vendor).
Note: devices with more bandwidth available can achieve high speeds with less complex modulation.
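The modulation/bandwidth trade-off noted above can be illustrated with first-order arithmetic (ignoring coding overhead and spectral shaping): an M-ary QAM symbol carries log2(M) bits, so a wider channel, supporting a higher symbol rate, can reach 1 Gbit/s with a simpler constellation.

```python
import math

TARGET_BPS = 1e9  # 1 gigabit per second

for m in (4, 16, 64, 256):
    bits_per_symbol = math.log2(m)           # an M-QAM symbol carries log2(M) bits
    symbol_rate = TARGET_BPS / bits_per_symbol
    print(f"{m:>3}-QAM: {bits_per_symbol:.0f} bits/symbol, "
          f"needs {symbol_rate / 1e6:.0f} Msym/s for 1 Gbit/s")
```

For example, 256-QAM needs a 125 Msym/s symbol rate for 1 Gbit/s, while 4-QAM needs 500 Msym/s — hence wide millimeter-wave channels can hit gigabit speeds with simple modulation.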
Wireless broadband
Internet service providers (ISPs) are looking for ways to expand gigabit per second (Gbit/s) high-speed services to their customers. These speeds can be achieved through fiber-to-the-premises broadband network architecture, or through a more affordable alternative using fixed wireless in the last mile in combination with fiber networks in the middle mile, reducing the cost of trenching fiber-optic cables to users. In the United States, the 60 GHz V band is unlicensed. This makes the V band an appealing choice for fixed wireless access delivering Gbit/s services to homes and businesses. Similarly, the 70/80 GHz E band is lightly licensed, making it accessible to more providers of such services.
There had been some early adopters of the hybrid fiber-wireless approach to provide Gbit/s services to customers. One of those ISP's was Webpass, a company founded in 2003 in San Francisco as a wireless ISP focusing on buildings in big cities. Since then, Webpass had been increasing the speeds along with improved wireless technologies. By 2015, Webpass offered 1 Gbit/s connections to commercial customers, however, the residential customers were limited to speeds of up to 500 Mbit/s to share the 1 Gbit/s wireless link among many residents in the same building. The company utilized a combination of various licensed and unlicensed bands.
In January 2016, a startup company Starry from Boston introduced Starry Point with the goal of providing Gbit/s internet wirelessly to homes. The device is a fixed wireless unit attached to a window that acts as an access point connecting to Starry core networks using millimeter-wave band communication. The company did not reveal the details of the band, but claimed it to be "the world's first millimeter wave band active phased array technology for consumer internet communications". However, in January 2018, when the company announced the expansion of its beta service to cover three cities (Boston, Los Angeles, and Washington, DC), the speeds were still limited to up to 200 Mbit/s.
In June 2016, Google Fiber acquired Webpass to boost its experiments with wireless technologies. As a result, Google Fiber put its fiber-to-the-premises efforts on hold to explore the cheaper wireless alternative. By early 2017, the Webpass division of Google Fiber had expanded 1 Gbit/s wireless service to customers in many cities in the United States.
In November 2016, Atlas Networks, an ISP that serves Seattle, deployed its V-band Gbit/s service to customers within range of its fiber networks. The maximum throughput for each connection was 1 gigabit per second.
In October 2017, Cloudwifi, a startup ISP based in Kitchener, Ontario started using 60 GHz band fixed wireless to provide Gbit/s connectivity to customers within the range of its fiber connection points.
In October 2017, Newark Fiber enabled its first customer in Newark, New Jersey with 10 Gbit/s fixed wireless service, using V-band 10 Gbit/s transmitters over links of limited distance.
References
Wireless networking
Telecommunication services | Gigabit wireless | Technology,Engineering | 837 |
4,388,550 | https://en.wikipedia.org/wiki/Astrovirus | Astroviruses (Astroviridae) are a type of virus first discovered in 1975 using electron microscopes following an outbreak of diarrhea in humans. In addition to humans, astroviruses have now been isolated from numerous mammalian animal species (classified as genus Mamastrovirus) and from avian species such as ducks, chickens, and turkey poults (classified as genus Avastrovirus). Astroviruses are 28–35 nm diameter, icosahedral viruses that have a characteristic five- or six-pointed star-like surface structure when viewed by electron microscopy. Along with the Picornaviridae and the Caliciviridae, the Astroviridae comprise a third family of non-enveloped viruses whose genome is composed of plus-sense, single-stranded RNA; the astrovirus genome is non-segmented and is packaged within a non-enveloped icosahedral capsid. Human astroviruses have been shown in numerous studies to be an important cause of gastroenteritis in young children worldwide. In animals, astroviruses also cause infection of the gastrointestinal tract but may also result in encephalitis (humans and cattle), hepatitis (avian) and nephritis (avian).
Microbiology
Taxonomy
This family of viruses consists of two genera, Avastrovirus (AAstV) and Mamastrovirus (MAstV).
The International Committee on Taxonomy of Viruses (ICTV) established Astroviridae as a viral family in 1995. There have been over 50 astroviruses reported, although the ICTV officially recognizes 22 species. The genus Avastrovirus comprises three species: Chicken astrovirus (Avian nephritis virus types 1–3), Duck astrovirus (Duck astrovirus C-NGB), and Turkey astrovirus (Turkey astrovirus 1). The genus Mamastrovirus includes Bovine astroviruses 1 and 2, Human astrovirus (types 1–8), Feline astrovirus 1, Porcine astrovirus 1, Mink astrovirus 1 and Ovine astrovirus 1.
Structure
Astroviruses have a star-like appearance with five or six points; their name is derived from the Greek word "astron", meaning star. They are non-enveloped RNA viruses with cubic capsids, approximately 28–35 nm in diameter, with T=3 symmetry. Human astroviruses are part of the genus Mamastrovirus and comprise 8 serotypes. The human astrovirus capsid spikes have a distinct structure: the spike domain has a three-layered beta-sandwich fold around a core six-stranded beta-barrel structure. The beta-barrel has a hydrophobic core, and the triple-layered beta-sandwich is packed outside the beta-barrel. The spike also forms a dimer. This structure was found to be similar to the protein projections on the capsid of the hepatitis E virus. The projection domain of the human astrovirus contains a receptor binding site for polysaccharides. The amino acid sequence of the astrovirus capsid protein has no close homology to other known viral proteins; the closest would be that of hepatitis E virus.
Life cycle
Astroviruses infect birds and mammals through the fecal-oral route. They have a tissue tropism for enterocytes. Entry into the host cell is achieved by attachment to host receptors, which mediates endocytosis. Replication follows the positive-strand RNA virus replication model. Astrovirus RNA is infectious and functions as a messenger RNA for ORF1a and ORF1b. A frame-shifting mechanism between these two nonstructural polypeptides translates RNA-dependent RNA polymerase. In replication complexes near intracellular membranes, ORF1a and ORF1b are cleaved to generate individual nonstructural proteins that are involved in replication. The resulting subgenomic RNA contains ORF2 and encodes precursor capsid protein (VP90). VP90 is proteolytically cleaved during packaging and produces immature capsids made of VP70. Following encapsidation, immature capsids are released from the cell without lysis. Extracellular virions are cleaved by Trypsin and form mature infectious virions.
Morphology
Astroviruses are 28–30 nm non-enveloped viruses with T=3 icosahedral symmetry. They have spherical shapes and consist of a capsid protein shell. Astroviruses have distinctive five- or six-pointed star-like projections on 10% of virions (the other virions have smooth surfaces). The virion capsid is expressed from a subgenomic mRNA, and its precursor undergoes multiple cleavages to make the VP70 protein. Capsids made of the VP70 protein are cleaved by trypsin to make particles that are highly infectious (VP25/26, VP27/29 and VP34). The spikes that create the star-like appearance on the virion surface are made by two structural proteins (VP25 and VP27), while the capsid shell is made from VP34.
Genome
Astroviruses have a genome composed of a single strand of positive-sense RNA. The strand has a poly(A) tail at the 3' end and no 5' cap; instead, it is linked to a VPg protein. Excluding the 3' polyadenylation, the genome is between 6.8 and 7.9 kb long. The genome is arranged into three open reading frames (ORFs), with an overlap of approximately 70 nucleotides between ORF1a and ORF1b; the remaining ORF is known as ORF2. ORF2 encodes the structural proteins, which include at least VP26, VP29 and VP32, the most antigenic and immunogenic of these being VP26. This protein is probably involved in the first steps of viral infection, being a key factor in the biological cycle of astroviruses. The human astrovirus genome mutation rate has been estimated at 3.7×10−3 nucleotide substitutions per site per year, with a synonymous rate of 2.8×10−3 nucleotide substitutions per site per year. The capability for genetic recombination appears to be present in type-3 and type-4 human astroviruses, and in porcine astrovirus strains.
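As a back-of-envelope check on these figures, multiplying the quoted per-site rate by the genome length gives the expected number of substitutions per genome per year:

```python
# Uses only the numbers quoted above: rate ~3.7e-3 substitutions/site/year
# and a genome of 6.8-7.9 kb (poly(A) tail excluded).
RATE = 3.7e-3  # nucleotide substitutions per site per year

for genome_kb in (6.8, 7.9):
    sites = genome_kb * 1000
    expected = RATE * sites
    print(f"{genome_kb} kb genome: ~{expected:.0f} substitutions/year")
```

That is on the order of 25–29 substitutions per genome per year.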
Replication
Replication of astroviruses occur in the cytoplasm. Astrovirus RNA is infectious and functions as a messenger RNA for ORF1a and ORF1b, with translation initiation thought to be mediated by VPg similar to Caliciviridae. A frame-shifting mechanism between these two nonstructural polypeptides translates RNA-dependent RNA polymerase (RdRp). In replication complexes near intracellular membranes, ORF1a and ORF1b are cleaved to generate individual nonstructural proteins that are involved in replication. RdRp transcribes subgenomic RNA from the subgenomic promoter, which enables higher production of structural proteins. Subgenomic RNA contains ORF2 which encodes precursor capsid protein (VP90). VP90 is proteolytically cleaved during packaging and produces immature capsids made of VP70. Following encapsidation, immature capsids are released from the cell without lysis. Extracellular virions are cleaved by Trypsin and form mature infectious virions.
Evolution
The Astroviridae capsid is related to those of the Tymoviridae, while the non-structural region is related to the Potyviridae. It appears that this group of viruses may have arisen at some point in the past as a result of a recombination event between two distinct viruses, and that this event occurred at the junction of the structural and non-structural coding regions.
Species infected
Avastrovirus
Avastrovirus 1–3 are associated with enteric infections in turkeys, ducks, chickens and guinea fowl. In turkey poults 1–3 weeks of age, symptoms of enteritis include diarrhea, listlessness, litter eating and nervousness. These symptoms are usually mild, but in cases of poult enteritis and mortality syndrome (PEMS), which features dehydration, immune dysfunction and anorexia, mortality is high. Post-mortem examination of infected birds shows fluid-filled intestines. Hyperplasia of enterocytes is also observed in histopathology studies. However, in contrast to other enteric viruses, there is no villous atrophy.
Avastrovirus species often infect extraintestinal sites such as the kidney or liver resulting in hepatitis and nephritis. Birds infected by avian nephritis virus typically die within 3 weeks of infection. The viral particles can be detected in fecal matter within 2 days and peak virus shedding occurs 4–5 days after infection. The virus can be found in the kidney, jejunum, spleen, liver and bursa of infected birds. Symptoms of this disease include diarrhea and weight loss. Necropsies show swollen and discolored kidneys and there is evidence of death of the epithelial cells and lymphocytic interstitial nephritis. Another extraintestinal avastrovirus is avian hepatitis virus which infects ducks. Hepatitis in ducks caused by this duck astrovirus (DAstV) is often fatal.
In birds, Avastroviruses are detected by antigen-capture ELISA. In the absence of vaccines, sanitation is the prevalent way to prevent Avastrovirus infections.
Mamastrovirus
Mamastroviruses often cause gastroenteritis in infected mammals. In animals, gastroenteritis is usually undiagnosed because most astrovirus infections are asymptomatic. However, in mink and humans, astroviruses can cause diarrhea and can be fatal. The incubation period for Mamastrovirus is 1–4 days. When symptoms occur, the incubation period is followed by diarrhea for several days. In mink, symptoms include increased secretion from apocrine glands. Human astroviruses are associated with gastroenteritis in children and immunocompromised adults. 2–8% of acute non-bacterial gastroenteritis in children is associated with human astrovirus. These viral particles are usually detected in epithelial cells of the duodenum. In sheep, ovine astroviruses were found in the villi of the small intestine.
Mamastroviruses also cause diseases of the nervous system. These diseases most commonly occur in cattle, mink and humans. In cattle, this occurs sporadically and infects individual animals. Symptoms of this infection include seizure, lateral recumbency and impaired coordination. Histological examinations showed neuronal necrosis and gliosis of the cerebral cortex, cerebellum, spinal cord and brainstem.
Signs and symptoms in humans
Members of a relatively new virus family, the Astroviridae, astroviruses are now recognised as a cause of gastroenteritis in children, whose immune systems are underdeveloped, and in elderly adults, whose immune systems are generally somewhat compromised. The presence of viral particles in fecal matter and in intestinal epithelial cells indicates that the virus replicates in the human gastrointestinal tract. The main symptom is diarrhoea, followed by nausea, vomiting, fever, malaise and abdominal pain. Some research studies have shown that the incubation period of the disease is approximately three to four days. Astrovirus infection is not usually severe and only in rare cases leads to dehydration. The severity and variation of symptoms correlate with the region in which the case develops; this could be due to climatic factors influencing the life cycle or transmission mode of the particular strain of astrovirus. Malnutrition and immunodeficiency tend to exacerbate the condition, leading to more severe cases or secondary conditions that can require hospital care. Otherwise, infected people do not need hospitalization, because the symptoms resolve on their own after 2 to 4 days.
Human infections are usually self-limiting, but the virus may also spread systemically, particularly in immunocompromised individuals.
Astroviruses most frequently cause infection of the gastrointestinal tract but in some animals they may result in encephalitis (humans and cattle), hepatitis (avian) and nephritis (avian).
Diagnosis
Electron microscopy, enzyme immunoassay (ELISA), immunofluorescence, and polymerase chain reaction have all been used for detecting virus particles, antigens or viral nucleic acid in the stools of infected people. A method using real-time RT-PCR, which can detect all human astrovirus genotypes, has been reported. Some RT-qPCR techniques are able to simultaneously detect human astroviruses and other enteric viruses associated with gastroenteritis. Microarrays are also used to differentiate between the eight different human astrovirus serotypes.
Pathogenesis
Astroviruses cause gastroenteritis by destroying the intestinal epithelium, leading to inhibition of the normal absorption mechanisms, loss of secretory functions, and increased epithelial permeability in the intestines. Inflammatory responses do not appear to play a role in astrovirus pathogenesis.
Epidemiology
Astroviruses are associated with 5–9% of the cases of gastroenteritis in young children. Humans of all ages are susceptible to astrovirus infection, but children, the elderly, and those that are immunocompromised are most prone. A study of intestinal disease in the UK, published in 1999, determined incidence as 3.8/1000 patient years in the community (95% CI, range 2.3–6.4), the fourth most common known cause of viral gastroenteritis. Studies in the USA have detected astroviruses in the stools of 2–9% of children presenting symptoms; illness is most frequent in children younger than two years, although outbreaks among adults and the elderly have been reported. Early studies carried out in Glasgow demonstrated that a significant proportion of babies excreting virus particles did not exhibit gastrointestinal symptoms. Seroprevalence studies carried out in the US have shown that 90% of children have antibody to HastV-1 by age 9, suggesting that (largely asymptomatic) infection is common. The pattern of disease suggests that antibodies provide protection through adult life, until the antibody titre begins to decline later in life.
The occurrence of astrovirus infections varies depending on the season. In temperate climates, infection is highest during the winter months, possibly due to lower temperatures enhancing the stability of the virus. This contrasts with tropical regions, where prevalence is highest during the rainy season. The seasonal distribution in tropical climates can be explained by the effect of rain, particularly on the breakdown of sanitation in developing countries.
Human astroviruses are transmitted by the fecal–oral route. The main mode of astrovirus transmission is by contaminated food and water. Young children in childcare settings and adults in military barracks are most likely to develop the disease. Human astroviruses may be released in large quantities in the stool of infected individuals and contaminate groundwater, fresh water and marine water due to inadequate wastewater treatment. Fruits and vegetables grown in such contaminated water may also act as sources of viral infection. Poor food handling practices, poor hand hygiene and contamination of inanimate objects are other factors that encourage enteric virus transmission.
Astroviruses can also be transmitted to humans from other animal species. In comparison to individuals who had no contact with turkey, turkey abattoir workers were three times more likely to test positive for antibodies against turkey astroviruses. Furthermore, some human, duck, chicken and turkey astroviruses are phylogenetically related and share genetic features.
Prevention
Human astroviruses can be prevented by detection and inactivation in contaminated food and water in addition to disinfection of contaminated fomites.
Treatment
Astrovirus Immunoglobulin
In a study by Bjorkholm et al., a 78-year-old patient diagnosed with Waldenström's macroglobulinemia was given 0.4 g/kg of astrovirus immunoglobulin for four days; the symptoms resolved, leading to a full recovery from astrovirus infection. However, further testing has yet to be completed.
Achyrocline bogotensis antiviral therapy
In a study by Tellez et al., extracts from the plant Achyrocline bogotensis were used to develop an antiviral therapy for both rotavirus and astrovirus. Achyrocline bogotensis has commonly been used for skin and urinary infections. The drug-testing methodology involved applying the extract to cells as pre-treatment (blocking), testing for direct viral activity (evidence of killing the virus), and as treatment (a decrease in the viral load after an infection is established). The extract demonstrated direct viral activity by killing astroviruses directly, and a treatment effect by decreasing the viral load after an established infection. A pre-treatment effect was not evident during the experiment.
Timeline
1975: Appleton and Higgins first discovered astrovirus in stool samples of children suffering from gastroenteritis by using electron microscopy (EM)
1975: Madeley and Cosgrove named the 20–30 nm viral particle Astrovirus based on the star-like EM (electron microscopy) appearance
1976-1992: Lee and Kurtz serotyped 291 astrovirus stool samples in Oxford; discovered serotypes 6 and 7
1981: Lee and Kurtz were able to grow astrovirus in trypsin-dependent tissue culture using human embryo kidney (HEK) cells
1985: Lee and Kurtz discover two serotypes of astrovirus that are used to type 13 strains of community-acquired astrovirus
1987: Gray et al. discovered that a 22-day long gastroenteritis outbreak in an elderly home was caused by astrovirus type 1 and calicivirus
1988: Hermann and Hudson use antigen characterization of HEK grown astroviruses to develop monoclonal antibodies
1992: Cruz et al. analyzed 5,000 stool samples; 7.5% of the diarrheal diseases found in Guatemalan ambulatory rural children were caused by astroviruses
1993: Jiang et al. sequence astrovirus RNA and determine the presence of three ORFs and ribosomal frameshifting
1993: Monroe et al. classify subgenomic data for astrovirus, providing support for astrovirus to be classified as a viral family
1994: Oishi et al. determine astrovirus as the main cause of gastroenteritis in schools in Katano City, Osaka, Japan
1995: Bjorkholm et al. conducted a clinical study in which a 78-year-old male Waldenström's macroglobulinemia patient with astrovirus-associated gastroenteritis was successfully treated with intravenous immunoglobulin
1995: Jonassen et al. uses PCR to detect all known serotypes (7) of astrovirus
1995: In their sixth report, ICTV establishes Astroviridae as a viral family
1996: Glass et al. describe an epidemiological shift regarding astrovirus due to improvements in RT-PCR (reverse transcription PCR), monoclonal antibodies, and enzyme immunoassays (EIA); astroviruses are now considered one of the main causes of diarrheal disease worldwide
1996: Palombo and Bishop study the epidemiology of astrovirus infections in children suffering from gastroenteritis in Melbourne, Australia (data collected include total incidence, genetic diversity, and serotype characterization)
1998: Unicomb et al. conduct a clinical study in Bangladesh and find astrovirus infections associated with nosocomial, acute, and persistent diarrheal disease
1998: Gaggero et al. identify human astrovirus type 1 to be the main cause of acute gastroenteritis in Chilean children
1999: Bon et al. discover astrovirus in a gastroenteritis outbreak in Dijon, France
2001: Dennehy et al. collected stool samples from hospitalized children suffering from acute gastroenteritis; astrovirus was determined the second leading cause of gastroenteritis after rotavirus
2002: Guix et al. completes an epidemiological study on the presence of astrovirus in Barcelona, Spain; the total incidence of astrovirus in 2,347 samples was 4.95 with a peak in the number of cases in the winter
2003: Basu et al. discovered astrovirus in 2.7% of stool samples collected from 346 children suffering from gastroenteritis in Gaborone, Botswana
2009: Finkbeiner et al. used Sanger sequencing to discover a novel astrovirus in stool samples from children suffering from an acute gastroenteritis outbreak at a childcare center
2009: Using RT-PCR, Kapoor et al. discover novel astrovirus strains HMOAstV species A, B, C which are very similar to astroviruses found in mink and ovine species; this showed that the virus may have the ability to jump species
References
External links
Viralzone: Astroviridae
ICTV
African wildlife diseases
Viral diseases
Gastroenterology
Foodborne illnesses
Riboviria | Astrovirus | Biology | 4,431 |
53,787,782 | https://en.wikipedia.org/wiki/List%20of%20broadcast%20video%20formats | This list of broadcast formats is a review of the most popular formats used to broadcast video information over cable television, satellite television, the Internet, and other means. Video broadcasting was popularized by the advent of the television during the middle of the twentieth century.
Recently, Internet streaming has almost surpassed television as the top video broadcast platform.
Below is a list of broadcast video formats.
24p is a progressive scan format and is now widely adopted by those planning on transferring a video signal to film. Film and video makers use 24p even if they are not going to transfer their productions to film, simply because of the on-screen "look" of the (low) frame rate, which matches native film. When transferred to NTSC television, the rate is effectively slowed to 23.976 FPS (24×1000÷1001 to be exact), and when transferred to PAL or SECAM it is sped up to 25 FPS. 35 mm movie cameras use a standard exposure rate of 24 FPS, though many cameras offer rates of 23.976 FPS for NTSC television and 25 FPS for PAL/SECAM. The 24 FPS rate became the de facto standard for sound motion pictures in the mid-1920s. Practically all hand-drawn animation is designed to be played at 24 FPS. Actually hand-drawing 24 unique frames per second ("1's") is costly. Even in big budget films, usually hand-drawn animation is done shooting on "2's" (one hand-drawn frame is shown twice, so only 12 unique frames per second) and some animation is even drawn on "4's" (one hand-drawn frame is shown four times, so only six unique frames per second).
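The 24×1000÷1001 slowdown mentioned above can be checked with exact rational arithmetic; a minimal sketch (the function name is illustrative, not from any broadcast standard):

```python
from fractions import Fraction

def ntsc_adjust(nominal_fps: int) -> Fraction:
    """Slow a nominal frame rate by the NTSC 1000/1001 factor."""
    return Fraction(nominal_fps) * Fraction(1000, 1001)

film = ntsc_adjust(24)   # film transferred to NTSC television
print(film)              # 24000/1001
print(float(film))       # ≈ 23.976 frames per second
```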
25p is a progressive format and runs 25 progressive frames per second. This frame rate derives from the PAL television standard of 50i (or 50 interlaced fields per second). Film and television companies use this rate in 50 Hz regions for direct compatibility with television field and frame rates. Conversion for 60 Hz countries is enabled by doing 2:2:3:2:3 pulldown. This is similar to 2:3 pulldown, and the result looks identical to a typical film transfer. While 25p captures half the temporal resolution or motion that normal 50i PAL registers, it yields a higher vertical spatial resolution per frame. Like 24p, 25p is often used to achieve "cine"-look, albeit with virtually the same motion artifacts. It is also better suited to progressive-scan output (e.g., on LCD displays, computer monitors and projectors) because the interlacing is absent.
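The 2:2:3:2:3 pulldown mentioned above distributes every 5 progressive frames over 12 interlaced fields; a quick sketch checking that 25 frames per second therefore fill the 60 fields per second of a 60 Hz system (nominal rates are used here; real equipment also applies the 1000/1001 adjustment):

```python
# 2:2:3:2:3 pulldown: number of fields each frame in a 5-frame group occupies
pattern = [2, 2, 3, 2, 3]

fields_per_group = sum(pattern)                  # 12 fields per 5 frames
frames_per_second = 25
fields_per_second = frames_per_second * fields_per_group // len(pattern)
print(fields_per_second)                         # 60, matching a 60 Hz interlaced system
```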
25i, also known as 50i, is an interlaced format showing 25 interlaced frames per second, or 50 fields per second, and is the standard broadcast framerate for countries with a PAL and SECAM television history (most of the world). The interlaced format sacrifices some detail in vertical resolution in favor of a higher apparent framerate, and can be thought of as 50 individual sub-sampled frames per second. The European Broadcasting Union and broadcast manufacturers like VizRT refer to modern HD broadcasts of 1080-line pictures as 1080i25, but the term 1080i50 is also used in the industry by companies such as Blackmagic and Grass Valley.
29.97i, also commonly referred to as 30i, 59.94i or 60i, provides 30000/1001 interlaced frames per second, or 60000/1001 fields per second. This is the standard broadcast frame rate for countries with an NTSC history, mainly the US, Canada, Japan and South Korea. It became the standard format of American television in the era of CRT screens, which were commonplace before the widespread use of digital displays. The alternating mains current was used to time each scan, with two interlaced scans performed per frame: each scan covers 262.5 of the set's 525 rows, at a horizontal frequency of 15,750 Hz. The NTSC color system requires the color subcarrier to be an odd integer multiple of half the horizontal frequency, so that the line patterns of the two interlaced fields interleave; because a rate of exactly 30 frames per second cannot satisfy this relationship together with the sound carrier, the frame rate was lowered slightly to 30000/1001. Before 1953 the channel's frequency band was divided only between the black-and-white image and the sound, but after color broadcasting entered widespread commercial use, more of the band was needed to carry the color information. Spacing between the carriers prevents electromagnetic interference between picture and sound; each analog broadcasting station is allocated 6 MHz, of which 1.5 MHz is unusable because of this interference gap. The format fell out of use with the introduction of digital broadcasting and online streaming.
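The line-rate arithmetic above can be verified with exact fractions; the 4.5 MHz ÷ 286 relation used below for the color line rate is standard NTSC arithmetic (the sound carrier sits 4.5 MHz above the picture carrier), shown here only as an illustrative sketch:

```python
from fractions import Fraction

LINES = 525                       # scan lines per frame (two interlaced 262.5-line fields)

# Black-and-white NTSC: exactly 30 frames per second
bw_line_rate = 30 * LINES         # 15_750 Hz horizontal frequency

# Color NTSC: the line rate is tied to the 4.5 MHz sound carrier (4.5 MHz / 286)
color_line_rate = Fraction(4_500_000, 286)
color_frame_rate = color_line_rate / LINES

print(bw_line_rate)               # 15750
print(color_frame_rate)           # 30000/1001, i.e. ≈ 29.97 frames per second
```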
30p is a progressive format and produces video at 30 frames per second. Progressive (noninterlaced) scanning mimics a film camera's frame-by-frame image capture. The effects of inter-frame judder are less noticeable than 24p yet retains a cinematic-like appearance. Shooting video in 30p mode gives no interlace artifacts but can introduce judder on image movement and on some camera pans. The widescreen film process Todd-AO used this frame rate in 1954–1956.
48p is a progressive format that is being trialled in the film industry. At twice the traditional rate of 24p, this frame rate attempts to reduce the motion blur and flicker found in films. Director James Cameron stated his intention to film the two sequels to his film Avatar at higher than 24 frames per second to add a heightened sense of reality. The first film to be filmed at 48 FPS was The Hobbit: An Unexpected Journey, a decision made by its director Peter Jackson. At a preview screening at CinemaCon, the audience's reaction was mixed after being shown some of the film's footage at 48p, with some arguing that the feel of the footage was too lifelike (thus breaking the suspension of disbelief).
50p is the frame rate used for 720p broadcast HDTV in countries with a PAL or SECAM broadcasting history, as specified in SMPTE 296M. In Europe, the EBU considers 1080p50 the next step future proof system for TV broadcasts and is encouraging broadcasters to upgrade their equipment for the future. Many modern cameras can shoot video at 50p and 60p in various resolutions. YouTube allowed users to upload videos at 50 FPS and 60 FPS in June 2014. YouTube also allowed full HFR videos previously uploaded before 2014. Douglas Trumbull, who undertook experiments with different frame rates that led to the Showscan film format, found that emotional impact peaked at 60 FPS for viewers.
59.94p is the frame rate used for 720p broadcast HDTV in the USA and other countries with an NTSC broadcasting history, as specified in SMPTE 296M. The exact rate is 60000/1001, which works out to be slightly over 59.94 frames per second.
60p is a progressive format that is very close to the 60000/1001 broadcast system used in some countries. In the Broadcast industry it is sometimes but not always used as shorthand for 60000/1001 frames per second. Outside of broadcast, material is often filmed, edited, and displayed at 60 frames per second for delivery via streaming services.
72p is a progressive format and is currently in experimental stages. Major institutions such as Snell have demonstrated 720p72 pictures as a result of earlier analogue experiments, where 768 line television at 75 FPS looked subjectively better than 1150 line 50 FPS progressive pictures with higher shutter speeds available (and a corresponding lower data rate). Modern cameras such as the Red One can use this frame rate to produce slow motion replays at 24 FPS.
100p / 119.88p / 120p are progressive-scan formats standardized for UHDTV by the ITU-R BT.2020 recommendation.
See also
Digital television
Cable television
Broadcast engineering
Glossary of broadcasting terms
North American television frequencies
Broadcast television systems
References
External links
Video formats
Television technology | List of broadcast video formats | Technology | 1,693 |
53,407,220 | https://en.wikipedia.org/wiki/LEXO | LEXO is the original version of the upgraded BURNOUT temperature regulating tumbler brand from manufacturer ThermAvant International, LLC, based in Columbia, Missouri.
History
The creator of LEXO, Hongbin "Bill" Ma, is a professor of mechanical and aerospace engineering and director of the Center for Thermal Management at the University of Missouri. After noticing how often he forgot about his coffee while waiting for it to cool, Ma began working on a "cup with constant temperature" in the summer of 2015. The LEXO was released to the general public in December 2016.
Design
The LEXO uses bio-based phase-change and advanced heat transfer materials to absorb the initial heat of the beverage and cool it to a more drinkable temperature. When the temperature begins to drop, the LEXO slowly releases the stored heat back into the drink. The LEXO can also insulate cold liquids.
The LEXO has three layers of 18/8 stainless-steel and BPA-free plastic lids.
References
External links
Drinkware
Physical chemistry | LEXO | Physics,Chemistry | 205 |
193,595 | https://en.wikipedia.org/wiki/Level%20crossing | A level crossing is an intersection where a railway line crosses a road, path, or (in rare situations) airport runway, at the same level, as opposed to the railway line or the road etc. crossing over or under using an overpass or tunnel. The term also applies when a light rail line with separate right-of-way or reserved track crosses a road in the same fashion. Other names include railway level crossing, railway crossing (chiefly international), grade crossing or railroad crossing (chiefly American), road through railroad, criss-cross, train crossing, and RXR (abbreviated).
There are more than 100,000 level crossings in Europe and more than 200,000 in North America.
Road-grade crossings are considered incompatible with high-speed rail and are virtually non-existent in European high-speed train operations.
History
The types of early level crossings varied by location, but often they had a flagman in a nearby booth who, on the approach of a train, would wave a red flag or lantern to stop all traffic and clear the tracks. This was a dangerous job that cost the lives of gatekeepers or their family members, as the train was not given enough time to stop. Gated crossings became commonplace in many areas, as they protected the railway from people trespassing and livestock, and they protected the users of the crossing when closed by the signalman/gateman. In the second quarter of the 20th century, manual or electrical closable gates that barricaded the roadway started to be introduced, intended to be a complete barrier against intrusion of any road traffic onto the railway. Automatic crossings are now commonplace in some countries as motor vehicles replaced horse-drawn vehicles and the need for animal protection diminished with time. Full-, half- or no-barrier crossings superseded gated crossings, although crossings of older types can still be found in places.
In rural regions with sparse traffic, the least expensive type of level crossing to operate is one without flagmen or gates, with only a warning sign posted. This type has been common across North America and in many developing countries.
Some international rules have helped to harmonise level crossings. For instance, the 1968 Vienna Convention states (chapter 3, article 23b) that:
"one or two blinking red light indicates a car should stop; if they are yellow the car can pass with caution".
Article 27 suggests stop lines at level crossings.
Article 33, 34, 35 and 36 are specific to level crossings, because level crossings are recognized as dangerous.
Article 35 indicates a cross should exist when there is no barrier or lights.
This has been implemented in many countries, including countries which are not part of the Vienna Convention.
Safety
Trains have a much larger mass relative to their braking capability, and thus a far longer braking distance, than road vehicles. With rare exceptions, trains do not stop at level crossings but rely on road vehicles and pedestrians to clear the tracks in advance. There have been several accidents in which a heavy load on a slow road transporter did not clear the line in time, e.g. the Dalfsen train crash and the Hixon rail crash. At Hixon, the police escort had received no training in their responsibilities.
Level crossings constitute a significant safety concern internationally. On average, each year around 400 people in the European Union and over 300 in the United States are killed in level crossing accidents. Collisions can occur with vehicles as well as pedestrians; pedestrian collisions are more likely to result in a fatality. Among pedestrians, young people (5–19 years), older people (60 years and over), and males are considered to be higher risk users. On some commuter lines most trains may slow to stop at a station, but express or freight trains will pass through stations at high speed without slowing.
As far as warning systems for road users are concerned, level crossings either have "passive" protection, in the form of various types of warning signs, or "active" protection, using automatic warning devices such as flashing lights, warning sounds, and barriers or gates. In the 19th century and for much of the 20th, a sign warning "Stop, look, and listen" (or similar wording) was the sole protection at most level crossings. Today, active protection is widely available, and fewer collisions take place at level crossings with active warning systems. Modern radar sensor systems can detect if level crossings are free of obstructions as trains approach. These improve safety by not lowering crossing barriers that may trap vehicles or pedestrians on the tracks, while signalling trains to brake until the obstruction clears. However, they cannot prevent a vehicle from moving out onto the track once it is far too late for the locomotive to slow even slightly.
Due to the increase in road and rail traffic as well as for safety reasons, level crossings are increasingly being removed. Melbourne is closing 110 level crossings by 2030 and (due to the proximity of some stations) rebuilding 51 stations.
At railway stations, a pedestrian level crossing is sometimes provided to allow passengers to reach other platforms in the absence of an underpass or bridge, or for disabled access. Where third rail systems have level crossings, there is a gap in the third rail over the level crossing, but this does not necessarily interrupt the power supply to trains since they may have current collectors on multiple cars.
[Accident statistics tables omitted. Sources: US Department of Transportation (1 mile = 1.6 km); Eurostat, with rail accident data provided to Eurostat by the European Railway Agency (ERA), which manages and is responsible for the entire data collection, as part of the Common Safety Indicators (CSIs; since 2010, use of national definitions is no longer permitted, and the 2010 CSI data represent the first fully harmonized set of figures); Eurostat, annual number of victims by type of accident [rail_ac_catvict], last update 09-02-2017; Federal Railroad Administration.]
Traffic signal preemption
Traffic signal-controlled intersections next to level crossings on at least one of the roads in the intersection usually feature traffic signal preemption. In the US, approaching trains activate a routine where, before the road lights and barriers are activated, all traffic signal phases go to red, except for the signal immediately after the crossing, which turns green (or flashing yellow) to allow traffic on the tracks to clear. (In some cases, there are auxiliary traffic signals prior to the railroad crossing which turn red, keeping new traffic from crossing the tracks; this is in addition to the flashing lights on the crossing barriers.) After enough time has been allowed for traffic to clear the crossing, that signal turns red. The crossing lights may begin flashing and the barriers lower immediately, or this might be delayed until after the traffic light turns red.
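The preemption sequence described above can be sketched as a simple ordered model. This is an illustrative toy only, not real traffic-controller logic; all state descriptions and names are invented:

```python
def preemption_sequence(delay_barriers_until_red: bool):
    """Yield the signal states, in order, once a train approach is detected."""
    yield "all phases red, except the signal just past the crossing turns green"
    yield "track-clearance interval: traffic on the crossing drains away"
    if delay_barriers_until_red:
        yield "track-side signal turns red"
        yield "crossing lights flash and barriers lower"
    else:
        yield "crossing lights flash and barriers lower immediately"
        yield "track-side signal turns red"

for step in preemption_sequence(delay_barriers_until_red=True):
    print("-", step)
```

Whether the barriers drop before or after the adjacent traffic light turns red is the municipal design choice noted in the text, captured here by the boolean flag.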
The operation of a traffic signal, while a train is present, may differ from municipality to municipality. There are a number of possible arrangements:
All directions will flash red, turning the intersection into an all-way stop.
While the train is passing, the traffic parallel to the railroad track will have a flashing yellow, while the other directions face a flashing red light.
While the train is passing, the traffic parallel to the railroad track will have a green light, while the other directions face a red light.
Traffic lights can operate relatively normally, with only the blocked direction turning red while the train is passing.
Crossing cameras
In France, cameras have been installed on some level crossings to obtain images to improve understanding of an incident when a technical investigation occurs.
In England, cameras have been installed at some level crossings.
In South Australia, cameras have been installed at some level crossings to deter non-compliance with signals.
By country
Designs of level crossings vary between countries.
Major accidents
Level crossings present a significant risk of collisions between trains and road vehicles. This list is not a definitive list of the world's worst accidents and the events listed are limited to those where a separate article describes the event in question.
Runway crossings
Aircraft runways sometimes cross roads or rail lines, and require signaling to avoid collisions.
Australia
Sydney Airport had a runway crossing after one of its runways was extended across the Port Botany railway line. The line was later deviated in March 1960, with sharp curves that avoided the runway, to release land for new Qantas hangars. On June 18, 1950, a Douglas DC-3 operating for Ansett Australia was involved in a ground collision with a freight train at the crossing. The accident derailed several train cars, severely damaged the aircraft, and resulted in one minor injury to the aircraft crew.
Burnie Airport had a runway crossing over the 05/23 Runway. This crossing was built over the railway line when the airfield was constructed, and has since been decommissioned with the closing of both the railway line and the 05/23 runway.
Gibraltar
Winston Churchill Avenue intersects the runway of Gibraltar International Airport at surface level; movable barricades close when aircraft land or take off.
As of March 2023, a tunnel under the runway opened to regular traffic, and the level crossing will only be available to pedestrians, cyclists and e-scooters.
Hong Kong
After the runway of Kai Tak Airport was extended in 1943, it intersected with the easternmost section of Prince Edward Road, so all road traffic had to be stopped during takeoffs and landings. The issue was resolved when the authorities constructed a replacement runway, which opened in September 1958.
Madagascar
The Fianarantsoa-Côte Est railway crosses the runway at Manakara Airport. It is one of the few airports in the world that crosses an active railway line.
New Zealand
A level crossing near Gisborne sees the Palmerston North–Gisborne Line cross one of Gisborne Airport's runways. Aircraft landing on the sealed 1310-metre runway 14L/32R are signalled with two red flashing lights on either side of the runway and a horizontal bar of flashing red lights to indicate that the section of runway south of the railway line is closed, and may only land on the section of the runway north of the railway line. When the full length of the runway is open, a vertical bar of green lights signals this to the aircraft, with regular rail signals on either side of the runway instructing trains to stop.
Nicaragua
The runway of Ometepe Airport crosses the highway NIC-64.
Philippines
As of February 2023, there exists one road-runway crossing at Catarman Airport in Northern Samar.
Sweden
The Visby Lärbro Line between Visby and Lärbro crossed the runway of Visby Airport between 1956 and 1960.
Switzerland
Two public roads cross the runway at Meiringen Air Base. Electrically operated gates close when aircraft land or take off.
United Kingdom
Northern Ireland: There was a runway crossing on the Belfast–Derry railway line; the crossing was interlocked with the control tower using conventional railway block instruments.
Scotland: Crossing of the A970 road over Sumburgh Airport's runway in Shetland.
See also
At-grade intersection
At-grade railway
Billups Neon Crossing Signal
Boom barrier
Breakover angle
Crossbuck
Four-quadrant gate
Grade separation
Level crossing signals
Lists of rail accidents
List of train accidents by death toll
Lists of traffic collisions
Occupation crossing
Pedestrian crossing
Warning sign
Whistle post
Wigwag
Level crossings in the United Kingdom
References
Bibliography
External links
Web Accident Prediction System - Highway-rail crossing data from the U.S. Federal Railroad Administration, Office of Safety Analysis
Traffic signs
Rail junction types
Road infrastructure
Road hazards
Articles containing video clips | Level crossing | Technology | 2,317 |
40,965,675 | https://en.wikipedia.org/wiki/%E2%88%9E-groupoid | In category theory, a branch of mathematics, an ∞-groupoid is an abstract homotopical model for topological spaces. One model uses Kan complexes which are fibrant objects in the category of simplicial sets (with the standard model structure). It is an ∞-category generalization of a groupoid, a category in which every morphism is an isomorphism.
The homotopy hypothesis states that ∞-groupoids are equivalent to spaces up to homotopy.
Globular Groupoids
Alexander Grothendieck suggested in Pursuing Stacks that there should be an extraordinarily simple model of ∞-groupoids using globular sets, originally called hemispherical complexes. These sets are constructed as presheaves on the globular category $\mathbb{G}$. This is defined as the category whose objects are finite ordinals $[n]$ and whose morphisms are generated by
$$\sigma_n : [n] \to [n+1], \qquad \tau_n : [n] \to [n+1]$$
such that the globular relations hold:
$$\sigma_{n+1} \circ \sigma_n = \tau_{n+1} \circ \sigma_n, \qquad \sigma_{n+1} \circ \tau_n = \tau_{n+1} \circ \tau_n.$$
These encode the fact that n-morphisms should not be able to see (n + 1)-morphisms. When writing these down as a globular set $X : \mathbb{G}^{op} \to \mathrm{Sets}$, the source and target maps are then written as
$$s_n = X(\sigma_n) : X_{n+1} \to X_n, \qquad t_n = X(\tau_n) : X_{n+1} \to X_n.$$
We can also consider globular objects in a category $\mathcal{C}$ as functors $X : \mathbb{G}^{op} \to \mathcal{C}$.
There was hope originally that such a strict model would be sufficient for homotopy theory, but there is evidence suggesting otherwise. It turns out that for $S^2$ the associated homotopy $n$-type can never be modeled as a strict globular groupoid for $n \geq 3$. This is because strict ∞-groupoids only model spaces with a trivial Whitehead product.
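The globular relations can be verified mechanically on small examples. The following sketch is purely illustrative (the encoding as Python dictionaries is an assumption, not a construction from the literature): it stores a finite globular set as explicit source/target maps and checks that the parallel composites $s_n \circ s_{n+1} = s_n \circ t_{n+1}$ and $t_n \circ s_{n+1} = t_n \circ t_{n+1}$ agree.

```python
# Illustrative finite globular set: one 2-morphism alpha: f => g
# between 1-morphisms f, g: a -> b, with explicit source/target maps.

# cells[n] lists the n-cells
cells = {0: ["a", "b"], 1: ["f", "g"], 2: ["alpha"]}

# s[n], t[n]: maps sending (n+1)-cells to n-cells
s = {0: {"f": "a", "g": "a"}, 1: {"alpha": "f"}}
t = {0: {"f": "b", "g": "b"}, 1: {"alpha": "g"}}

def globular(cells, s, t):
    """Check s_n . s_{n+1} = s_n . t_{n+1} and t_n . s_{n+1} = t_n . t_{n+1}."""
    top = max(cells)
    for n in range(top - 1):
        for x in cells[n + 2]:
            if s[n][s[n + 1][x]] != s[n][t[n + 1][x]]:
                return False
            if t[n][s[n + 1][x]] != t[n][t[n + 1][x]]:
                return False
    return True

print(globular(cells, s, t))  # True: parallel sources and targets agree
```

The check passes exactly because the 1-morphisms f and g are parallel, which is what the globular relations enforce at every level.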
Examples
Fundamental ∞-groupoid
Given a topological space $X$ there should be an associated fundamental ∞-groupoid $\Pi_\infty(X)$ where the objects are points $x \in X$, 1-morphisms are represented as paths, 2-morphisms are homotopies of paths, 3-morphisms are homotopies of homotopies, and so on. From this ∞-groupoid we can find an $n$-groupoid called the fundamental $n$-groupoid $\Pi_n(X)$ whose homotopy type is that of $X$.
Note that taking the fundamental ∞-groupoid of a space $X$ such that $\pi_k(X) = 0$ for $k > n$ is equivalent to the fundamental $n$-groupoid $\Pi_n(X)$. Such a space can be found using its Whitehead tower.
Abelian globular groupoids
One useful class of globular groupoids comes from chain complexes that are bounded above, so consider a chain complex $C_\bullet$ of abelian groups. There is an associated globular groupoid. Intuitively, the objects are the elements in $C_0$, morphisms come from $C_1$ through the chain complex map $d_1 : C_1 \to C_0$, and higher $n$-morphisms can be found from the higher chain complex maps $d_n : C_n \to C_{n-1}$. We can form a globular set $X$ with
$$X_n = C_n \oplus C_{n-1} \oplus \cdots \oplus C_0,$$
where the source morphism is the projection map
$$s(x_n, \ldots, x_0) = (x_{n-1}, \ldots, x_0)$$
and the target morphism is the addition of the chain complex map together with the projection map,
$$t(x_n, \ldots, x_0) = (x_{n-1} + d_n(x_n), x_{n-2}, \ldots, x_0).$$
This forms a globular groupoid giving a wide class of examples of strict globular groupoids. Moreover, because strict groupoids embed inside weak groupoids, they can act as weak groupoids as well.
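As a concrete check of this construction (the particular chain complex and its integer-matrix differentials below are made up for the example), the source and target maps satisfy the globular identities, with $t \circ s = t \circ t$ holding precisely because $d \circ d = 0$:

```python
# Illustrative globular set of a chain complex  C_2 --d2--> C_1 --d1--> C_0
# with C_n = Z^k. An n-cell is a tuple (x_n, ..., x_0); the source map
# drops the top component, the target map adds the differential of the
# top component to the next one.

def matvec(m, v):
    return tuple(sum(a * b for a, b in zip(row, v)) for row in m)

def vadd(u, v):
    return tuple(a + b for a, b in zip(u, v))

d1 = [[1, -1]]       # d1: Z^2 -> Z^1
d2 = [[1], [1]]      # d2: Z^1 -> Z^2   (note d1 . d2 = 0)

def s(cell):         # source: projection dropping the top component
    return cell[1:]

def t(cell, d):      # target: differential of top component + projection
    top, rest = cell[0], cell[1:]
    return (vadd(rest[0], matvec(d, top)),) + rest[1:]

x = ((3,), (1, 2), (5,))               # a 2-cell (x2, x1, x0)
assert s(s(x)) == s(t(x, d2))          # s.s = s.t
assert t(s(x), d1) == t(t(x, d2), d1)  # t.s = t.t, uses d1 . d2 = 0
print("globular identities hold")
```

The second assertion would fail for differentials with $d_1 \circ d_2 \neq 0$, which is the sense in which the globular structure encodes the chain complex condition.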
Applications
Higher local systems
One of the basic theorems about local systems is that they can be equivalently described as a functor from the fundamental groupoid $\Pi_1(X)$ to the category of abelian groups, the category of $R$-modules, or some other abelian category. That is, a local system is equivalent to giving a functor
$$\mathcal{L} : \Pi_1(X) \to \textbf{Ab}.$$
Generalizing such a definition requires us to consider not only an abelian category, but also its derived category. A higher local system is then an ∞-functor
with values in some derived category. This has the advantage of letting the higher homotopy groups act on the higher local system through a series of truncations. A toy example to study comes from the Eilenberg–MacLane spaces $K(A, n)$, or from looking at the terms of the Whitehead tower of a space. Ideally, there should be some way to recover the categories of functors from their truncations and the maps whose fibers should be the categories of $n$-functors
Another advantage of this formalism is that it allows for constructing higher forms of $\ell$-adic representations, by using the étale homotopy type of a scheme and constructing higher representations of this space, since they are given by functors
Higher gerbes
Another application of ∞-groupoids is giving constructions of n-gerbes and ∞-gerbes. Over a space $X$, an n-gerbe should be an object $\mathcal{G}$ such that, when restricted to a small enough open subset $U \subset X$, the restriction $\mathcal{G}|_U$ is represented by an n-groupoid, and on overlaps the restrictions agree up to some weak equivalence. Assuming the homotopy hypothesis is correct, this is equivalent to constructing an object $\mathcal{G}$ such that over any open subset $U$,
$\mathcal{G}(U)$ is an n-group, or a homotopy n-type. Because the nerve of a category can be used to construct an arbitrary homotopy type, a functor over a site, e.g. a presheaf of categories,
will give an example of a higher gerbe if the category lying over any point is a non-empty category. In addition, it would be expected that this category satisfy some sort of descent condition.
See also
Pursuing Stacks
n-group
Groupoid
Homotopy type theory
References
Research articles
Applications in algebraic geometry
External links
Foundations of mathematics
Higher category theory
Homotopy theory
Simplicial sets | ∞-groupoid | Mathematics | 1,054 |
56,035,044 | https://en.wikipedia.org/wiki/Barbara%20Paulson | Barbara Jean Paulson (née Lewis; April 11, 1928 – February 26, 2023) was an American human computer at NASA's Jet Propulsion Laboratory (JPL) and one of the first female scientists employed there. Paulson began working as a mathematician at JPL in 1948, where she calculated rocket trajectories by hand. She is among the women who made early progress at JPL.
Early life
Barbara Jean Lewis was born in Columbus, Ohio on April 11, 1928. She was raised with three siblings (two older sisters and one younger brother), and her father died when she was 12 years old. Beginning in 9th grade, Paulson took four years of Latin and math while her sisters took shorthand, as Paulson did not want to be a secretary. After Paulson attended Ohio State University for one year, her sister, who was already working in Pasadena at the time, convinced their mother to move to Pasadena as well. In 1947 the family moved to Pasadena, California, where her career at JPL would begin.
In 1959, Barbara married Harry Murray Paulson in Pasadena, where they lived until 1962 before moving to Monrovia. In 1975, they finally settled in Glendora.
Career
Paulson joined the Jet Propulsion Laboratory in 1948 as a computer, calculating rocket trajectories and working on the MGM-5 Corporal, the first guided missile designed by the United States to carry a nuclear warhead. The multi-stage launches whose trajectories Paulson helped calculate allowed the Corporal to carry a warhead over 200 miles. Paulson and her colleagues were at one point invited to sign their names on the 100th Corporal rocket prior to its transport to the White Sands test range; the rocket exploded shortly after liftoff. On January 31, 1958, Paulson was assigned to the operations center for Explorer-1, the first satellite of the United States, launched during the Space Race with the Soviet Union. Paulson did the work with minimal equipment: a mechanical pencil, a light table, and graph paper.
In 1960, when Paulson was 32 years old, she and her husband Harry were expecting their first child. When Paulson requested a closer parking space at work because she was pregnant, she was forced to quit, as JPL did not employ pregnant women at the time and keeping a pregnant woman on staff would have caused insurance policy problems. JPL had no maternity leave, so women who were fired or forced to quit their positions did not have jobs to return to after giving birth. Paulson's supervisor, Helen Ling, worked hard to rehire women who had been forced out with no parental leave, so in 1961, when her daughter was seven months old, Paulson accepted Ling's offer and returned to the lab. Paulson notably did not apply for a better parking spot when she became pregnant for the second time. At one point during Paulson's early years at JPL, a beauty contest was held among the female human computers; Paulson came in third place, and the queen of the contest was called 'Miss Guided Missile'.
In the 1960s, with JPL's reputation cemented by the success of Explorer-1, JPL began to set its sights on the moon and other interplanetary exploration missions. Paulson and her colleague Helen Ling worked overtime to calculate the trajectories of the Mariner probes that would later be sent to Venus and Mars. Paulson and her colleagues determined that only brief timeframes and launch opportunities existed that allowed for the ideal transit from Earth to its target. In the late 1960s, Paulson was given the title of engineer and eventually became a supervisor in the lab.
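The scarcity of interplanetary launch opportunities follows from orbital geometry. As a rough back-of-the-envelope illustration (not the actual JPL computation, which required full trajectory calculations), favorable Earth–Mars alignments recur only once per synodic period:

```python
# Rough illustration: favorable Earth-to-Mars transfer geometries
# recur once per synodic period, 1 / (1/T_earth - 1/T_mars),
# which is why launch windows are brief and infrequent.

T_EARTH = 365.25   # Earth's orbital period, days
T_MARS = 686.98    # Mars's orbital period, days

synodic = 1.0 / (1.0 / T_EARTH - 1.0 / T_MARS)
print(round(synodic))  # about 780 days, i.e. roughly every 26 months
```

This is why the Mariner and later Viking teams had to hit narrow timeframes: miss the window, and the next comparable opportunity is over two years away.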
In the 1970s, Paulson went on to play a vital role in the Viking program, whose lander became the first to reach the surface of Mars. Paulson successfully calculated the trajectory the Viking probe needed on its 11-month transit between Earth and Mars. Her calculations also proved essential during the entry, descent, and landing (EDL) phase of the mission, in which the lander detached from the spacecraft, entered the Martian atmosphere, and parachuted down to the surface. In the late 1970s, Paulson and her colleagues worked on some of the first interstellar trajectories during the planning of the Voyager missions; Voyager 1, launched in September 1977, is as of April 13, 2019 the most distant from Earth of all human-made objects.
Paulson retired from JPL in 1993 and remained in Pasadena until 2003 before moving to Iowa.
Personal life
Paulson and her husband Harry had daughters Karen (née Paulson) Bishop and Kathleen (née Paulson) Knutson, four grandchildren (Jonathan, Kyle, Harrison, and Corrine), and several nieces and nephews. During Barbara's pregnancy and after her return to work at JPL, her husband Harry was a real estate appraiser and a member of the Pasadena Board of Realtors. Upon Barbara's return to work, Harry was able to adjust his schedule, as did many of the other human computers with children, to ensure that their children were taken care of.
Barbara Paulson's husband, Harry Murray Paulson, died on July 9, 2003. They were married for 44 years. In 2003, Paulson sold her home following her husband's death and moved to Iowa to be closer to her daughters and their families.
Barbara Paulson died in Des Moines, Iowa, on February 26, 2023, at the age of 94.
Recognition and legacy
In 1959, Paulson received her 10-year pin in recognition of her work at the Jet Propulsion Laboratory. She would work at the Jet Propulsion Laboratory for 45 years, retiring in 1993. In 2016, Nathalia Holt wrote Rise of the Rocket Girls, a book about Paulson and other women who were early employees at NASA.
References
1928 births
2023 deaths
21st-century American women
American women mathematicians
Human computers
NASA people
People from Columbus, Ohio | Barbara Paulson | Technology | 1,222 |
31,591,708 | https://en.wikipedia.org/wiki/Steel%20catenary%20riser | A steel catenary riser (SCR) is a common method of connecting a subsea pipeline to a deepwater floating or fixed oil production platform. SCRs are used to transfer fluids like oil, gas, injection water, etc. between the platforms and the pipelines.
Description
In the offshore industry the word catenary is used as an adjective or noun with a meaning wider than its historical meaning in mathematics. Thus, an SCR that uses a rigid steel pipe with considerable bending stiffness is still described as a catenary. That is because, at the scale of ocean depths, the bending stiffness of a rigid pipe has little effect on the shape of the suspended span of an SCR. The shape assumed by the SCR is controlled mainly by weight, buoyancy and hydrodynamic forces due to currents and waves. The shape of the SCR is well approximated by stiffened-catenary equations. In preliminary considerations, and in spite of the use of conventional rigid steel pipe, the shape of the SCR can also be approximated with ideal catenary equations, when some further loss of accuracy is acceptable. Ideal catenary equations have historically been used to describe the shape of a chain suspended between points in space. A chain line has by definition zero bending stiffness, and chains described by the ideal catenary equations have infinitesimally short links.
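As a minimal sketch of the ideal-catenary approximation (the tension and weight values below are assumed for illustration, not design data), one can estimate the suspended length and hang-off angle of a free-hanging riser in a given water depth:

```python
import math

# Ideal catenary approximation of a free-hanging riser. With catenary
# parameter a = H / w (horizontal tension over submerged weight per
# unit length), the suspended span measured from the touch-down point
# is y(x) = a * (cosh(x / a) - 1).

H = 1.0e6        # horizontal tension at touch-down, N (assumed)
w = 1.0e3        # submerged weight per unit length, N/m (assumed)
depth = 872.0    # water depth, m (Auger TLP)

a = H / w        # catenary parameter, m

# horizontal distance from the touch-down point to the hang-off at `depth`:
x_top = a * math.acosh(depth / a + 1.0)
arc = a * math.sinh(x_top / a)       # suspended pipe length, m
angle = math.degrees(math.atan(math.sinh(x_top / a)))  # slope at hang-off

print(f"horizontal offset {x_top:.0f} m, pipe length {arc:.0f} m, "
      f"top angle {angle:.1f} deg from horizontal")
```

For these assumed values the suspended span comes out around 1.6 km with a hang-off angle near 58° from horizontal; a real design would additionally account for bending stiffness, currents, waves and fatigue.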
SCRs were invented by Dr. Carl G. Langner, P.E., NAE, who described an SCR together with a flexible joint used to accommodate angular deflections of the top region of the SCR relative to a support platform, as the platform and the SCR move in currents and waves. SCRs use unsupported pipe spans thousands of feet long. Complex dynamics and hydrodynamics, including vortex-induced vibrations (VIVs), and the physics of pipe interaction with the seabed are involved; these are tough on the materials used to build the SCR pipe. Dr. Langner had carried out years of analytical and design work before an application for his US patent was filed. That work started before 1969 and was reflected in internal Shell documents, which are confidential, but a patent on an early 'Bare Foot' SCR design was issued. VIVs are predominantly controlled with devices attached to the SCR pipe, for example VIV suppression devices such as helical strakes or fairings, which considerably reduce VIV amplitudes. The development of VIV prediction engineering programs, such as the SHEAR7 program, is an ongoing process that originated in cooperation between MIT and Shell Exploration & Production, in parallel with the development of the SCR concept.
The rigid pipe of the SCR forms a catenary between its hang-off point on the floating or fixed platform and the seabed. A free-hanging SCR assumes a shape roughly similar to the letter 'J'. The catenary of a steel lazy-wave riser (SLWR) in fact consists of at least three catenary segments. The top and seabed segments of the catenary have negative submerged weight, and their curvatures 'bulge' towards the seabed. The middle segment has buoyant material attached along its entire length, so that the ensemble of steel pipe and buoyancy is positively buoyant. Accordingly, the curvature of the buoyant segment 'bulges' upwards (an inverted catenary), and its shape can also be well approximated with the same stiffened or ideal catenary equations. The positively and negatively buoyant segments are tangent to each other at the points where they join, and the overall catenary shape of the SLWR has inflection points at those locations. SLWRs were first installed on a turret-moored FPSO offshore Brazil (BC-10, Shell) in 2009, even though lazy-wave configuration flexible risers had been in wide use for several decades beforehand.
The deepest application of Lazy Wave SCRs (SLWRs) is at present on the Stones turret-moored FPSO (Shell), which is moored in 9,500 feet water depth in the Gulf of Mexico. The Stones FPSO turret features a disconnectable buoy, so that the vessel with the crew can be disconnected from the buoy supporting the SLWRs, and moved to a suitable shelter before an arrival of a hurricane.
The SCR pipe and a short segment of pipe lying on the seabed use 'dynamic' pipe, i.e. steel pipe with a slightly greater wall thickness than the pipeline's, in order to sustain the dynamic bending and steel fatigue associated with the touch-down zone of the SCR. Beyond that, the SCR is typically extended with a rigid pipeline, though use of a flexible pipeline is also feasible.
The risers are typically 8-12 inches in diameter and operate at a pressure of 2000-5000 psi. Designs beyond those ranges of pipe sizes and operating pressures are also feasible.
Free-hanging SCRs were first used by Shell in 1994 on the Auger tension leg platform (TLP), which was moored in 872 m of water. Proving to Shell that the SCR concept was technically sound for use on the Auger TLP was a major achievement of Dr. Carl G. Langner; it was a technological leap. Acceptance of the SCR concept by the entire offshore industry followed relatively quickly, and SCRs have performed reliably on oil and gas fields all over the world since their first installation on Auger.
References
Offshore engineering
Petroleum engineering | Steel catenary riser | Engineering | 1,139 |
57,430,974 | https://en.wikipedia.org/wiki/Internal%20working%20model%20of%20attachment | Internal working model of attachment is a psychological approach that attempts to describe the development of mental representations, specifically the worthiness of the self and expectations of others' reactions to the self. This model is a result of interactions with primary caregivers which become internalized, and is therefore an automatic process. John Bowlby implemented this model in his attachment theory in order to explain how infants act in accordance with these mental representations. It is an important aspect of general attachment theory.
Such internal working models guide future behavior as they generate expectations of how attachment figures will respond to one's behavior. For example, a parent rejecting the child's need for care conveys that close relationships should be avoided in general, resulting in maladaptive attachment styles.
Influences
The most influential figure for the idea of the internal working model of attachment is Bowlby, who laid the groundwork for the concept in the 1960s. He was inspired by both psychoanalysis, especially object relations theory, and more recent research into ethology, evolution and information-processing.
In psychoanalytic theory, there has been the idea of an inner or representational world (proposed by Freud) as well as the internalization of relationships (Fairbairn, Winnicott). According to Freud first schemata evolve out of experiences regarding need fulfilment via the attachment figure. He argued that the resulting mental representation is an internal copy of the external world made up from memories, and thinking serves the role of experimental action. Fairbairn and Winnicott proposed that these early patterns of relationships become internalized and govern future relationships.
However, the ethological-evolutionary aspects of the theory received more attention. Bowlby was interested in separation distress, and bonding in animals. He noticed that many infant behaviours are organized around the goal of maintaining proximity to the caregiver. He proposed that human infants like other mammals must have an attachment motivational-behavioural system which enhances chances for survival. Ainsworth observed mother-infant interaction and came to the conclusion that individual differences in reaction to separation could not be explained by simple absence or presence of the caregiver but must be the result of a cognitive process.
However, when Bowlby developed his attachment theory, cognitive psychology was still in its infancy. Only in 1967 did Neisser propose a theory of mental representation based on schemas, which later led to the development of schema theory. Such schema-based scripts have been suggested as the basis of the structure of internal working models.
The term internal working model, however, was coined quite early by Craik (1943). What he called internal working model was a more elaborate and modern version of the psychoanalytical idea of the internal world. In essence, he claimed that humans carry a small-scale representation, or model of reality, and their own potential actions within it in their mind.
In summary, Bowlby remodelled Freud’s work about relationship development in terms of newer fields of research (evolutionary biology, ethology, information-processing theory), drawing both from Craik’s idea of representations as the formation and use of dynamic models and Piaget’s theory of cognitive development.
Function
There are several hypothesized functions of an internal working model of attachment, both in terms of its evolutionary origins and inherent functioning.
Bowlby proposed that proximity-seeking behaviour evolved out of selection pressure. In the context of survival, a healthy internal working model helps the infant to maintain proximity to their caregiver in the face of threat or danger. This is especially important for species with prolonged periods of development, like humans. Due to the relative immaturity of the infant at birth, offspring that manages to maintain a close relationship to their caregiver by seeking their proximity has a survival advantage. A close emotional bond to the caregiver is therefore crucial for protection from physical harm, and thus the internal working model mediates attachment. This regulation is enforced via a motivational-behaviour system, motivating both infant and caregiver to seek proximity. Specifically, caregiving is regulated by behavioural processes complementary to the infant’s proximity-seeking, e.g. the baby smiles, the adult feels reward as a result.
Having an adequate internal model or representation of the self and the caregiver also serves the adaptive function of ensuring appropriate interpretation and prediction of, as well as response to the environment. Craik especially emphasized that those organisms that are capable of forming complex internal working models have higher chances of survival. The better the internal working model can simulate reality, the better the individual’s capacity to plan and respond. According to Bowlby, individuals form both models of the world and the self within it. These models, initially the product of specific experiences of reality, then aid future attention to and perception and interpretation of the world, which in turn creates certain expectations about possible future events, allowing foresightful and appropriate behaviour. Hence, having adequate representations of the self and caregivers serves an adaptive function.
Lastly, if the infant can be sure about the availability of the attachment figure, it will be less prone to fear due to the supportive presence or secure base function of the caregiver, which makes exploration of the environment and hence learning possible. This felt security is the primary goal of all working models. Ainsworth researched the secure base phenomenon in her strange situation procedure in which an infant uses their mother as a secure base. The attachment system provides the child with a sense of security in the form of this base, which supports exploration of the environment and hence independence. A securely attached child will, in turn, achieve a balance between intimacy and independence. This corresponds to a balance between the attachment system which serves the function of protection and the exploration system which facilitates learning.
The function of other attachment styles can be explained in terms of an imbalance between intimacy and independence, a preoccupation with one of these goals. This overriding chronic goal is intimacy in preoccupied children and independence or self-protection in dismissive children; in the case of the fearful child, there is a conflicting chronic goal of achieving both intimacy and independence at the same time, an approach-avoidance conflict, owing to relative inflexibility in comparison with secure attachment.
The internal working model functions largely outside of conscious awareness. Those subconscious aspects might be especially important for the function of self-protection and serve as a defence mechanism in the face of contradicting models, where one of them operates within the subconscious to prevent a threat to the self. This is mostly the case for dismissive-avoidant attachment where conflicting ideas of the caregiver as both loving and neglecting cause the defence mechanism of downplaying the need for intimacy, not relying on the attachment figure, and emphasizing independence.
Types
Infants develop different types of internal working models depending on two factors: the responsiveness and accessibility of the parent, and the worthiness of the self to be loved and supported. Thus, by the age of three years, infants will have developed several expectations about how attachment figures will react to their need for help and will have started to evaluate how worthy the self is of support in general. These internalized representations of the self, of attachment figures, and of relationships are constructed as a result of experiences with primary caregivers. They guide the individual's expectations about relationships throughout life, subsequently influencing social behavior, perception of others, and the development of self-esteem.
Essentially, four different internal working models can be defined which are based on positive or negative images of self and others. Children who feel securely attached seek their parent as a secure base and are willing to explore their environment. In adulthood, they hold a positive model of self and others, therefore, feeling comfortable with intimacy and autonomy. On the contrary, adults who develop a fearful-avoidant internal working model (negative self, negative others) construct defense mechanisms in order to protect themselves from being rejected by others. Consequently, they avoid intimate relationships. The third category is classified as the preoccupied model, indicating a combination of negative self-evaluation and the appreciation of others, which makes them overly dependent on their environment. Finally, dismissive-avoidant adults aim for independence as they view themselves as valuable and autonomous. They rarely open up and mainly rely on themselves due to lack of trust in others.
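The 2×2 typology above can be summarized schematically. The following is only an illustrative encoding of the four categories described in this section, not a clinical instrument:

```python
# Schematic encoding of the four internal working models as the 2x2
# of positive/negative views of self and of others (illustrative only).

STYLES = {
    (True, True): "secure",
    (True, False): "dismissive-avoidant",
    (False, True): "preoccupied",
    (False, False): "fearful-avoidant",
}

def attachment_style(positive_self: bool, positive_others: bool) -> str:
    return STYLES[(positive_self, positive_others)]

print(attachment_style(False, True))  # preoccupied: negative self, values others
```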
Development
Internal working models are considered to result out of generalized representations of past events between attachment figure and the child. Thus, in forming an internal working model a child takes into account past experiences with the caregiver as well as the outcomes of past attempts to establish contact with the caregiver. One important factor in the establishment of generalized representations is caregiver behaviour. Accordingly, a child whose caretaker exhibits high levels of parental sensitivity, responsiveness and reliability is likely to develop a positive internal working model of the self. Conversely, frequent experiences of unreliability and neglect by the attachment figure foster the emergence of negative internal working models of self and others.
As infants have been shown to possess the social and cognitive capacities necessary to form internal working models, initial development of these may occur within the first year of life. Once established, internal working models are assumed to remain largely consistent over time, developing primarily in complexity and sophistication. As such, internal working models of young children may include representations of past instances of caregiver responsiveness or availability, while older children's and adults' internal working models may integrate more advanced cognitive abilities such as the imagination of hypothetical future interactions. However, changes to internal representations of attachment relationships can occur. This is most likely to happen upon repeated experiences that are incompatible with the internal working model in place at the time. One way this can happen is during major periods (meaning weeks or months) of absence of the attachment figure. During such prolonged absence, a child's expectation of the caregiver's availability to respond is continuously violated. This results in a change of behaviour toward the caregiver upon reunion, reflecting changes in the child's internal working model of the relationship.
Intergenerational transmission
Internal working models are subject to intergenerational transmission, meaning that parents' internal working model patterns may be passed on to their children. Indeed, high correlations have been found between security of early infant attachment and parental internal working model security. A central aspect in intergenerational transmission of internal working models is that caretakers themselves are influenced in their behaviour toward children by their own internal working models. For instance, a parent with a secure and consistent internal working model is likely to interpret an infant's attachment signals appropriately, whereas a parent with an insecure internal working model is less likely to do so. In the latter case, the infant itself might be drawn to construct a negative working model of the self and the relationship. Furthermore, a parent with a negative, poorly organized and inconsistent working model might fail to provide useful feedback about the parent-infant dyad and other relationships, thus disrupting the infant's forming of a well-adapted working model at an early stage. The result will be a negative, disorganized internal working model employed by the infant.
One mechanism by which attachment (and thus, internal working models of attachment) can be transmitted is joint reminiscing about past events or memories. For instance, mothers who are securely attached tend to communicate about past events in more elaborate ways than do mothers who are not securely attached. While reminiscing together about past events, securely attached mothers will then engage in more elaborate reasoning with their child, thereby stimulating the development of a more elaborate, coherent internal working model by the child itself.
Notes
Adoption, fostering, orphan care and displacement
Ethology
Evolutionary biology
Human development
Interpersonal relationships
Object relations theory
Philosophy of love | Internal working model of attachment | Biology | 2,379 |
21,449,756 | https://en.wikipedia.org/wiki/Information%20visualization%20reference%20model | The Information visualization reference model is an example of a reference model for information visualization, developed by Ed Chi in 1999, under the name of the data state model. Chi showed that the framework successfully modeled a wide array of visualization applications and later showed that the model was functionally equivalent to the data flow model used in existing graphics toolkits such as VTK.
Overview
In previous work, according to Chi (2000), "researchers have attempted to construct taxonomies of information visualization techniques by examining the data domains that are compatible with these techniques. This is useful because implementers can quickly identify various techniques that can be applied to their domain of interest. However, these taxonomies do not help the implementers understand how to apply and implement these techniques".
According to Chi (2000), he and J.T. Riedl in 1998 "extend[ed] and propose[d] a new way to taxonomize information visualization techniques by using the Data State Model. Many of the techniques share similar operating steps that can easily be reused. The Data State Model not only helps researchers understand the space of design, but also helps implementers understand how information visualization techniques can be applied more broadly".
In 1999 Stuart Card, Jock D. Mackinlay, and Ben Shneiderman present their own interpretation of this pattern, dubbing it the information visualization reference model.
References
Further reading
Ed H. Chi (2002). "Expressiveness of the Data Flow and Data State Models in Visualization Systems".
External links
Description of the Information Visualization Reference Model at the InfoVis:Wiki - includes references describing the model and its use.
Computational science | Information visualization reference model | Mathematics | 336 |
35,921,294 | https://en.wikipedia.org/wiki/Technicien%20sup%C3%A9rieur%20des%20%C3%A9tudes%20et%20de%20l%27exploitation%20de%20l%27aviation%20civile | In France, the corps of Technicien supérieur des études et de l'exploitation de l'aviation civile (TSEEAC, in English Civil aviation opérations Technicians) of the Directorate General for Civil Aviation (DGAC) is a B-class job within the Ministry of Ecology, Sustainable Development, Transport and Housing.
Enrollment
The TSEEAC competitive examination is open to individuals who hold the Baccalauréat. Between about 30 and 60 places are available each year. After enrollment, students are trained for three years at the École nationale de l'aviation civile (French civil aviation university) in Toulouse.
Career
This corps has five grades (in descending order):
TSEEAC exceptional class: 5 levels.
TSEEAC main class: 8 levels.
TSEEAC normal class: 11 levels.
TSEEAC internship: 1 level.
TSEEAC student: 1 level.
The TSEEAC, under certain conditions, can be assigned to two functional positions of management:
Cadre Technique de l'Aviation Civile (CTAC) (Technical middle management of civil aviation): 8 levels.
Responsable Technique de l'Aviation Civile (RTAC) (Technical manager of civil aviation): 5 levels.
There are also possibilities to move to other corps of the DGAC, such as air traffic controllers, air traffic safety electronics personnel, or Ingénieur des études et de l'exploitation de l'aviation civile; this requires one or two years of complementary training at ENAC.
Job
TSEEAC personnel can do many jobs at the DGAC, including: technical assistant, technical operating controller (SAFA programme), aerodrome air traffic controller, ramp manager (at Paris-Charles de Gaulle Airport), aeronautical information operator, officer of the regional offices of information and assistance in flight (BRIA), and investigation assistant (BEA).
Training
In 2010, a reform of TSEEAC initial training was initiated. It is now called TSA "Technicien supérieur de l'aviation" and has, since 2011, two curricula:
TSA civils training: admitted by competitive examination or by Validation des Acquis de l'Expérience, students complete a two-year training before graduation.
TSEEAC, also called TSA fonctionnaires (TSA civil servants): admitted by competitive examination, students follow the same two-year training as the TSA civils. After this, they are integrated into the TSEEAC corps and complete a one-year complementary dual education system training at the DGAC.
History
The corps of TSEEAC has had several name changes since the 1960s:
1962 - TNA: Technicien de la Navigation Aérienne (Air Navigation Technician), covering both the duties of operations air traffic controllers and those of air traffic safety electronics personnel.
1964 - division of the corps: the TNA corps remained, while air traffic controllers departed to the new corps of "Officiers Contrôleurs de la Circulation Aérienne" (OCCA), which later became the air traffic controller corps, and electronics technicians departed to become Air Traffic Safety Electronics Personnel.
1975 - TAC: Techniciens de l'Aviation Civile (Civil aviation technicians).
1993 - TEEAC: Technicien des Etudes et d'Exploitation de l'Aviation Civile.
2000 - TSEEAC: Technicien Supérieur des Etudes et d'Exploitation de l'Aviation Civile.
2011 - The initial training at ENAC of TSEEAC is opened to non civil servant and renamed TSA (Technicien supérieur de l'aviation).
Decree
Décret n°93-622 du 27 mars 1993 concerning the corps of techniciens supérieurs des études et de l'exploitation de l'aviation civile.
See also
French Civil Service
Technicien supérieur de l'aviation (TSA)
Technicien Supérieur de l'Aviation (civilian) (TSA civilian)
References
Bibliography
Ariane Gilotte, Jean-Philippe Husson et Cyril Lazerge, 50 ans d'Énac au service de l'aviation, Édition S.E.E.P.P, 1999
External links
TSEEAC on ENAC website
Air traffic control in France
Aviation licenses and certifications
École nationale de l'aviation civile
Professional titles and certifications
Occupations in aviation | Technicien supérieur des études et de l'exploitation de l'aviation civile | Engineering | 905 |
5,172,638 | https://en.wikipedia.org/wiki/Omicron%20Centauri | The Bayer designation Omicron Centauri (ο Cen / ο Centauri) is shared by two star systems, in the constellation Centaurus:
ο¹ Centauri
ο² Centauri
They are separated by 0.07° on the sky. They may be physically related.
Centauri, Omicron
Centaurus | Omicron Centauri | Astronomy | 69 |
43,751,582 | https://en.wikipedia.org/wiki/Equivalent%20input | Equivalent input (also input-referred, referred-to-input (RTI), or input-related), is a method of referring to the signal or noise level at the output of a system as if it were due to an input to the same system. This input's value is called the Equivalent input. This is accomplished by removing all signal changes (e.g. amplifier gain, transducer sensitivity, etc.) to get the units to match the input.
Examples
Equivalent input noise
A microphone converts acoustical energy to electrical energy. Microphones have some level of electrical noise at their output. This noise may have contributions from random diaphragm movement, thermal noise, or a dozen other sources, but those can all be thought of as an imaginary acoustic noise source injecting sound into the (now noiseless) microphone. The units on this noise are no longer volts, but units of sound pressure (pascals or dBSPL), which can be directly compared to the desired sound pressure inputs. This is called equivalent input noise (EIN), or input-referred noise (IRN), or referred-to-input (RTI) noise.
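As an illustration (the sensitivity and noise figures below are assumed example values, not taken from any particular microphone datasheet), the conversion from output noise voltage to EIN in dB SPL can be sketched in Python:

```python
import math

def equivalent_input_noise_db_spl(noise_volts: float, sensitivity_v_per_pa: float) -> float:
    """Refer an output noise voltage back to the acoustic input.

    Dividing by the sensitivity converts volts to an equivalent sound
    pressure in pascals; dB SPL is taken relative to 20 micropascals.
    """
    pressure_pa = noise_volts / sensitivity_v_per_pa
    return 20 * math.log10(pressure_pa / 20e-6)

# Hypothetical microphone: 12.6 mV/Pa sensitivity, 2 uV output noise.
ein = equivalent_input_noise_db_spl(2e-6, 12.6e-3)
print(f"EIN = {ein:.1f} dB SPL")  # ≈ 18 dB SPL
```

The resulting figure can be compared directly with the sound pressure levels the microphone is meant to capture, which is the point of referring noise to the input.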
Input-related interference level
A device which uses a microphone may be susceptible to electromagnetic interference which causes sonic artifacts. The problem is not in the microphone, but the interference level can be related back to the input to compare to the level of typical inputs to see how audible the artifact is. This is called input-related interference level (IRIL).
References
Further reading
(67 pages)
Acoustics
Noise (electronics) | Equivalent input | Physics | 326 |
10,545,003 | https://en.wikipedia.org/wiki/Wave%20dash | Wave dash () is a character represented in Japanese character encoding mainly used as a dash and chōonpu. The wave dash is similar to, but not the same as, the tilde character (), which is often used interchangeably with it.
The vertical wave dash is not currently included in Unicode, but there is a similar symbol available called the wavy line (⌇, U+2307). It is created by rotating the wavy dash symbol (〰, U+3030) right (clockwise) to form a vertical wave-like pattern.
The wave dash is also written in vertical text layout, where the vertical wave dash is the vertical form obtained by rotation and flipping in Unicode and JIS C 6226.
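The distinction between the wave dash and the lookalike fullwidth tilde can be checked programmatically; a minimal Python sketch using the standard unicodedata module:

```python
import unicodedata

# The wave dash and the fullwidth tilde look alike but are distinct code points.
wave_dash = "\u301c"        # WAVE DASH
fullwidth_tilde = "\uff5e"  # FULLWIDTH TILDE

for ch in (wave_dash, fullwidth_tilde):
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")

# Distinct characters, despite near-identical glyphs in many fonts.
assert wave_dash != fullwidth_tilde
```

Legacy Shift JIS converters disagreed on which of the two code points to use, which is one reason the characters are often used interchangeably.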
See also
Dash#Swung dash
Tilde#Unicode and Shift JIS encoding of wave dash
Japanese punctuation#Wave dash
Code reference
References
Encodings of Japanese
Typographical symbols
Punctuation | Wave dash | Mathematics | 174 |
2,398,993 | https://en.wikipedia.org/wiki/Valsartan | Valsartan, sold under the brand name Diovan among others, is a medication used to treat high blood pressure, heart failure, and diabetic kidney disease. It belongs to a class of medications referred to as angiotensin II receptor blockers (ARBs). It is a reasonable initial treatment for high blood pressure. It is taken by mouth.
Common side effects include feeling tired, dizziness, high blood potassium, diarrhea, and joint pain. Other serious side effects may include kidney problems, low blood pressure, and angioedema. Use in pregnancy may harm the baby and use when breastfeeding is not recommended. It is an angiotensin II receptor antagonist and works by blocking the effects of angiotensin II.
Valsartan was patented in 1990, and came into medical use in 1996. It is available as a generic medication. In 2022, it was the 117th most commonly prescribed medication in the United States, with more than 5 million prescriptions.
Medical uses
Valsartan is used to treat high blood pressure, heart failure, and to reduce death for people with left ventricular dysfunction after having a heart attack.
High blood pressure
Valsartan (and other ARBs) are an appropriate initial treatment option for most people with high blood pressure and no other coexisting conditions, as are ACE inhibitors, thiazide diuretics and calcium channel blockers. If patients have coexisting diabetes or kidney disease, ARBs or ACE inhibitors may be considered over other classes of blood pressure medicines.
Heart failure
Valsartan has reduced rates of mortality and heart failure hospitalisations when used alone or in combination with beta blockers in the treatment of heart failure. Importantly, the combination of valsartan and ACE inhibitors has not shown morbidity or mortality benefits but rather increases mortality risk when added to combination beta blocker and ACE inhibitor therapy, and increases the risk of adverse events like hyperkalaemia, hypotension and renal failure. As shown in the PARADIGM-HF study, valsartan combined with sacubitril for the treatment of heart failure, significantly reduced all cause and cardiovascular mortality and hospitalisations due to heart failure.
Diabetic kidney disease
In people with type 2 diabetes, antihypertensive therapy with valsartan decreases the rate of progression of albuminuria (albumin in urine), promotes regression to normoalbuminuria and may reduce the rate of progression to end-stage kidney disease.
Contraindications
The packaging for valsartan includes a warning stating the drug should not be used with the renin inhibitor aliskiren in people with diabetes. It also states the safety of the drug in severe renal impairment has not been established.
Valsartan includes a black box warning for fetal toxicity. Discontinuation of these agents is recommended immediately after detection of pregnancy and an alternative medication should be started. Breastfeeding is not recommended.
Side effects
Side effects depend on the reason the medication is being used.
Heart failure
Adverse effects are based on a comparison versus placebo in people with heart failure. Most common side effects include dizziness (17% vs 9%), low blood pressure (7% vs 2%), and diarrhea (5% vs 4%). Less common side effects include joint pain, fatigue, and back pain (all 3% vs 2%).
Hypertension
Clinical trials for valsartan treatment for hypertension versus placebo demonstrate side effects like viral infection (3% vs 2%), fatigue (2% vs 1%) and abdominal pain (2% vs 1%). Minor side effects that occurred at >1% but were similar to rates from the placebo group include:
headache
dizziness
upper respiratory infection
cough
diarrhea
rhinitis/sinusitis
nausea
pharyngitis
edema
arthralgia
Kidney failure
People treated with ARBs such as valsartan, or with diuretics, are susceptible to deterioration of renal function if they develop conditions of low renal blood flow, such as abnormal narrowing of blood vessels in the kidney (renal artery stenosis), hypertension, heart failure, chronic kidney disease, severe congestive heart failure, or volume depletion. In these states renal function depends in part on the activity of the renin-angiotensin system, for example on efferent arteriolar vasoconstriction by angiotensin II: when blood flow to the kidneys is reduced, the kidney triggers angiotensin release to constrict blood vessels and maintain filtration pressure. Blocking this response can lead to acute kidney failure, oliguria, worsening azotemia, or a raised serum creatinine. If kidney function declines progressively or to a clinically significant degree, withholding or discontinuing valsartan is warranted.
Interactions
The US prescribing information lists the following drug interactions for valsartan:
Other inhibitors of the renin-angiotensin system may increase the risks of low blood pressure, kidney problems, and hyperkalemia.
Potassium-sparing diuretics, potassium supplements, salt substitutes containing potassium may increase the risk of hyperkalemia.
NSAIDs may increase the risk of kidney problems and may interfere with blood pressure-lowering effects.
Valsartan may increase the concentration of lithium.
Valsartan and other angiotensin-related blood pressure medications may interact with the antibiotics co-trimoxazole or ciprofloxacin to increase risk of sudden death due to cardiac arrest.
Food interaction
When taken with food, exposure to valsartan (measured as AUC) decreases by about 40% and peak plasma concentration (Cmax) by about 50% for the tablet formulation.
Pharmacology
Mechanism of action
Valsartan blocks the actions of angiotensin II, which include constricting blood vessels and activating aldosterone, to reduce blood pressure. The drug binds to angiotensin type I receptors (AT1), working as an antagonist. This mechanism of action is different than that of the ACE inhibitor drugs, which block the conversion of angiotensin I to angiotensin II. As valsartan acts at the receptor, it can provide more complete angiotensin II antagonism since angiotensin II is generated by other enzymes as well as ACE. Also, valsartan does not affect the metabolism of bradykinin like ACE inhibitors do.
Pharmacodynamics
Pharmacokinetics
The peak concentration of valsartan in plasma occurs 2 to 4 hours after dosing. AUC and Cmax values of valsartan are approximately linearly dose-dependent over the therapeutic dosing range. Owing to its relatively short elimination half-life, valsartan does not accumulate in plasma with repeated dosing.
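The connection between a short elimination half-life and negligible accumulation can be illustrated with the standard steady-state accumulation ratio R = 1/(1 − e^(−kτ)); the half-life and dosing interval below are assumed, illustrative values, not prescribing information:

```python
import math

def accumulation_ratio(half_life_h: float, dosing_interval_h: float) -> float:
    """Steady-state accumulation ratio R = 1 / (1 - e^(-k*tau)) for repeated dosing."""
    k = math.log(2) / half_life_h  # first-order elimination rate constant
    return 1 / (1 - math.exp(-k * dosing_interval_h))

# Assumed illustrative values: ~6 h elimination half-life, once-daily (24 h) dosing.
r = accumulation_ratio(6, 24)
print(f"accumulation ratio = {r:.2f}")  # ≈ 1.07, i.e. little accumulation
```

With a dosing interval of four half-lives, R = 1/(1 − 2⁻⁴) = 16/15, so steady-state levels are only about 7% above those after a single dose.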
Society and culture
Economics
In 2010, valsartan (trade name Diovan) achieved annual sales of $2.052 billion in the United States and $6.053 billion worldwide. The patents for valsartan and valsartan/hydrochlorothiazide expired in September 2012.
Combinations
Versions are available as the combinations valsartan/hydrochlorothiazide, valsartan/amlodipine, valsartan/amlodipine/hydrochlorothiazide, valsartan/nebivolol, and valsartan/sacubitril.
Valsartan is combined with amlodipine or hydrochlorothiazide (HCTZ) (or both) into single-pill formulations for treating hypertension with multiple drugs.
Valsartan is also available as the combination valsartan/sacubitril. It is used to treat heart failure with reduced ejection fraction.
Recalls
In July 2018, the European Medicines Agency (EMA) recalled certain batches of valsartan and valsartan/hydrochlorothiazide film-coated tablets distributed in 22 countries in the European Union. Zhejiang Huahai Pharmaceutical (ZHP) in Linhai, China, manufactured the bulk ingredient, which was contaminated with N-nitrosodimethylamine (NDMA), a carcinogen. The active pharmaceutical ingredient was subsequently imported by a number of generic drugmakers, including Novartis, and marketed in Europe and Asia under their subsidiary Sandoz labeling, and in the UK by Dexcel Pharma Ltd and Accord Healthcare.
Valsartan was recalled in Canada. Authorities believe the degree of contamination is negligible. In July 2018, The National Agency of Drug and Food Control (NA-DFC or Badan POM Indonesia) announced voluntary recalls for two products containing valsartan produced by Actavis Indonesia and Dipa Pharmalab Intersains. In July 2018, the US Food and Drug Administration (FDA) announced voluntary recalls of certain supplies of valsartan and valsartan/hydrochlorothiazide in the US distributed by Solco Healthcare LLC, Major Pharmaceuticals, and Teva Pharmaceutical Industries. Hong Kong's Department of Health initiated a similar recall. In August 2018, the FDA published two lengthy, updated lists, classifying hundreds of specific US products containing valsartan into those included versus excluded from the recall. A week later, the FDA cited two more drugmakers, Zhejiang Tianyu Pharmaceuticals of China and Hetero Labs Limited of India, as additional sources of the contaminated valsartan ingredient.
In September 2018, the FDA announced that retesting of all valsartan supplies had found a second carcinogenic impurity, N-nitrosodiethylamine (NDEA), in the recalled products made by ZHP in China and marketed in the US under the Torrent Pharmaceuticals (India) brand.
According to a 2018 Reuters analysis of national medicines agencies' records, more than 50 companies around the world have recalled valsartan mono-preparations or combination products manufactured from the tainted valsartan ingredient. The contamination has likely been present since 2012 when the manufacturing process was changed and approved by EDQM and FDA authorities. Based on inspections in late 2018, both agencies have suspended the Chinese and Indian manufacturers' certificates of suitability for the supply of valsartan in the EU and the US.
In 2019, many more preparations of valsartan and its combinations were recalled due to the presence of the contaminant NDMA.
In August 2020, the European Medicines Agency (EMA) provided guidance to marketing authorization holders on how to avoid the presence of nitrosamine impurities in human medicines and asked them to review all chemical and biological human medicines for the possible presence of nitrosamines and to test the products at risk.
The FDA issued revised guidelines about nitrosamine impurities in September 2024.
Shortages
Since July 2018, numerous recalls of losartan, valsartan and irbesartan drug products have caused marked shortages of these life saving medications in North America and Europe, particularly for valsartan. In March 2019, the FDA approved an additional generic version of valsartan to address the issue. According to the agency, the shortage of valsartan was resolved in April 2020, but the availability of the generic form remained unstable into July 2020. Pharmacies in the European Union were notified that the supply of the drug, particularly for higher dosage forms, would remain unstable well into December 2020.
Research
In people with impaired glucose tolerance, valsartan may decrease the incidence of developing diabetes mellitus type 2. However, the absolute risk reduction is small (less than 1 percent per year) and diet, exercise or other drugs, may be more protective. In the same study, no reduction in the rate of cardiovascular events (including death) was shown.
In one study of people without diabetes, valsartan reduced the risk of developing diabetes mellitus over amlodipine, mainly for those with hypertension.
A prospective study demonstrated a reduction in the incidence and progression of Alzheimer's disease and dementia.
References
Angiotensin II receptor antagonists
Biphenyls
Carboxamides
Carboxylic acids
Drugs developed by Novartis
Wikipedia medicine articles ready to translate
Tetrazoles | Valsartan | Chemistry | 2,522 |
7,955,890 | https://en.wikipedia.org/wiki/Bowhammer | In music, a bowhammer is a device used when playing a cymbalum to strike, pull across or pick the strings in order to make them vibrate and emit sound. It was devised to replace the mallets that were traditionally used to play the cymbalum. Unlike mallets, which almost exclusively are used for striking, the bowhammer allows for greater versatility, "expanding the sonic and expressive scope of an ancient instrument."
It consists of a ring, which holds the bowhammer on the finger, a shaped handle attached to the ring, and a 3 inch section of violin bow at the end. Bowhammers are typically worn in groups of eight, one on each finger except the thumb.
The tension on the bow allows the player to stroke the string or strike it. Additionally, the string can be plucked with the end of the bowhammer.
The bowhammer is a recent musical invention created by the musician Michael Masley, who is the premier user of this tool. The sound generated is so different from that generated by the traditional hammering of the cymbalom that the artist considers the bowhammer cymbalom a distinct instrument. The bowhammer may be usable on other string instruments, such as the guitar or hammered dulcimer, but no other uses have surfaced to date.
References
Musical instrument parts and accessories | Bowhammer | Technology | 272 |
18,953,009 | https://en.wikipedia.org/wiki/SucA%20RNA%20motif | The sucA RNA motif is a conserved RNA structure found in bacteria of the order Burkholderiales. RNAs within this motif are always found in the presumed 5' UTR of sucA genes. sucA encodes a subunit of an enzyme that participates in the citric acid cycle by synthesizing succinyl-CoA from 2-oxoglutarate. A part of the conserved structure overlaps predicted Shine-Dalgarno sequences (involved in ribosome binding) of the downstream sucA genes. Because of the RNA motif's consistent gene association and a possible mechanism for sequestering the ribosome binding site, it was proposed that the sucA RNA motif corresponds to a cis-regulatory element. Its relatively complex secondary structure could indicate that it is a riboswitch. However, the function of this RNA motif remains unknown.
See also
SucA-II RNA motif
SucC RNA motif
References
External links
Cis-regulatory RNA elements | SucA RNA motif | Chemistry | 199 |
44,273,187 | https://en.wikipedia.org/wiki/Interchromatin%20granule | An interchromatin granule is a cluster in the nucleus of a mammal cell which is enriched in pre-mRNA splicing factors. Interchromatin granules are located in the interchromatin regions of the mammalian Cell nuclei. They usually appear as irregularly shaped structures that vary in size and number. They can be observed by immunofluorescence microscopy.
Interchromatin granules are structures undergoing constant change, and their components exchange continuously with the nucleoplasm, active transcription sites and other nuclear locations.
Research on dynamics of interchromatin granules has provided new insight into the functional organisation of the nucleus and gene expression.
Interchromatin granule clusters vary in size anywhere between one and several micrometers in diameter. They are composed of 20–25 nm granules connected, in a beaded-chain appearance, by thin fibrils.
Interchromatin granule clusters (IGCs) have been proposed to be stockpiles of fully mature snRNPs and other RNA processing components that are ready to be used in the production of mRNA.
See also
Nuclear speckles – subnuclear structures that are enriched in pre-messenger RNA splicing factors
References
Cell biology | Interchromatin granule | Biology | 251 |
3,767,155 | https://en.wikipedia.org/wiki/168%20%28number%29 | 168 (one hundred [and] sixty-eight) is the natural number following 167 and preceding 169.
It is the number of hours in a week, or 7 × 24 hours. 168 is the fourth Dedekind number, and the 128th composite number.
Mathematics
Number theory
168 is the fourth Dedekind number, and one of sixty-five idoneal numbers. It is one less than a square (13² = 169), and equal to the product of the first two perfect numbers, 6 × 28 = 168.
There are 168 primes less than 1000.
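The count of 168 primes below 1000 can be verified with a short sieve; a minimal Python sketch:

```python
def count_primes_below(n: int) -> int:
    """Count primes less than n using a Sieve of Eratosthenes."""
    sieve = [True] * n
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            # Mark every multiple of p starting at p*p as composite.
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return sum(sieve)

print(count_primes_below(1000))  # → 168
```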
Composite index
The 128th composite number is 168, one of a few numbers whose composite index is the product of strings of its digits in decimal representation: 16 × 8 = 128.
The first nine with this property are the following:
The next such number is 198, where 19 × 8 = 152. 58 is the median of the twenty-one integers from 48 to 68, and 148 is the median of the forty-one integers from 128 to 168.
Totient and sigma values
For the Euler totient there is φ(168) = 48, where φ(48) = 16 is also equivalent to the number of divisors of 168; only eleven numbers have a totient of 48.
The number of divisors of 168 is 16, making it a largely composite number.
408, with a different permutation of the digits (where 048 is 48), has a totient of 128. So does the sum-of-divisors of 168, σ(168) = 480, as one of nine numbers in total to have a totient of 128.
48 sets the sixteenth record for the sum-of-divisors of positive integers, with σ(48) = 124, and the seventeenth record value is 168, attained by six numbers (60, 78, 92, 123, 143, and 167).
The difference between 168 and 48 is the factorial of five (168 − 48 = 120), while their sum is the cube of six (168 + 48 = 216).
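The totient and divisor facts above can be checked by brute force; a small Python sketch (direct counting, not an efficient implementation):

```python
from math import gcd

def divisors(n: int) -> list[int]:
    return [d for d in range(1, n + 1) if n % d == 0]

def totient(n: int) -> int:
    # Euler's totient by direct count of integers coprime to n.
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

assert totient(168) == 48
assert totient(48) == 16 == len(divisors(168))
assert sum(divisors(168)) == 480 and totient(480) == 128
# sigma(n) = 168 for each of the six numbers named above:
assert all(sum(divisors(n)) == 168 for n in (60, 78, 92, 123, 143, 167))
print("all checks pass")
```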
Idoneal number
Leonhard Euler noted 65 idoneal numbers (the most known, with at most two more possible): numbers D such that an integer expressible in only one way as x² + Dy² (with x² and Dy² coprime) yields a prime power or twice a prime power.
Of these, 168 is the forty-fourth, while the smallest number that is not idoneal is the fifth prime number, 11. The largest such number, 1848 (which equals the number of edges in the join of two cycle graphs of order 42), contains a total of thirty-two divisors whose arithmetic mean is 180 (the second-largest number to have a totient of 48). Preceding 1848 in the list of idoneal numbers is 1365, whose arithmetic mean of divisors is equal to 168 (while 1365 has a totient of 576 = 24²).
Where 48 is the 27th idoneal number, 408 is the 58th. On the other hand, the total count of known idoneal numbers (65), which is also equal to the sum of the ten consecutive integers 2 through 11, has a sum-of-divisors of 84 (or one-half of 168).
Numbers of the form 2ⁿ
In base 10, 168 is the largest of ninety-two known numbers n such that 2ⁿ does not contain all numerical digits from that base (i.e. 0, 1, 2, ..., 9).
2⁶⁸ is the first power of two whose decimal expansion contains all ten digits; between the next two such exponents there is an interval of ten integers. The median values between these pairs are 75 and 74, where the smaller of the two values, 74, is the composite index of 100.
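The claim for n = 168 itself is easy to check directly, since Python integers have arbitrary precision; a one-off sketch:

```python
# Check which decimal digits 2**168 misses.
digits_present = set(str(2 ** 168))
missing = sorted(set("0123456789") - digits_present)
print(f"2^168 = {2 ** 168}")
print(f"missing digit(s): {missing}")
assert missing, "2**168 should not contain every decimal digit"
```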
Cunningham number
As a number of the form bⁿ − 1 or bⁿ + 1 for positive integers b and n, and not itself a perfect power, 168 is the thirty-second Cunningham number, where it is one less than a square: 168 = 13² − 1.
On the other hand, 168 is one more than the third member of the fourth chain of nearly doubled primes of the first kind, (41, 83, 167), where 167 is the thirty-ninth prime (with 39 × 2 = 78). The smallest such chain is (2, 5, 11, 23, 47).
Eisenstein series
168 is also the fourth coefficient in the expansion of the Eisenstein series E₂, which also includes 144 and 96 (or 48 × 2) as the fifth and third coefficients, respectively; these have a sum of 240, which follows 144 and 187 in a chain of successive composite indices (187 is the 144th composite number, and 240 is the 187th); cf. the latter, 187, which has a sum-of-divisors of 216 = 6³, itself the 168th composite number.
Abstract algebra
168 is the number of maximal chains in the Bruhat order of the symmetric group S₄, which is the largest solvable symmetric group, with a total of 4! = 24 elements.
168 is the order of the second smallest nonabelian simple group, PSL(2, 7). From Hurwitz's automorphisms theorem, 168 is the maximum possible number of automorphisms of a genus 3 Riemann surface, this maximum being achieved by the Klein quartic, whose symmetry group is PSL(2, 7); the Fano plane, whose symmetry group is isomorphic to that of the Klein quartic, likewise has 168 symmetries.
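The order of PSL(2, 7), the simple group of order 168 mentioned above, can be verified by brute force over 2 × 2 matrices modulo 7; a short Python sketch:

```python
from itertools import product

q = 7
# Count 2x2 matrices over GF(7) with determinant 1: the group SL(2, 7).
sl2_order = sum(
    1
    for a, b, c, d in product(range(q), repeat=4)
    if (a * d - b * c) % q == 1
)
# PSL(2, 7) = SL(2, 7) / {I, -I}, so its order is half of |SL(2, 7)|.
print(sl2_order, sl2_order // 2)  # → 336 168
```

This matches the general formula |SL(2, q)| = q(q² − 1), which gives 7 × 48 = 336, halved to 168 after quotienting by the center.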
In other fields
Numerology
Some Chinese consider 168 a lucky number, because in Mandarin 一六八 ("168") is roughly homophonous with the phrase 一路發, which means "fortune all the way", or, as the United States Mint claims, "Prosperity Forever".
Notes
References
External links
Number Facts and Trivia: 168
The Number 168
The Positive Integer 168
Prime curiosities: 168
Integers | 168 (number) | Mathematics | 1,075 |
16,691,518 | https://en.wikipedia.org/wiki/Specialisterne | Specialisterne (The Specialists) is a Danish social innovator company using the characteristics of neurodivergent people (including autism/Asperger's, ADHD, OCD, and dyslexia) as competitive advantages in the business market.
Specialisterne provides services such as software testing, quality control, metadata management, data conversion, and logistics, as well as in other areas such as agriculture, for businesses in 26 countries.
In addition, Specialisterne assesses and trains people to meet the requirements of the business sector.
The company's branches, as well as the concept and name, are owned and operated by Specialisterne Foundation, with offices in 13 countries and local partnerships in others.
The company provides a working environment in which skills common to neurodivergent employees – such as pattern recognition, detection of deviations, attention to detail, and extended focus – are integral, and where the role of the management and staff is to create the best possible working environment for the employees with ASD.
Background
The youngest son of Specialisterne founder Thorkil Sonne, Lars, was diagnosed as having "infantile autism, normal intelligence", at age three, denoting an Autism Spectrum Disorder (ASD). Sonne became active in the Danish Autism Association, then president of a local chapter of Autism Denmark, for three years, where he learned that people with ASD seldom have a chance to use their special skills in the labour market.
After 15 years working with IT within telecommunication companies, Sonne knew the value of the skills he saw in people with ASD. With the support of his family, Sonne founded Specialisterne in 2004, based on a home mortgage and his family's belief in his vision.
Today
, Specialisterne has over 600 employees and has operated in 26 countries, with offices in 13 of them: Australia, Brazil, Canada, Denmark, France, Iceland, India, Ireland, Italy, Mexico, Spain, the UK, and the United States. A 2017 report by the Organisation for Economic Co-operation and Development (OECD) stated approximately 75% of the company's 50 IT employees in Denmark had a diagnosis within the autism spectrum. The head offices are in Copenhagen. The company includes training programs to access and build up personal, social and professional skills for people with ASD – no formal education or job experiences are expected. A number of appropriate strategies are used for individuals on the spectrum, including LEGO Mindstorms robot technology, helping to detect the strengths, the motivation and the development opportunities of the individual.
Services offered by Specialisterne include software testing, quality control, documentation, data entry, and logistics with a high attention to detail and accuracy for customers including TDC A/S, Grundfos, KMD, CSC, SAP, Microsoft, Parexel, Hewlett-Packard, and Oracle.
Specialisterne maintains a focus on transferring knowledge on how to turn disabilities into abilities. Speeches, workshops and courses are based on the method of positive thinking called The Dandelion Model. (The dandelion was symbolically chosen as a beneficial weed found in unexpected places, akin to the ethos of ASD including valuable skills.)
In December 2008, Thorkil Sonne donated all shares of Specialisterne to the Specialist People Foundation (later Specialisterne Foundation), a nonprofit organization founded by Sonne.
In September 2009, Specialisterne started a school where youth with ASD could get an education with focus on social development and interaction with its offices. The school is funded with help from the Lego Foundation and the Danish Ministry of Education.
In August 2010, Specialisterne opened in Scotland with David Farrell-Shaw as general manager. The Scottish company was a subsidiary of the social enterprise company Community Enterprise in Scotland (CEiS), funded by £700,000 from the Scottish government; the project also received £407,036 from the Big Lottery Fund and £30,000 from Glasgow City Council.
In October 2010, assisted by funding support from a European Commission's Lifelong Learning Programme called the Leonardo da Vinci programme (project number 2010-1-IS1-LEO05-00579), a project began to link Scotland, Denmark, Germany and Iceland. The Icelandic offering went live in January 2011.
In September 2012, Specialisterne Scotland closed, and a branch opened in the US called Specialisterne Minnesota. The following year, the Minnesota organization became Specialisterne USA, based in Delaware, and the company also opened an office in Canada. An Australian branch was founded in 2015, which then formed a partnership with the Dandelion Program and, in 2017, announced plans to form a partnership in New Zealand. The same year, a branch was founded in Milan, Italy. It has also opened branches in a number of other countries. A similar organisation in the United States, Aspiritech, is based on the same concept.
See also
auticon
References
Further reading
'Specialisterne: Sense & Details', Case Study, Harvard Business School
Management Today
The Independent
Official website
Autism-related organizations
Service companies of Denmark
Companies of Scotland
Disability organizations based in Denmark
Accessibility
Danish companies established in 2003 | Specialisterne | Engineering | 1,067 |
46,918,469 | https://en.wikipedia.org/wiki/Lai-Sang%20Young | Lai-Sang Lily Young (, born 1952) is a Hong Kong-born American mathematician who holds the Henry & Lucy Moses Professorship of Science and is a professor of mathematics and neural science at the Courant Institute of Mathematical Sciences of
New York University. Her research interests include dynamical systems, ergodic theory, chaos theory, probability theory, statistical mechanics, and neuroscience. She is particularly known for introducing the method of Markov returns in 1998, which she used to prove exponential decay of correlations in Sinai billiards and other hyperbolic dynamical systems.
Education and career
Although born and raised in Hong Kong, Young came to the US for her education, earning a bachelor's degree from the University of Wisconsin–Madison in 1973. She moved to the University of California, Berkeley for her graduate studies, earning a master's degree in 1976 and completing her doctorate in 1978, under the supervision of Rufus Bowen. She taught at Northwestern University from 1979 to 1980, Michigan State University from 1980 to 1986, the University of Arizona from 1987 to 1990, and the University of California, Los Angeles from 1991 to 1999. She has been the Moses Professor at NYU since 1999.
Awards and honors
Young became a Sloan Fellow in 1985, and a Guggenheim Fellow in 1997.
In 1993, Young was given the Ruth Lyttle Satter Prize in Mathematics of the American Mathematical Society "for her leading role in the investigation of the statistical (or ergodic) properties of dynamical systems". This is a biennial award for outstanding research contributions by a female mathematician.
In 2004, she was elected as a fellow of the American Academy of Arts and Sciences.
Young was an invited speaker at the International Congress of Mathematicians in 1994, and an invited plenary speaker at the 2018 International Congress of Mathematicians.
In 2005, she presented the Noether Lecture of the Association for Women in Mathematics; her talk was entitled "From Limit Cycles to Strange Attractors". In 2007, she presented the Sonia Kovalevsky lecture, jointly sponsored by the AWM and the Society for Industrial and Applied Mathematics.
In 2020 she was elected a member of the National Academy of Sciences. She is the recipient of the 2021 Jürgen Moser Lecture prize "for her sustained and deep contributions to the theory of non-uniformly hyperbolic dynamical systems." In 2023, she was awarded the Heinz Hopf Prize and in 2024 the Rolf Schock Prize.
Selected publications
.
.
.
.
.
.
.
References
External links
Young's Homepage
(Plenary Lecture 8)
Kevin Hartnett, A Mathematical Model Unlocks the Secrets of Vision, Quanta Magazine, 21 August 2019
1952 births
Living people
20th-century American mathematicians
20th-century American women mathematicians
21st-century American mathematicians
21st-century American women mathematicians
Courant Institute of Mathematical Sciences faculty
Dynamical systems theorists
Fellows of the American Academy of Arts and Sciences
Members of the United States National Academy of Sciences
Hong Kong emigrants to the United States
Hong Kong mathematicians
Michigan State University faculty
Northwestern University faculty
University of Arizona faculty
University of California, Berkeley alumni
University of California, Los Angeles faculty
University of Wisconsin–Madison alumni | Lai-Sang Young | Mathematics | 630 |
2,060,183 | https://en.wikipedia.org/wiki/Barnes%20G-function | In mathematics, the Barnes G-function G(z) is a function that is an extension of superfactorials to the complex numbers. It is related to the gamma function, the K-function and the Glaisher–Kinkelin constant, and was named after mathematician Ernest William Barnes. It can be written in terms of the double gamma function.
Formally, the Barnes G-function is defined in the following Weierstrass product form:

G(1+z) = (2\pi)^{z/2} \exp\left(-\frac{z + z^2(1+\gamma)}{2}\right) \prod_{k=1}^{\infty} \left\{ \left(1 + \frac{z}{k}\right)^{k} \exp\left(\frac{z^2}{2k} - z\right) \right\}

where \gamma is the Euler–Mascheroni constant, exp(x) = e^x is the exponential function, and \Pi denotes multiplication (capital pi notation).
The integral representation, which may be deduced from the relation to the double gamma function, is
As an entire function, G is of order two, and of infinite type. This can be deduced from the asymptotic expansion given below.
Functional equation and integer arguments
The Barnes G-function satisfies the functional equation

G(z+1) = \Gamma(z)\, G(z)

with normalisation G(1) = 1. Note the similarity between the functional equation of the Barnes G-function and that of the Euler gamma function:

\Gamma(z+1) = z\, \Gamma(z)
The functional equation implies that G takes the following values at integer arguments:

G(n) = \begin{cases} 0 & \text{if } n = 0, -1, -2, \dots \\ \prod_{i=0}^{n-2} i! & \text{if } n = 1, 2, \dots \end{cases}

(in particular, G(1) = G(2) = 1) and thus

G(n) = \frac{(\Gamma(n))^{n-1}}{K(n)}

where \Gamma denotes the gamma function and K denotes the K-function. The functional equation uniquely defines the Barnes G-function if the convexity condition

\frac{d^3}{dx^3} \log G(x) \geq 0 \quad (x > 0)
is added. Additionally, the Barnes G-function satisfies a duplication formula involving the Glaisher–Kinkelin constant A.
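The integer values and the recursion can be checked numerically. A minimal sketch in Python, using the superfactorial values G(n) = 0!·1!·⋯·(n−2)! and the recursion G(z+1) = Γ(z)G(z), with Γ(n) = (n−1)! at integer arguments:

```python
from math import factorial

def barnes_g_int(n):
    """Barnes G at a positive integer: G(n) = 0! * 1! * ... * (n-2)!."""
    prod = 1
    for i in range(1, n - 1):
        prod *= factorial(i)
    return prod

# G(1) = G(2) = G(3) = 1, G(4) = 2, G(5) = 12, ...
assert [barnes_g_int(n) for n in range(1, 6)] == [1, 1, 1, 2, 12]

# Functional equation G(n+1) = Gamma(n) * G(n), with Gamma(n) = (n-1)!:
for n in range(1, 10):
    assert barnes_g_int(n + 1) == factorial(n - 1) * barnes_g_int(n)
```

The empty product convention makes G(1) = G(2) = 1 automatically.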
Characterisation
Similar to the Bohr–Mollerup theorem for the gamma function, for a constant , we have for
and for
as .
Reflection formula
The difference equation for the G-function, in conjunction with the functional equation for the gamma function, can be used to obtain the following reflection formula for the Barnes G-function (originally proved by Hermann Kinkelin):

\log G(1-z) = \log G(1+z) - z \log 2\pi + \int_0^z \pi x \cot \pi x \, dx
The log-tangent integral on the right-hand side can be evaluated in terms of the Clausen function (of order 2), as is shown below:
The proof of this result hinges on the following evaluation of the cotangent integral: introducing the notation for the log-cotangent integral, and using the fact that , an integration by parts gives
Performing the integral substitution gives
The Clausen function – of second order – has the integral representation
However, within the interval 0 < x < 2\pi, the absolute value sign in the integrand can be omitted, since in that range the 'half-sine' function in the integral is strictly positive. Comparing this definition with the result above for the log-tangent integral, the following relation clearly holds:
Thus, after a slight rearrangement of terms, the proof is complete:
Using the relation and dividing the reflection formula by a factor of gives the equivalent form:
Adamchik (2003) has given an equivalent form of the reflection formula, but with a different proof.
Replacing z with − z in the previous reflection formula gives, after some simplification, the equivalent formula shown below (involving Bernoulli polynomials):
Taylor series expansion
By Taylor's theorem, and considering the logarithmic derivatives of the Barnes function, the following series expansion can be obtained:

\log G(1+z) = \frac{z}{2}\log 2\pi - \frac{z + (1+\gamma)z^2}{2} + \sum_{k=2}^{\infty} (-1)^{k} \frac{\zeta(k)}{k+1}\, z^{k+1}

It is valid for |z| < 1. Here, \zeta is the Riemann zeta function:

\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s}
Exponentiating both sides of the Taylor expansion gives:
Comparing this with the Weierstrass product form of the Barnes function gives the following relation:
Multiplication formula
Like the gamma function, the G-function also has a multiplication formula:
where is a constant given by:
Here \zeta' is the derivative of the Riemann zeta function and A is the Glaisher–Kinkelin constant.
Absolute value
It holds true that G(\overline{z}) = \overline{G(z)}, thus |G(z)|^2 = G(z)\, G(\overline{z}). From this relation and by the above presented Weierstrass product form one can show that
This relation is valid for arbitrary x = \operatorname{Re}(z) and y = \operatorname{Im}(z). If x = 0, then the formula below is valid instead:
for arbitrary real y.
Asymptotic expansion
The logarithm of G(z + 1) has the following asymptotic expansion, as established by Barnes:

\log G(z+1) = \frac{z^2}{2}\log z - \frac{3z^2}{4} + \frac{z}{2}\log 2\pi - \frac{1}{12}\log z + \left(\frac{1}{12} - \log A\right) + \sum_{k=1}^{N} \frac{B_{2k+2}}{4k(k+1)\, z^{2k}} + O\!\left(\frac{1}{z^{2N+2}}\right)
Here the B_n are the Bernoulli numbers and A is the Glaisher–Kinkelin constant. (Note that, somewhat confusingly, at the time of Barnes the Bernoulli number B_{2n} would have been written as B_n, but this convention is no longer current.) This expansion is valid for z in any sector not containing the negative real axis, with |z| large.
Relation to the log-gamma integral
The parametric log-gamma integral can be evaluated in terms of the Barnes G-function:

\int_0^z \log \Gamma(x)\, dx = \frac{z(1-z)}{2} + \frac{z}{2}\log 2\pi + z \log \Gamma(z) - \log G(1+z)
The proof is somewhat indirect, and involves first considering the logarithmic difference of the gamma function and Barnes G-function:
where
and \gamma is the Euler–Mascheroni constant.
Taking the logarithm of the Weierstrass product forms of the Barnes G-function and gamma function gives:
A little simplification and re-ordering of terms gives the series expansion:
Finally, take the logarithm of the Weierstrass product form of the gamma function, and integrate over the interval to obtain:
Equating the two evaluations completes the proof:
And since then,
References
Number theory
Special functions
Gamma and related functions | Barnes G-function | Mathematics | 1,048 |
77,827,597 | https://en.wikipedia.org/wiki/Masupirdine | Masupirdine is an investigational new drug that is being evaluated for the treatment of agitation in Alzheimer's dementia. It is a selective 5-HT6 receptor antagonist.
References
Indoles
4-Methylpiperazin-1-yl compounds
Sulfonamides
Bromobenzene derivatives
Methoxy compounds | Masupirdine | Chemistry | 68 |
30,054,770 | https://en.wikipedia.org/wiki/Biham%E2%80%93Middleton%E2%80%93Levine%20traffic%20model | The Biham–Middleton–Levine traffic model is a self-organizing cellular automaton traffic flow model. It consists of a number of cars represented by points on a lattice with a random starting position, where each car may be one of two types: those that only move downwards (shown as blue in this article), and those that only move towards the right (shown as red in this article). The two types of cars take turns to move. During each turn, all the cars for the corresponding type advance by one step if they are not blocked by another car. It may be considered the two-dimensional analogue of the simpler Rule 184 model. It is possibly the simplest system exhibiting phase transitions and self-organization.
History
The Biham–Middleton–Levine traffic model was first formulated by Ofer Biham, A. Alan Middleton, and Dov Levine in 1992. Biham et al. found that as the density of traffic increased, the steady-state flow of traffic suddenly went from smooth flow to a complete jam. In 2005, Raissa D'Souza found that for some traffic densities, there is an intermediate phase characterized by periodic arrangements of jams and smooth flow. In the same year, Angel, Holroyd and Martin were the first to rigorously prove that for densities close to one, the system will always jam. Later, in 2006, Tim Austin and Itai Benjamini found that for a square lattice of side N, the model will always self-organize to reach full speed if there are fewer than N/2 cars.
Lattice space
The cars are typically placed on a square lattice that is topologically equivalent to a torus: that is, cars that move off the right edge would reappear on the left edge; and cars that move off the bottom edge would reappear on the top edge.
There has also been research in rectangular lattices instead of square ones. For rectangles with coprime dimensions, the intermediate states are self-organized bands of jams and free-flow with detailed geometric structure, that repeat periodically in time. In non-coprime rectangles, the intermediate states are typically disordered rather than periodic.
Phase transitions
Despite the simplicity of the model, it has two highly distinguishable phases – the jammed phase and the free-flowing phase. For low numbers of cars, the system will usually organize itself to achieve a smooth flow of traffic. In contrast, if there is a high number of cars, the system will become jammed to the extent that no single car will move. Typically, in a square lattice, the transition occurs when the number of cars is around 32% of the number of possible spaces in the lattice.
Intermediate phase
The intermediate phase occurs close to the transition density, combining features of both the jammed and free-flowing phases. There are principally two intermediate phases – disordered (which could be meta-stable) and periodic (which are provably stable). On rectangular lattices with coprime dimensions, only periodic orbits exist. In 2008, periodic intermediate phases were also observed in square lattices. Yet, on square lattices, disordered intermediate phases are more frequently observed and tend to dominate at densities close to the transition region.
Rigorous analysis
Despite the simplicity of the model, rigorous analysis is very nontrivial. Nonetheless, there have been mathematical proofs regarding the Biham–Middleton–Levine traffic model. Proofs so far have been restricted to the extremes of traffic density. In 2005, Alexander Holroyd et al. proved that for densities sufficiently close to one, the system will have no cars moving infinitely often. In 2006, Tim Austin and Itai Benjamini proved that the model will always reach the free-flowing phase if the number of cars is less than half the edge length for a square lattice.
Non-orientable surfaces
The model is typically studied on the orientable torus, but it is possible to implement the lattice on a Klein bottle. When the red cars reach the right edge, they reappear on the left edge except flipped vertically; the ones at the bottom are now at the top, and vice versa. More formally, for every , a red car exiting the site would enter the site . It is also possible to implement it on the real projective plane. In addition to flipping the red cars, the same is done for the blue cars: for every , a blue car exiting the site would enter the site .
The behaviour of the system on the Klein bottle is much more similar to the one on the torus than the one on the real projective plane. For the Klein bottle setup, the mobility as a function of density starts to decrease slightly sooner than in the torus case, although the behaviour is similar for densities greater than the critical point. The mobility on the real projective plane decreases more gradually for densities from zero to the critical point. In the real projective plane, local jams may form at the corners of the lattice even though the rest of the lattice is free-flowing.
Randomization
A randomized variant of the BML traffic model, called BML-R, was studied in 2010. Under periodic boundaries, instead of updating all cars of the same colour at once during each step, the randomized model performs N² updates (where N is the side length of the presumably square lattice): each time, a random cell is selected and, if it contains a car, it is moved to the next cell if possible. In this case, the intermediate state observed in the usual BML traffic model does not exist, due to the non-deterministic nature of the randomized model; instead the transition from the jammed phase to the free-flowing phase is sharp.
Under open boundary conditions, instead of having cars that drive off one edge wrapping around the other side, new cars are added on the left and top edges with probability and removed from the right and bottom edges respectively. In this case, the number of cars in the system can change over time, and local jams can cause the lattice to appear very different from the usual model, such as having coexistence of jams and free-flowing areas; containing large empty spaces; or containing mostly cars of one type.
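For comparison with the deterministic rule, the periodic-boundary BML-R update can be sketched as follows (a hypothetical minimal implementation, assuming N² single-cell update attempts per step, each attempt picking one cell uniformly at random):

```python
import random

EMPTY, RIGHT, DOWN = 0, 1, 2

def bml_r_step(grid, rng=random):
    """One BML-R step on an N x N torus: N*N random single-car updates.
    Each attempt picks a uniform cell; an occupied cell's car advances one
    step in its fixed direction if the target cell is currently empty."""
    n = len(grid)
    for _ in range(n * n):
        r, c = rng.randrange(n), rng.randrange(n)
        car = grid[r][c]
        if car == EMPTY:
            continue
        tr, tc = (r, (c + 1) % n) if car == RIGHT else ((r + 1) % n, c)
        if grid[tr][tc] == EMPTY:
            grid[r][c], grid[tr][tc] = EMPTY, car
```

Unlike the synchronous rule, updates here are sequential and in place, so a car may move into a cell vacated earlier in the same step; under periodic boundaries the number of cars of each colour is conserved.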
References
External links
CUDA implementation by Daniel Lu
WebGL implementation by Jason Davies
JavaScript implementation by Maciej Baron
Cellular automaton rules
Lattice models
Articles containing video clips
Traffic flow | Biham–Middleton–Levine traffic model | Physics,Materials_science | 1,297 |
40,333,285 | https://en.wikipedia.org/wiki/Cis-2-Decenoic%20acid | cis-2-Decenoic acid is an unsaturated fatty acid. It is a colorless oil.
Preparation and occurrence
The compound can be prepared from 1-iodonon-1-ene by lithium halogen exchange followed by carbonation.
cis-2-Decenoic acid is produced by Pseudomonas aeruginosa. It may have potential in fighting the biofilms implicated in infectious diseases, which are present in more than 60% of hospital-acquired infections.
References
Fatty acids
Enoic acids | Cis-2-Decenoic acid | Chemistry | 106 |
60,998,533 | https://en.wikipedia.org/wiki/Estradiol%20valerate/gestonorone%20caproate | Estradiol valerate/gestonorone caproate (EV/GC), known by the developmental code names SH-834 and SH-8.0834, is a high-dose combination medication of estradiol valerate (EV), an estrogen, and gestonorone caproate (GC; norhydroxyprogesterone caproate), a progestin, which was developed and studied by Schering in the 1960s and 1970s for potential use in the treatment of breast cancer in women but was ultimately never marketed. It contained 90 mg EV and 300 mg GC in each 3 mL of oil solution and was intended for use by intramuscular injection once a week. The combination has also been studied incidentally in the treatment of ovarian cancer.
Both high-dose estrogens and high-dose progestogens have been found to be independently effective in the treatment of breast cancer in women. High-dose estrogens show greater and more consistent effectiveness than high-dose progestogens for this indication. The combination of an estrogen and progestogen, specifically estradiol benzoate and progesterone, was first studied in breast cancer in rodents and women by Charles Huggins and colleagues in 1962. Initially progesterone and hydroxyprogesterone caproate were used as the progestogen component in such studies; the need for a more potent progestogen in such combinations led to the development of EV/GC, which was first reported in the treatment of breast cancer in women in 1966. GC is a relatively pure progestogen that has about 5- to 10-fold the progestogenic potency of hydroxyprogesterone caproate in humans. New reports on EV/GC in breast cancer continued until 1976.
Both progesterone and hydroxyprogesterone caproate, which are relatively pure progestogens, have been found to have modest or negligible effectiveness when employed by themselves in the treatment of breast cancer in women. Conversely, progestins with off-target glucocorticoid and/or androgenic activity, such as medroxyprogesterone acetate, megestrol acetate, and 19-nortestosterone derivatives, have been found to have greater and more clinically useful effectiveness in comparison. This has raised the possibility that the beneficial therapeutic effects of progestogens in breast cancer may be more related to their off-target activity than their progestogenic activity. In accordance, a study found that the effectiveness of an estrogen alone and the combination of EV/GC in the treatment of breast cancer in women was not significantly different. This was the last study of EV/GC to be published.
See also
High-dose estrogen/pseudopregnancy
List of combined sex-hormonal preparations § Estrogens and progestogens
References
Abandoned drugs
Combined estrogen–progestogen formulations | Estradiol valerate/gestonorone caproate | Chemistry | 630 |
72,766,444 | https://en.wikipedia.org/wiki/Buchwaldoboletus%20kivuensis | Buchwaldoboletus kivuensis is a species of bolete fungus in the family Boletaceae native to Africa.
Taxonomy and naming
Originally described by Paul Heinemann in 1951 as Gyrodon lignicola, it was given its current name by Ernst Both and Beatriz Ortiz-Santana in A preliminary survey of the genus Buchwaldoboletus, published in "Bulletin of the Buffalo Society of Natural Sciences" in 2011.
Description
The cap is convex, tomentose-pulverulent and dry. Its color is cinnamon-brown. Easily peeled off the mushroom, the skin is separated from the flesh by a thin gelatinous layer. The pores are small and angular, ochraceous-decurrent, and the pore surface stains blue with injury. The stipe is cylindrical and eccentric, and there is a yellow mycelium at the stipe base.
Spores measure 5.3–6.8 by 3.3–4.7 μm.
Distribution
Buchwaldoboletus kivuensis has been recorded in Congo, in the region of lakes Edward and Kivu, at an altitude of 1650 m. It grows on soil covered with dry branches, and in Coffea arabica and Eucalyptus plantations.
References
External links
Boletaceae
Fungi described in 1951
Fungi of Africa
Fungus species | Buchwaldoboletus kivuensis | Biology | 268 |
36,347,343 | https://en.wikipedia.org/wiki/Adolf%20Hir%C3%A9my-Hirschl | Adolf Hirémy-Hirschl (1860–1933) was a Hungarian, Jewish artist known for historical and mythological painting, particularly of subjects pertaining to ancient Rome. Some of his major history paintings have been lost, and many of his smaller works were retained by his heirs until the early 1980s. Although he was one of the most successful artists of fin-de-siècle Vienna, these circumstances, along with the rise of Gustav Klimt and the Vienna Secessionists, put his reputation in eclipse.
Education and career
Hirémy-Hirschl was born 31 January 1860 in Temesvár, then a part of Hungary, but at an early age he went to Vienna to study. He received a scholarship to attend the Akademie der bildenden Künste in 1878. He won his first prize two years later with Farewell: Scene from Hannibal Crossing the Alps, followed in 1882 by a prize that allowed him to travel to Rome.
His time in Rome was a major influence on his choice of subject matter. After returning to Vienna, he produced the acclaimed large-scale canvas The Plague in Rome (1884), a work that is now lost. He enjoyed a successful career with numerous commissions and high praise for his historical and allegorical works, culminating in the Imperial Prize in 1891. During the rise of Klimt and the Vienna Secession movement, he began using the name Adolf Hirémy and moved to Rome, where he spent the last 35 years of his life as an eminent member of the expatriate art community. In 1904, seventy of his works were exhibited at a retrospective. He was admitted to the Accademia di San Luca in 1911.
One of his last works was Sic Transit … (1912), an immense allegorical polyptych on the fall of the Roman Empire and the rise of Christianity. His heirs retained possession of his studio for decades following his death. A large number of his drawings, watercolors, pastels, and oil sketches became public only in the early 1980s.
He died in Rome on 7 April 1933 and was buried in the Protestant Cemetery, Rome (gravestone S647, tomb number 393, Zone 1, Row 15, Plot 54).
As an artist
Hirémy-Hirschl is regarded as an accomplished draughtsman. His numerous figure and drapery studies in charcoal or chalk were mostly intended to be preparatory studies for paintings. His studies for Souls on the Banks of the Acheron and Sic Transit … were often executed on blue, lavender, or orange paper that enhanced the play of light in relation to the forms he drew. His female nudes are known for their "directness and overt sexuality". He also produced landscape studies in pastel, watercolor and gouache. Fragmentation is characteristic of his drawings, "the sustained attempt to perfect the single part" which at the same time represented a "means of escape from completion and synthesis".
Some of his paintings are considered Symbolist. Ahasuerus at the End of the World (1888) is executed in a restricted palette of blue, gray, black, white, with touches of gold and lingering warmth in the flesh of the foregrounded female nude. The title figure "is the last man in the polar wilderness, caught between the angel of hope and the specter of death. Before him lies a fallen female figure, the personification of dead humanity, as crows circle ominously. … The primary light appears to radiate from the distant angel, who hovers before a stormy sky."
Works
Paintings by Hirémy-Hirschl include:
Farewell: Scene from Hannibal Crossing the Alps (1880)
The Plague of Rome (1884, lost)
Saint Cecilia
Prometheus
The Vandals Entering Rome
Ahasuerus at the End of the World (1888)
Souls on the Banks of the Acheron (1898)
The Tomb of Achilles
Sic Transit … (1912)
Between Scylla and Charybdis (1910)
The largest collection of the work of Adolf Hirémy-Hirschl in the United States is held by the Jack Daulton Collection in Los Altos Hills, California.
References
External links
Study of a Standing Female Nude, brown chalk highlighted with white chalk on light brown paper, artnet
Draughtsmen
1860 births
1933 deaths
19th-century Hungarian painters
20th-century Hungarian painters
Jewish painters
Jewish Hungarian artists
Hungarian male painters
19th-century Hungarian male artists
20th-century Hungarian male artists | Adolf Hirémy-Hirschl | Engineering | 894 |
1,890,618 | https://en.wikipedia.org/wiki/Action%20spectrum | An action spectrum is a graph of the rate of biological effectiveness plotted against wavelength of light. It is related to absorption spectrum in many systems. Mathematically, it describes the inverse quantity of light required to evoke a constant response. It is very rare for an action spectrum to describe the level of biological activity, since biological responses are often nonlinear with intensity.
Action spectra are typically written as unit-less responses with peak response of one, and it is also important to distinguish if an action spectrum refers to quanta at each wavelength (mol or log-photons), or to spectral power (W).
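These conventions can be illustrated with a toy calculation (the threshold doses below are hypothetical, not measured data): the action spectrum is the reciprocal of the dose needed to evoke a fixed criterion response, rescaled so the peak value is one.

```python
def action_spectrum(threshold_dose):
    """Convert {wavelength: dose needed for a constant response} into a
    unit-less action spectrum: sensitivity = 1/dose, peak scaled to 1."""
    sensitivity = {wl: 1.0 / dose for wl, dose in threshold_dose.items()}
    peak = max(sensitivity.values())
    return {wl: s / peak for wl, s in sensitivity.items()}

# Hypothetical doses (arbitrary units) at three wavelengths in nm:
spectrum = action_spectrum({450: 2.0, 550: 8.0, 660: 1.0})
print(spectrum)  # {450: 0.5, 550: 0.125, 660: 1.0}
```

The wavelength needing the smallest dose (660 nm here) gets the peak response of one, matching the convention that action spectra report relative sensitivity rather than the biological response itself.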
It shows which wavelength of light is most effectively used in a specific chemical reaction. Some reactants are able to use specific wavelengths of light more effectively to complete their reactions. For example, chlorophyll is much more efficient at using the red and blue regions than the green region of the light spectrum to carry out photosynthesis. Therefore, the action spectrum graph would show spikes above the wavelengths representing the colours red and blue.
The first action spectrum was made by T. W. Engelmann, who split light into its components by the prism and then illuminated Cladophora placed in a suspension of aerobic bacteria. He found that bacteria accumulated in the region of blue and red light of the split spectrum. He thus discovered the effect of the different wavelengths of light on photosynthesis and plotted the first action spectrum of photosynthesis.
Action spectra have a wide variety of uses in biological and chemical research, particularly in understanding the effect of ultraviolet (UV) light on biological molecules and systems. UV light wavelengths range between 295 and 400 nm and are known to induce skin and DNA damage. As a result, action spectra have been used to measure the efficiency of different light wavelengths in disinfecting water, the rate and mechanism of photodegradation of folic acid in the blood, and the chirality of molecules to determine secondary structure. Further examples include the suppression of melatonin by wavelength and a variety of hazard functions related to tissue damage from visible and near-visible light.
See also
Photosynthetically active radiation
Photosynthesis
Absorption spectrum
Chlorophyll a
References
External links
Plant Physiology Online: Principles of Spectrophotometry
Photosynthesis
Absorption spectroscopy | Action spectrum | Physics,Chemistry,Biology | 466 |
2,758,531 | https://en.wikipedia.org/wiki/Gliese%20581 | Gliese 581 () is a red dwarf star of spectral type M3V which hosts a planetary system, away from Earth in the constellation Libra. Its estimated mass is about a third of that of the Sun, and it is the 101st closest known star system to the Sun. Gliese 581 is one of the oldest, least active M dwarfs known. Its low stellar activity improves the likelihood of its planets retaining significant atmospheres, and lessens the sterilizing impact of stellar flares.
History of observations
Gliese 581 has been known since at least 1886, when it was included in Eduard Schönfeld's Southern Durchmusterung (SD)—the fourth part of the Bonner Durchmusterung. The corresponding designation is BD -7 4003.
Characteristics
The name Gliese 581 refers to the catalog number from the 1957 survey Gliese Catalogue of Nearby Stars of 965 stars located within 20 parsecs of the Earth. Other names of this star include BD-07° 4003 (BD catalogue, first known publication) and HO Librae (variable star designation). It does not have an individual name such as Sirius or Procyon. The star is a red dwarf with spectral type M3V, located 20.5 light-years away from Earth. It is located about two degrees north of Beta Librae, the brightest star in the Libra constellation. Its mass is estimated to be approximately a third that of the Sun, and it is the 101st closest known star system (including brown dwarfs) to the Sun.
An M-class dwarf star such as Gliese 581 has a much lower mass than the Sun, causing the core region of the star to fuse hydrogen at a significantly lower rate. From the apparent magnitude and distance, astronomers have estimated an effective temperature of 3200 K and a visual luminosity of 0.2% of that of the Sun. However, a red dwarf such as Gliese 581 radiates primarily in the near infrared, with peak emission at a wavelength of roughly 830 nm (estimated using Wien's displacement law, which assumes the star radiates as a black body), so such an estimate will underestimate the star's total luminosity. (For comparison, the peak emission of the Sun is roughly 530 nm, in the middle of the visible part of the spectrum.) When radiation over the entire spectrum is taken into account (not just the part that humans are able to see), something known as the bolometric correction, this star has a bolometric luminosity 1.2% of the Sun's total luminosity. A planet would need to be situated much closer to this star in order to receive a comparable amount of energy as the Earth. The region of space around a star where a planet would receive roughly the same energy as the Earth is sometimes termed the "Goldilocks Zone", or, more prosaically, the habitable zone. The extent of such a zone is not fixed and is highly specific for each planetary system. Gliese 581 is a very old star. Its slow rotation makes it very inactive, making it better suited than most red dwarfs for having habitable planets.
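The "much closer" claim follows from inverse-square scaling of stellar flux: a planet receives Earth-like total irradiation at a distance of roughly √(L/L☉) astronomical units. A sketch using the 1.2% bolometric luminosity quoted above:

```python
import math

def earth_flux_distance_au(luminosity_in_suns):
    """Distance (in AU) at which a planet receives the same total stellar
    flux as Earth receives from the Sun, by the inverse-square law."""
    return math.sqrt(luminosity_in_suns)

# Gliese 581: bolometric luminosity about 1.2% of the Sun's.
d = earth_flux_distance_au(0.012)
print(f"{d:.2f} AU")  # about 0.11 AU, far inside Mercury's orbit (0.39 AU)
```

This is only a flux estimate, not a full habitable-zone calculation, which also depends on the stellar spectrum and the planet's atmosphere.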
Gliese 581 is classified as a variable star of the BY Draconis type, and has been given the variable star designation HO Librae. This is a star that exhibits variability because of the presence of star spots combined with the rotation of the star. However, the measured variability is close to the margin of error, and, if real, is most likely a long term variability. Its brightness is stable to 1%. Gliese 581 emits X-rays.
Planetary system
The Gliese 581 planetary system is the gravitationally bound system comprising the star Gliese 581 and the objects that orbit it. The system is known to consist of at least three planets discovered using the radial velocity method, along with a debris disk. The system's notability is due primarily to early exoplanetology discoveries, between 2008 and 2010, of possible terrestrial planets orbiting within its habitable zone and the system's relatively close proximity to the Solar System at 20 light years away. However, its observation history has been controversial due to false detections, and the radial velocity method yields little information about the planets themselves beyond their orbit and mass.
The confirmed planets are believed to be located close to the star with near-circular orbits. In order of distance from the star, these are Gliese 581e, Gliese 581b, and Gliese 581c. The letters represent the discovery order, with b being the first planet to be discovered around the star.
Observation history
The first announcement of a planet around the star was Gliese 581b discovered by astronomers at the Observatory of Geneva in Switzerland and Grenoble University in France. Detected in August 2005 and using extensive data from the ESO/HARPS spectrometer it was the fifth planet to be discovered around a red dwarf. Further observations by the same group resulted in the detection of two more planets, Gliese 581c and Gliese 581d. The orbital period of Gliese 581d was originally thought to be 83 days but was later revised to a lower value of 67 days. The revised orbital distance would place it at the outer limits of the habitable zone, the distance at which it is believed possible for liquid water to exist on the surface of a planetary body, given favourable atmospheric conditions. Gliese 581d was estimated to receive about 30% of the intensity of light the Earth receives from the Sun. By comparison, sunlight on Mars has about 40% of the intensity of that on Earth, though if high levels of carbon dioxide are present in the planetary atmosphere, the greenhouse effect could keep temperatures above freezing.
The next discovery, the inner planet Gliese 581e, also by the Observatory of Geneva using data from the HARPS instrument, was announced on 21 April 2009. This planet, with a minimum mass of 1.9 Earth masses, was at the time the least massive confirmed exoplanet identified around a main-sequence star.
On 29 September 2010, astronomers using the Keck Observatory proposed two additional planets, Gliese 581f and Gliese 581g, both in nearly circular orbits based on analysis of a combination of data sets from the HARPS and HIRES instruments. The proposed planet Gliese 581f was thought to be a 7 Earth-mass planet in a 433-day orbit and too cold to support liquid water. The candidate planet Gliese 581g attracted more attention: nicknamed Zarmina's World by one of its discoverers, the predicted mass of Gliese 581g was between 3 and 4 Earth-masses, with an orbital period of 37 days. The orbital distance was calculated to be well within the star's habitable zone, though the planet was expected to be tidally locked with one side of the planet always facing the star. In an interview with Lisa-Joy Zgorski of the National Science Foundation, Steven Vogt was asked what he thought about the chances of life existing on Gliese 581g. Vogt was optimistic: "I'm not a biologist, nor do I want to play one on TV. Personally, given the ubiquity and propensity of life to flourish wherever it can, I would say that ... the chances of life on this planet are 100%. I have almost no doubt about it."
Two weeks after the announcement of the discovery of Gliese 581f and Gliese 581g, astronomer Francesco Pepe of the Geneva Observatory reported that in a new analysis of 179 measurements taken by the HARPS spectrograph over 6.5 years, neither planet g nor planet f was detectable, and the relevant measurements were included in a paper uploaded to the arXiv preprint server, though still unpublished in a refereed journal. The non-existence of Gliese 581f was accepted relatively quickly: it was shown that the radial velocity variations that led to the claimed discovery of Gliese 581f were instead associated with the stellar activity cycle rather than an orbiting planet. Nevertheless, the existence of planet g remained controversial: Vogt responded in the media that he stood by the discovery, and questions arose as to whether the effect was due to the assumption of circular rather than eccentric orbits or the statistical methods used.
Bayesian analysis found no clear evidence for a fifth planetary signal in the combined HIRES/HARPS data set, though other studies led to the conclusion that the data did support the existence of planet g, albeit with strong degeneracies in the parameters as a result of the first eccentric harmonic with the outer planet Gliese 581d.
On 27 November 2012, the European Space Agency announced that the Herschel space observatory had discovered a comet belt "at 25 ± 12 AU to more than 60 AU". It must have "at least 10 times" as many comets as the Solar System. This likely rules out Saturn-mass planets beyond 0.75 AU. However, another (undiscovered) planet further out, say a Neptune-mass planet at 5 AU, might be required to keep the comet belt replenished.
Using the assumption that the noise present in the data was correlated (red noise rather than white noise), Roman Baluev called into question not only the existence of planet g, but Gliese 581d as well, suggesting there were only three planets (Gliese 581b, c, and e) present. This result was further supported by a 2014 study, whose authors argued that Gliese 581d is "an artifact of stellar activity which, when incompletely corrected, causes the false detection of the planet g." While a response was published questioning the methodology of this study, all subsequent studies of the radial velocity data have confirmed the stellar, rather than planetary, origin of the signal corresponding to Gliese 581d, though some dispute has remained.
A 2024 study, in addition to confirming evidence for a three-planet system, determined the orbital inclination of the planets. This allowed their true masses to be determined; previously only minimum masses were known. The planets' true masses are about 30% greater than their minimum masses.
Planets
Analysis of the radial velocity data has produced several models for the orbital arrangement of the system. 3-planet, 4-planet, 5-planet and 6-planet models have been proposed to address the available radial velocity data, with the current consensus being a 3-planet model (e, b, c). Most of these models predict, however, that the inner planets are close in with circular orbits, while outer planets, particularly Gliese 581d, should it exist, are on more elliptical orbits.
Models of the habitable zone of Gliese 581 show that it extends from about 0.1 to 0.25 AU. The three confirmed planets orbit closer to the star than the inner edge of the habitable zone, while planets g and d would have orbited within it.
Gliese 581e
Gliese 581e is the innermost planet and, with a mass of 2.5 Earth masses, is the least massive of the three. Discovered in 2009, it is also the most recent confirmed planet to have been discovered in this system. It takes 3.15 days to complete an orbit. Initial analyses suggested that the planet's orbit is quite elliptical but after correcting the radial velocity measurements for stellar activity, the data now indicate a circular orbit.
Gliese 581b
Gliese 581b is the most massive planet known to be orbiting Gliese 581 and was the first to be discovered. It is about 20 times the mass of Earth and completes an orbit in 5.37 days.
Gliese 581c
Gliese 581c is the third planet orbiting Gliese 581. It was discovered in April 2007. In their 2007 paper, Udry et al. asserted that if Gliese 581c has an Earth-type composition, it would have a radius of 1.5 R🜨, which would have made it at the time "the most Earth-like of all known exoplanets". A direct measurement of the radius cannot be taken because, viewed from Earth, the planet does not transit its star. The mass of the planet is 6.8 times that of Earth. The planet initially attracted attention as being potentially habitable, though this has since been discounted. The mean blackbody surface temperature has been estimated to lie between −3 °C (for a Venus-like albedo) and 40 °C (for an Earth-like albedo); however, the temperature could be much higher (about 500 °C) due to a runaway greenhouse effect akin to that of Venus. Some astronomers believe the system may have undergone planetary migration and that Gliese 581c may have formed beyond the frost line, with a composition similar to icy bodies like Ganymede. Gliese 581c completes a full orbit in just under 13 days.
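The albedo-dependent temperature range quoted above comes from a blackbody equilibrium estimate. A sketch, assuming a stellar luminosity of 0.013 L☉ and a semi-major axis of about 0.073 AU for Gliese 581c (consistent with its ~13-day period); both numbers, and the albedos used, are assumptions for illustration, so the output only approximates the published figures:

```python
def equilibrium_temp_k(luminosity_lsun: float, a_au: float, albedo: float) -> float:
    """Equilibrium temperature T = 278.6 K * ((1 - A) * L / a^2) ** 0.25,
    i.e. Earth's zero-albedo equilibrium value scaled by received flux."""
    return 278.6 * ((1.0 - albedo) * luminosity_lsun / a_au ** 2) ** 0.25

t_earthlike = equilibrium_temp_k(0.013, 0.073, 0.30)  # ~318 K (~45 C)
t_venuslike = equilibrium_temp_k(0.013, 0.073, 0.75)  # lower, despite Venus's hot surface
```

A high albedo lowers the equilibrium temperature; Venus's actual surface heat comes from its greenhouse atmosphere, which this blackbody estimate deliberately ignores.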
Doubtful and disproven planets
Gliese 581g
Gliese 581g, unofficially known as Zarmina's World, was a candidate exoplanet claimed to orbit Gliese 581, but its existence was ultimately refuted. It was thought to orbit with a period of 36.6 days at a distance of 0.146 AU, placing it within the habitable zone, and to have a minimum mass of .
Gliese 581d
Gliese 581d is a candidate exoplanet thought to orbit Gliese 581, whose existence is heavily disputed: a number of studies have argued that it is a false positive originating from stellar activity. The planet's minimum mass is thought to be and its radius, assuming an Earth-like composition, is estimated (assuming the planet exists) to be , making it a super-Earth. Its orbital period is thought to be 66.87 days, with a semi-major axis of 0.21847 AU and an unconstrained eccentricity. Previous analyses suggested that the planet (if it exists) orbits within the star's habitable zone, where the temperatures are just right to support life.
Gliese 581f
Gliese 581f was a candidate exoplanet claimed to orbit Gliese 581, but its existence was ultimately refuted. It was thought to orbit with a period of 433 days at a distance of 0.758 AU, and to have a minimum mass of .
SETI
The Gliese 581 system has been the target of both SETI and Active SETI searches for extraterrestrial life.
A Message from Earth (AMFE) is a high-powered digital radio signal that was sent on 9 October 2008 toward Gliese 581c. The signal is a digital time capsule containing 501 messages that were selected through a competition on the social networking site Bebo. The message was sent using the Yevpatoria RT-70 radio telescope of the National Space Agency of Ukraine. The signal will reach Gliese 581 in early 2029.
Using optical SETI, Ragbir Bhathal claimed to have detected an unexplained pulse of light from the direction of the Gliese 581 system in 2008.
In 2012, the International Centre for Radio Astronomy Research at Curtin University in Perth precisely targeted Gliese 581 using the Australian Long Baseline Array (three radio telescope facilities across Australia) and the Very Long Baseline Interferometry technique; however, no candidate signals were found.
Debris disk
At the outer edge of the system is a massive debris disk containing more comets than the Solar System. The debris disc has an inclination between 30° and 70°. If the planetary orbits lie in the same plane, their masses would be between 1.1 and 2 times the minimum mass values. This is supported by a 2024 study, which found an inclination for the planetary orbits of about 47°.
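The mass factors quoted here follow from the relation between a planet's true mass and its minimum (m sin i) mass from radial velocity measurements. A short sketch:

```python
import math

def mass_factor(inclination_deg: float) -> float:
    """Factor by which the true mass exceeds the minimum (m sin i) mass:
    M_true = M_min / sin(i)."""
    return 1.0 / math.sin(math.radians(inclination_deg))

# Disc inclination limits (30-70 deg) and the ~47 deg planetary inclination
f30, f70, f47 = mass_factor(30), mass_factor(70), mass_factor(47)
# f30 = 2.0, f70 ~ 1.06, f47 ~ 1.37 (roughly 30-40% above the minimum mass)
```

An edge-on orbit (i = 90°) gives a factor of 1 (true mass equals minimum mass); the factor grows as the orbit tilts toward face-on.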
See also
Habitability of red dwarf systems
Lists of exoplanets
Gliese 876
Notes
References
External links
Major Discovery: New Planet Could Harbor Water and Life Space.com
Gliese 581 / HO Librae Solstation.com
Extrasolar Planets Encyclopaedia: Gl 581
AstronomyCast: Gliese 581
SETI Range Calculator
All Wet? Astronomers Claim Discovery of Earth-like Planet , Scientific American
Position of Gliese 581 marked on local space (Top right corner)
Photo of Constellation Libra with Gliese 581
Artist conceptions of planets of Gliese 581
Speculation about geology/geochemistry of Gliese 581c The Geochemical Society
Computer models suggest planetary and extrasolar planet atmospheres – A gas, gas, gas Washington University in St. Louis
Extrasolar Planet Interactions by Rory Barnes & Richard Greenberg, Lunar and Planetary Lab, University of Arizona
Earth-like planet found that may support life Ctv.ca
Gliese 581 – The "Red Dwarf" and implications for its "earthlike" planet Gliese 581c
Earth's Twin Discovered?
M-type main-sequence stars
BY Draconis variables
Libra (constellation)
Planetary systems with three confirmed planets
Circumstellar disks
0581
074995
Librae, HO
BD-07 4003
36853511
0562 | Gliese 581 | Astronomy | 3,610 |
46,712,786 | https://en.wikipedia.org/wiki/Annual%20Reports%20on%20the%20Progress%20of%20Chemistry | Annual Reports on the Progress of Chemistry was a yearly review journal published by the Royal Institute of Chemistry and, after 1980, the Royal Society of Chemistry. It was established in 1904. In 1967 the journal was split into two sections, A and B, covering inorganic and organic chemistry, respectively. In 1980, a third series was started, C, covering physical chemistry. The journal was discontinued in 2013.
References
External links
Chemistry journals
Royal Institute of Chemistry
Royal Society of Chemistry academic journals
Publications established in 1904
Publications disestablished in 2013
Defunct journals of the United Kingdom
English-language journals
Annual journals | Annual Reports on the Progress of Chemistry | Chemistry | 118 |
74,110,097 | https://en.wikipedia.org/wiki/Kleene%20equality | In mathematics, Kleene equality, or strong equality, (≃) is an equality operator on partial functions which states that, for a given argument, either both functions are undefined, or both are defined and their values at that argument are equal.
For example, if we have partial functions f and g, f ≃ g means that for every x:
f(x) and g(x) are both defined and f(x) = g(x),
or f(x) and g(x) are both undefined.
Some authors use "quasi-equality", which is defined like this:
(x ≈ y) ⇔ ((x↓ ∨ y↓) → x = y),
where the down arrow (↓) means that the term on the left side of it is defined.
Then it becomes possible to define the strong equality in the following way:
(x ≃ y) ⇔ ((x ≈ y) ∧ (x↓ ↔ y↓)).
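A minimal sketch of Kleene equality in code, modelling partial functions as ordinary Python functions that raise an exception where undefined (the helper names are illustrative):

```python
def defined(f, x):
    """True iff f is defined at x (here: calling it does not raise)."""
    try:
        f(x)
        return True
    except Exception:
        return False

def kleene_equal(f, g, x):
    """f(x) ≃ g(x): both undefined, or both defined with equal values."""
    fd, gd = defined(f, x), defined(g, x)
    if fd != gd:
        return False          # one defined, the other not
    return (not fd) or f(x) == g(x)

inv = lambda x: 1 / x         # undefined at 0
inv2 = lambda x: x / (x * x)  # also undefined at 0, same values elsewhere
```

Here `kleene_equal(inv, inv2, 0)` holds because both sides are undefined, and `kleene_equal(inv, inv2, 2)` holds because both sides equal 0.5.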
References
Computability theory
Equivalence (mathematics) | Kleene equality | Mathematics | 134 |
471,600 | https://en.wikipedia.org/wiki/Total%20institution | A total institution or residential institution is a residential facility where a great number of similarly situated people, cut off from the wider community for a considerable time, together lead an enclosed, formally administered round of life. Privacy and civil liberties are limited or non-existent in total institutions, as all aspects of life including sleep, play, and work, are conducted in the same place. The concept is mostly associated with the work of sociologist Erving Goffman.
Characteristics of total institutions usually include at least a few of the following:
Strict limitations on personal property, or an outright confiscation of personal possessions with the exception of certain medical devices like eyeglasses (most common in prisons and mental hospitals; other types of residential institutions typically allow more personal property)
Uniforms and/or dress codes (especially in prisons, boarding schools and military bases)
A certain degree of dehumanization
Scheduling covering and organizing every minute of the day
Rigid authoritarian hierarchy
Stringent rules
Cell or barracks/dorm type housing where multiple people sleep in each bedroom (in boarding schools, ships, remote work camps/outposts, military bases and some prisons)
Communal meals in dining halls, recreation, etc.
Etymology
The term is sometimes credited as having been coined and defined by Canadian sociologist Erving Goffman in his paper "On the Characteristics of Total Institutions", presented in April 1957 at the Walter Reed Institute's Symposium on Preventive and Social Psychiatry. An expanded version appeared in Donald Cressey's collection, The Prison, and was reprinted in Goffman's 1961 collection, Asylums. Fine and Manning, however, note that Goffman heard the term in lectures by Everett Hughes (likely during the late-1940s seminar, "Work and Occupations"). Regardless of whether Goffman coined the term, he can be credited with popularizing it.
Typology
Total institutions are divided by Goffman into five different types:
institutions established to care for people felt to be both harmless and incapable: orphanages, poor houses, group homes and nursing homes.
places established to care for people felt to be incapable of looking after themselves and a threat to the community, albeit an unintended one: leprosariums, mental hospitals, certain types of group homes, and tuberculosis sanitariums.
institutions organised to protect the community against what are felt to be intentional dangers to it, with the welfare of the people thus sequestered not the immediate issue: concentration camps, P.O.W. camps, penitentiaries, and jails.
institutions purportedly established to better pursue some worklike tasks and justifying themselves only on these instrumental grounds: colonial compounds, work camps, boarding schools, ships, army barracks, and large mansions from the point of view of those who live in the servants' quarters.
establishments designed as retreats from the world even while often serving also as training stations for the religious; examples are convents, abbeys, monasteries, and other cloisters.
David Rothman states that "historians have confirmed the validity of Goffman's concept of 'total institutions' which minimizes the differences in formal mission to establish a unity of design and structure."
In Discipline and Punish, Michel Foucault discussed total institutions in the language of complete and austere institutions.
Nursing homes
According to S. Lammers and A. Verhey, some 80 percent of Americans will ultimately die not in their home, but in a total institution.
Tourism
Sociologists have pointed out that tourist venues such as cruise ships are acquiring many of the characteristics of total institutions. Tourists may not be aware that they are being controlled, even constrained, but the environment has been designed to subtly manipulate the behavior of patrons. These examples differ from the traditional examples in that the influence is short term.
Spaceship Earth
Sociologist Steffen Roth has demonstrated that, if realized, Spaceship Earth would epitomise the most total institution ever created in human history.
See also
Concentration camp
Conscription
Disciplinary institution
Mental asylum
Psych ward
Workhouse
Psychiatric institution
Totalitarianism
Transinstitutionalisation
References
Further reading
Social phenomena
Social philosophy
Organizational cybernetics
Sociological terminology
Functionalism (social theory)
Erving Goffman | Total institution | Biology | 848 |
52,585,846 | https://en.wikipedia.org/wiki/Industrial%20furnace | An industrial furnace, also known as a direct heater or a direct-fired heater, is a device used to provide heat for an industrial process, typically at temperatures higher than 400 degrees Celsius. Furnaces are used to provide heat for a process or can serve as reactors which provide heat of reaction. Furnace designs vary as to their function, heating duty, type of fuel and method of introducing combustion air. Heat is generated by an industrial furnace by mixing fuel with air or oxygen, or from electrical energy. The residual heat exits the furnace as flue gas. Furnaces are designed per international codes and standards, the most common of which are ISO 13705 (Petroleum and natural gas industries — Fired heaters for general refinery service) and American Petroleum Institute (API) Standard 560 (Fired Heater for General Refinery Service). Types of industrial furnaces include batch ovens, metallurgical furnaces, vacuum furnaces, and solar furnaces. Industrial furnaces are used in applications such as chemical reactions, cremation, oil refining, and glasswork.
Overview
Fuel flows into the burner and is burnt with air provided from an air blower. There can be more than one burner in a particular furnace which can be arranged in cells which heat a particular set of tubes. Burners can also be floor mounted, wall mounted or roof mounted depending on design. The flames heat up the tubes, which in turn heat the fluid inside in the first part of the furnace known as the radiant section or firebox. In this chamber where combustion takes place, the heat is transferred mainly by radiation to tubes around the fire in the chamber.
The fluid to be heated passes through the tubes and is thus heated to the desired temperature. The gases from the combustion are known as flue gas. After the flue gas leaves the firebox, most furnace designs include a convection section where more heat is recovered before venting to the atmosphere through the flue gas stack. (HTF=Heat Transfer Fluid. Industries also use their furnaces to heat a secondary fluid with special additives like anti-rust and high heat transfer efficiency. This heated fluid is then circulated round the whole plant to heat exchangers to be used wherever heat is needed instead of directly heating the product line as the product or material may be volatile or prone to cracking at the furnace temperature.)
Components
Radiant section
The radiant section is where the tubes receive almost all of their heat by radiation from the flame. In a vertical, cylindrical furnace, the tubes are vertical. Tubes can be vertical or horizontal, placed along the refractory wall, in the middle, etc., or arranged in cells. Studs are used to hold the insulation together and on the wall of the furnace; they are typically placed about 1 ft (300 mm) apart.
The tubes, which turn reddish brown from corrosion, are carbon steel tubes and run the height of the radiant section. The tubes are kept a distance away from the insulation so radiation can be reflected to the back of the tubes to maintain a uniform tube wall temperature. Tube guides at the top, middle and bottom hold the tubes in place.
Convection section
The convection section is located above the radiant section, where it is cooler, to recover additional heat. Heat transfer takes place by convection here, and the tubes are finned to increase heat transfer. The first three tube rows in the bottom of the convection section and at the top of the radiant section are bare tubes (without fins) and are known as the shield section ("shock tubes"), so named because they are still exposed to plenty of radiation from the firebox and they also act to shield the convection section tubes, which are normally of less resistant material, from the high temperatures in the firebox.
The area of the radiant section just before flue gas enters the shield section and into the convection section is called the bridge zone. A crossover is the tube that connects the convection section outlet to the radiant section inlet. The crossover piping is normally located outside so that the temperature can be monitored and the efficiency of the convection section can be calculated. The sightglass at the top allows personnel to see the flame shape and pattern from above and visually inspect whether flame impingement is occurring. Flame impingement happens when the flame touches the tubes and causes small isolated spots of very high temperature.
Radiant coil
This is a series of tubes, horizontal or vertical, of hairpin type connected at the ends (with 180° bends), or helical in construction. The radiant coil absorbs heat through radiation. It can be single-pass or multi-pass depending upon the process-side pressure drop allowed. The radiant coils and bends are housed in the radiant box. Radiant coil materials vary from carbon steel for low-temperature services to high-alloy steels for high-temperature services. The coils are supported from the radiant side walls or hung from the radiant roof; the material of these supports is generally high-alloy steel. While designing the radiant coil, care is taken to make provision for expansion in hot conditions.
Burner
The burner in the vertical, cylindrical furnace as above, is located in the floor and fires upward. Some furnaces have side fired burners, such as in train locomotives. The burner tile is made of high temperature refractory and is where the flame is contained. Air registers located below the burner and at the outlet of the air blower are devices with movable flaps or vanes that control the shape and pattern of the flame, whether it spreads out or even swirls around. Flames should not spread out too much, as this will cause flame impingement. Air registers can be classified as primary, secondary and if applicable, tertiary, depending on when their air is introduced.
The primary air register supplies primary air, which is the first to be introduced in the burner. Secondary air is added to supplement primary air. Burners may include a pre-mixer to mix the air and fuel for better combustion before introducing into the burner. Some burners even use steam as premix to preheat the air and create better mixing of the fuel and heated air. The floor of the furnace is mostly made of a different material from that of the wall, typically hard castable refractory to allow technicians to walk on its floor during maintenance.
A furnace can be lit by a small pilot flame or in some older models, by hand. Most pilot flames nowadays are lit by an ignition transformer (much like a car's spark plugs). The pilot flame in turn lights up the main flame. The pilot flame uses natural gas while the main flame can use both diesel and natural gas. When using liquid fuels, an atomizer is used, otherwise, the liquid fuel will simply pour onto the furnace floor and become a hazard. Using a pilot flame for lighting the furnace increases safety and ease compared to using a manual ignition method (like a match).
Sootblower
Sootblowers are found in the convection section. As this section is above the radiant section and air movement is slower because of the fins, soot tends to accumulate here. Sootblowing is normally done when the efficiency of the convection section is decreased. This can be calculated by looking at the temperature change from the crossover piping and at the convection section exit.
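The efficiency check described above can be sketched as a simple heat-duty comparison on the process-fluid side: soot fouling reduces the temperature rise imparted to the fluid between the convection inlet and the crossover. All flow rates, heat capacities and temperatures below are made-up illustrative values, not data from any particular heater:

```python
def convection_duty_kw(mass_flow_kg_s: float, cp_kj_per_kg_k: float,
                       t_in_c: float, t_out_c: float) -> float:
    """Sensible heat picked up by the process fluid: Q = m * cp * dT (kW)."""
    return mass_flow_kg_s * cp_kj_per_kg_k * (t_out_c - t_in_c)

clean = convection_duty_kw(12.0, 2.3, 180.0, 260.0)   # hypothetical clean baseline
fouled = convection_duty_kw(12.0, 2.3, 180.0, 238.0)  # hypothetical fouled reading
loss_fraction = 1.0 - fouled / clean                  # fraction of duty lost to fouling
```

A sustained drop in this duty relative to the clean baseline is the kind of signal that would prompt sootblowing.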
Sootblowers utilize flowing media such as water, air or steam to remove deposits from the tubes. This is typically done during maintenance with the air blower turned on. There are several different types of sootblowers used. Wall blowers of the rotary type are mounted on furnace walls protruding between the convection tubes. The lances are connected to a steam source with holes drilled into it at intervals along its length. When it is turned on, it rotates and blows the soot off the tubes and out through the stack.
Stack
The flue gas stack is a cylindrical structure at the top of all the heat transfer chambers. The breeching directly below it collects the flue gas and brings it up high into the atmosphere where it will not endanger personnel.
The stack damper contained within works like a butterfly valve and regulates draft (the pressure difference between air intake and air exit) in the furnace, which is what pulls the flue gas through the convection section. The stack damper also regulates the heat lost through the stack. As the damper closes, the amount of heat escaping the furnace through the stack decreases, but the pressure or draft in the furnace increases, which poses risks to those working around it: if there are air leakages in the furnace, the flames can escape out of the firebox, or the furnace can even explode if the pressure is too great.
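The draft produced by the stack can be estimated from the density difference between cold ambient air and hot flue gas over the stack height. A back-of-envelope sketch with illustrative inputs (ideal-gas densities at atmospheric pressure; the stack height and temperatures are assumptions):

```python
G = 9.81  # gravitational acceleration, m/s^2

def gas_density(t_kelvin: float, molar_mass_kg_mol: float = 0.029) -> float:
    """Ideal-gas density at 101325 Pa (kg/m^3)."""
    return 101325.0 * molar_mass_kg_mol / (8.314 * t_kelvin)

def natural_draft_pa(stack_height_m: float, t_ambient_k: float, t_flue_k: float) -> float:
    """Available draft = g * H * (rho_ambient - rho_flue)."""
    return G * stack_height_m * (gas_density(t_ambient_k) - gas_density(t_flue_k))

draft = natural_draft_pa(30.0, 293.0, 700.0)  # ~200 Pa for this example
```

Taller stacks and hotter flue gas both increase the available draft, which is what the damper then throttles.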
Insulation
Insulation is an important part of the furnace because it improves efficiency by minimizing heat escape from the heated chamber. Refractory materials such as firebrick, castable refractories and ceramic fibre are used for insulation. The floor of the furnace is normally a castable-type refractory, while refractories on the walls are nailed or glued in place. Ceramic fibre is commonly used for the roof and wall of the furnace and is graded by its density and then its maximum temperature rating. For example, 8# 2,300 °F means 8 lb/ft3 density with a maximum temperature rating of 2,300 °F. The actual service temperature rating for ceramic fibre is a bit lower than the maximum rated temperature (i.e. 2,300 °F material is only good to 2,145 °F before permanent linear shrinkage).
Foundations
Concrete pillars form the foundation on which the heater is mounted. There may be four for smaller heaters and up to 24 for large heaters. The design of the pillars and the entire foundation is based on the load-bearing capacity of the soil and the seismic conditions prevailing in the area. Foundation bolts are grouted into the foundation after installation of the heater.
Access doors
The heater body is provided with access doors at various locations. Access doors are to be used only during shutdown of heater. The normal size of the access door is 600x400 mm, which is sufficient for movement of people/ material into and out of the heater. During operation the access doors are properly bolted using leak proof high temperature gaskets.
See also
Electric arc furnace
Flued boiler
Fire test furnaces
European Conference on Industrial Furnaces and Boilers
International Flame Research Foundation
References
External links | Industrial furnace | Chemistry | 2,108 |
52,720,510 | https://en.wikipedia.org/wiki/Miproxifene%20phosphate | Miproxifene phosphate (former developmental code name TAT-59) is a nonsteroidal selective estrogen receptor modulator (SERM) of the triphenylethylene group that was under development in Japan for the treatment of breast cancer but was abandoned and never marketed. It reached phase III clinical trials for this indication before development was discontinued. The drug is a phosphate ester and prodrug of miproxifene (DP-TAT-59) with improved water solubility that was better suited for clinical development. Miproxifene has been found to be 3- to 10-fold as potent as tamoxifen in inhibiting breast cancer cell growth in in vitro models. It is a derivative of afimoxifene (4-hydroxytamoxifen) in which an additional 4-isopropyl group is present in the β-phenyl ring.
References
External links
Dimethylamino compounds
Hormonal antineoplastic drugs
Phosphate esters
Prodrugs
Selective estrogen receptor modulators
Triphenylethylenes
Isopropyl compounds
Ethers | Miproxifene phosphate | Chemistry | 235 |
13,629,249 | https://en.wikipedia.org/wiki/Listeria%20Hfq%20binding%20LhrC | Listeria LhrC ncRNA was identified by screening for RNA molecules which co-immunoprecipitated with the RNA chaperone Hfq. However, neither the stability nor the activity of LhrC seems to depend on the presence of Hfq. This RNA is transcribed from an intergenic region between the protein-coding genes cysK, a putative cysteine synthase, and sul, a putative dihydropteroate synthase. In Listeria monocytogenes four additional copies of lhrC have been identified in the genome, three of which are located in tandem repeat upstream of the originally characterised lhrC. This RNA molecule appears to be conserved amongst Listeria species but has not been identified in other bacterial species. It is involved in virulence. The direct mRNA targets of LhrC are the virulence adhesin LapB and the oligopeptide-binding protein OppA. The three conserved UCCC motifs common to all copies of LhrC are involved in mRNA binding and the post-transcriptional repression of the target genes. Two other Listeria monocytogenes sRNAs, Rli22 and Rli33, contain two UCCC motifs and use them to repress oppA mRNA expression.
See also
Listeria Hfq binding LhrA
References
External links
Non-coding RNA | Listeria Hfq binding LhrC | Chemistry | 279 |
51,326,627 | https://en.wikipedia.org/wiki/WD%201145%2B017%20b | WD 1145+017 b (also known by its EPIC designation EPIC 201563164.01), is a confirmed exoasteroid or minor planet orbiting around and being vaporized by the white dwarf star WD 1145+017, likely one of multiple such objects around this star. It was discovered by NASA's Kepler spacecraft on its "Second Light" mission. It is located about away from Earth in the constellation of Virgo. The object was found by using the transit method, in which the dimming effect that a planet causes as it crosses in front of its star is measured.
The minor planet is notable because it is the first observed planetary object to transit a white dwarf, providing clues of its possible interactions when its parent star reached the end of its lifetime as a red giant. The object is also the least-massive exoplanetary object ever discovered, being about one-tenth the mass of the dwarf planet Ceres.
Physical characteristics
Mass, radius, and temperature
WD 1145+017 b has been described as a minor planet, an asteroid, and a planetesimal. It likely has a surface temperature of around based on its extreme proximity to its star. Its mass and radius are not well known; it is thought to be about one-tenth the mass of Ceres and about 200 km in radius. The Extrasolar Planets Encyclopaedia claims a mass of 0.0006678 Earth masses for this object, 4.45 times the mass of Ceres and about the mass of Haumea, but this is not substantiated by the cited source (Croll et al. 2017), which considers the object to be Ceres-mass or less.
Host star
The planetary object orbits a (DB-type) white dwarf. It has ended its main sequence lifetime and will continue to cool for billions of years to come in the future. Based on recent studies and its mass, the star was likely an early A-type main sequence star with a mass of 2.46 and main sequence lifetime of 550 million years before it expanded and became a red giant. The star has a mass of 0.6 and a radius of 0.02 (1.4 ). It has a temperature of 15,020 K and is 774 million years old, with a cooling age of about 224 million years. In comparison, the Sun is 4.6 billion years old and has a surface temperature of 5778 K.
The star's apparent magnitude, or how bright it appears from Earth's perspective, is 17. Therefore, it is far too dim to be seen with the naked eye.
Orbit
WD 1145+017 b orbits its host star with an orbital period of 0.1875 days (4.5 hours) and an orbital radius of about 0.005 times Earth's (750,000 km), about twice the Earth–Moon distance and just over one solar radius; by comparison, Mercury orbits the Sun at about 0.38 AU (57 million km). This is among the shortest orbital periods known, although several other exoplanets have shorter ones.
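The quoted orbital radius can be cross-checked against Kepler's third law using the stellar mass and period given above (an illustrative sketch, not from the source; the small discrepancy comes from the rounded inputs):

```python
import math

# Semi-major axis from Kepler's third law:
# a = (G * M * T^2 / (4 * pi^2))**(1/3), using the values quoted above.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
M_star = 0.6 * M_SUN   # white dwarf mass quoted in the article
T = 4.5 * 3600.0       # orbital period quoted in the article, seconds

a = (G * M_star * T**2 / (4 * math.pi**2)) ** (1 / 3)
print(f"{a / 1e3:.0f} km")  # ~809,000 km, the same order as the quoted ~750,000 km
```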
Vaporization
WD 1145+017 b is currently being vaporized by its star because of its extreme proximity to it. White dwarfs are usually about the size of the Earth and typically retain about half of their main-sequence mass. Because of this and the searing temperature of the stellar remnant, rocky minerals are being vaporized off the surface of this object into orbit around the star, producing the hot dusty disk observed around its host star. WD 1145+017 b is likely to disintegrate in the future (around 100–200 million years from now) due to further vaporization and ablation. The minor planet is likely being pelted by several smaller objects of up to , as it is probably not a single object orbiting the white dwarf but several planetesimals, which likely accounts for some of the variations in the light-curve data. The smaller objects can also throw debris into orbit upon impact, which may also contribute to the variations.
In this way, the system helps illustrate how a planetary system may evolve after its host star has thrown off its outer layers as a planetary nebula, before eventually fading toward a black dwarf.
Discovery
The planetary object was discovered by the Kepler spacecraft on its secondary mission, K2: Second Light, an extension of its original mission of 2009–2013. Observations were taken over a period of about a month starting from April 2015 using the 1.2-meter telescope at the Fred L. Whipple Observatory along with another telescope located in Chile. The white dwarf star was not originally targeted as part of the mission; however, the data revealed dips in the star's light curve, and investigations were made to determine their cause, the same procedure used on the stars that were targeted by the K2 mission. Two transits, about 4 hours apart, were detected on 11 April, and again on 17 April, although the latter were 180° out of phase with the 11 April transits. The spectrum of WD 1145+017 was studied and revealed that the star contained magnesium, aluminum, silicon, calcium, iron, and nickel. The settling times for these elements are much shorter than the cooling age of the white dwarf (175 Myr), so they must have been deposited fairly recently, probably only 1–2 million years ago. It was suggested that this was evidence for a disintegrating rocky minor planet orbiting WD 1145+017 with a low mass of one-tenth that of Ceres, comparable to the mass of some of the large asteroids in the Solar System.
The discovery was then published online in the journal Nature on 22 October 2015, describing the nature of the system.
See also
Disrupted planet
GD 66 – white dwarf assumed to have an orbiting giant planet, but later ruled out
Pulsar planet – type of planet that orbits another stellar remnant, a pulsar
References
External links
NASA – Kepler Mission.
NASA – Kepler Discoveries – Summary Table.
NASA – WD 1145+017 b at The Extrasolar Planets Encyclopaedia.
Exoplanets discovered in 2015
Exoplanets discovered by the Kepler space telescope
White dwarfs
Virgo (constellation)
Sub-Earth exoplanets | WD 1145+017 b | Astronomy | 1,337 |
25,994,359 | https://en.wikipedia.org/wiki/C41H26O26 | {{DISPLAYTITLE:C41H26O26}}
The molecular formula C41H26O26 (molar mass: 934.63 g/mol, exact mass: 934.0712 u) may refer to:
Alnusiin, an ellagitannin
Castalagin, an ellagitannin
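The quoted molar mass can be verified by summing standard atomic weights over the formula (an illustrative sketch using rounded atomic-weight values):

```python
import re

# Standard atomic weights (g/mol), rounded
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "O": 15.999}

def molar_mass(formula):
    """Sum atomic weights for a simple Hill-notation formula like 'C41H26O26'."""
    total = 0.0
    for element, count in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        if element:  # skip the empty match at the end of the string
            total += ATOMIC_WEIGHT[element] * int(count or 1)
    return total

print(round(molar_mass("C41H26O26"), 2))  # 934.63
```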
Molecular formulas | C41H26O26 | Physics,Chemistry | 74 |
34,891,384 | https://en.wikipedia.org/wiki/Chen%20Yu-chie | Chen Yu-chie () is a Taiwanese chemist and Professor of Chemistry at National Chiao Tung University, Hsinchu, Taiwan. She received her Ph.D. from Montana State University in the United States.
Research
Chen's research interests include biological mass spectrometry, analytical nanotechnology, and nanobiotechnology.
Achievements
Chen is the inventor of Ultrasonication-Assisted Spray Ionization (UASI), as well as Contactless Atmospheric Pressure Ionization (Contactless API), and one of the inventors of the surface-assisted laser desorption/ionization (SALDI) techniques for mass spectrometric analysis of chemical molecules.
References
Living people
Montana State University alumni
Taiwanese chemists
Academic staff of the National Chiao Tung University
Mass spectrometrists
Taiwanese women chemists
Year of birth missing (living people) | Chen Yu-chie | Physics,Chemistry | 179 |
5,285 | https://en.wikipedia.org/wiki/STS-51-F | STS-51-F (also known as Spacelab 2) was the 19th flight of NASA's Space Shuttle program and the eighth flight of Space Shuttle Challenger. It launched from Kennedy Space Center, Florida, on July 29, 1985, and landed eight days later on August 6, 1985.
While STS-51-F's primary payload was the Spacelab 2 laboratory module, the payload that received the most publicity was the Carbonated Beverage Dispenser Evaluation, which was an experiment in which both Coca-Cola and Pepsi tried to make their carbonated drinks available to astronauts. A helium-cooled infrared telescope (IRT) was also flown on this mission, and while it did have some problems, it observed 60% of the galactic plane in infrared light.
During launch, Challenger experienced multiple sensor failures in its center SSME (engine 1), which led to the engine shutting down, and the shuttle had to perform an "Abort to Orbit" (ATO) emergency procedure. It is the only Shuttle mission to have carried out an abort after launching. As a result of the ATO, the mission was carried out at a slightly lower orbital altitude.
Crew
As with previous Spacelab missions, the crew was divided between two 12-hour shifts. Acton, Bridges and Henize made up the "Red Team" while Bartoe, England and Musgrave comprised the "Blue Team"; commander Fullerton could take either shift when needed. Challenger carried two Extravehicular Mobility Units (EMU) in the event of an emergency spacewalk, which would have been performed by England and Musgrave.
Crew seat assignments
Launch
STS-51-F's first launch attempt on July 12, 1985, was halted with the countdown at T−3 seconds after main engine ignition, when a malfunction of the number two RS-25 coolant valve caused an automatic launch abort. Challenger launched successfully on its second attempt on July 29, 1985, at 5:00 p.m. EDT, after a delay of 1 hour 37 minutes due to a problem with the table maintenance block update uplink.
At 3 minutes 31 seconds into the ascent, one of the center engine's two high-pressure fuel turbopump turbine discharge temperature sensors failed. Two minutes and twelve seconds later, the second sensor failed, causing the shutdown of the center engine. This was the only in-flight RS-25 failure of the Space Shuttle program. Approximately 8 minutes into the flight, one of the same temperature sensors in the right engine failed, and the remaining right-engine temperature sensor displayed readings near the redline for engine shutdown. Booster Systems Engineer Jenny M. Howard acted quickly to recommend that the crew inhibit any further automatic RS-25 shutdowns based on readings from the remaining sensors, preventing the potential shutdown of a second engine and a possible abort mode that may have resulted in the loss of crew and vehicle (LOCV).
The failed RS-25 resulted in an Abort to Orbit (ATO) trajectory, whereby the shuttle achieved a lower-than-planned orbital altitude.
Mission summary
STS-51-F's primary payload was the laboratory module Spacelab 2. A special part of the modular Spacelab system, the "igloo", which was located at the head of a three-pallet train, provided on-site support to instruments mounted on pallets. The main mission objective was to verify performance of Spacelab systems, determine the interface capability of the orbiter, and measure the environment created by the spacecraft. Experiments covered life sciences, plasma physics, astronomy, high-energy astrophysics, solar physics, atmospheric physics and technology research. Despite mission replanning necessitated by Challenger's abort-to-orbit trajectory, the Spacelab mission was declared a success.
The flight marked the first time the European Space Agency (ESA) Instrument Pointing System (IPS) was tested in orbit. This unique pointing instrument was designed with an accuracy of one arcsecond. Initially, some problems were experienced when it was commanded to track the Sun, but a series of software fixes were made and the problem was corrected. In addition, Anthony W. England became the second amateur radio operator to transmit from space during the mission.
Spacelab Infrared Telescope
The Spacelab Infrared Telescope (IRT) was also flown on the mission. The IRT was a aperture helium-cooled infrared telescope, observing light between wavelengths of 1.7 and 118 μm. It was thought that heat emissions from the Shuttle would corrupt long-wavelength data; however, the telescope still returned useful astronomical data. Another problem was that a piece of mylar insulation broke loose and floated into the line of sight of the telescope. The IRT collected infrared data on 60% of the galactic plane (see also List of largest infrared telescopes). A later space mission that experienced a stray-light problem from debris was ESA's Gaia astrometry spacecraft, launched in 2013; the source of the stray light was later identified as fibers of the sunshield protruding beyond the edges of the shield.
Other payloads
The Plasma Diagnostics Package (PDP), which had been previously flown on STS-3, made its return on the mission, and was part of a set of plasma physics experiments designed to study the Earth's ionosphere. During the third day of the mission, it was grappled out of the payload bay by the Remote Manipulator System (Canadarm) and released for six hours. During this time, Challenger maneuvered around the PDP as part of a targeted proximity operations exercise. The PDP was successfully grappled by the Canadarm and returned to the payload bay at the beginning of the fourth day of the mission.
In a heavily publicized marketing experiment, astronauts aboard STS-51-F drank carbonated beverages from specially designed cans from Cola Wars competitors Coca-Cola and Pepsi. According to Acton, after Coke developed its experimental dispenser for an earlier shuttle flight, Pepsi insisted to American president Ronald Reagan that Coke should not be the first cola in space. The experiment was delayed until Pepsi could develop its own system, and the two companies' products were assigned to STS-51-F.
Blue Team tested Coke, and Red Team tested Pepsi. As part of the experiment, each team was photographed with the cola logo. Acton said that while the sophisticated Coke system "dispensed soda kind of like what we're used to drinking on Earth", the Pepsi can was a shaving cream can with the Pepsi logo on a paper wrapper, which "dispensed soda filled with bubbles" that was "not very drinkable". Acton said that when he gives speeches in schools, audiences are much more interested in hearing about the cola experiment than in solar physics. Post-flight, the astronauts revealed that they preferred Tang, in part because it could be mixed on-orbit with existing chilled-water supplies, whereas there was no dedicated refrigeration equipment on board to chill the cans, which also fizzed excessively in microgravity.
In an experiment during the mission, thruster rockets were fired at a point over Tasmania and also above Boston to create two "holes" – plasma depletion regions – in the ionosphere. A worldwide group of geophysicists collaborated with the observations made from Spacelab 2.
Landing
Challenger landed at Edwards Air Force Base, California, on August 6, 1985, at 12:45:26 p.m. PDT. Its rollout distance was . The mission had been extended by 17 orbits for additional payload activities due to the Abort to Orbit. The orbiter arrived back at Kennedy Space Center on August 11, 1985.
Mission insignia
The mission insignia was designed by Houston, Texas, artist Skip Bradley. Challenger is depicted ascending toward the heavens in search of new knowledge in the field of solar and stellar astronomy, with its Spacelab 2 payload. The constellations Leo and Orion are shown in the positions they occupied relative to the Sun during the flight. The nineteen stars indicate that the mission was the 19th shuttle flight.
Legacy
One of the purposes of the mission was to test how suitable the Shuttle was for conducting infrared observations, and the IRT was operated on this mission. However, the orbiter was found to have some drawbacks for infrared astronomy, which led to later infrared telescopes flying free of the Shuttle orbiter.
See also
List of human spaceflights
List of Space Shuttle missions
Salyut 7 (a space station of the Soviet Union also in orbit at this time)
Soyuz T-13 (a mission to salvage that space station in the summer of 1985)
References
External links
NASA mission summary
Press Kit
STS-51F Video Highlights
Space Coke can
Carbonated Drinks in Space
YouTube: STS-51F launch, abort and landing
July 12 launch attempt
Space Shuttle Missions Summary
Space Shuttle missions
Edwards Air Force Base
1985 in spaceflight
1985 in the United States
Crewed space observatories
Spacecraft launched in 1985
Spacecraft which reentered in 1985 | STS-51-F | Astronomy | 1,853 |
68,238,375 | https://en.wikipedia.org/wiki/Artificial%20sky | The artificial sky is a daylight-simulation device that replicates the light coming from the sky dome. An architectural scale model, or a full-scale (1:1) aircraft, is placed under an artificial sky to predict daylight penetration within buildings or aircraft subject to different situations, complex geometries, or heavily obstructed windows. The concept of the artificial sky arose from the heliodon's limitation in providing a stable lighting environment for evaluating the diffuse skylight component.
Description
An artificial sky is primarily utilized in the field of architecture to analyze daylight in buildings and spaces. Architecture students, architects, researchers, lighting designers, lighting engineers, and automotive and aerospace engineers use the simulation device for various purposes. Several versions of the instrument are used in the laboratories of architecture schools and practices for daylighting studies and research. Lighting engineers and designers use the artificial sky to measure illumination levels. In automotive and aerospace engineering, the instrument is used to examine the visibility of cockpit instruments in order to improve flight safety.
Since 1914, artificial skies have been used by architects and lighting engineers to find ways to simulate the sky under which physical models of buildings could be measured for interior daylighting.
Generally, interior daylighting of buildings is analyzed at the design stage by observing and evaluating light levels in physical models under a real sky, but real-sky luminance varies constantly and repeatable results are difficult to obtain; the artificial sky is therefore the ideal way to predict daylight penetration.
The artificial sky can replicate standard and statistical skies and is not restricted by the weather conditions of the natural sky. In general, the artificial sky is operated with lux-meter heads, data-logging systems, and micro photo cameras, and can be a manual or computerized system. The sky vault is partly or completely replicated. Sky light can be replicated in three ways: by direct lighting, by reflection, or by diffusion. In the reflection approach, spotlights directed from under the model illuminate a white dome, and the reflections from the dome illuminate the model. Because the real sky emits diffuse light, diffusion is the most realistic principle. Artificial skies normally have spherical forms. The most practical systems integrate the artificial sky with a mechanical Sun to reproduce sunlight.
By measuring and estimating daylight penetration using artificial skies, building designers and engineers can reduce energy use through lighting control; the simulation supports daylight designs that reduce the environmental impact of buildings by decreasing the need for lighting, heating, and cooling. By analyzing issues of architectural light simulation, models tested under artificial skies give valuable guidance toward the best design solution for buildings and spaces. Daylight studies help in the design of passive houses, zero-energy buildings, and ecological building design.
To address readability issues in automotive displays that arise from glare and faded screens under ambient lighting, artificial skies provide a luminous environment that allows designers and engineers to identify and handle any areas of concern.
The use of simulation aids in avoiding glare and reflected heat from building façades, problems that arise mainly with innovative design forms. Since intense reflected sun rays affect the surrounding urban environment, the heat and glare affect people on nearby streets and buildings. Simulation allows designers to avoid unexpected events such as those at the concave surfaces of the Walkie Talkie skyscraper and the Walt Disney Concert Hall, which caused damage from reflected heat and glare. Simulating such building forms under an artificial sky during the design stage lets architects avoid overheating in outdoor areas and buildings from reflected sun rays, as well as the high cost of retrofitting and repairs.
Artificial sky types include mirror boxes, full-dome sky, virtual dome, and reflectors.
Types of artificial sky
Mirror box
A mirror box is an artificial sky consisting of a luminous ceiling and mirrored walls, used to replicate uniform or overcast skies. In a mirror box, a consistent luminance distribution is created by reflections of light from the mirrored walls, giving a good approximation of the CIE standard overcast sky. The light source is a white diffusing ceiling illuminated from behind by several lamps, which diffuses the light throughout the room; sensors monitor the output. The walls of the room are lined with plane mirrors arranged vertically on all sides, which produce an image of the luminous ceiling by reflection and inter-reflection.
A typical mirror box is a rectangular or octagonal box that can be installed in any laboratory. The mirror box is a simple, compact, and inexpensive artificial sky. But it can only replicate the standard overcast sky; therefore, it is suitable for Daylight Factor (DF) analysis.
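Two quantities mentioned above can be sketched in a few lines: the CIE standard overcast sky's luminance gradation (luminance falls from the zenith to one third of that value at the horizon) and the Daylight Factor. This is an illustrative sketch, not code from any laboratory named here:

```python
import math

def cie_overcast_luminance(zenith_angle_rad, zenith_luminance=1.0):
    """CIE standard overcast sky: L(theta) = L_z * (1 + 2*cos(theta)) / 3,
    where theta is the angle from the zenith."""
    return zenith_luminance * (1.0 + 2.0 * math.cos(zenith_angle_rad)) / 3.0

def daylight_factor(indoor_lux, outdoor_lux):
    """Daylight Factor (%): indoor illuminance divided by the simultaneous
    unobstructed outdoor illuminance under an overcast sky, times 100."""
    return 100.0 * indoor_lux / outdoor_lux

# Horizon luminance is one third of zenith luminance under the CIE overcast sky
print(cie_overcast_luminance(math.pi / 2))   # ~0.333
# e.g. 105 lx indoors against 3500 lx outdoors gives a 3% Daylight Factor
print(daylight_factor(105.0, 3500.0))        # 3.0
```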
Mirror box artificial type is used in universities such as:
At CEPT University, a mirror box artificial sky is installed at their laboratories of Center for Advanced Research in Buildings and Energy (CARBSE) for daylight analysis. In the university's living laboratory for Net Zero Energy Building (NZEB), the test chamber includes a mirror box artificial sky for both scholarly research and industry testing.
At the University of Westminster (custom-made artificial sky), the fabrication lab designed a custom-made mirror box artificial sky. Within an interior dimension of 2.5mx2.5m, the tool can contain large scale architectural models to measure Daylight Factor.
Reflectors
The reflecting dome sky simulator is formed with a reflective opaque dome surface to reproduce uniform and non-uniform skies. The lighting system in the interior of the dome is designed to simulate sky distributions that differ from a standard overcast sky. The artificial dome uses a reflective surface to illuminate sky distributions and evaluate daylighting on scale models placed on a rotatable tabletop. It can also be integrated with an artificial Sun to replicate sunlight. Compared to mirror boxes, reflecting dome skies are more adjustable in use, and their variants are widely available in the market.
Reflecting artificial sky is available in university and research laboratories such as:
Slovak Academy of Sciences, Bratislava, Slovakia, the facility built a flexible reflecting dome at the Institute of Construction and Architecture in 1973. The 8m diameter hemispherical artificial sky is fully adjustable to uniform and non-uniform overcast skies with an artificial Sun, a parabolic mirror of diameter 1.2m. The artificial sky is a tubular construction consisting of gypsum plaster on metal mesh, designed on a circular 'horizon' tube suspended from the ceiling of the laboratory like a large white chandelier.
Lawrence Berkeley Laboratory, California, USA, built the 7.32m diameter reflecting dome in 1981, designed to replicate a uniform sky, an overcast sky, and various clear-sky luminance distributions. A sun simulator of diameter 1.5m is used. The metal dome was mounted on a seven-foot-high cylindrical plywood wall, which enables large models to be moved in and out through the large doors. A reflectance of up to 80% is achieved with high-reflectance white paint sprayed on the interior. The illumination system of high-output fluorescent lamps and ballasts provides an illumination level of around 5000 lx for a uniform sky, 3500 lx for the overcast sky, and more than 6000 lx for a typical clear sky. Large architectural scale models of up to 6 feet across can be accommodated, and the whole platform can rotate.
Central Research Institute of Industrial Buildings, Perovo, Moscow, Russia, evaluates research projects under the artificial-sky and illuminating-engineering facilities of the new laboratory of the Central Research Institute of Building Physics, Moscow. A 9m diameter sky dome with 16 lamps of uniform luminance is accompanied by a Sun simulator with a 0.9m diameter parabolic mirror outside the sky.
The University of Michigan, Ann Arbor, MI, USA, the university uses a 9.2m diameter artificial sky, for measuring and evaluating overcast, uniform, and clear sky conditions with a Sun simulator of diameter 1.5m parabolic disc.
Virtual dome
The virtual dome replicates the sky vault with a scanning process for any time and any location on Earth. This type of artificial sky is flexible because it can replicate any type of sky. To limit cost and space, the virtual dome uses robotic positioning and fine control systems. The results of the simulation are viewed only on a computer screen, after a process that combines multiple partial simulations. It provides daylighting simulations on scale models placed on a rotating platform, using an artificial sky and a Sun simulator. The virtual dome was introduced in the early nineties and is therefore the newest type of artificial sky.
Although it is the most precise tool, direct perception of the simulation is not possible; consequently, the virtual dome is used largely by scientists rather than designers.
Virtual dome artificial sky is available in university and research laboratories such as:
EPFL Solar Energy and Building Physics Laboratory LESO-PB, Vaud (laboratory-made artificial sky), the research laboratory developed a scanning sky simulator, a basis for other sky simulators, which enables precise replication of the luminance distribution of all types of sky. The tool uses a scanning process to reconstruct the entire sky hemisphere, beginning with a sixth of the hemisphere. The overall hemisphere, based on Tregenza's model of 145 light zones, is reconstructed by a six-step scan. Quantitative illuminance data and qualitative digitized video images are obtained at the end of the procedure. It is an accurate tool for obtaining diffuse-light measurements within physical scale models for any time and location, for the evaluation of innovative architectural solutions and daylighting systems. The laboratory built the instrument to achieve energy savings and enhance user comfort through the efficient use of daylighting.
Daylighting Laboratory of the Politecnico di Torino (IT), Turin (laboratory-made artificial sky), the laboratory built an artificial scanning sky in addition to an artificial sun. The sky scanning simulator is based on subdivision models of the sky hemisphere. The dome is subdivided into 145 circular areas, each of which is treated as having uniform luminance. The areas are replicated using circular luminaires mounted on a hemispherical surface. The structure consists of 25 luminaires, corresponding to one-sixth of the entire 7m diameter hemisphere. Various sky conditions (overcast, clear, and intermediate) are replicated, corresponding both to standard models and to real luminance values. The sky scanning simulator and sun simulator enable daylighting simulations inside scale models used for research and design. The outcomes obtained are photometric data and digital images of the luminous space. The dome was built for architects, engineers, lighting designers, and researchers.
Berkeley Education Alliance for Research in Singapore (BEARS), Singapore (commercial artificial sky), the laboratory which focuses on sustainable and low-carbon solutions utilizes a commercially available virtual dome manufactured by Betanit.com. The device evaluates the visual and lighting performance of buildings replicated with building scale models in a limited laboratory space. The artificial sky can simulate any sky distributions with the Tregenza subdivision using 145 patches.
CEPT University, Ahmedabad (commercial artificial sky), the university uses components of an available virtual dome known as Kiwi Artificial Sky manufactured by Betanit.com at its CARBSE research laboratory. The light source was developed by the CARBSE team. Placed on the platform of the turntable, the building scale models are evaluated for daylighting studies. To perform analysis, the turntable can rotate the model in about two different axes and provide measurement for daylighting studies used for academic and research purposes.
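The Tregenza subdivision referred to above divides the sky hemisphere into 145 patches arranged in altitude bands. A minimal sketch (the band layout follows the commonly cited Tregenza arrangement, not the internals of any specific simulator named here):

```python
# Tregenza sky subdivision: (band altitude centre in degrees, number of patches).
# Seven bands of patches plus a single zenith patch total 145.
TREGENZA_BANDS = [(6, 30), (18, 30), (30, 24), (42, 24),
                  (54, 18), (66, 12), (78, 6), (90, 1)]

def patch_directions():
    """Yield (altitude, azimuth) centres in degrees for all 145 patches."""
    for altitude, n in TREGENZA_BANDS:
        for i in range(n):
            yield altitude, 360.0 * i / n

total = sum(n for _, n in TREGENZA_BANDS)
print(total)  # 145
```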
Full dome
A full dome is a type of artificial sky that can replicate any kind of sky distribution using dimmable luminaires. Simulation control and the collection of daylighting metrics are performed through computers. When integrated with a heliodon, the device can replicate direct sunlight at any global location. The full dome is the most advanced type of artificial sky available: full domes are the fastest, most powerful, and most expensive simulators. They are used by students and researchers for optimizing daylighting studies in architectural spaces.
Full dome artificial sky is available in the university, research laboratories and large lighting companies such as:
Cardiff University, Wales (custom-made artificial sky) - the Welsh school of architecture uses an 8m diameter artificial sky with 640 fluorescent lamps. The lamps are controlled in sections that replicate sky distributions for daylight and Sun-path studies on building scale models on a rotating table.
Bartenbach, Tyrol (custom-made artificial sky) - the lighting firm uses a 6.5m diameter artificial sky with 393 lamps for daylighting design with visualization models and calculations.
UAE University, Al Ain (commercial artificial sky) - the university installed full dome artificial sky integrated with robotic heliodon in their Daylighting Simulation Laboratory, designed and manufactured by betanit.com. The hemi-dome of 4.5m diameter replicates a wide variety of sky conditions using computer control. The illuminance level of the overcast sky is 20000 lx and exceeds 60000 lx for a clear sky. The accuracy and flexibility of the artificial sky are due to thermally monitored light patches that are computer-controlled to facilitate the reproduction of any sky for any location at any time of the day with a steady and stable gradation of luminous distribution. The artificial sky is used for teaching purposes in courses such as illumination & daylighting design and other areas of research in sustainable building design and technology at the university.
The Bartlett, University College London (UCL), London (custom-made artificial sky) - the Bartlett Faculty of the Built Environment architecture students utilize a 5.2m diameter geodesic hemispherical dome to simulate several sky conditions on concept scale models. The device was custom-made by Peter Raynham of UCL and a research assistant. The 810 individually controlled LED modules and an 850mm-wide parabolic reflector in an arched interior can replicate the Sun's trajectory. The system provides a more interactive way of studying daylight than 3D models in CAD.
University of Malta, Malta (commercial artificial sky), the university uses the artificial sky with a heliodon manufactured by betanit.com for daylighting studies, mainly for teaching, research, and design purposes.
Universiti Teknologi MARA (UiTM), Kuala Lumpur (commercial artificial sky), the university incorporated the Durian Artificial Sky, a full-dome artificial sky manufactured by betanit.com, into its daylighting laboratory in Kuala Lumpur. The dome was used to study design parameters of temperature, light intensity, building orientation and position, and illuminated areas for a tropical-climate building in the Malaysian context. It offers common modelling of the site context through urban simulation and provides data on sky intensity at various locations on the site. The simulation helps the design modules to be laid out according to the light intensity of the site, and it supports the energy efficiency of buildings located in a tropical climate.
NIISF – Research Institute of Building Physics, Moscow, Silver Pines, Russia, uses a hemispherical sky simulator of diameter 16.8m with 2,000 light modules and five parabolic stable sun reflectors with fixed altitudes.
HFT, Stuttgart University of Applied Sciences, Stuttgart, the Daylight Planning Lab uses a translucent hemisphere of 4.20m diameter with 30 fluorescent lamps and an artificial sun simulator consisting of a halogen bulb with a parabolic reflector. The device can provide accurate replication of the sky's brightness and the circumsolar radiation. It can reproduce sunny, overcast, or cloudy sky distributions.
Oklahoma State University (OSU) utilized a transilluminated artificial sky built in 2007, which consists of a geodesic dome of translucent diffusing 'Lexan' plastic in flanged self-supporting panels.
See also
Daylight harvesting
References
Architectural design | Artificial sky | Engineering | 3,215 |
19,679,163 | https://en.wikipedia.org/wiki/Vacuum-anchor | In large-scale oceanic civil engineering, vacuum-anchors are used to anchor gravity-based structures (such as the Troll A oil platform) in the soft-bottomed muck found on many oil-bearing continental shelves and in the world's shallower seas.
This design is modeled on the way the webbed feet of aquatic animals increase the surface area in contact with the ground.
The lowest parts of the vacuum-anchors form downward-facing cylindrical cups connected to the legs of the gravity-based structure. The top of each cup has a valve to exhaust gases and liquids trapped against the sea bottom. This is conceptually similar to a tall drinking glass filled with water and then inverted.
When a lifting or sideways force is applied to the cup, the weight and inertia of the enclosed water and sediment must also be displaced. Any attempt to pull the cup free creates a partial vacuum inside it, and the resulting pressure difference anchors the structure to the soft bottom.
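As a rough illustration of the principle (with hypothetical numbers, not figures from the source), the extra holding force available from suction is approximately the pressure difference across the cup times its cross-sectional area:

```python
import math

def suction_holding_force(diameter_m, delta_p_pa):
    """Approximate suction holding force (N) for one cup:
    pressure difference across the cup times its cross-sectional area."""
    area = math.pi * (diameter_m / 2) ** 2
    return delta_p_pa * area

# Hypothetical example: a 10 m diameter cup sustaining a 50 kPa under-pressure
force = suction_holding_force(10.0, 50e3)
print(f"{force / 1e6:.1f} MN")  # 3.9 MN
```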
See also
Troll A platform
Offshore concrete structure
Suction caisson
References
Oceanography
Civil engineering
Ship anchors | Vacuum-anchor | Physics,Chemistry,Engineering,Environmental_science | 204 |
15,030,082 | https://en.wikipedia.org/wiki/TMC6 | Transmembrane channel-like protein 6 is a protein that in humans is encoded by the TMC6 gene. In vivo, TMC6 and its homolog TMC8, interact and form a complex with the zinc transporter 1 (SLC30A1) and localize mostly to the endoplasmic reticulum, but also to the nuclear membrane and Golgi apparatus.
Inactivating mutations in TMC6 or TMC8 have been implicated as the genetic cause of the rare skin disorder epidermodysplasia verruciformis, which is characterized by abnormal susceptibility to human papillomaviruses (HPVs) of the skin resulting in the growth of scaly macules and papules, particularly on the hands and feet.
References
Further reading | TMC6 | Chemistry | 168 |
17,857,659 | https://en.wikipedia.org/wiki/Computer%20Control%20Company | Computer Control Company, Inc. (1953–1966), informally known as 3C, was a pioneering minicomputer company known for its DDP-series (Digital Data Processor) computers, notably:
DDP-24 24-bit (1963)
DDP-224 24-bit (1965)
DDP-116 16-bit (1965)
DDP-124 24-bit (1966) using monolithic ICs
It was founded in 1953 by Dr. Louis Fein, the physicist who had earlier designed the Raytheon RAYDAC computer.
The company moved to Framingham, Massachusetts, in 1959. Prior to the introduction of the DDP series it developed a range of digital logic modules, initially based on vacuum tubes.
In 1966 it was sold to Honeywell, Inc. As the Computer Controls division of Honeywell, it introduced further DDP-series computers, and was a $100,000,000 business until 1970 when Honeywell purchased GE's computer division and discontinued development of the DDP line.
In a 1970 essay, Murray Bookchin used the DDP-124 as his example of computer progress:
One of the oddest of the DDP series was the DDP 19, of which only three were built on custom order for the U.S. Weather Service. Its architecture was based on a 19-bit word consisting of six octal digits (18 bits) plus a sign bit, which in arithmetic operations could produce the unusual value of "negative zero". One of these machines was donated by the government to the Milwaukee Area Technical College in 1972, together with a drum-based line printer and dual Ampex magnetic tape drives. It was used by a limited number of students as an "extra credit project device" for the next 2–3 years, after which it was scrapped to make space for newer equipment. The fate of the other two units is unknown.
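The "negative zero" arises because sign-magnitude representation encodes the sign separately from the magnitude, so a zero magnitude can carry either sign. A small sketch (the exact DDP 19 bit layout is an assumption here, not documented in this article):

```python
def to_sign_magnitude_19(value):
    """Encode an integer in 19-bit sign-magnitude form:
    one sign bit followed by an 18-bit magnitude."""
    if abs(value) >= 1 << 18:
        raise OverflowError("magnitude needs more than 18 bits")
    sign = 1 if value < 0 else 0
    return (sign << 18) | abs(value)

# Two distinct encodings of zero: "positive zero" and "negative zero".
pos_zero = to_sign_magnitude_19(0)
neg_zero = (1 << 18) | 0      # sign bit set, magnitude zero
print(pos_zero == neg_zero)   # → False
```

Two's-complement machines avoid this by giving zero a single representation, at the cost of an asymmetric range.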
Notes
References
External links
Oral history interview with Louis Fein at Charles Babbage Institute, University of Minnesota, Minneapolis. Fein discusses establishing computer science as an academic discipline at Stanford Research Institute (SRI) as well as contacts with the University of California—Berkeley, the University of North Carolina, Purdue, International Federation for Information Processing and other institutions.
The 3C Legacy Project
1953 establishments in Massachusetts
1966 disestablishments in Massachusetts
1966 mergers and acquisitions
American companies established in 1953
American companies disestablished in 1966
Companies based in Framingham, Massachusetts
Computer companies established in 1953
Computer companies disestablished in 1966
Defunct computer companies based in Massachusetts
Defunct computer companies of the United States
Defunct computer hardware companies
Defunct computer systems companies
Electronics companies established in 1953
Minicomputers | Computer Control Company | Technology | 535 |
67,129,616 | https://en.wikipedia.org/wiki/Lichtheimiaceae | Lichtheimiaceae is a family of fungi in the order Mucorales. The family was circumscribed in 2013 after a molecular phylogenetic analysis helped delineate a new family structure for the Mucorales.
Genera
Circinella – 11 spp.
Dichotomocladium – 5 spp.
Fennellomyces – 4 spp.
Lichtheimia – 7 spp.
Phascolomyces – 1 sp.
Rhizomucor – 6 spp.
Thamnostylum – 4 spp.
Thermomucor – 1 sp.
Zychaea – 1 sp.
References
Zygomycota
Fungus families
Taxa described in 2009 | Lichtheimiaceae | Biology | 139 |
11,252,689 | https://en.wikipedia.org/wiki/Pitteway%20triangulation | In computational geometry, a Pitteway triangulation is a point set triangulation in which the nearest neighbor of any point p within the triangulation is one of the vertices of the triangle containing p.
Alternatively, it is a Delaunay triangulation in which each internal edge crosses its dual Voronoi diagram edge. Pitteway triangulations are named after Michael Pitteway, who studied them in 1973. Not every point set supports a Pitteway triangulation. When such a triangulation exists it is a special case of the Delaunay triangulation, and consists of the union of the Gabriel graph and convex hull.
History
The concept of a Pitteway triangulation was introduced by Pitteway (1973). A later description reads: "An optimal partition is one in which, for any point within any triangle, that point lies at least as close to one of the vertices of that triangle as to any other data point." The name "Pitteway triangulation" came into use in subsequent work.
Counterexamples
Not every point set supports a Pitteway triangulation. For instance, any triangulation of a regular pentagon includes a central isosceles triangle such that a point p near the midpoint of one of that triangle's sides has its nearest neighbor outside the triangle.
Relation to other geometric graphs
When a Pitteway triangulation exists, the midpoint of each edge interior to the triangulation must have the two edge endpoints as its nearest neighbors, for any other neighbor would violate the Pitteway property for nearby points in one of the two adjacent triangles. Thus, a circle having that edge as diameter must be empty of vertices, so the Pitteway triangulation consists of the Gabriel graph together with the convex hull of the point set. Conversely, when the Gabriel graph and convex hull together form a triangulation, it is a Pitteway triangulation.
Since all Gabriel graph and convex hull edges are part of the Delaunay triangulation, a Pitteway triangulation, when it exists, is unique for points in general position and coincides with the Delaunay triangulation. However point sets with no Pitteway triangulation will still have a Delaunay triangulation.
In the Pitteway triangulation, each edge pq either belongs to the convex hull or crosses the edge of the Voronoi diagram that separates the cells containing p and q. In some references this property is used to define a Pitteway triangulation, as a Delaunay triangulation in which all internal Delaunay edges cross their dual Voronoi edges. However, a Pitteway triangulation may include convex hull edges that do not cross their duals.
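The empty diametral-circle condition used above can be tested by brute force: a point r lies strictly inside the circle with diameter pq exactly when the angle prq is obtuse, i.e. when (p - r) · (q - r) < 0. A small sketch computing the Gabriel graph this way (O(n³), for illustration only):

```python
from itertools import combinations

def is_gabriel_edge(p, q, points):
    """Edge pq is a Gabriel edge iff no other point lies strictly inside
    the circle with pq as diameter, i.e. (p-r)·(q-r) >= 0 for all r."""
    for r in points:
        if r == p or r == q:
            continue
        if (p[0]-r[0])*(q[0]-r[0]) + (p[1]-r[1])*(q[1]-r[1]) < 0:
            return False
    return True

def gabriel_graph(points):
    return [(p, q) for p, q in combinations(points, 2)
            if is_gabriel_edge(p, q, points)]

# Unit square plus its centre: the two diagonals fail the test because
# the centre lies inside their diametral circles, so the Gabriel graph
# keeps the four sides and the four spokes to the centre.
pts = [(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)]
edges = gabriel_graph(pts)
```

When the resulting Gabriel graph together with the convex hull triangulates the point set, that triangulation is the Pitteway triangulation.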
Notes
References
Triangulation (geometry) | Pitteway triangulation | Mathematics | 582 |
26,722,761 | https://en.wikipedia.org/wiki/Michael%20Elowitz | Michael B. Elowitz is a biologist and professor of Biology, Bioengineering, and Applied Physics at the California Institute of Technology, and investigator at the Howard Hughes Medical Institute. In 2007 he was the recipient of the Genius grant, better known as the MacArthur Fellows Program for the design of a synthetic gene regulatory network, the Repressilator, which helped initiate the field of synthetic biology. He was the first to show how inherently random effects, or 'noise', in gene expression could be detected and quantified in living cells, leading to a growing recognition of the many roles that noise plays in living cells. His work in Synthetic Biology and Noise represent two foundations of the field of Systems Biology. Since then, his laboratory has contributed to the development of synthetic biological circuits that perform a range of functions inside cells, and revealed biological circuit design principles underlying epigenetic memory, cell fate control, cell-cell communication, and multicellular behaviors.
Career
His laboratory studies the dynamics of genetic circuits in individual living cells using synthetic biology, time-lapse microscopy, and mathematical modeling, with a particular focus on the way in which cells make use of noise to implement behaviors that would be difficult or impossible without it.
Recently, his lab has expanded their approaches beyond bacteria to include eukaryotic and mammalian cells.
Life
Elowitz grew up in Los Angeles, California, where he attended the humanities magnet at Alexander Hamilton High School (Los Angeles).
He studied Physics and graduated with a B.A. from the University of California, Berkeley in 1992, and from Princeton University with a Ph.D. in 1999.
In 1997–1998, he spent one year at the European Molecular Biology Laboratory at Heidelberg.
Afterwards, he was a postdoctoral fellow at the Rockefeller University in New York City.
While working as a graduate student at Princeton he co-authored songs such as Sunday at the Lab with Uri Alon.
Awards
2023 Clarivate citation laureate
2022 Elected to the US National Academy of Sciences
2019 Raymond and Beverly Sackler International Prize in Biophysics
2016 Elected to the European Molecular Biology Organization (EMBO)
2016 Fellow, American Academy for the Advancement of Science.
2015 Elected to the American Academy of Arts and Sciences
2011 HFSP Nakasone Award
2008 Presidential Early Career Award in Science and Engineering
2008 Discover Magazine "Top 20 under 40"
2007 MacArthur Fellows Program
2006 Packard Fellow
2004 Technology Review TR100 List of Top Innovators
2004 Searle Scholar
2003 Burroughs Welcome Fund Career Award at the Scientific Interface
Peer-reviewed publications
Li P, Markson JS, Wang S, Chen S, Vachharajan V, Elowitz MB, "Morphogen gradient reconstitution reveals Hedgehog pathway design principles," Science (2018).
Bintu L, Yong J, Antebi YE, McCue K, Kazuki Y, Uno N, Oshimura M, Elowitz MB, "Dynamics of epigenetic regulation at the single-cell level," Science (2016).
Lin Y, Sohn CH, Dalal CK, Cai L, Elowitz MB, Combinatorial gene regulation by modulation of relative pulse timing, Nature, 2015
References
External links
"Hacking DNA", IEEE Spectrum, Paul McFedries, October 2009
"Michael B Elowitz", Scientific Commons
"Michael Elowitz", Science blog
21st-century American biologists
MacArthur Fellows
California Institute of Technology faculty
Howard Hughes Medical Investigators
Living people
Year of birth missing (living people)
University of California, Berkeley alumni
Princeton University alumni
Synthetic biologists
Systems biologists
Fellows of the American Academy of Arts and Sciences
Recipients of the Presidential Early Career Award for Scientists and Engineers | Michael Elowitz | Biology | 747 |
54,572,000 | https://en.wikipedia.org/wiki/NGC%207073 | NGC 7073 is a spiral galaxy located about 230 million light-years away in the constellation of Capricornus. NGC 7073 was discovered by astronomer Albert Marth on August 25, 1864.
See also
List of NGC objects (7001–7840)
NGC 7019
References
External links
Spiral galaxies
Capricornus
7073
66847
Astronomical objects discovered in 1864
Markarian galaxies | NGC 7073 | Astronomy | 83 |
37,032,551 | https://en.wikipedia.org/wiki/1%20Geminorum | 1 Geminorum (1 Gem) is a star in the constellation Gemini. Its apparent magnitude is 4.15.
In the 18th century, John Flamsteed's star catalogue numbered the brighter stars, by constellation, from west to east, and 1 Geminorum was the first star listed in Gemini. It is also listed in the Bright Star Catalogue as star 2134, usually designated HR 2134, with HR standing for the Harvard Revised catalogue, the precursor to the Bright Star Catalogue.
In 1948, 1 Geminorum was discovered to be a close double star while it was being used to focus a telescope for observations of the planet Uranus. From initial observations of the spectrum, it was estimated that both components were giants and that the secondary was itself double. Radial velocity variations had been found in 1906, but only one set of absorption lines could be detected in the spectrum and it was not possible to calculate a reliable orbit until 1976.
1 Geminorum is a triple star system 0.17 degree south of the ecliptic. The primary component of the system, 1 Geminorum A, is a K-type red clump giant star around twice the mass of the Sun. Component A is orbited by a spectroscopic binary pair of stars at a separation of about 9.4 astronomical units every 4877.6 days. The two secondary components, 1 Geminorum Ba and Bb, have not been resolved, but regular periodic Doppler shifts in the spectrum indicate orbital motion of a binary pairing consisting of an F-type subgiant and a solar-mass star that may be G-type, separated by approximately 0.1234 astronomical units.
In 1893, Sherburne Wesley Burnham reported a 14th-magnitude companion to the naked-eye star, but it is a distant background object.
1 Geminorum is listed as a suspected variable star with an amplitude of 0.05 magnitudes.
References
Gemini (constellation)
K-type giants
Geminorum, 01
Spectroscopic binaries
Triple star systems
Durchmusterung objects
041116
028734
2134
Suspected variables
F-type subgiants
G-type main-sequence stars | 1 Geminorum | Astronomy | 440 |
39,922,027 | https://en.wikipedia.org/wiki/Cyclodisparity | In vision science, cyclodisparity is the difference in the rotation angle of an object or scene viewed by the left and right eyes. Cyclodisparity can result from the eyes' torsional rotation (cyclorotation) or can be created artificially by presenting to the eyes two images that need to be rotated relative to each other for binocular fusion to take place.
Human and animal vision
The eyes and visual system can compensate for cyclodisparity up to a certain point; if the cyclodisparity exceeds a threshold, the images cannot be fused, resulting in stereoblindness and in double vision in subjects who otherwise have full stereo vision.
When a human subject is presented with images that have artificial cyclodisparity, cyclovergence is evoked, that is, a motor response of the eye muscles that rotates the two eyes in opposite directions, thereby reducing cyclodisparity. Visually-induced cyclovergence of up to 8 degrees has been observed in normal subjects. Furthermore, up to about 8 degrees can usually be compensated by purely sensory means, that is, without physical eye rotation. This means that the normal human observer can achieve binocular image fusion in presence of cyclodisparity of up to approximately 16 degrees.
Cyclodisparity due to images having been rotated inward can be compensated better when the gaze is directed downwards, and cyclodisparity due to an outward rotation can be compensated better when the gaze is directed upwards. A proposed explanation for this phenomenon is that the motor system is coordinated in such a way that the eyes perform a torsional movement to reduce the size of the search zones and thus the computational load required for solving the correspondence problem. The resulting cyclovergence at near gaze is smaller than the cyclovergence predicted by Listing's law.
Video processing and computer vision
Active camera torsion can be used in machine and computer vision for several purposes. For instance, camera torsion can be used to make improved use of the search range over which matching detectors or stereo matching algorithms operate, or to make a 3D slanted surface appear frontoparallel for further stereo processing.
For image compression purposes, images with cyclodisparity are advantageously encoded using global motion compensation using a rotational motion model.
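Fitting a rotational motion model to point correspondences between the two views has a simple closed form: the least-squares rotation angle about the origin follows from the summed cross and dot products of corresponding points (the correspondence step itself is assumed here):

```python
import math

def estimate_cyclodisparity(left_pts, right_pts):
    """Least-squares rotation angle (degrees, about the origin) that
    maps left-view points onto their right-view correspondences."""
    num = sum(xl * yr - yl * xr
              for (xl, yl), (xr, yr) in zip(left_pts, right_pts))
    den = sum(xl * xr + yl * yr
              for (xl, yl), (xr, yr) in zip(left_pts, right_pts))
    return math.degrees(math.atan2(num, den))

# Demo: rotate a few points by 5 degrees and recover the angle.
theta = math.radians(5.0)
left = [(1.0, 0.0), (0.0, 2.0), (-1.5, 0.5)]
right = [(x * math.cos(theta) - y * math.sin(theta),
          x * math.sin(theta) + y * math.cos(theta)) for x, y in left]
print(round(estimate_cyclodisparity(left, right), 3))  # → 5.0
```

The estimated angle can then drive either active camera torsion or a compensating image rotation before stereo matching.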
References
Stereoscopy
Vision
Computer vision | Cyclodisparity | Engineering | 494 |
46,743,384 | https://en.wikipedia.org/wiki/Peerio | Peerio was a cross-platform end-to-end encrypted application that provided secure messaging, file sharing, and cloud file storage. Peerio was available as an application for iOS, Android, macOS, Windows, and Linux. Peerio (Legacy) was originally released on 14 January 2015, and was replaced by Peerio 2 on 15 June 2017. The app is discontinued.
Messages and user files stored on the Peerio cloud were protected by end-to-end encryption, meaning the data was encrypted in a way that could not be read by third parties, such as Peerio itself or its service providers. Security was provided by a single permanent key-password, which in Peerio was called an "Account Key".
The company, Peerio Technologies Inc., was founded in 2014 by Vincent Drouin. The intent behind Peerio was to provide a security program that is easier to use than the PGP standard.
Peerio was acquired by WorkJam, a digital workplace solutions provider, on January 13, 2019.
Features
Peerio allowed users to share encrypted messages and files in direct messages or groups that Peerio called "rooms".
Peerio "rooms" were offered as a team-oriented group chat, allowing administrative functionality to add and remove other users from the group chat.
Peerio allowed users to store encrypted files online, offering limited cloud storage for free with optional paid upgrades.
Peerio messages and files persisted between logins and across devices, unlike ephemeral encrypted messaging apps, which do not retain message or file history between logins or on different devices.
Peerio supported application based multi-factor authentication.
Peerio allowed users to share animated GIFs.
Security
End-to-End Encryption
Peerio utilized end-to-end encryption and it was applied by default to all message and file data. End-to-end encryption is intended to encrypt data in a way that only the sender and intended recipients are able to decrypt, and thus read, the data.
Taken from Peerio's privacy policy:
"Peerio utilizes the NaCl (pronounced "salt") cryptographic framework, which itself uses the following cryptographic primitives:
X25519 for public key agreement over elliptic curves.
ed25519 for public key signatures.
XSalsa20 for encryption and confidentiality.
Poly1305 for ensuring the integrity of encrypted data.
Additionally, Peerio uses scrypt for memory-hard key derivation and BLAKE2s is used for various hashing operations.
For in-transit encryption, Peerio Services used Transport Layer Security (TLS) with best-practice cipher suite configuration, including support for perfect forward secrecy (PFS). You can view a detailed and up-to-date independent review of Peerio's TLS configuration on SSL Labs."
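Two of the primitives named in the policy, scrypt and BLAKE2s, are available in Python's standard library, which allows a hedged sketch of passphrase-based key derivation of the kind an "Account Key" scheme implies. The parameters and salt below are illustrative assumptions, not Peerio's actual choices:

```python
import hashlib

def derive_key(account_key: str, salt: bytes) -> bytes:
    """Memory-hard key derivation with scrypt (illustrative parameters:
    n=2**14, r=8, p=1 uses about 16 MB of memory)."""
    return hashlib.scrypt(account_key.encode(), salt=salt,
                          n=2**14, r=8, p=1, dklen=32)

key = derive_key("correct horse battery staple", b"per-user-salt")
# BLAKE2s hash of the key, e.g. as a short verification fingerprint.
fingerprint = hashlib.blake2s(key).hexdigest()
```

Memory-hardness makes brute-forcing the passphrase expensive even on GPUs, which is why scrypt is preferred over plain hash iteration for deriving keys from human-chosen secrets.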
Code Audits
Prior to Peerio's initial release, the software was audited by the German security firm Cure53, which found only non-security related bugs, all of which were fixed prior to the applications release.
According to Peerio's website, the application was also audited in March 2017 by Cure53.
Open Source
Peerio was partly open source and published code publicly on GitHub
Bug Bounty
Peerio offered a bug bounty, offering cash rewards for anyone who reports security vulnerabilities.
Peerio (Legacy)
The first iteration of Peerio, Peerio (Legacy), was developed by Nadim Kobeissi and Florencia Herra-Vega and was released on 14 January 2015 and was closed on 8 January 2018.
Peerio (Legacy) was a free application, available for Android, iOS, Windows, macOS, Linux, and as a Google Chrome extension. It offered end-to-end encryption, which is enabled by default. The encryption used the miniLock open-source security standard, which was also developed by Kobeissi.
On 15 June 2017, Peerio 2 was launched as the successor to Peerio (Legacy). According to the company's blog, Peerio 2 is purported to be a "radical overhaul" of the original application's core technology. Claimed benefits in comparison to Peerio (Legacy) include increased speed, support for larger file transfers (up to 7000GB), and a re-designed user interface. Peerio also stated an added focus towards businesses looking for encrypted team collaboration software.
References
Cryptographic software
Internet privacy software
Privacy software
Open standards | Peerio | Mathematics | 926 |
6,231,567 | https://en.wikipedia.org/wiki/Nucleotidase | A nucleotidase is a hydrolytic enzyme that catalyzes the hydrolysis of a nucleotide into a nucleoside and a phosphate.
A nucleotide + H2O = a nucleoside + phosphate
For example, it converts adenosine monophosphate to adenosine, and guanosine monophosphate to guanosine.
Nucleotidases have an important function in digestion in that they break down consumed nucleic acids.
They can be divided into two categories, based upon the end that is hydrolyzed:
5'-nucleotidase – NT5C, NT5C1A, NT5C1B, NT5C2, NT5C3
3'-nucleotidase – NT3
5'-Nucleotidases cleave off the phosphate from the 5' end of the sugar moiety. They can be classified into various kinds depending on their substrate preferences and subcellular localization. Membrane-bound 5'-nucleotidases display specificity toward adenosine monophosphates and are involved predominantly in the salvage of preformed nucleotides and in signal transduction cascades involving purinergic receptors. Soluble 5'-nucleotidases are all known to belong to the haloacid dehalogenase superfamily of enzymes, which are two domain proteins characterised by a modified Rossman fold as the core and variable cap or hood. The soluble forms are further subclassified based on the criterion mentioned above. mdN and cdN are mitochondrial and cytosolic 5'-3'-pyrimidine nucleotidases. cN-I is a cytosolic nucleotidase(cN) characterized by its affinity toward AMP as its substrate. cN-II is identified by its affinity toward either IMP or GMP or both. cN-III is a pyrimidine 5'-nucleotidase. A new class of nucleotidases called IMP-specific 5'-nucleotidase has been recently defined. 5'-Nucleotidases are involved in varied functions like cell–cell communication, nucleic acid repair, purine salvage pathway for the synthesis of nucleotides, signal transduction, membrane transport, etc.
References
Further reading
External links
EC 3.1.3
Chemical pathology | Nucleotidase | Chemistry,Biology | 501 |
4,042,270 | https://en.wikipedia.org/wiki/Steering%20ratio | Steering ratio refers to the ratio between the turn of the steering wheel (in degrees) or handlebars and the turn of the wheels (in degrees).
The steering ratio is the ratio of the number of degrees of turn of the steering wheel to the number of degrees the wheel(s) turn as a result. In motorcycles, delta tricycles and bicycles, the steering ratio is always 1:1, because the handlebars are fixed directly to the front wheel. A steering ratio of x:y means that a turn of the steering wheel of x degrees causes the wheel(s) to turn y degrees. In most passenger cars, the ratio is between 12:1 and 20:1. For example, if one and a half turns of the steering wheel, 540 degrees, causes the inner and outer wheels to turn 35 and 30 degrees respectively (they differ because of Ackermann steering geometry), the ratio is 540:((35+30)/2) = 16.6:1.
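The arithmetic in the example above can be sketched directly:

```python
def steering_ratio(wheel_turn_deg, inner_deg, outer_deg):
    """Steering ratio using the average of the inner and outer road
    wheel angles (Ackermann geometry makes the two angles differ)."""
    return wheel_turn_deg / ((inner_deg + outer_deg) / 2)

# 1.5 turns (540 deg) of the steering wheel; road wheels at 35 and 30 deg.
print(round(steering_ratio(540, 35, 30), 1))  # → 16.6
```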
A higher steering ratio means that the steering wheel is turned more to get the wheels turning, but it will be easier to turn the steering wheel. A lower steering ratio means that the steering wheel is turned less to get the wheels turning, but it will be harder to turn the steering wheel. Larger and heavier vehicles will often have a higher steering ratio, which will make the steering wheel easier to turn. If a truck had a low steering ratio, it would be very hard to turn the steering wheel. In normal and lighter cars, the wheels are easier to turn, so the steering ratio doesn't have to be as high. In race cars the ratio is typically very low, because the vehicle must respond to steering input much faster than in normal cars. The steering wheel is therefore harder to turn.
Variable-ratio steering
Variable-ratio steering is a system that uses different ratios on the rack in a rack and pinion steering system. At the center of the rack, the space between the teeth are smaller and the space becomes larger as the pinion moves down the rack. In the middle of the rack there is a higher ratio and the ratio becomes lower as the steering wheel is turned towards lock. That makes the steering less sensitive when the steering wheel is close to its center position and makes it harder for the driver to over steer at high speeds. As the steering wheel is turned towards lock, the wheels begin to react more to steering input.
Steering quickener
A steering quickener is used to modify the steering ratio of a factory-installed steering system, which in turn modifies the response time and overall handling of the vehicle. With a steering quickener, the driver can turn the steering wheel through a smaller angle than with the factory-installed system alone to turn the vehicle through the same arc. On the other hand, the steering effort needed increases greatly. If the automobile is equipped with power steering, overloading the power-steering pump can also be a concern.
References
Engineering ratios
Automotive steering technologies
Motorcycle dynamics | Steering ratio | Mathematics,Engineering | 620 |
13,530,209 | https://en.wikipedia.org/wiki/Work%20measurement | Work measurement is the application of techniques which is designed to establish the time for an average worker to carry out a specified manufacturing task at a defined level of performance. It is concerned with the duration of time it takes to complete a work task assigned to a specific job. It means the time taken to complete one unit of work or operation it also that the work should completely complete in a complete basis under certain circumstances which take into account of accountants time
Usage
Work measurement helps to uncover non-standardization that exist in the workplace and non-value adding activities and waste. A work has to be measured for the following reasons:
To discover and eliminate lost or ineffective time.
To establish standard times for performance measurement.
To measure performance against realistic expectations.
To set operating goals and objectives.
Techniques
Analytical estimating
Predetermined motion time systems
Standard data system
Synthesis from elemental data
Time study
Work sampling
Purpose
Work Measurement is a technique for establishing a Standard Time, which is the required time to perform a given task, based on time measurements of the work content of the prescribed method, with due consideration for fatigue and for personal and unavoidable delays.
Method study is the principal technique for reducing the work involved, primarily by eliminating unnecessary movement on the part of material or operatives and by substituting good methods for poor ones. Work measurement is concerned with investigating, reducing and subsequently eliminating ineffective time, that is time during which no effective work is being performed, whatever the cause.
Work measurement, as the name suggests, provides management with a means of measuring the time taken in the performance of an operation or series of operations in such a way that ineffective time is shown up and can be separated from effective time. In this way its existence, nature and extent become known where previously they were concealed within the total.
Uses
Revealing existing causes of ineffective time through study, important though it is, is perhaps less important in the long term than the setting of sound time standards, since these will continue to apply as long as the work to which they refer continues to be done. They will also show up any ineffective time or additional work which may occur once they have been established.
In the process of setting standards it may be necessary to use work measurement:
To compare the efficiency of alternative methods. Other conditions being equal, the method which takes the least time will be the best method.
To balance the work of members of teams, in association with multiple activity charts, so that, as nearly as possible, each member has a task taking an equal time to perform.
To determine, in association with man and machine multiple activity charts, the number of machines an operative can run.
The time standards, once set, may then be used:
To provide information on which the planning and scheduling of production can be based, including the plant and labour requirements for carrying out the programme of work and the utilisation of available capacity.
To provide information on which estimates for tenders, selling prices and delivery promises can be based.
To set standards of machine utilisation and labour performance which can be used for any of the above purposes and as a basis for incentive schemes.
To provide information for labour-cost control and to enable standard costs to be fixed and maintained.
It is thus clear that work measurement provides the basic information necessary for all the activities of organising and controlling the work of an enterprise in which the time element plays a part. Its uses in connection with these activities will be more clearly seen when we have shown how the standard time is obtained.
Techniques of work measurement
The following are the principal techniques by which work measurement is carried out:
Time study
Activity sampling
Predetermined motion time systems
Synthesis from standard data
Estimating
Analytical estimating
Comparative estimating
Of these techniques we shall concern ourselves primarily with time study, since it is the basic technique of work measurement. Some of the other techniques either derive from it or are variants of it.
Time study
Time Study consists of recording times and rates of work for elements of a specified job carried out under specified conditions to obtain the time necessary to carry out a job at a defined level of performance.
In this technique the job to be studied is timed with a stopwatch, rated, and the Basic Time calculated.
Requirements for effective time study
The requirements for effective time study are:
a. Co-operation and goodwill
b. Defined job
c. Defined method
d. Correct normal equipment
e. Quality standard and checks
f. Experienced qualified motivated worker
g. Method of timing
h. Method of assessing relative performance
i. Elemental breakdown
j. Definition of break points
k. Recording media
One of the most critical requirements for time study is that of elemental breakdown. There are some general rules concerning the way in which a job should be broken down into elements. They include the following. Elements should be easily identifiable, with definite beginnings and endings so that, once established, they can be repeatedly recognised. These points are known as the break points and should be clearly described on the study sheet. Elements should be as short as can be conveniently timed by the observer. As far as possible, elements – particularly manual ones – should be chosen so that they represent naturally unified and distinct segments of the operation.
Performance rating
Time Study is based on a record of observed times for doing a job together with an assessment by the observer of the speed and effectiveness of the worker in relation to the observer's concept of Standard Rating.
This assessment is known as rating, the definition being given in BS 3138 (1979):
The numerical value or symbol used to denote a rate of working.
Standard rating is also defined (in this British Standard BS3138) as:
"The rating corresponding to the average rate at which qualified workers will naturally work, provided that they adhere to the specified method and that they are motivated to apply themselves to their work. If the standard rating is consistently maintained and the appropriate relaxation is taken, a qualified worker will achieve standard performance over the working day or shift."
Industrial engineers use a variety of rating scales, and one which has achieved wide use is the British Standards Rating Scale which is a scale where 0 corresponds to no activity and 100 corresponds to standard rating. Rating should be expressed as 'X' BS.
Below is an illustration of the Standard Scale:
Rating walking pace
0 no activity
50 very slow
75 steady
100 brisk (standard rating)
125 very fast
150 exceptionally fast
The basic time for a task, or element, is the time for carrying out an element of work or an operation at standard rating.
Basic time = observed time x (observed rating / standard rating)
The result is expressed in basic minutes – BMs.
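On the BS scale, where standard rating is 100, the calculation can be sketched as follows (the element time and rating below are invented illustrative values):

```python
def basic_time(observed_time, observed_rating, standard_rating=100):
    """Basic time = observed time x (observed rating / standard rating)."""
    return observed_time * observed_rating / standard_rating

# An element observed at 0.50 min while the worker was rated 125 BS
# has a basic time of 0.625 basic minutes (BMs).
print(basic_time(0.50, 125))  # 0.625
```

A worker rated above 100 BS produces more basic minutes than the clock time observed; one rated below 100 produces fewer.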
The work content of a job or operation is defined as: basic time + relaxation allowance + any allowance for additional work – e.g. that part of contingency allowance which represents work.
Standard time
Standard time is the total time in which a job should be completed at standard performance i.e. work content, contingency allowance for delay, unoccupied time and interference allowance, where applicable.
Allowance for unoccupied time and for interference may be important for the measurement of machine-controlled operations, but they do not always appear in every computation of standard time. Relaxation allowance, on the other hand, has to be taken into account in every computation, whether the job is a simple manual one or a very complex operation requiring the simultaneous control of several machines. A contingency allowance will probably figure quite frequently in the compilation of standard times; it is therefore convenient to consider the contingency allowance and relaxation allowance, so that the sequence of calculation which started with the completion of observations at the workplace may be taken right through to the compilation of standard time.
Contingency allowance
A contingency allowance is a small allowance of time which may be included in a standard time to meet legitimate and expected items of work or delays, the precise measurement of which is uneconomical because of their infrequent or irregular occurrence.
Relaxation allowance
A relaxation allowance is an addition to the basic time to provide the worker with the opportunity to recover from physiological and psychological effects of carrying out specified work under specified conditions and to allow attention to personal needs. The amount of the allowance will depend on the nature of the job. Examples are:
Personal 5–7%
Energy output 0–10%
Noisy 0–5%
Conditions 0–100%
e.g. Electronics 5%
Other allowances
Other allowances include process allowance, which covers time when an operator is prevented from continuing with their work, although ready and waiting, by the process or machine requiring further time to complete its part of the job. A final allowance is that of interference, which is included whenever an operator has charge of more than one machine and the machines are subject to random stoppage. In normal circumstances the operator can only attend to one machine, and the others must wait for attention. Such a machine is then subject to interference, which increases the machine cycle time.
It is now possible to obtain a complete picture of the standard time for a straightforward manual operation.
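A worked sketch of that build-up (all figures invented; allowances expressed as percentages of the basic time, as described above):

```python
def standard_time(basic_minutes, relaxation_pct, contingency_pct=0.0):
    """Standard time = basic time + relaxation allowance + contingency allowance.

    Allowances are expressed as percentages of the basic time.
    """
    allowance = basic_minutes * (relaxation_pct + contingency_pct) / 100.0
    return basic_minutes + allowance

# e.g. 4.0 basic minutes, 12% relaxation allowance, 3% contingency allowance
print(standard_time(4.0, relaxation_pct=12, contingency_pct=3))  # 4.6
```

For machine-controlled operations, unoccupied time and interference allowances would be added on top of this in the same way, where applicable.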
Activity Sampling
Activity sampling is a technique in which a large number of instantaneous observations are made over a period of time of a group of machines, processes or workers. Each observation records what is happening at that instant and the percentage of observations recorded for a particular activity or delay is a measure of the percentage of time during which the activity or delay occurs.
The advantages of this method are that
It is capable of measuring many activities that are impractical or too costly to be measured by time study.
One observer can collect data concerning the simultaneous activities of a group.
Activity sampling can be interrupted at any time without effect.
The disadvantages are that
It is quicker and cheaper to use time study on jobs of short duration.
It does not provide elemental detail.
The type of information provided by an activity sampling study is:
The proportion of the working day during which workers or machines are producing.
The proportion of the working day used up by delays. The reason for each delay must be recorded.
The relative activity of different workers and machines.
To determine the number of observations in a full study the following equation is used (at the 95 per cent confidence level):

N = 4P(100 - P) / L²

Where:
N = number of observations required
P = expected percentage occurrence of the activity or delay
L = required limit of accuracy (±%)
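A quick sketch of this sample-size calculation, assuming the standard 95 per cent confidence form N = 4P(100 - P) / L²:

```python
import math

def observations_required(p_pct, limit_pct):
    """N = 4P(100 - P) / L^2, rounded up to a whole number of observations."""
    return math.ceil(4 * p_pct * (100 - p_pct) / limit_pct ** 2)

# A machine expected to be idle 25% of the time, measured to +/-5% accuracy:
print(observations_required(25, 5))  # 300
```

A pilot study is normally used to obtain the initial estimate of P, and N is then recomputed as observations accumulate.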
Predetermined motion time system
A predetermined motion time system is a work measurement technique whereby times established for basic human motions (classified according to the nature of the motion and the conditions under which it is made) are used to build up the time for a job at a defined level of performance.
The systems are based on the assumption that all manual tasks can be analysed into basic motions of the body or body members. They were compiled as a result of a very large number of studies of each movement, generally by a frame-by-frame analysis of films of a wide range of subjects, men and women, performing a wide variety of tasks.
The first generation of PMT systems, such as MTM1, was very finely detailed, involving much analysis and producing extremely accurate results. This attention to detail was both a strength and a weakness: for many potential applications the quantity of detailed analysis was not necessary, and prohibitively time-consuming. In these cases "second generation" techniques, such as Simplified PMTS, Master Standard Data, Primary Standard Data and MTM2, could be used with advantage and no great loss of accuracy. For even speedier application, where some detail could be sacrificed, a "third generation" technique such as Basic Work Data or MTM3 could be used.
Synthesis
Synthesis is a work measurement technique for building up the time for a job at a defined level of performance by totaling element times obtained previously from time studies on other jobs containing the elements concerned, or from synthetic data.
Synthetic data is the name given to tables and formulae derived from the analysis of accumulated work measurement data, arranged in a form suitable for building up standard times, machine process times, etc. by synthesis.
Synthetic times are increasingly being used as a substitute for individual time studies in the case of jobs made up of elements which have recurred a sufficient number of times in jobs previously studied to make it possible to compile accurate representative times for them.
Estimating
The technique of estimating is the least refined of all those available to the work measurement practitioner. It consists of an estimate of total job duration (or in common practice, the job price or cost). This estimate is made by a craftsman or person familiar with the craft. It normally embraces the total components of the job, including work content, preparation and disposal time, any contingencies etc., all estimated in one gross amount.
Analytical estimating
This technique introduces work measurement concepts into estimating. In analytical estimating the estimator is trained in elemental breakdown, and in the concept of standard performance. The estimate is prepared by first breaking the work content of the job into elements, and then utilising the experience of the estimator (normally a craftsman) the time for each element of work is estimated – at standard performance. These estimated basic minutes are totalled to give a total job time, in basic minutes. An allowance for relaxation and any necessary contingency is then made, as in conventional time study, to give the standard time.
Comparative estimating
This technique has been developed to permit speedy and reliable assessment of the duration of variable and infrequent jobs, by estimating them within chosen time bands. Limits are set within which the job under consideration will fall, rather than in terms of precise Standard Minute or Allowed Minute values. It is applied by comparing the job to be estimated with jobs of similar work content, and using these similar jobs as "bench marks" to locate the new job in its relevant time band, known as a Work Group.
Uses
To balance the work of members of teams, in association with the multiple activity charts, so that, as far as possible, each member has tasks taking an equal time.
To compare the efficiency of alternative methods. Other conditions being equal, the method which takes the least time will be the best method.
To determine, in association with man and machine multiple activity charts, the number of machines a worker can run.
Balayla model – work measurement in the service sector
The work measurement concept has evolved from the manufacturing world but has not yet been fully adapted to the global shift to the service sector. Certain factors create inherent difficulties in determining standard times for labor allocation in service jobs: (a) wide variation in Time Between Arrivals and Service Performance Time; (b) the difficulty of assessing the damage done to the organization by long customer Waiting Times for service. These difficulties make it hard to calculate the Break-Even Point between raising worker output, which minimizes labor costs but increases customer Waiting Times and reduces service quality, and adding workers, which does the opposite.
Dr. Isaac Balayla and Professor Yissachar Gilad from the Technion, Israel, developed the Balayla (Balaila) Model, which overcomes most of the above-mentioned difficulties by taking a multi-domain approach: 1) the model deploys a series of indicators for a correlation between output and Waiting Times, with indicator values affected by the service level of urgency; 2) the model determines the best Break-Even Point by comparing the operational cost of an additional worker with the economic benefit caused by the decrease in Waiting Times. Thus, the model finds the best balance between worker output and service quality.
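The Balayla model itself is not reproduced here. As a generic illustration of the same break-even idea, a simple M/M/c queueing sketch (Erlang C) can compare staffing cost against an assumed monetary cost placed on customer waiting time; all figures below are invented:

```python
from math import factorial

def erlang_c(arrival_rate, service_rate, servers):
    """P(wait > 0) in an M/M/c queue (Erlang C formula)."""
    a = arrival_rate / service_rate              # offered load
    rho = a / servers                            # utilisation; must be < 1
    num = a**servers / factorial(servers) / (1 - rho)
    den = sum(a**k / factorial(k) for k in range(servers)) + num
    return num / den

def mean_wait(arrival_rate, service_rate, servers):
    """Expected time in queue (Wq) for an M/M/c queue."""
    pw = erlang_c(arrival_rate, service_rate, servers)
    return pw / (servers * service_rate - arrival_rate)

def best_staffing(arrival_rate, service_rate, wage, wait_cost, max_servers=20):
    """Pick the server count minimising wage cost + customer waiting cost."""
    best = None
    for c in range(1, max_servers + 1):
        if arrival_rate / (c * service_rate) >= 1:
            continue  # queue unstable with this few servers
        total = (c * wage
                 + wait_cost * arrival_rate * mean_wait(arrival_rate, service_rate, c))
        if best is None or total < best[1]:
            best = (c, total)
    return best

# 30 customers/hour, 10 served/hour per worker, wage 20/h, waiting valued at 50/h
print(best_staffing(30, 10, wage=20, wait_cost=50))
```

The trade-off is visible in the output: below the optimum, waiting cost dominates; above it, idle staff do.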
References
Balayla I.(2012) A manpower allocation model for service jobs (Balayla Model), IJSMET International Journal of Service Science, Management, Engineering, and Technology, 3(2), April–June 2012, pp 13–34.
Industrial engineering | Work measurement | Engineering | 3,129 |
44,386,524 | https://en.wikipedia.org/wiki/Manta%20Matcher | Manta Matcher is a global online database for manta rays.
Creation
It is one of the Wildbook Web applications developed by Wild Me, a 501(c)(3) not-for-profit organization in the United States, and was created in partnership with Andrea Marshall of the Marine Megafauna Foundation.
Manta rays have unique spot patterning on their undersides, which allows for individual identification. Scuba divers around the world can photograph mantas and upload their manta identification photographs to the Manta Matcher website, supporting global research and conservation efforts.
Identification of rays
Manta Matcher is pattern-matching software that eases researcher workload; key spot-pattern features are extracted using a scale-invariant feature transform (SIFT) algorithm, which can cope with complications presented by highly variable spot patterns and low-contrast photographs.
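Downstream of SIFT extraction, candidate matches between two photographs are typically found by nearest-neighbour search with a ratio test. The following numpy sketch illustrates that generic step with synthetic descriptors; it is not Manta Matcher's actual code:

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.75):
    """Match each descriptor in desc_a to its nearest neighbour in desc_b,
    keeping only matches whose nearest distance is clearly better than the
    second-nearest (Lowe's ratio test)."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dists)[:2]
        if dists[j] < ratio * dists[k]:
            matches.append((i, int(j)))
    return matches

rng = np.random.default_rng(0)
a = rng.normal(size=(5, 128))                               # descriptors from one photo
b = np.vstack([a + rng.normal(scale=0.01, size=a.shape),    # same spots, slight noise
               rng.normal(size=(5, 128))])                  # unrelated clutter
print(ratio_test_matches(a, b))  # pairs (i, i) for the true matches
```

Many true matches between two spot patterns suggest the same individual; few or none suggest different animals.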
Purpose and research supported
This citizen science tool is free to use by researchers worldwide. Manta Matcher represents a global initiative to centralize manta ray sightings and facilitate research on these vulnerable species through collaborative studies, including the cross-referencing of regional databases.
Manta Matcher has already supported research that contributed to the listing of reef mantas (Manta alfredi) on Appendix 1 of the Convention on Migratory Species in November 2014.
References
External links
Myliobatidae
Online databases
Biodiversity databases | Manta Matcher | Biology,Environmental_science | 271 |
60,705,768 | https://en.wikipedia.org/wiki/Register%20of%20Antarctic%20Marine%20Species | The Register of Antarctic Marine Species, also known as RAMS, is a taxonomic database that provides a list of marine species found in the Southern Ocean surrounding Antarctica.
Its purpose is to provide authoritative and comprehensive information on the diversity of marine life in the region, which provides a reference point for marine science, research, conservation and sustainable management.
The database includes marine species found on the sea floor, in the water column, and around sea-ice. RAMS is a regionally-focused database within the World Register of Marine Species.
References
Database | Register of Antarctic Marine Species | Biology | 108 |
57,615,200 | https://en.wikipedia.org/wiki/Undecylprodigiosin | Undecylprodigiosin is an alkaloid produced by some Actinomycetes bacteria. It is a member of the prodiginines group of natural products and has been investigated for potential antimalarial activity.
Natural sources
Undecylprodigiosin is a secondary metabolite found in some Actinomycetes, for example Actinomadura madurae, Streptomyces coelicolor and Streptomyces longisporus.
Production
Biosynthesis
The biosynthesis of undecylprodigiosin starts with the PCP apoprotein, which is transformed into the holoprotein using acetyl-CoA and a PPTase; adenylation then occurs, utilizing L-proline and ATP. The resulting molecule is oxidized by a dehydrogenase enzyme. Elongation by decarboxylative condensation with malonyl-CoA is followed by another decarboxylative condensation with L-serine using the α-oxamine synthase (OAS) domain. The compound is then cyclized, oxidized with a dehydrogenase and methylated with SAM to give the 4-methoxy-2,2′-bipyrrole-5-carboxaldehyde (MBC) intermediate, which reacts with 2-undecylpyrrole (2-UP) to give undecylprodigiosin.
Laboratory
The first total synthesis of the undecylprodigiosin was published in 1966, confirming the chemical structure. As with the biosynthesis, the key intermediate was MBC.
Uses
As with other prodiginines, the compound has been investigated for its pharmaceutical potential as anticancer, immunosuppressant, or antimalarial agent.
References
Streptomyces
Alkaloids
Pyrroles | Undecylprodigiosin | Chemistry | 386 |
1,219,391 | https://en.wikipedia.org/wiki/Lampworking | Lampworking is a type of glasswork in which a torch or lamp is used to melt the glass. Once in a molten state, the glass is formed by blowing and shaping with tools and hand movements. It is also known as flameworking or torchworking, as the modern practice no longer uses oil-fueled lamps. Although lack of a precise definition for lampworking makes it difficult to determine when this technique was first developed, the earliest verifiable lampworked glass is probably a collection of beads thought to date to the fifth century BCE. Lampworking became widely practiced in Murano, Italy in the 14th century. As early as the 17th century, itinerant glassworkers demonstrated lampworking to the public. In the mid-19th century lampwork technique was extended to the production of paperweights, primarily in France, where it became a popular art form, still collected today. Lampworking differs from glassblowing in that glassblowing uses a furnace as the primary heat source, although torches are also used.
Early lampworking was done in the flame of an oil lamp, with the artist blowing air into the flame through a pipe or using foot-powered bellows. Most artists today use torches that burn either propane or natural gas, or in some countries butane, for the fuel gas, mixed with either air or pure oxygen as the oxidizer. Many hobbyists use MAPP gas in portable canisters for fuel and some use oxygen concentrators as a source of continuous oxygen.
Lampworking is used to create artwork, including beads, figurines, marbles, small vessels, sculptures, Christmas tree ornaments, and much more. It is also used to create scientific instruments as well as glass models of animal and botanical subjects.
Glass selection
Lampworking can be done with many types of glass, but the most common are soda-lime glass and lead glass, both called "soft glass", and borosilicate glass, often called "hard glass". Leaded glass tubing was commonly used in the manufacture of neon signs, and many US lampworkers used it in making blown work. Some colored glass tubing that was also used in the neon industry was used to make small colored blown work, and colored glass rod, of compatible lead and soda-lime glasses, was used to ornament both clear and colored tubing. The use of soft glass tubing has been fading, owing partly to environmental concerns and health risks but mainly to the adoption of borosilicate glass by most lampworkers, especially since the introduction of colored glasses compatible with clear borosilicate.
Soft glass is sometimes useful because it melts at lower temperatures, but it does not react well to rapid temperature changes as borosilicate glass does. Soft glass expands and contracts much more than hard glass when heated/cooled, and must be kept at an even temperature while being worked, especially if the piece being made has sections of varying thickness. If thin areas cool below the "stress point", shrinking can cause a crack. Hard glass, or borosilicate, shrinks much less, so is more forgiving. Borosilicate is just like regular silicate glass (SiO2), but it has a more flexible molecular structure from being doped with boron.
Glasses to be fused together must be selected for compatibility with each other, both chemically (more of a concern with soft glass than borosilicate) and in terms of coefficient of thermal expansion (COE) [CTE is also used for Coefficient of Thermal Expansion.] Glasses with incompatible COE, mixed together, can create powerful stresses within a finished piece as it cools, cracking or violently shattering the piece. Chemically, some colors can react with each other when melted together. This may cause desirable effects in coloration, metallic sheen, or an aesthetically pleasing "web effect". It also can cause undesirable effects such as unattractive discoloration, bubbling, or devitrification.
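The size of the mismatch stress can be estimated with a simple sketch. The figures below are illustrative assumptions only (COE quoted in 10^-7/°C as glass suppliers do, round-number elastic constants, and a hypothetical cooling range):

```python
def mismatch_stress_mpa(coe_a, coe_b, delta_t, youngs_gpa=70, poisson=0.2):
    """Rough biaxial stress (MPa) from a COE mismatch over a temperature drop.

    coe_a, coe_b: expansion coefficients in 1e-7/degC (supplier convention).
    delta_t:      cooling range in degC over which the stress is locked in.
    """
    delta_alpha = abs(coe_a - coe_b) * 1e-7
    return youngs_gpa * 1e3 * delta_alpha * delta_t / (1 - poisson)

# Soda-lime (~96) fused to borosilicate (~33), cooling ~450 degC below the set point:
print(round(mismatch_stress_mpa(96, 33, 450), 1))
```

The result, on the order of hundreds of MPa, far exceeds the practical strength of annealed glass, which is why such a combination cracks or shatters; two glasses of matching COE give essentially zero stress by the same formula.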
Borosilicate glass is considered more forgiving to work with, as its lower COE makes it less apt to crack during flameworking than soda-lime glass or lead glass. However, it has a narrower working temperature range than the soft glasses, has fewer available colors, and is considerably more expensive. Also, its working range is at higher temperatures than the soft glasses, requiring the use of oxygen/gas flames instead of air/gas. In addition to producing a hotter flame, the use of pure oxygen allows more control over the flame's oxidizing or reducing properties, which is necessary because some coloring chemicals in borosilicate glass react with any remaining oxygen in the flame either to produce the desired final color or to discolor if extra oxygen is present.
Lead glass has the broadest working range of the three glasses, and holds its heat better when it is out of the flame. This gives one more time to adjust one's work when blowing hollow forms. It is also less likely to crack while being worked in making pieces of variable thickness than is soda-lime glass.
Types of glass
Raw materials
Glass is available in a wide range of shapes, sizes, and colors for the lampworker. Most lampworkers use glass produced by commercial manufacturers in the shape of rod, tube, sheet or frit. Glass rods are manufactured in various sizes, as small as 1 mm and as large as 50 mm or more. Glass rod is also made in different shapes, such as square, triangle or half-round. Glass tubes are also offered in a range of diameters, colors, and profiles, such as scalloped, twisted or lined tubing. Crushed glass particles that have been sifted to specific sizes are known as frit or powder. Sheet glass is produced in varying thicknesses and can be cut and shaped before being worked in the flame. The glass industry has seen steady growth in the past few decades that continues to expand the types and forms of glass available to lampworkers.
Soda lime glass
The most popular glass for lampworking is soda-lime glass, which is available pre-colored. Soda-lime glass is the traditional mix used in blown furnace glass, and lampworking glass rods were originally hand-drawn from the furnace and allowed to cool for use by lampworkers. Today soda-lime, or "soft" glass is manufactured globally, including Italy, Germany, Czech Republic, China and America.
Lead
In addition to soda lime glass, lampworkers can use lead glass. Lead glasses are distinguished by their lower viscosity, heavier weight, and somewhat greater tolerance for COE mismatches.
Borosilicate
Lampworkers often use borosilicate glass, a very hard glass requiring greater heat. Borosilicate originated as laboratory glass, but it has recently become available in color to the studio artist from a number of companies. At one time, soft (soda lime and lead) and hard (borosilicate) glasses had distinctly different looking palettes, but demand by soft-glass artists for the silver strike colors, and the development of the bright, cadmium based 'crayon colors' by Glass Alchemy in the boro line, has diminished the distinctions between them.
Quartz
Lampworkers can also work with fused quartz tube and rod. A hydrogen and oxygen torch is used to work quartz, as it requires higher temperatures than other types of glass. Quartz is resistant to extreme temperature variations and chemical corrosion, making it especially useful in scientific applications. Quartz has recently gained popularity in artistic glass work but is only available in a few limited colors.
Tools
Tools for lampworking are similar to those used in glassblowing. Graphite is frequently used for the working surfaces of lampworking tools because of its ability to withstand high temperatures, low coefficient of friction, and resistance to sticking to the molten glass. Steel is used where greater strength is required. Some molds may be made from fruitwoods, but primarily wood is used for handles of lampworking tools. Brass may be used for working surfaces where a higher coefficient of friction is desired.
Bench burner – A torch that is fixed to the bench which provides a stationary flame.
Hand torch – The hand torch allows for more maneuverability of the flame; it is commonly used on glassworking lathes, where there is reduced maneuverability of the piece.
Kiln – the kiln is used to garage and anneal the glass, protecting the piece from thermal shock and relieving thermal stress.
Marver – flat surfaces used to roll glass upon in order to shape, smooth or consolidate applied decoration, typically made of graphite or steel.
Paddle – A graphite or metal marver attached to a handle
Reamer – A piece of graphite or brass on a handle used to enlarge holes.
Blowhose/swivel assembly – A hose, usually latex, is connected to the blowpipe via a hollow swivel, allowing the lampworker to blow into hollow glass forms while rotating them.
Tungsten pick – The extreme temperature resistance of tungsten makes it ideal for raking (dragging glass around on the surface), or to bore a hole through the glass.
Shears – Steel shears are used to cut the hot glass.
Claw grabbers – Metal tool found in various configurations which allows the hot glass to be securely held and rotated, commonly used for finishing pieces after they have been removed from the blowpipe or pontil.
Lathe – The glassworking lathe allows for precise rotation and manipulation of glass. They are especially suited for larger scale work that may be difficult or tiring to turn by hand.
General methods of beadmaking
After designing a piece, a lampworker must plan how to construct it. Once ready to begin, the lampworker slowly introduces glass rod or tubing into the flame to prevent cracking from thermal shock. The glass is heated until molten and wound around a specially coated steel mandrel, forming the base bead. The coating is an anti-fluxing bead release agent that will allow the bead to be easily removed from the mandrel, either a clay-based substance or boron nitride. It can then be embellished or decorated using a variety of techniques and materials. All parts of the workpiece must be kept at similar temperatures lest they shatter. Once finished, the piece must be annealed in a kiln to prevent cracking or shattering.
Annealing, in glass terms, is heating a piece until its temperature reaches a stress-relief point; that is, a temperature at which the glass is still too hard to deform, but is soft enough for internal stresses to ease. The piece is then allowed to heat-soak until its temperature is uniform throughout. The time necessary for this depends on the type of glass and thickness of the thickest section. The piece is then slowly cooled at a predetermined rate until its temperature is below a critical point, (between 900 and 1000 degrees Fahrenheit), at which it cannot generate internal stresses, and then can safely be dropped to room temperature. This relieves the internal stresses, resulting in a piece which should last for many years. Glass that has not been annealed may crack or shatter due to a seemingly minor temperature change or other shock.
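As an illustration only (the temperatures, soak time and ramp rate below are invented; real values depend on the glass and come from the manufacturer), an annealing schedule of the kind described can be generated programmatically:

```python
def anneal_schedule(anneal_f, strain_f, soak_min, cool_rate_f_per_min):
    """Soak at the annealing point, ramp down slowly past the strain point,
    then let the piece cool freely to room temperature.

    Returns a list of (step name, target temperature in F, duration in minutes).
    """
    steps = [("soak", anneal_f, soak_min)]
    ramp_minutes = (anneal_f - strain_f) / cool_rate_f_per_min
    steps.append(("controlled cool", strain_f, ramp_minutes))
    steps.append(("free cool to room temperature", 70, None))
    return steps

# Hypothetical soft-glass bead: anneal 960 F, strain point 860 F, 30 min soak, 2 F/min
for step in anneal_schedule(960, 860, 30, 2):
    print(step)
```

Thicker sections need longer soaks and slower ramps, since the whole piece must pass through the critical range at a uniform temperature.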
Additional techniques for lampworked beads
Beads can be sandblasted, or they can be faceted, using lapidary techniques. "Furnace glass" beads, which are more elaborate versions of the old Seed bead technique, are widely made today. Chevron beads are multi-layer beads once exclusively made using hot-shop techniques to produce the original tubing; but now some lampworkers make similar designs on their torches before lapping the ends to reveal the various layered colors. As torches get bigger and more powerful, the cross-over between lampworking and furnace glass continues to increase.
Fuming is a technique that has been developed and popularized by Bob Snodgrass since the 70's and 80's. Fuming consists of heating silver or gold in the flame, so that the metals vaporize or "fume" microscopically thin layers of particles onto the glass. These particles stick to the hot glass surface changing its color with interesting effects. Silver turns clear glass into a yellowish color, giving shades of blues and greens when backed with a dark color, while gold turns clear glass shades of pinks and reds. The precious metal coating becomes increasingly visible the more the glass is fumed.
Brief history of modern lampworked beads
Lampworked beads (with the exception of Asian and African beadmaking) have generally been for the last four hundred years or so the province of Italian, and, later, Bohemian lampworkers who kept the techniques secret. Thirty or so years ago, some American artists started experimenting with the form. Their early efforts, by today's standards, were crude, as there was almost no documentation, and none of the modern tools. However, they shared their information, and some of them started small businesses developing tools, torches and other equipment.
This group eventually formed the basis for the International Society of Glass Beadmakers.
See also
Glass beadmaking
Scientific glassblowing
Leopold and Rudolf Blaschka
References
Further reading
Dunham, Bandhu. Contemporary Lampworking: A practical guide to shaping glass in the flame. Volumes I-III. Prescott, AZ: Salusa Glassworks, Inc., 2003, 2010.
Goldschmidt, Eric and Beth Hylen. “The History of Lampworking.” June 3, 2019. International Flameworking Conference paper given March 24, 2017, at Salem Community College. Accessed January 29, 2020.
Lierke, Rosemarie. “Early History of Lampworking – Some Facts, Findings and Theories, Part 1: Kunckel's description of lampworking in the Arts Vitraria Experimentalis.” Glastechnische Berichte 63, no. 12 (1990): 363–369.
Lierke, Rosemarie. “Early History of Lampworking – Some Facts, Findings and Theories, Part 2: Lampworking techniques in antiquity.” Glastechnische Berichte 65, no. 12 (1992): 341–348.
Zecchin, Sandro, and Cesare Toffolo. Il vetro a lume = Lampworking. Translated by Pamela Jean Santini and Christina Cawthra. Volumes 1–3. Venezia: Grafiche 2am editore, 2018–2019.
External links
Glass Art Society
American Scientific Glassblowers Society
The International Society of Glass Beadmakers
International Flameworking Conference
Beadwork
Glass art
Glass production
Articles containing video clips | Lampworking | Materials_science,Engineering | 2,992 |
16,824,055 | https://en.wikipedia.org/wiki/True%20polar%20wander | True polar wander is a solid-body rotation (or reorientation) of a planet or moon with respect to its spin axis, causing the geographic locations of the north and south poles to change, or "wander". In rotational equilibrium, a planetary body has the largest moment of inertia axis aligned with the spin axis, with the smaller two moments of inertia axes lying in the plane of the equator. This is because planets are not rigid - they form a rotational bulge which affects the inertia tensor of the body. Internal or external processes that change the distribution of mass (internal or external loadings) disrupt the equilibrium and true polar wander will occur: the planet or moon will rotate as a rigid body (reorient in space) to realign the largest moment of inertia axis with the spin axis. Because stabilization of rotation by the rotational bulge is only transient, even relatively small loads can result in a significant reorientation.
If the body is near the steady state but with the angular momentum not exactly lined up with the largest moment of inertia axis, the pole position will oscillate (Chandler wobble). Weather and water movements can also induce small changes. These subjects are covered in the article Polar motion.
Description in the context of Earth
The mass distribution of the Earth is not spherically symmetric, and the Earth has three different moments of inertia. The axis around which the moment of inertia is greatest is closely aligned with the rotation axis (the axis going through the geographic North and South Poles). The other two axes are near the equator. That is similar to a brick rotating around an axis going through its shortest dimension (a vertical axis when the brick is lying flat). On Earth and most other planets, the difference in the polar and equatorial moments of inertia is dominated by the formation of a rotational bulge - excess mass around the equator (flattening) caused by rotational deformation (planetary bodies are not rigid - they deform in response to rotation and its changes).
Internal and external processes such as mantle convection, deglaciation, formation of volcanoes, or large meteorite impacts can disrupt rotational equilibrium and cause bodies to move as a whole relative to their rotation axis (reorient). Most natural loadings are small when compared to the rotational bulge and hence change the direction of the main axis of inertia only slightly. However, since the rotational bulge eventually readjusts when the spin axis moves within the body, the stabilization by the rotational bulge disappears on geological timescales and the equilibrium orientation of the planet is given by its dominant loads. Throughout true polar wander, the spin axis lies close to the main axis of inertia of the body, and the time evolution of the latter is driven by gradual readjustment of the rotational bulge. On short timescales and for rapid loadings, the secular motion of the pole is accompanied by free (or Chandler) wobbling.
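The equilibrium described above can be sketched numerically: the axis the spin pole migrates toward is the eigenvector of the inertia tensor with the largest eigenvalue. The tensor values below are made up to represent a nearly spherical body with a small off-diagonal perturbation from an internal mass anomaly:

```python
import numpy as np

# Made-up inertia tensor (arbitrary units) for a nearly spherical body
# perturbed by an internal load.
I = np.array([[1.00, 0.00, 0.02],
              [0.00, 1.01, 0.00],
              [0.02, 0.00, 1.05]])

eigvals, eigvecs = np.linalg.eigh(I)   # ascending eigenvalues for a symmetric tensor
max_axis = eigvecs[:, np.argmax(eigvals)]
print(max_axis)  # the axis of greatest moment of inertia
```

Here the perturbation tilts the largest-moment axis slightly away from z, so the body as a whole would reorient until the spin axis realigns with it, which is precisely true polar wander.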
Such a reorientation changes the latitudes of most points on the Earth by an amount that depends on how far they are from the axis near the equator that does not move. In the context of tidally locked bodies, also the longitude of surface features can change in time and the dynamics of reorientation can be more rapid.
Examples
Cases of true polar wander have occurred several times in the course of the Earth's history. It has been suggested that east Asia moved south due to true polar wander by 25° between about 174 and 157 million years ago. Mars, Europa, and Enceladus are also believed to have undergone true polar wander, in the case of Europa by 80°.
Uranus' extreme inclination with respect to the ecliptic is not an instance of true polar wander (a shift of the body relative to its rotational axis), but instead a large shift of the rotational axis itself. This axis shift is believed to be the result of a catastrophic series of impacts that occurred billions of years ago.
Distinctions and delimitations
Polar wander should not be confused with precession, in which the axis of rotation changes its orientation in space, so that the North Pole points toward a different star. There are also smaller and faster variations in the axis of rotation, known as nutation. Precession is caused by the gravitational attraction of the Moon and Sun, occurs continuously, and proceeds at a much faster rate than polar wander. It does not change latitudes; it changes the apparent positions of the stars as seen from Earth.
True polar wander must be distinguished from continental drift, in which different parts of the Earth's crust move in different directions because of circulation in the mantle. Because of plate tectonics, the polar wander as seen from an individual continent may differ from the true polar wander (see also apparent polar wander).
The effect should also not be confused with geomagnetic reversal, the well-documented repeated reversal of the Earth's magnetic field.
Tectonic plate reconstructions
Paleomagnetism is used to create tectonic plate reconstructions by finding the paleolatitude of a particular site. This paleolatitude is affected both by true polar wander and by plate tectonics. To reconstruct plate tectonic histories, geologists must obtain a number of dated paleomagnetic samples. Because true polar wander is a global phenomenon but tectonic motions are specific to each plate, multiple dates allow them to separate the tectonic and true polar wander signals.
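A common step in such reconstructions is converting a measured magnetic inclination into a paleolatitude with the geocentric axial dipole relation tan(I) = 2 tan(λ). A minimal sketch (the function name is illustrative, not from any particular library):

```python
import math

# Geocentric axial dipole (GAD) relation used in paleomagnetic reconstructions:
# tan(inclination) = 2 * tan(paleolatitude).
def paleolatitude_deg(inclination_deg):
    return math.degrees(math.atan(math.tan(math.radians(inclination_deg)) / 2.0))

# A site that recorded a magnetic inclination of about 49.1 degrees
# acquired its magnetization near 30 degrees latitude:
print(round(paleolatitude_deg(49.107), 1))  # -> 30.0
```

The sign of the inclination (and hence of the latitude) distinguishes the hemispheres only if the polarity of the field at the time is known, which is one reason multiple dated samples are needed.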
See also
Apparent polar wander
Axial tilt
Cataclysmic pole shift hypothesis (includes discussion of various historical conjectures involving rapid shift of the poles)
Polar motion
True polar wander on Mars
References
Geodesy
Geodynamics
Paleomagnetism | True polar wander | Mathematics | 1,178 |
335,634 | https://en.wikipedia.org/wiki/Turret%20%28architecture%29 | In architecture, a turret is a small circular tower, usually notably smaller than the main structure, that projects outwards from a wall or corner of that structure. Turret also refers to the small towers built atop larger tower structures.
Etymology
The word turret originated around the year 1300 from touret, which meant “small tower rising from a city wall, castle, or other larger building.” Touret came from the Old French term torete, the diminutive form of tour, meaning “tower.” Tour dates back to the Latin word turris, which also means “tower.”
There is a record from 1862 of turret being used to mean “low, flat gun tower on a warship.” Around this time, the word split into two separate definitions, with this definition being the one that goes on to describe gun turrets, a separate idea from the architectural element.
Uses
Turrets initially arose on castles out of a defensive need for greater visibility. Since they project outwards from the main structure, turrets gave garrisons a better line of sight to spot possible attackers, and thus a better position from which to mount a defence. Turrets constructed above the rest of a structure served mainly to improve visibility, providing 360-degree views of the surrounding land and allowing enemies to be spotted from farther away. This gave a fortress’s defenders more time to prepare for an attack. Turrets offered greater resilience to attacks and were less vulnerable than free-standing watch towers.
As their defensive necessity lessened, turrets began to be used as ornamental elements instead. Turrets were sometimes used to house staircases, and towards the end of the thirteenth century this use became important. It allowed staircases to occupy smaller spaces without affecting the layout of the structure to which they were attached. Since turrets project outward from a structure, they drew attention, and more ornamentation was concentrated on them than on the rest of the facade.
Structure
Turrets could vary in size, although they all shared the appearance of small towers, either built into walls or atop larger towers. They projected outward from the structure they were incorporated into, which contributed greatly to the uses described above. Turrets do not extend down to the ground like full-sized towers. When built into walls, turrets are generally found at the corners of structures where two walls meet, although they are sometimes found in the middle of a wall. Since turrets projected outward from a structure, they had to be supported either by weight-bearing corbels (the traditional method) or by cantilevering, which restricted how large a turret could be built. Turrets were expensive to build, as hoisting stones high above the ground to construct them was highly laborious. It is thought that many were timber-framed and clad in stone, which would have reduced the weight to be supported by corbels or cantilevers and lowered the cost of construction. The top of a turret could be finished with a pointed roof or another type of apex, or might have had crenellations, such as in the image above.
Turrets on homes
In the modern day, turrets are most commonly found on homes. These turrets are still towers that project outwardly from the main structure, not extending down to the ground. Residential turrets were greatly popularized in the Queen Anne residential style, and can often be found on a variety of Victorian and Queen Anne home designs today. Some residential turrets are designed to be open-air balconies as well. Turrets can help to bring in more natural light and are often used to create more space in a home. These elements make a property more interesting to prospective buyers, and homes with a turret generally appraise higher than those without one. However, turrets usually increase the construction cost of a home, as they are more difficult to frame and support than more common elements.
Gallery
See also
Bartizan, an overhanging, wall-mounted turret found particularly on French and Spanish fortifications between the early 14th and the 16th century. They returned to prominence in the 19th century with their popularity in Scottish baronial style.
Bay window
Oriel window
Turret (Hadrian's Wall)
References
Architectural elements
Fortification (architectural elements)
Castle architecture | Turret (architecture) | Technology,Engineering | 855 |
57,654,857 | https://en.wikipedia.org/wiki/Chloroeremomycin | Chloroeremomycin is a member of the glycopeptide family of antibiotics, such as vancomycin. The molecule is a non-ribosomal polypeptide that has been glycosylated. It is composed of seven amino acids and three saccharide units. Although chloroeremomycin has never been used in human medicine, oritavancin, a semi-synthetic derivative of chloroeremomycin, has full FDA approval.
Chloroeremomycin is a type of glycopeptide antibiotic and works by blocking the construction of a cell wall. Chloroeremomycin is naturally produced by Amycolatopsis orientalis.
History
Chloroeremomycin was discovered by Eli Lilly in the 1980s. In the 1990s, researchers at Eli Lilly developed biphenyl-chloroeremomycin, now known as oritavancin, as a functionalized derivative of chloroeremomycin to combat rising antibacterial resistance to vancomycin. The chloroeremomycin gene cluster was sequenced by van Wageningen et al. in 1998. After the publication, many groups expressed the genes and conducted experiments to understand how chloroeremomycin and, by extension, vancomycin are biosynthesized.
Structure
Chloroeremomycin is composed of seven amino acids (three non-proteinogenic, and four proteinogenic) and three saccharide units. From N-terminus to C-terminus, the order is: Me-L-Leu, L-Tyr, D-Asn, D-4-hydroxyphenylglycine (HPG), L-HPG, D-Tyr, and D-3,5-dihydroxyphenylglycine (DHPG). When referring to specific amino acids, this article will reference the amino acid in the order it appears within the heptapeptide. Chloroeremomycin is glycosylated at aa4 with a Glc-(2→α1)-epivancosamine disaccharide and at aa6 with a D-BHT-(→α1)-epivancosamine saccharide.
Some amino acids are modified prior to the completion of the heptapeptide (in cis) and some are modified after the heptapeptide is formed (in trans). During the synthesis of the heptapeptide, the stereocenters of aa3, aa4, aa6, and aa7 are changed from L to D. Both Tyr residues are hydroxylated and chlorinated after the amino acids have been incorporated to the growing polypeptide to form 4-chloro-β-hydroxytyrosine (BHT). The now-BHT residues are then crosslinked to the aa4 HPG through aryl-ether linkages. An aryl-aryl bond is formed between aa5 and aa7 at the aa5-C3 and aa7-C2 positions on the aromatic rings. Finally, the N-terminus Leu is methylated.
In addition to the presence of D-amino acids, the molecule has atropisomer chemistry. The orientations of the chloro-substituted phenyl rings add another aspect of stereochemistry to the molecule.
Biosynthesis
Chloroeremomycin was found to be synthesized by Amycolatopsis orientalis.
Non-ribosomal peptide synthase
The non-ribosomal peptide synthase (NRPS) is encoded by three genes: CepA, CepB, and CepC. CepA links the first three amino acids; CepB adds the fourth to sixth amino acids; CepC adds the last amino acid and includes a thioesterase domain to release the heptapeptide from the NRPS complex. The growing peptide chain is passed through modules for each amino acid. The basic organization of each module is A-PCP-C. The A, or adenylation, region activates the module's amino acid to allow transfer to the PCP, or peptidyl carrier protein, region. The activated amino acid is transferred to a cysteine residue in the PCP region, which anchors the amino acid and prepares it to be added to the polypeptide. The C, or condensation, region attaches the amino acid to the polypeptide. In addition, modules 2, 4, and 5 have E regions that epimerize (switch the stereochemistry of) the added amino acid to produce the correct configuration. Module 7, the last module, has an X and a TE region. The X region is responsible for recruiting several of the tailoring enzymes that perform the reactions (halogenation, glycosylation, methylation, oxidative cross-linking, and hydroxylation) needed to produce chloroeremomycin. Finally, the TE, or thioesterase, region releases chloroeremomycin from the NRPS complex.
Post-peptide modifications
The modification required to biosynthesize mature chloroeremomycin include: oxidative cross-linking of aromatic rings, hydroxylation and chlorination of the two Tyr residues, methylation of Leu, and glycosylation at aa4 and aa6.
The oxidative crosslinks are catalyzed by enzymes OxyA-C. The glycosylations are catalyzed by enzymes GtfA-C (coded by Orf11-13 respectively). The chlorinations are performed by enzymes encoded by Orf10 and 18.
Total synthesis
There is no reported total synthesis of chloroeremomycin, although there are several total syntheses of vancomycin. The structures of vancomycin and chloroeremomycin are very similar, differing only in the glycosylation sites. Vancomycin is glycosylated at aa4 with a (2-beta1)-Glc-vancosamine disaccharide. As mentioned above, chloroeremomycin is glycosylated at aa4 with a (2-beta1)-Glc-epivancosamine disaccharide and at aa6 with a beta1-epivancosamine saccharide.
Pharmacology and chemistry
See also
Glycorandomization
Teixobactin
References
Glycopeptide antibiotics
Halogen-containing natural products
Total synthesis
Drugs developed by Eli Lilly and Company | Chloroeremomycin | Chemistry | 1,383 |
75,184,823 | https://en.wikipedia.org/wiki/History%20sniffing | History sniffing is a class of web vulnerabilities and attacks that allow a website to track a user's web browsing history activities by recording which websites a user has visited and which the user has not. This is done by leveraging long-standing information leakage issues inherent to the design of the web platform, one of the most well-known of which includes detecting CSS attribute changes in links that the user has already visited.
Despite being known about since 2002, history sniffing is still considered an unsolved problem. In 2010, researchers revealed that multiple high-profile websites had used history sniffing to identify and track users. Shortly afterwards, Mozilla and all other major web browsers implemented defences against history sniffing. However, recent research has shown that these mitigations are ineffective against specific variants of the attack and history sniffing can still occur via visited links and newer browser features.
Background
Early browsers such as Mosaic and Netscape Navigator were built on the model of the web being a set of statically linked documents known as pages. In this model, it made sense for the user to know which documents they had previously visited and which they hadn't, regardless of which document was referring to them. Mosaic, one of the earliest graphical web browsers, used purple links to show that a page had been visited and blue links to show pages that had not been visited. This paradigm stuck around and was subsequently adopted by all modern web browsers.
Over the years, the web evolved from its original model of static content towards more dynamic content. In 1995, employees at Netscape added a scripting language, Javascript, to its flagship web browser, Netscape Navigator. This addition allowed developers to add interactivity to a web page by executing Javascript programs as part of the rendering process. However, it also created a new security problem: these Javascript programs could access each other's execution context and sensitive information about the user. As a result, shortly afterwards, Netscape Navigator introduced the same-origin policy. This security measure prevented Javascript from arbitrarily accessing data in a different web page's execution context. However, while the same-origin policy was subsequently extended to cover a large variety of features introduced before its existence, it was never extended to cover hyperlinks, since doing so was perceived to hurt the user's ability to browse the web. This innocuous omission would grow into one of the earliest and best-known forms of history sniffing on the web.
History
One of the first publicly disclosed reports of a history sniffing exploit was made by Andrew Clover from Purdue University in a mailing list post on BUGTRAQ in 2002. The post detailed how a malicious website could use Javascript to determine whether a given link was rendered in a specific colour, thus revealing whether the link had been previously visited. While this was initially thought to be a theoretical exploit with little real-world value, later research by Jang et al. in 2010 revealed that high-profile websites were using this technique in the wild to reveal user browsing data. As a result, multiple lawsuits were filed against the websites that were found to have used history sniffing, alleging violations of the Computer Fraud and Abuse Act of 1986.
In the same year, L. David Baron from Mozilla Corporation developed a defence against the attack that all major browsers would later adopt. The defence included restrictions against what kinds of CSS attributes could be used to style visited links. The ability to add background images and CSS transitions to links was disallowed. Additionally, visited links would be treated identically to standard links, with Javascript application programming interfaces (APIs) that allow the website to query the color of specific elements returning the same attributes for a visited link as those for non-visited links. This ensured malicious websites could not simply infer a person's browsing history by querying the colour changes.
In 2011, research by then-Stanford graduate student Jonathan Mayer found that advertising company Epic Marketplace Inc. had used history sniffing to collect information about the browsing history of users across the web. A subsequent investigation by the Federal Trade Commission (FTC) revealed that Epic Marketplace had used history sniffing code as a part of advertisements in over 24,000 web domains, including ESPN and Papa Johns. The Javascript code allowed Epic Marketplace to track if a user has visited any of over 54,000 domains. The resulting data was subsequently used by Epic Marketplace to categorize users into specific groups and serve advertisements based on the websites the user had visited. As a result of this investigation, the FTC banned Epic Marketplace Inc. from conducting any form of online advertising and marketing for twenty years and was ordered to permanently delete the data it had collected.
Threat model
The threat model of history sniffing relies on the adversary being able to direct the victim to a malicious website entirely or partially under the adversary's control. The adversary can accomplish this by compromising a previously benign web page, by phishing the user into visiting a web page that allows the adversary to load arbitrary code, or by placing a malicious advertisement on an otherwise safe web page. While most history sniffing attacks do not require user interaction, specific variants need users to interact with particular elements, which can be disguised as buttons, browser games, CAPTCHAs, and similar elements.
Modern variants
Despite being partially mitigated in 2010, history sniffing is still considered an unsolved problem. In 2011, researchers at Carnegie Mellon University showed that while the defences proposed by Mozilla were sufficient to prevent most non-interactive attacks, such as those found by Jang et al., they were ineffective against interactive attacks. By showing users overlaid letters, numbers and patterns, which would only reveal themselves if a user had visited a specific website, the researchers were able to trick 307 participants into potentially revealing their browsing history via history sniffing. This was done by presenting the activities in the form of pattern solving problems, chess games and CAPTCHAs.
In 2018, researchers at the University of California, San Diego demonstrated timing attacks that could bypass the mitigations introduced by Mozilla. By abusing the CSS paint API (which allows developers to draw a background image programmatically) and targeting the bytecode cache of the browser, the researchers were able to time the amount of time it took to paint specific links. Thus, they were able to provide probabilistic techniques for identifying visited websites.
Since 2019, multiple history sniffing attacks have been found targeting various newer features browsers provide. In 2020, Sanchez-Rola et al. demonstrated that by measuring the time a server takes to respond to a request with HTTP cookies and then comparing it to how long it took for a server to respond without cookies, a website could perform history sniffing. In 2023, Ali et al. demonstrated that newly introduced browser features could be abused also to perform history sniffing. One particularly notable example highlighted was the fact that a recently introduced feature, the Private Tokens API, introduced under Google's Privacy Sandbox initiative with an intention to prevent user tracking, could allow malicious actors to exfiltrate users browsing data by using techniques similar to those used for cross-site leak attacks.
References
Web security exploits
Internet privacy
Client-side web security exploits | History sniffing | Technology | 1,483 |
42,349,471 | https://en.wikipedia.org/wiki/Dux%20Belgicae%20secundae | The Dux Belgicae secundae ("commander of the second Belgic province") was a senior officer in the army of the Late Roman Empire who was the commander of the limitanei (frontier troops) and of a naval squadron on the so-called Saxon Shore in Gaul.
The office is thought to have been established around 395 AD. At the imperial court, a dux was of the highest class of vir illustris. The Notitia Dignitatum lists for the Gallic part of the Litus Saxonicum ("the Coast of Saxony") two commanders, and their military units, who were charged with securing the coasts of Flanders (Belgica II), of Normandy (Lugdunensis II), and of Brittany (Lugdunensis III), these commanders being the Dux Belgicae secundae and the neighboring Dux Armoricani et Nervicani.
These two commanders were the successors of a single official, the Comes Maritimi Tractus (Commander of the Coastal Regions), who had formerly commanded both the British and the Gallic parts of the Saxon Shore. The two commanders maintained coastal defenses until the mid-5th century. A well-known holder of the office was the Frankish king Childeric I (late 5th century).
History
In the course of the imperial reforms under Emperor Diocletian, new military offices were introduced in Britain and Gaul. At that time the limes (border wall/marker) of the Saxon coast was established on both sides of the English Channel. The castles guarding the heavily exposed sections and estuaries were partially restored or adapted from existing structures. Their garrisons had the task of repelling raiders and impeding the access of invaders to the interior. In the middle of the 4th century, the main responsibility for securing both coasts was placed in a Comes Maritimi Tractus. In 367, an invasion of Britain by several barbarian peoples almost completely wiped out the units of the local provincial forces and killed the coastal commander Nectaridus. His area of responsibility must have been divided thereafter, by 395 at the latest, into three military districts. This was most likely also intended to prevent a military commander from having too many soldiers under his command and thereby being able to start an uprising (such as the usurpation of the British fleet commander Carausius). For the Gallic part of the Saxon coast, two new ducal regions were created, which existed until the early 5th century.
In the final phase of Roman rule over Gaul, Childeric, as civilian administrator and commander of the warrior groups around the town of Tournai in the north of the province, acted as the commander of the Salian Franks. Tournai served as his residence and administrative headquarters. His power was based upon, among other things, the weapon forges there. In Childeric's grave, discovered in 1653, Eastern Roman gold coins, a gold-plated officer's cloak (paludamentum), and a golden onion-knob brooch were found. The coins were interpreted as remuneratio (payment) for services rendered, the brooch as an insignia of the late Roman army.
It is unclear whether Childeric acted merely as a Roman general or independently as a king (rex gloriosissimus); most likely, the two offices had already merged. Childeric probably remained loyal to the late Roman military aristocracy of Gaul. In any case, it was not formal powers that mattered, but power based on command of military resources. This combining of civilian and military offices in his hands suggests that Childeric had a prominent position among barbarian army commanders. He had probably been directly confirmed in his office by the administration of Odoacer in Italy and also by the Eastern Roman imperial court. It is believed that he had precedence over the other federate commanders-in-chief. As rex or princeps he would also have been entitled to bestow religious and secular offices, and the associated titles such as patricius, comes, and dux, on deserving Germanic peoples or Romans in his domain (regnum).
Administrative staff
The officium (administrative staff) of the dux included the following offices:
Princeps ex eodem corpore (chancellor from the ranks of the army)
Numerarii (two accountants)
Commentariensus (legal counsel)
Adiutor (assistant)
Subadiuva (assistant)
Regerendarius (administrator)
Exceptores (secretaries)
Singulares et reliquos officiales (notaries (or bodyguards) and other civil servants)
Forts, officers, and units
In addition to the administrative staff (officium), eight tribunes or prefects and their units were available to the Dux (sub dispositione, "at discretion"):
Equites Dalmatae (no officer stated).
Praefectus classis Sambricae, commander of a flotilla of patrol ships (Navis lusoria) that had been stationed on the Somme since the 4th century. Their bases were at locus Quartensis, or Vicus ad Quantiam (Port d'Etaples, France, north of the Somme estuary), and locus Hornensis (possibly Cap Hornu, Saint-Valery-sur-Somme, France).
Tribunus militum Nerviorum, a prefect for Sarmatian settlers (Praefectus Sarmatarum gentilium, inter Renos et Tambianos secundae provinciae Belgicae), and four prefects that commanded the contingents of Germanic Laeti:
Praefectus laetorum Nerviorum in Fanomantis (modern Famars, Picardie, France)
Praefectus laetorum Batavorum Nemetacensium in Atrabatis (modern Arras, Pas de Calais, France)
Praefectus laetorum Batavorum Contraginnensium in Noviomago
Praefectus laetorum gentilium in Remo et Silvanectas
Their shield emblems are not shown in the Notitia Dignitatum.
The Dux had originally more units under his command. Arnold Hugh Martin Jones identified the origin of some units as being from the Gallic army. They originated from Belgica II. Their names are the same as the well-known cities of this province:
Geminiacenses, a legio comitatenses, (from Geminiacum – modern Liberchies, Hainaut, Belgium); comitatenses – having been assigned to a field army, but without being awarded the higher designation of "palatine" status
Cotoriacenses, a legio comitatenses (from Cotoriacum – West Flanders)
Prima Flavia (Prima Flavia Metis) (a pseudo-comitatenses from Metis)
Unlike the vexillarii of other duces, these units are not shown as being under the command of the Dux Belgicae II. It seems that this province had diminished influence after the destruction of the border units on the Rhine (the Rhine crossing of 406 AD), after which many of their units were transferred to the field army.
See also
Count of the Saxon Shore
References
Further reading
Insignia viri illustris magistri peditum, Occ. V
Heinrich Beck and others (eds): Lexicon of Germanic archeology. Volume 18 de Gruyter, Berlin-New York 2001, , p. 524
Stefanie Dick: Königtum, Barbaren auf dem Thron in: Spektrum der Wissenschaft Spezial/Archäologie - Geschichte - Kultur, Nr. 1/2015, p. 29-30.
Eugen Ewig: Die Merowinger und das Frankenreich, 5. aktualisierte Auflage, Stuttgart 2006, p. 17.
Stephen Johnson: The Roman Forts of the Saxon Shore, 1976; J. C. Mann, in V. A. Maxfield (ed.): The Saxon Shore, 1989, pp. 45–77.
Arnold Hugh Martin Jones: The Later Roman Empire, 284-602. A Social, Economic and Administrative Survey. 2 vols. Johns Hopkins University Press, Baltimore, 1986, (paperback edition).
Dieter Geuenich (ed.): The Franks and the Alemanni to the "Battle of Zuelpich" (496/97). Walter de Gruyter, Berlin 1998, , p. 97
Hans DL Viereck: Die Römische Flotte [The Roman Fleet], Classis Romana. Koehler Verlagsgesellschaft mbH, Hamburg 1996, p. 258 .
External links
The Dux in the Notitia Dignitatum (English)
Saxon Shore | Dux Belgicae secundae | Engineering | 1,790 |
13,037,101 | https://en.wikipedia.org/wiki/Trichloro%28chloromethyl%29silane | Trichloro(chloromethyl)silane is a compound with formula Si(CH2Cl)Cl3.
See also
Organosilicon § Silyl halides
Chlorosilanes
Organochlorides | Trichloro(chloromethyl)silane | Chemistry | 51 |
34,153,797 | https://en.wikipedia.org/wiki/Comparison%20of%20orbital%20rocket%20engines | This page is an incomplete list of orbital rocket engine data and specifications.
Current, upcoming, and in-development rocket engines
Retired and canceled rocket engines
See also
Comparison of orbital launch systems
Comparison of orbital launchers families
Comparison of crewed space vehicles
Comparison of space station cargo vehicles
Comparison of solid-fuelled orbital launch systems
List of space launch system designs
List of orbital launch systems
Notes
References
Spaceflight
Technological comparisons
Outer space lists
Rocket engines | Comparison of orbital rocket engines | Astronomy,Technology | 87 |
41,611,387 | https://en.wikipedia.org/wiki/Wedelia%20calendulacea | Wedelia calendulacea may refer to:
Wedelia calendulacea (L.) Less., an illegitimate name that is a synonym of Sphagneticola calendulacea
Wedelia calendulacea Rich, an unresolved name in the genus Wedelia
References | Wedelia calendulacea | Biology | 61 |
6,921,893 | https://en.wikipedia.org/wiki/Tight%20closure | In mathematics, in the area of commutative algebra, tight closure is an operation defined on ideals in positive characteristic. It was introduced by .
Let R be a commutative Noetherian ring containing a field of characteristic p > 0. Hence p is a prime number.
Let I be an ideal of R. The tight closure of I, denoted by I*, is another ideal of R containing I. The ideal I* is defined as follows.
z ∈ I* if and only if there exists a c ∈ R, where c is not contained in any minimal prime ideal of R, such that c z^q ∈ I^[q] for all q = p^e ≫ 0. If R is reduced, then one can instead consider all q = p^e > 0.
Here I^[q] is used to denote the ideal of R generated by the q-th powers of elements of I, called the q-th Frobenius power of I.
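For monomial ideals, Frobenius powers are easy to compute and to contrast with ordinary powers. The sketch below (illustrative helper names, monomial ideals only) shows that for I = (x, y) the monomial xy lies in I^2 but not in the Frobenius power I^[2] = (x^2, y^2), using the fact that a monomial belongs to a monomial ideal exactly when some generator divides it:

```python
# Monomials are exponent tuples: (2, 0) means x^2 * y^0.
def divides(m, n):
    """Does monomial m divide monomial n?"""
    return all(a <= b for a, b in zip(m, n))

def in_monomial_ideal(mono, generators):
    """A monomial lies in a monomial ideal iff some generator divides it."""
    return any(divides(g, mono) for g in generators)

def frobenius_power(generators, q):
    """I^[q]: generated by the q-th powers of the generators. For a MONOMIAL
    ideal (and for q a power of the characteristic in general) generator
    powers suffice as a generating set."""
    return [tuple(q * e for e in g) for g in generators]

I = [(1, 0), (0, 1)]                 # I = (x, y)
I_sq = [(2, 0), (1, 1), (0, 2)]      # I^2 = (x^2, xy, y^2)
I_frob2 = frobenius_power(I, 2)      # I^[2] = (x^2, y^2)

xy = (1, 1)
print(in_monomial_ideal(xy, I_sq))     # -> True
print(in_monomial_ideal(xy, I_frob2))  # -> False
```

So I^[q] is in general strictly smaller than the ordinary power I^q, which is why the definition of tight closure uses the Frobenius powers.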
An ideal I is called tightly closed if I* = I. A ring in which all ideals are tightly closed is called weakly F-regular (for Frobenius regular). A previous major open question in tight closure was whether the operation of tight closure commutes with localization, and so there is the additional notion of F-regular, which says that all ideals of the ring are still tightly closed in localizations of the ring.
Brenner and Monsky found a counterexample to the localization property of tight closure. However, it is still an open question whether every weakly F-regular ring is F-regular. That is, if every ideal in a ring is tightly closed, is it true that every ideal in every localization of that ring is also tightly closed?
References
Commutative algebra
Ideals (ring theory) | Tight closure | Mathematics | 299 |
13,415,343 | https://en.wikipedia.org/wiki/Regular%20matroid | In mathematics, a regular matroid is a matroid that can be represented over all fields.
Definition
A matroid is defined to be a family of subsets of a finite set, satisfying certain axioms. The sets in the family are called "independent sets". One of the ways of constructing a matroid is to select a finite set of vectors in a vector space, and to define a subset of the vectors to be independent in the matroid when it is linearly independent in the vector space. Every family of sets constructed in this way is a matroid, but not every matroid can be constructed in this way, and the vector spaces over different fields lead to different sets of matroids that can be constructed from them.
A matroid M is regular when, for every field F, M can be represented by a system of vectors over F.
Properties
If a matroid is regular, so is its dual matroid, and so is every one of its minors. Every direct sum of regular matroids remains regular.
Every graphic matroid (and every co-graphic matroid) is regular. Conversely, every regular matroid may be constructed by combining graphic matroids, co-graphic matroids, and a certain ten-element matroid that is neither graphic nor co-graphic, using an operation for combining matroids that generalizes the clique-sum operation on graphs.
The number of bases in a regular matroid may be computed as the determinant of an associated matrix, generalizing Kirchhoff's matrix-tree theorem for graphic matroids.
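For the graphic case, this determinant computation is Kirchhoff's theorem itself. The sketch below (hypothetical helper names; a minimal illustration for small graphs, not a general matroid-basis counter) counts the spanning trees of the complete graph K4:

```python
from itertools import combinations

def det(M):
    # Cofactor expansion along the first row; exact for integer matrices.
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def count_spanning_trees(n, edges):
    # Kirchhoff's theorem: build the Laplacian L = D - A, delete one row and
    # the matching column, and take the determinant of what remains.
    L = [[0] * n for _ in range(n)]
    for u, v in edges:
        L[u][u] += 1
        L[v][v] += 1
        L[u][v] -= 1
        L[v][u] -= 1
    minor = [row[1:] for row in L[1:]]
    return det(minor)

K4 = list(combinations(range(4), 2))  # complete graph on 4 vertices
print(count_spanning_trees(4, K4))    # -> 16, matching Cayley's formula 4^(4-2)
```

The spanning trees of a connected graph are exactly the bases of its graphic matroid, so this count is a basis count.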
Characterizations
The uniform matroid U_{2,4} (the four-point line) is not regular: it cannot be realized over the two-element finite field GF(2), so it is not a binary matroid, although it can be realized over all other fields. The matroid of the Fano plane (a rank-three matroid in which seven of the triples of points are dependent) and its dual are also not regular: they can be realized over GF(2), and over all fields of characteristic two, but not over any other fields. As Tutte showed, these three examples are fundamental to the theory of regular matroids: every non-regular matroid has at least one of them as a minor. Thus, the regular matroids are exactly the matroids that do not have one of the three forbidden minors U_{2,4}, the Fano plane, or its dual.
If a matroid is regular, it must clearly be realizable over the two fields GF(2) and GF(3). The converse is true: every matroid that is realizable over both of these two fields is regular. The result follows from a forbidden minor characterization of the matroids realizable over these fields, part of a family of results codified by Rota's conjecture.
The regular matroids are the matroids that can be defined from a totally unimodular matrix, a matrix in which every square submatrix has determinant 0, 1, or −1. The vectors realizing the matroid may be taken as the rows of the matrix. For this reason, regular matroids are sometimes also called unimodular matroids. The equivalence of regular matroids and unimodular matrices, and their characterization by forbidden minors, are deep results of W. T. Tutte, originally proved by him using the Tutte homotopy theorem. Gerards later published an alternative and simpler proof of the characterization of unimodular matrices by forbidden minors.
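Total unimodularity can be tested directly, if inefficiently, from the definition by enumerating every square submatrix. A brute-force sketch, suitable only for small matrices; the examples are illustrative, with directed-graph incidence matrices being classic totally unimodular cases:

```python
from itertools import combinations

def det(m):
    """Determinant of a small integer matrix by cofactor expansion."""
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(n))

def is_totally_unimodular(m):
    """True if every square submatrix has determinant -1, 0, or 1."""
    rows, cols = len(m), len(m[0])
    for k in range(1, min(rows, cols) + 1):
        for R in combinations(range(rows), k):
            for C in combinations(range(cols), k):
                if det([[m[i][j] for j in C] for i in R]) not in (-1, 0, 1):
                    return False
    return True

# Incidence matrix of the directed path 0 -> 1 -> 2 is totally unimodular...
path = [[1, 0],
        [-1, 1],
        [0, -1]]
# ...while this matrix has a 2x2 submatrix (itself) with determinant 2:
bad = [[1, 1],
       [-1, 1]]
```

The enumeration is exponential in the matrix size, so this is a check of the definition rather than a practical algorithm; polynomial-time recognition follows from Seymour's decomposition theory.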
Algorithms
There is a polynomial time algorithm for testing whether a matroid is regular, given access to the matroid through an independence oracle.
References
Matroid theory | Regular matroid | Mathematics | 753 |
27,420,680 | https://en.wikipedia.org/wiki/Franz%20Wegner | Franz Joachim Wegner (born 15 June 1940) is emeritus professor for theoretical physics at the University of Heidelberg.
Education
Franz Wegner attained a doctorate in 1968 with thesis advisor Wilhelm Brenig at the Technical University Munich with the thesis, "Zum Heisenberg-Modell im paramagnetischen Bereich und am kritischen Punkt" ("On the Heisenberg model within the paramagnetic range and at the critical point").
Subsequently, he held post-doctoral positions at Forschungszentrum Jülich in the group of Herbert Wagner, and at Brown University with Leo Kadanoff. He has been a professor at Heidelberg since 1974.
Research
The emphasis of Wegner's scientific work is statistical physics, in particular the theory of phase transitions and the renormalization group. The eponymous "Wegner exponent" is of fundamental importance for describing corrections to asymptotic scale invariance close to phase transitions. Wegner also introduced foundational lattice gauge theory models. Methods developed from this foundational work are nowadays used intensively in simulations of quantum chromodynamics.
Accolades
Wegner won the Walter Schottky Prize in 1976 for his work on phase transitions and elementary particles. He has also been elected to the Heidelberg Academy of Sciences and won the Max Planck Medal, among other awards and recognitions. He won the Lars Onsager Prize from the American Physical Society in 2015 for his contributions to statistical mechanics.
Selected works of Wegner
Reprinted in Claudio Rebbi (ed.), Lattice Gauge Theories and Monte Carlo Simulations, World Scientific, Singapore (1983), p. 60–73. (Abstract.)
See also
Wegner's scaling function
References
Technical University of Munich alumni
Theoretical physicists
German theoretical physicists
20th-century German physicists
1940 births
Living people
Winners of the Max Planck Medal | Franz Wegner | Physics | 403 |
39,882,225 | https://en.wikipedia.org/wiki/Epipharyngeal%20groove | The epipharyngeal groove is a ciliated groove along the dorsal side of the inside of the pharynx in some plankton-feeding early chordates, such as Amphioxus. It helps to carry a stream of mucus with plankton stuck in it, through the pharynx into the gut to be digested.
The subnotochordal rod or hypochord is a transient structure that appears ventral to the notochord in the heads of embryos of some vertebrates. Its appearance is stimulated by a chemical secreted by the notochord. The subnotochordal rod helps to stimulate development of the dorsal aorta.
Some researchers consider these two structures to be homologous.
References
Zoology
Embryology | Epipharyngeal groove | Biology | 159 |
2,796,232 | https://en.wikipedia.org/wiki/IFREMER | The or Ifremer is an oceanographic institution in Brest, France. A state-run and funded scientific organization, it is France’s national integrated marine science research institute.
Scope of works
Ifremer focuses its research activities in the following areas:
Monitoring, use and enhancement of coastal seas
Monitoring and optimization of aquaculture production
Fishery resources
Exploration and exploitation of the oceans and their biodiversity
Circulation and marine ecosystems, mechanisms, trends and forecasting
Engineering of major facilities in the service of oceanography
Knowledge transfer and innovation in its fields of its activities
In 1985, Ifremer partnered with Dr. Robert Ballard for an ultimately successful expedition to locate the wreck of the RMS Titanic. In 1994 Ifremer assisted in the salvage of the cargo from the SS John Barry.
Ifremer operates a number of vessels, including the submarine Nautile.
In 2008, Ifremer partnered with Dr. Bruce Shillito for the testing and initial operations of the PERISCOP, a deep sea fish recovery device.
In 2023, Ifremer sent the Atalante ship and the Victor 6000 ROV to the rescue operation of the Titan submersible.
Ifremer centres
Ifremer is located at 26 sites, including five main centres (Boulogne, Brest, Nantes, Toulon and Tahiti), with headquarters at Brest. About twenty research departments are associated with these centres.
Notes and references
External links
Ifremer's official website
IFREMER
1984 establishments in France
Research institutes established in 1984
Government agencies of France
Government-owned companies of France
Governmental nuclear organizations
Research institutes in France
Oceanographic organizations
Brest, France | IFREMER | Engineering | 328 |
57,995,304 | https://en.wikipedia.org/wiki/Dmitri%20Burago | Dmitri Yurievich Burago (Дмитрий Юрьевич Бураго, born 1964) is a leading Russian - American mathematician, specializing in differential, Riemannian, Finsler geometry, geometric analysis, dynamical systems and applications to mathematical physics.
He is the son of the celebrated Russian geometer Yuri Dmitrievich Burago, with whom he published a well-known book on metric geometry. Burago studied at the 45th Physics-Mathematics School. He received his doctorate in 1994 at Saint Petersburg State University under the supervision of Anatoly Vershik. He was at the Steklov Institute in Saint Petersburg and is now a professor at Pennsylvania State University's Center for Dynamical Systems and Geometry.
In 1992, he was awarded the prize of the Saint Petersburg Mathematical Society. In 1998, he was an Invited Speaker at the International Congress of Mathematicians in Berlin. In 2014, he was awarded the Leroy P. Steele Prize with Yuri Burago and Sergei Vladimirovich Ivanov for their book A course in metric geometry.
Selected publications
Articles
"Periodic metrics." In: Seminar on dynamical systems, pp. 90–95. Birkhäuser, Basel, 1994.
with Sergei Ivanov: "Riemannian tori without conjugate points are flat." Geometric & Functional Analysis GAFA 4, no. 3 (1994): 259–269.
with Sergei Ivanov and Bruce Kleiner: "On the structure of the stable norm of periodic metrics." Mathematical Research Letters 4, no. 6 (1997): 791-808.
with Michael Brin and Sergei Ivanov: "On partially hyperbolic diffeomorphisms of 3-manifolds with commutative fundamental group." Modern dynamical systems and applications 307 (2004): 312
with Sergei Ivanov and Leonid Polterovich: "Conjugation-invariant norms on groups of geometric origin." arXiv preprint arXiv:0710.1412 (2007).
Books
with Yuri Burago and Sergei Ivanov: A Course in Metric Geometry, American Mathematical Society 2001
References
External links
Mathnet.ru
20th-century Russian mathematicians
21st-century Russian mathematicians
Geometers
Differential geometers
1964 births
Living people | Dmitri Burago | Mathematics | 464 |
49,398,834 | https://en.wikipedia.org/wiki/Metascape | Metascape is a free gene annotation and analysis resource that helps biologists make sense of one or multiple gene lists. Metascape provides automated meta-analysis tools to understand either common or unique pathways and protein networks within a group of orthogonal target-discovery studies.
History
In the "OMICs" age, it is important to gain biological insights from a list of genes. Although a number of bioinformatics resources exist for this purpose, such as DAVID, they are not all free, easy to use, and well maintained. Tools for analyzing multiple gene lists originating from orthogonal but complementary "OMICs" studies often require computational skills beyond the reach of many biologists. According to the Metascape blog, a team of scientists self-organized to address this challenge. The team includes core members Yingyao Zhou, Bin Zhou, Lars Pache, Max Chang, Christopher Benner, and Sumit Chanda, as well as other contributors over time. Metascape was first released as a beta version on Oct 8, 2015. The first Metascape application was published on Dec 9, 2015. Metascape has gone through multiple releases since then. It currently supports key model organisms, pathway enrichment analysis, protein-protein interaction network and component analysis, and automatic presentation of results as publication-ready web reports, Excel sheets, and PowerPoint presentations.
The paper titled "Metascape provides a biologist-oriented resource for the analysis of systems-level datasets" was published on Apr 3, 2019 in Nature Communications.
Analysis workflow
Metascape implements a CAME analysis workflow:
Conversion: Convert gene identifiers from popular types (such as Symbol, RefSeq, Ensembl, UniProt, UCSC) into human Entrez gene IDs and vice versa.
Annotation: Extract from dozens of function-relevant gene annotations, including protein families, transmembrane/secreted predictions, disease associations, compound associations, etc.
Membership: Flag gene memberships based on a custom keyword search within selected ontologies, e.g., highlight known "invasion" genes.
Enrichment: Identify enriched biological themes, particularly GO terms, KEGG, Reactome, BioCarta, WikiPathways as well as other pathways and data sets collected in MSigDB, etc. In addition, enriched ontology terms are automatically clustered to reduce redundancy for easier interpretation. Protein-protein interaction networks are constructed based on STRING, BioGRID, OmniPath, etc. Dense components are identified and biologically interpreted.
Metascape integrates over 40 bioinformatics knowledgebases into a seamless user interface, where experimental biologists can use a single-click Express Analysis feature to turn multiple gene lists into interpretable results.
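The enrichment step described above is commonly based on the hypergeometric test: with a background of M genes, an ontology term containing n of them, and a hit list of N genes of which k fall in the term, the p-value is the upper tail of the hypergeometric distribution. A minimal standard-library sketch; the gene counts below are made-up illustrative numbers, not Metascape defaults:

```python
from math import comb

def enrichment_pvalue(M, n, N, k):
    """P(overlap >= k) when drawing N genes from a background of M genes
    that contains n term genes (hypergeometric upper tail)."""
    return sum(comb(n, i) * comb(M - n, N - i)
               for i in range(k, min(n, N) + 1)) / comb(M, N)

# 20,000-gene background, a 100-gene pathway, a 200-gene hit list, 10 hits:
p = enrichment_pvalue(20_000, 100, 200, 10)
# the expected overlap is only 200 * 100 / 20000 = 1 gene,
# so an overlap of 10 is highly significant
```

Tools typically apply such a test to thousands of terms at once and then correct for multiple testing (e.g., Benjamini-Hochberg), which this sketch omits.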
Analysis report
All analysis results are presented in a web report, which contains Excel annotation and enrichment sheets, PowerPoint slides, and custom analysis files (e.g., a .cys file for Cytoscape, an .svg file for Circos) for further offline analysis or processing.
One notable strength of Metascape is its visualization capability. Metascape has aided in the interpretation of 2,600 published studies as of December 2021; about two-thirds of those publications made use of graphs or sheets prepared by Metascape.
MSBio
Metascape for Bioinformaticians (MSBio) was released in 2021 to meet the growing needs of computational biologists to automate Metascape batch analyses for large-scale gene lists. MSBio leverages container technology to encapsulate the computational platform in Docker containers. Academic users can conduct offline analyses, limited only by the hardware they have access to. Commercial users can add proprietary knowledgebases and conduct secure computations using internal computational assets. MSBio databases are updated in synchronization with the Metascape website.
References
External links
Biological databases
Bioinformatics software
Laboratory software
Systems biology | Metascape | Biology | 821 |
41,232,267 | https://en.wikipedia.org/wiki/Road%20food | Road food is a cuisine concerning food prepared especially for hungry travelers who arrive by road. Most road food establishments are casual dining restaurants. American road food is associated with "comfort food" such as hamburgers, hot dogs, fried chicken, barbecue, and pizza. Road food establishments can include fast food, cafes and barbecue shacks.
Road food was the topic of the book Roadfood by Jane and Michael Stern, originally published in 1977. Jane Stern also had an ongoing, James Beard Award-winning road food column in Gourmet magazine. Road food has been the subject of several television series, including the three-season series Feasting on Asphalt created by James Beard Award-winning food author Alton Brown, and Al Roker's Roker on the Road.
Notes and references
Notes
References
Cuisine
Transport culture | Road food | Physics | 162 |
3,885,807 | https://en.wikipedia.org/wiki/Deep%20geological%20repository | A deep geological repository is a way of storing hazardous or radioactive waste within a stable geologic environment, typically 200–1,000 m below the surface of the earth. It entails a combination of waste form, waste package, engineered seals and geology that is suited to provide a high level of long-term isolation and containment without future maintenance. This is intended to prevent radioactive dangers. A number of mercury, cyanide and arsenic waste repositories are operating worldwide including Canada (Giant Mine) and Germany (potash mines in Herfa-Neurode and Zielitz). Radioactive waste storage sites are under construction with the Onkalo in Finland being the most advanced.
Principles and background
Highly toxic waste that cannot be further recycled must be stored in isolation, to avoid contamination of air, ground and underground water. Deep geological repository is a type of long-term storage that isolates waste in geological structures that are expected to be stable for millions of years, with a number of natural and engineered barriers. Natural barriers include water-impermeable (e.g. clay) and gas-impermeable (e.g. salt) layers of rock above and surrounding the underground storage. Engineered barriers include bentonite clay and cement.
In 2011, the International Panel on Fissile Materials said:
It is widely accepted that spent nuclear fuel and high-level reprocessing and plutonium wastes require well-designed storage for periods ranging from tens of thousands to a million years, to minimize releases of the contained radioactivity into the environment. Safeguards are also required to ensure that neither plutonium nor highly enriched uranium is diverted to weapon use. There is general agreement that placing spent nuclear fuel in repositories hundreds of meters below the surface would be safer than indefinite storage of spent fuel on the surface [of the earth].
Common elements of repositories include the radioactive waste, the containers enclosing the waste, other engineered barriers or seals around the containers, the tunnels housing the containers, and the geologic makeup of the surrounding area.
A storage space hundreds of metres below the ground needs to withstand the effects of one or more future glaciations, with thick ice sheets resting on top of the rock. The presence of ice sheets affects the hydrostatic pressure at repository depth, groundwater flow and chemistry, and the potential for earthquakes. This is being taken into consideration by organizations preparing for long-term waste repositories in Sweden, Finland, Canada and some other countries that have to assess the effects of future glaciations.
Despite a long-standing agreement among many experts that geological disposal can be safe, technologically feasible and environmentally sound, a large part of the general public in many countries remains skeptical as a result of anti-nuclear campaigns. One of the challenges facing the supporters of these efforts is to demonstrate confidently that a repository will contain wastes for so long that any releases that might take place in the future will pose no significant health or environmental risk.
Nuclear reprocessing does not eliminate the need for a repository, but reduces the volume, the long-term radiation hazard, and long-term heat dissipation capacity needed. Reprocessing does not eliminate the political and community challenges to repository siting.
Natural radioactive repositories
Natural uranium ore deposits serve as proof of concept for stability of radioactive elements in geological formations—Cigar Lake Mine for example is a natural deposit of highly concentrated uranium ore located under sandstone and a quartz layer at a depth of 450 m, that is 1 billion years old with no radioactive leaks to the surface.
The ability of natural geologic barriers to isolate radioactive waste is demonstrated by the natural nuclear fission reactors at Oklo, Gabon. During their long reaction period about 5.4 tonnes of fission products as well as 1.5 tonnes of plutonium together with other transuranic elements were generated in the uranium ore body. This plutonium and the other transuranics remained immobile until the present day, a span of almost 2 billion years. This is remarkable as ground water had ready access to the deposits and they were not in a chemically inert form, such as glass.
Research
Deep geologic disposal has been studied for several decades, including laboratory tests, exploratory boreholes, and the construction and operation of underground research laboratories where large-scale in-situ tests are being conducted. Major underground test facilities are listed below.
Nuclear repository sites
Status of repository at certain sites
The process of selecting appropriate deep final repositories is under way in several countries, with the first expected to be commissioned some time after 2010.
Australia
There was a proposal in the early 2000s for an international high-level waste repository in Australia and Russia. Australia has never produced nuclear power and has only one research reactor; since the proposal for a global repository there was raised, domestic political objections have been loud and sustained, making such a facility in Australia unlikely.
Canada
Giant Mine has been used as a deep repository for storage of highly toxic arsenic waste in the form of powder. As of 2020 there is ongoing research to reprocess the waste into a frozen block form which is more chemically stable and prevents water contamination.
On Nov 28, 2024, the NWMO selected the Wabigoon Lake Ojibway Nation-Ignace area as the site for Canada's deep geological repository for used nuclear fuel.
Finland
The Onkalo site in Finland, based on the KBS-3 technology, is the furthest along the road to becoming operational among repositories worldwide. Posiva started construction of the site in 2004, and the Finnish government issued the company a licence for constructing the final disposal facility in November 2015. After continued delays, Posiva expects operations to begin in 2023.
Germany
A number of repositories including potash mines in Herfa-Neurode and Zielitz have been used for years for the storage of highly toxic mercury, cyanide and arsenic waste. There is little debate in Germany regarding toxic waste, in spite of the fact that unlike nuclear waste, it does not lose toxicity with time.
There is a debate about the search for a final repository for radioactive waste, accompanied by protests, especially in the village of Gorleben in the Wendland area. Until 1990 Gorleben was seen as ideal for the final repository because of its location in a remote, economically depressed corner of West Germany, next to the closed border with former East Germany. Since reunification the village lies close to the center of Germany; it is now used for temporary storage of nuclear waste.
The pit Asse II is a former salt mine in the Asse mountain range in Lower Saxony, Germany, that was allegedly used as a research mine since 1965. Between 1967 and 1978, radioactive waste was placed in storage there. Research indicated that brine contaminated with radioactive caesium-137, plutonium and strontium had been leaking from the mine since 1988, but this was not reported until June 2008. The repository for radioactive waste Morsleben is a deep geological repository for radioactive waste in the Bartensleben rock salt mine in Morsleben, Saxony-Anhalt, Germany, that was used from 1972 to 1998. Since 2003, salt-concrete has been pumped into the pit to temporarily stabilize the upper levels.
Sweden
Approval was granted in January 2022 for the construction of a direct disposal facility using KBS-3 technology, on the site of the Forsmark nuclear power plant.
United Kingdom
The UK Government, in common with many other countries and supported by scientific advice, has identified permanent deep underground disposal as the most appropriate means of disposing of higher activity radioactive waste.
Radioactive Waste Management (RWM) was established in 2014 to deliver a Geological Disposal Facility (GDF) and is a subsidiary of the Nuclear Decommissioning Authority (NDA) which is responsible for clean-up of the UK's historical nuclear sites. In 2022, Nuclear Waste Services (NWS) formed from the merger of RWM with the Low Level Waste Repository in Cumbria.
A GDF will be delivered through a community consent-based process, working in close partnership with communities, building trust for the long term, and ensuring a GDF supports local interests and priorities.
The policy is emphatic in requiring the consent of the people who would be living alongside a GDF and giving them influence over the pace at which discussions progress.
The first Working Groups were established in Copeland and Allerdale in Cumbria during late 2020 and early 2021. These Working Groups have started the process of obtaining consent for hosting a GDF in their areas. These Working Groups are believed to be a critical step in the process to find a willing community and a suitable, feasible and acceptable site for a GDF. Allerdale withdrew from the process to select a deep waste repository site in 2023. NWS explained this decision in terms of there being insufficient extent of potentially suitable geology in which to undertake a site selection process.
RWM continues to have discussions in a range of places across England with people and organisations who are interested in exploring the benefits of hosting a GDF. More Working Groups are anticipated to form across the country in the next year or two.
Any proposal for a GDF will be evaluated against highly rigorous criteria to ensure all safety and security tests are met.
United States
The Waste Isolation Pilot Plant (WIPP) in the United States went into service in 1999 by putting the first cubic metres of transuranic radioactive waste in a deep layer of salt near Carlsbad, New Mexico.
In 1978, the U.S. Department of Energy (DOE) began studying Yucca Mountain, within the secure boundaries of the Nevada Test Site in Nye County, Nevada, to determine whether it would be suitable for a long-term geologic repository for spent nuclear fuel and high-level radioactive waste. This project faced significant opposition and suffered delays due to litigation by the Agency for Nuclear Projects for the State of Nevada (Nuclear Waste Project Office) and others. The Obama administration rejected use of the site in the 2009 United States Federal Budget proposal, which eliminated all funding except that needed to answer inquiries from the U.S. Nuclear Regulatory Commission (NRC), "while the Administration devises a new strategy toward nuclear waste disposal."
In March 2009, Energy Secretary Steven Chu told a Senate hearing the Yucca Mountain site is no longer viewed as an option for storing reactor waste.
In June 2018, the Trump administration and some members of Congress again began proposing using Yucca Mountain, with senators from Nevada raising opposition.
In February 2020, U.S. President Donald Trump tweeted about a potential change of policy on plans to use Yucca Mountain in Nevada as a repository for nuclear waste. Trump's previous budgets have included funding for Yucca Mountain but, according to Nuclear Engineering International, two senior administration officials said that the latest spending blueprint will not include any money for licensing the project. On February 7, Energy Secretary Dan Brouillette echoed Trump's sentiment and stated that the U.S. administration may investigate other types of [nuclear] storage, such as interim or temporary sites in other parts of the country.
Though no formal plan had solidified from the federal government, the private sector moved forward with their own plans. Holtec International submitted a license application to the NRC for an autonomous consolidated interim storage facility (CISF) in southeastern New Mexico in March 2017. Similarly, Interim Storage Partners is also planning to build and operate a CISF in Andrews County, Texas. Meanwhile, other companies have indicated that they are prepared to bid on an anticipated procurement from the DOE to design a facility for interim storage of nuclear waste. The NRC issued a licence for the Andrews County CISF in September 2021. A group including the State of Texas petitioned for a court review of the licence. In August 2023, the United States Court of Appeals for the Fifth Circuit ruled that the NRC does not have the authority from Congress to license such a temporary storage facility that is not at a nuclear power station or federal site, nullifying the purported license. The other New Mexico CISF is similarly being challenged in the United States Court of Appeals for the Tenth Circuit.
Deep Isolation, a corporation based in Berkeley, California, proposed a solution involving horizontal storage of radioactive waste canisters in directional boreholes, using technology developed for oil and gas drilling. An 18-inch borehole can be directed vertically to a depth of several thousand feet in geologically stable formations, and then a horizontal waste-disposal section of similar length can be created, in which waste canisters are stored before the borehole is sealed.
See also
Journey to the Safest Place on Earth
List of nuclear waste treatment technologies
Waste Isolation Pilot Plant
Nuclear semiotics
References
External links
Study by the World Nuclear Organization,
Sandia Report, Granite Disposal of U.S. High-Level Radioactive Waste
Sandia Report, Salt Disposal of Heat-Generating Nuclear Waste
Radioactive waste
Nuclear reactors
Radioactive waste repositories
Subterranea (geography) | Deep geological repository | Chemistry,Technology | 2,647 |
36,570,729 | https://en.wikipedia.org/wiki/Stack%20light | Stack lights (also known as signal tower lights, indicator lights, andon lights, warning lights, industrial signal lights, or tower lights) are commonly used on equipment in industrial manufacturing and process control environments to provide visual and audible indicators of a machine's status to machine operators, technicians, production managers and factory personnel. They are a form of andon: a manufacturing system that identifies errors as they happen.
General
Stack lights are used in applications similar to beacon lights and strobes; however, the information they display typically encompasses more machine and process conditions. Stack lights typically use incandescent, LED or xenon-type strobes as their illumination source.
Stack lights are generally columnar structures in a variety of shapes, placing colour-coded indicator segments on top of one another in a "stacked" orientation. A stack light will typically have up to five differently coloured segments to indicate various conditions on the machine or process.
Segments in any combination of (typically) red, yellow, green, blue or clear white are actuated independently and are either off, steadily lit, or flashing.
Stack lights are passive devices that may be controlled directly by programmable logic controllers, distributed control systems, PC control systems or hardwired to machine controls such as timers, sensors and latching relays.
Discrete signals activate illuminated segments at common industrial control voltages (including 12Vdc, 24Vac/dc, 115Vac, 230Vac). Some units support fieldbus networked control through popular industrial networks such as Modbus, DeviceNet, Profibus, CAN-Open or ASi.
Flashing control may be provided by the stack light's internal circuitry or externally controlled with timers or logic controllers.
Stack lights are available for all types of industrial environments including washdown (IP65) and explosion proof.
Function
Stack lights are used in a variety of machines and process environments; specific colour-coding is assigned by the system designer.
Commonly used colour codes for machine state conditions include:
RED: Failure conditions such as an emergency stop or machine fault
AMBER: Warnings such as over-temperature or over-pressure conditions
GREEN: Normal machine or process operation
BLUE: External help request, where an operator might be requesting raw materials, scheduling or maintenance personnel assistance
WHITE: User-defined conditions to a specific machine, often related to productivity monitoring
Optionally, an audible alarm buzzer, typically in the range of 70–105 dB, may be added to alert machine operators to high-priority conditions.
IEC 60073 addresses machine-state colour-coding and acoustic alerting, which can be applied to devices including panel pilot lights and stack lights. Machine operator intervention is typically required in red and yellow machine states, as these normally indicate errors or warnings. Manual intervention may also be necessary in blue and white conditions.
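A controller driving a stack light essentially maps machine conditions onto the colour codes listed above, with the most urgent condition taking precedence. The sketch below shows one way such a state table might look; the state names, priorities and patterns are illustrative assumptions, not part of IEC 60073:

```python
# Map machine states to (priority, segment, pattern); lower priority number wins.
STATES = {
    "fault":        (0, "red",   "flashing"),   # e.g. emergency stop
    "warning":      (1, "amber", "steady"),     # e.g. over-temperature
    "help_request": (2, "blue",  "steady"),     # operator requests assistance
    "running":      (3, "green", "steady"),     # normal operation
}

def stack_light_output(active_states):
    """Return the (segment, pattern) pairs to drive, most urgent first."""
    chosen = sorted((STATES[s] for s in active_states), key=lambda t: t[0])
    return [(segment, pattern) for _, segment, pattern in chosen]

# A machine that is running but over temperature shows amber above green:
print(stack_light_output({"running", "warning"}))
```

In a real installation the returned pairs would be written to discrete outputs or a fieldbus register rather than printed, and flashing would be generated by the light's internal circuitry or a timer.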
Applications
Common applications include, but are not limited to:
Productivity monitoring (often rate-based machine output management with parts-per-hour displays). Uptime & downtime monitoring (overall equipment effectiveness) is a very common use for these devices.
Warning indication and machine fault management
Lean manufacturing - 5S Initiatives
In conjunction with SCADA supervisory control systems and user interface/HMIs: SCADA/HMIs provide more specific machine/process status data; stack lights complement them by providing visual/audible feedback away from the machine operator console.
Assembly station workcells
Maintenance call stations
CNC machining equipment and process monitoring and feedback
Broadcast studios (commonly broadcast radio studios), to display the status of conditions such as a studio being on air, live microphones or phone calls, and even to serve as a doorbell in an environment where silent indication is critical.
Dispatch centers, where the dispatcher frequently uses a headset, making it difficult to tell whether the dispatcher is on the phone or the radio; the light shows one color when the radio is keyed and another when a phone call is active.
See also
Andon (manufacturing)
IEC 60073:2002 Basic and safety principles for man-machine interface, marking and identification - Coding principles for indicators and actuators https://webstore.iec.ch/publication/587
http://talk.electricianforum.co.uk/downloads/89021-Light%20colours%20from%2060204-1.pdf
Engineering/Installation Reference Guide, https://web.archive.org/web/20160304080801/http://www.onyx-industries.com/downloads/StackLightEngineeringReferenceGuide.pdf
Lean Manufacturing "Andon", https://www.workerbase.com/post/the-definitive-guide-to-modern-lean-manufacturing-andon-systems
Lean Manufacturing Glossary, http://www.gembutsu.com/articles/leanmanufacturingglossary.html
Table 3 related to Pilot & Indicator Light Color Coding: http://wp10625799.vwp6873.webpack.hosteurope.de/rafi.de/index.php?id=841&L=1
References
Control devices
Types of lamp
Industrial automation | Stack light | Engineering | 1,052 |
18,154 | https://en.wikipedia.org/wiki/Propositional%20calculus | The propositional calculus is a branch of logic. It is also called propositional logic, statement logic, sentential calculus, sentential logic, or sometimes zeroth-order logic. Sometimes, it is called first-order propositional logic to contrast it with System F, but it should not be confused with first-order logic. It deals with propositions (which can be true or false) and relations between propositions, including the construction of arguments based on them. Compound propositions are formed by connecting propositions by logical connectives representing the truth functions of conjunction, disjunction, implication, biconditional, and negation. Some sources include other connectives, as in the table below.
Unlike first-order logic, propositional logic does not deal with non-logical objects, predicates about them, or quantifiers. However, all the machinery of propositional logic is included in first-order logic and higher-order logics. In this sense, propositional logic is the foundation of first-order logic and higher-order logic.
Propositional logic is typically studied with a formal language, in which propositions are represented by letters, which are called propositional variables. These are then used, together with symbols for connectives, to make propositional formulas. Because of this, the propositional variables are called atomic formulas of a formal propositional language. While the atomic propositions are typically represented by letters of the alphabet, there is a variety of notations to represent the logical connectives. The following table shows the main notational variants for each of the connectives in propositional logic.
The most thoroughly researched branch of propositional logic is classical truth-functional propositional logic, in which formulas are interpreted as having precisely one of two possible truth values, the truth value of true or the truth value of false. The principle of bivalence and the law of excluded middle are upheld. By comparison with first-order logic, truth-functional propositional logic is considered to be zeroth-order logic.
History
Although propositional logic (also called propositional calculus) had been hinted at by earlier philosophers, it was developed into a formal logic (Stoic logic) by Chrysippus in the 3rd century BC and expanded by his successor Stoics. The logic was focused on propositions. This was different from the traditional syllogistic logic, which focused on terms. However, most of the original writings were lost and, at some time between the 3rd and 6th century CE, Stoic logic faded into oblivion, to be resurrected only in the 20th century, in the wake of the (re)-discovery of propositional logic.
Symbolic logic, which would come to be important to refine propositional logic, was first developed by the 17th/18th-century mathematician Gottfried Leibniz, whose calculus ratiocinator was, however, unknown to the larger logical community. Consequently, many of the advances achieved by Leibniz were recreated by logicians like George Boole and Augustus De Morgan, completely independent of Leibniz.
Gottlob Frege's predicate logic builds upon propositional logic, and has been described as combining "the distinctive features of syllogistic logic and propositional logic." Consequently, predicate logic ushered in a new era in logic's history; however, advances in propositional logic were still made after Frege, including natural deduction, truth trees and truth tables. Natural deduction was invented by Gerhard Gentzen and Stanisław Jaśkowski. Truth trees were invented by Evert Willem Beth. The invention of truth tables, however, is of uncertain attribution.
Within works by Frege and Bertrand Russell are ideas influential to the invention of truth tables. The actual tabular structure (being formatted as a table), itself, is generally credited to either Ludwig Wittgenstein or Emil Post (or both, independently). Besides Frege and Russell, others credited with having ideas preceding truth tables include Philo, Boole, Charles Sanders Peirce, and Ernst Schröder. Others credited with the tabular structure include Jan Łukasiewicz, Alfred North Whitehead, William Stanley Jevons, John Venn, and Clarence Irving Lewis. Ultimately, some have concluded, like John Shosky, that "It is far from clear that any one person should be given the title of 'inventor' of truth-tables".
Sentences
Propositional logic, as currently studied in universities, is a specification of a standard of logical consequence in which only the meanings of propositional connectives are considered in evaluating the conditions for the truth of a sentence, or whether a sentence logically follows from some other sentence or group of sentences.
Declarative sentences
Propositional logic deals with statements, which are defined as declarative sentences having truth value. Examples of statements might include:
Wikipedia is a free online encyclopedia that anyone can edit.
London is the capital of England.
All Wikipedia editors speak at least three languages.
Declarative sentences are contrasted with questions, such as "What is Wikipedia?", and imperative statements, such as "Please add citations to support the claims in this article.". Such non-declarative sentences have no truth value, and are only dealt with in nonclassical logics, called erotetic and imperative logics.
Compounding sentences with connectives
In propositional logic, a statement can contain one or more other statements as parts. Compound sentences are formed from simpler sentences and express relationships among the constituent sentences. This is done by combining them with logical connectives: the main types of compound sentences are negations, conjunctions, disjunctions, implications, and biconditionals, which are formed by using the corresponding connectives to connect propositions. In English, these connectives are expressed by the words "and" (conjunction), "or" (disjunction), "not" (negation), "if" (material conditional), and "if and only if" (biconditional). Examples of such compound sentences might include:
Wikipedia is a free online encyclopedia that anyone can edit, and millions already have. (conjunction)
It is not true that all Wikipedia editors speak at least three languages. (negation)
Either London is the capital of England, or London is the capital of the United Kingdom, or both. (disjunction)
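In classical logic these connectives behave as truth functions on the two truth values; the following minimal Python sketch makes that concrete (the function names are ours, not standard library functions):

```python
def neg(p):        # negation: "not p"
    return not p

def conj(p, q):    # conjunction: "p and q"
    return p and q

def disj(p, q):    # disjunction: "p or q" (inclusive)
    return p or q

def impl(p, q):    # material conditional: "if p, then q"
    return (not p) or q

def bicond(p, q):  # biconditional: "p if and only if q"
    return p == q

# The disjunction example above: true because both disjuncts are true.
england = True     # London is the capital of England
uk = True          # London is the capital of the United Kingdom
print(disj(england, uk))  # True
```

Note that the material conditional is false only when its antecedent is true and its consequent false, matching the classical truth table.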
If sentences lack any logical connectives, they are called simple sentences, or atomic sentences; if they contain one or more logical connectives, they are called compound sentences, or molecular sentences.
Sentential connectives are a broader category that includes logical connectives. Sentential connectives are any linguistic particles that bind sentences to create a new compound sentence, or that inflect a single sentence to create a new sentence. A logical connective, or propositional connective, is a kind of sentential connective with the characteristic feature that, when the original sentences it operates on are (or express) propositions, the new sentence that results from its application also is (or expresses) a proposition. Philosophers disagree about what exactly a proposition is, as well as about which sentential connectives in natural languages should be counted as logical connectives. Sentential connectives are also called sentence-functors, and logical connectives are also called truth-functors.
Arguments
An argument is defined as a pair of things, namely a set of sentences, called the premises, and a sentence, called the conclusion. The conclusion is claimed to follow from the premises, and the premises are claimed to support the conclusion.
Example argument
The following is an example of an argument within the scope of propositional logic:
Premise 1: If it's raining, then it's cloudy.
Premise 2: It's raining.
Conclusion: It's cloudy.
The logical form of this argument is known as modus ponens, which is a classically valid form. So, in classical logic, the argument is valid, although it may or may not be sound, depending on the meteorological facts in a given context. This example argument will be reused when explaining its formalization below.
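The classical validity of this form can be confirmed by brute force: check that no assignment of truth values makes both premises true and the conclusion false. A minimal sketch (the function name is ours):

```python
from itertools import product

def modus_ponens_valid():
    """Check that in every truth assignment where both premises are
    true, the conclusion is also true.
    r = "it's raining", c = "it's cloudy"."""
    for r, c in product([True, False], repeat=2):
        premise1 = (not r) or c   # if it's raining, then it's cloudy
        premise2 = r              # it's raining
        conclusion = c            # it's cloudy
        if premise1 and premise2 and not conclusion:
            return False          # a counterexample would refute validity
    return True

print(modus_ponens_valid())  # True
```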
Validity and soundness
An argument is valid if, and only if, it is necessary that, if all its premises are true, its conclusion is true. Alternatively, an argument is valid if, and only if, it is impossible for all the premises to be true while the conclusion is false.
Validity is contrasted with soundness. An argument is sound if, and only if, it is valid and all its premises are true. Otherwise, it is unsound.
Logic, in general, aims to precisely specify valid arguments. This is done by defining a valid argument as one in which its conclusion is a logical consequence of its premises, which, when this is understood as semantic consequence, means that there is no case in which the premises are true but the conclusion is not true – see below.
Formalization
Propositional logic is typically studied through a formal system in which formulas of a formal language are interpreted to represent propositions. This formal language is the basis for proof systems, which allow a conclusion to be derived from premises if, and only if, it is a logical consequence of them. This section will show how this works by formalizing the example argument given above. The formal language for a propositional calculus will be fully specified in the section "Language" below, and an overview of proof systems will be given in the section "Proof systems".
Propositional variables
Since propositional logic is not concerned with the structure of propositions beyond the point where they cannot be decomposed any more by logical connectives, it is typically studied by replacing such atomic (indivisible) statements with letters of the alphabet, which are interpreted as variables representing statements (propositional variables). With propositional variables, the example argument would then be symbolized as follows:
Premise 1: P → Q
Premise 2: P
Conclusion: Q
When P is interpreted as "It's raining" and Q as "it's cloudy" these symbolic expressions correspond exactly with the original expression in natural language. Not only that, but they will also correspond with any other inference with the same logical form.
When a formal system is used to represent formal logic, only statement letters (usually capital roman letters such as P, Q and R) are represented directly. The natural language propositions that arise when they're interpreted are outside the scope of the system, and the relation between the formal system and its interpretation is likewise outside the formal system itself.
Gentzen notation
If we assume that the validity of modus ponens has been accepted as an axiom, then the same example argument can also be depicted like this:

P → Q, P
────────
Q

This method of displaying it is Gentzen's notation for natural deduction and sequent calculus. The premises are shown above a line, called the inference line, separated by a comma, which indicates combination of premises. The conclusion is written below the inference line. The inference line represents syntactic consequence, sometimes called deductive consequence, which is also symbolized with ⊢. So the above can also be written in one line as P → Q, P ⊢ Q.
Syntactic consequence is contrasted with semantic consequence, which is symbolized with ⊧. In this case, the conclusion follows syntactically because the natural deduction inference rule of modus ponens has been assumed. For more on inference rules, see the sections on proof systems below.
Language
The language (commonly called L) of a propositional calculus is defined in terms of:
a set of primitive symbols, called atomic formulas, atomic sentences, atoms, placeholders, prime formulas, proposition letters, sentence letters, or variables, and
a set of operator symbols, called connectives, logical connectives, logical operators, truth-functional connectives, truth-functors, or propositional connectives.
A well-formed formula is any atomic formula, or any formula that can be built up from atomic formulas by means of operator symbols according to the rules of the grammar. The language L, then, is defined either as being identical to its set of well-formed formulas, or as containing that set (together with, for instance, its set of connectives and variables).
Usually the syntax of L is defined recursively by just a few definitions, as seen next; some authors explicitly include parentheses as punctuation marks when defining their language's syntax, while others use them without comment.
Syntax
Given a set of atomic propositional variables p1, p2, p3, ..., and a set of propositional connectives c1, c2, c3, ..., each with a fixed arity, a formula of propositional logic is defined recursively by these definitions:
Definition 1: Atomic propositional variables are formulas.
Definition 2: If c is a propositional connective of arity m, and A, B, C, … is a sequence of m, possibly but not necessarily atomic, possibly but not necessarily distinct, formulas, then the result of applying c to A, B, C, … is a formula.
Definition 3: Nothing else is a formula.
Writing the result of applying c to A, B, C, … in functional notation, as c(A, B, C, …), expressions such as c(p1, p2) and c(c(p1, p2), p3) are examples of well-formed formulas.
What was given as Definition 2 above, which is responsible for the composition of formulas, is referred to by Colin Howson as the principle of composition. It is this recursion in the definition of a language's syntax which justifies the use of the word "atomic" to refer to propositional variables, since all formulas in the language are built up from the atoms as ultimate building blocks. Composite formulas (all formulas besides atoms) are called molecules, or molecular sentences. (This is an imperfect analogy with chemistry, since a chemical molecule may sometimes have only one atom, as in monatomic gases.)
The definition that "nothing else is a formula", given above as Definition 3, excludes any formula from the language which is not specifically required by the other definitions in the syntax. In particular, it excludes infinitely long formulas from being well-formed.
CF grammar in BNF
An alternative to the syntax definitions given above is to write a context-free (CF) grammar for the language L in Backus-Naur form (BNF). This is more common in computer science than in philosophy. It can be done in many ways, of which a particularly brief one, for the common set of five connectives, is this single clause:

φ ::= p | ¬φ | (φ ∧ φ) | (φ ∨ φ) | (φ → φ) | (φ ↔ φ)

This clause, due to its self-referential nature (since φ appears in some branches of the definition of φ), also acts as a recursive definition, and therefore specifies the entire language. To expand it to add modal operators, one need only add | ◻φ | ◇φ to the end of the clause.
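A recursive-descent parser can mirror such a single-clause grammar directly, with one parsing function per alternative. An illustrative sketch, using ASCII stand-ins (`~`, `&`, `|`, `->`, `<->`) for the connective signs and nested tuples for the parse tree (the token syntax and names are our choices):

```python
import re

# Longest alternatives first, so "<->" is not misread as "<" plus "->".
TOKEN = re.compile(r"<->|->|[~&|()]|[a-z]\w*")

def parse(text):
    tokens = TOKEN.findall(text)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def eat(expected=None):
        nonlocal pos
        tok = tokens[pos]
        if expected is not None and tok != expected:
            raise SyntaxError(f"expected {expected}, got {tok}")
        pos += 1
        return tok

    def formula():
        tok = peek()
        if tok == "~":                 # negation clause: ~phi
            eat()
            return ("~", formula())
        if tok == "(":                 # parenthesized binary clause
            eat("(")
            left = formula()
            op = eat()                 # one of & | -> <->
            right = formula()
            eat(")")
            return (op, left, right)
        return ("atom", eat())         # propositional variable clause

    tree = formula()
    if pos != len(tokens):
        raise SyntaxError("trailing tokens")
    return tree

print(parse("(p -> ~(q & r))"))
# ('->', ('atom', 'p'), ('~', ('&', ('atom', 'q'), ('atom', 'r'))))
```

Because the grammar requires parentheses around every binary application, no precedence or associativity rules are needed.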
Constants and schemata
Mathematicians sometimes distinguish between propositional constants, propositional variables, and schemata. Propositional constants represent some particular proposition, while propositional variables range over the set of all atomic propositions. Schemata, or schematic letters, however, range over all formulas. (Schematic letters are also called metavariables.) It is common to represent propositional constants by A, B, and C, propositional variables by P, Q, and R, and schematic letters are often Greek letters, most often φ, ψ, and χ.
However, some authors recognize only two "propositional constants" in their formal system: the special symbol ⊤, called "truth", which always evaluates to True, and the special symbol ⊥, called "falsity", which always evaluates to False. Other authors also include these symbols, with the same meaning, but consider them to be "zero-place truth-functors", or equivalently, "nullary connectives".
Semantics
To serve as a model of the logic of a given natural language, a formal language must be semantically interpreted. In classical logic, all propositions evaluate to exactly one of two truth-values: True or False. For example, "Wikipedia is a free online encyclopedia that anyone can edit" evaluates to True, while "Wikipedia is a paper encyclopedia" evaluates to False.
In other respects, the following formal semantics can apply to the language of any propositional logic, but the assumptions that there are only two semantic values (bivalence), that only one of the two is assigned to each formula in the language (noncontradiction), and that every formula gets assigned a value (excluded middle), are distinctive features of classical logic. To learn about nonclassical logics with more than two truth-values, and their unique semantics, one may consult the articles on "Many-valued logic", "Three-valued logic", "Finite-valued logic", and "Infinite-valued logic".
Interpretation (case) and argument
For a given language L, an interpretation, valuation, or case, is an assignment of semantic values to each formula of L. For a formal language of classical logic, a case is defined as an assignment, to each formula of L, of one or the other, but not both, of the truth values, namely truth (T, or 1) and falsity (F, or 0). An interpretation that follows the rules of classical logic is sometimes called a Boolean valuation. An interpretation of a formal language for classical logic is often expressed in terms of truth tables. Since each formula is only assigned a single truth-value, an interpretation may be viewed as a function, whose domain is L, and whose range is its set of semantic values, {T, F} (or, equivalently, {1, 0}).
For n distinct propositional symbols there are 2^n distinct possible interpretations. For any particular symbol p, for example, there are 2 possible interpretations: either p is assigned T, or p is assigned F. And for the pair p, q there are 4 possible interpretations: either both are assigned T, or both are assigned F, or p is assigned T and q is assigned F, or p is assigned F and q is assigned T. Since L has ℵ0, that is, denumerably many propositional symbols, there are 2^ℵ0, and therefore uncountably many, distinct possible interpretations of L as a whole.
Where I is an interpretation and φ and ψ represent formulas, the definition of an argument, given earlier, may then be stated as a pair ⟨{φ1, φ2, ..., φn}, ψ⟩, where {φ1, φ2, ..., φn} is the set of premises and ψ is the conclusion. The definition of an argument's validity, i.e. its property that {φ1, φ2, ..., φn} ⊨ ψ, can then be stated as its absence of a counterexample, where a counterexample is defined as a case I in which the argument's premises are all true but the conclusion is not true. As will be seen below, this is the same as to say that the conclusion is a semantic consequence of the premises.
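For finitely many symbols, the interpretations just counted can be enumerated directly; a minimal sketch (the helper name `interpretations` is ours):

```python
from itertools import product

def interpretations(variables):
    """Yield all 2**n truth-value assignments for n propositional symbols,
    each as a dict mapping symbol names to True/False."""
    for values in product([True, False], repeat=len(variables)):
        yield dict(zip(variables, values))

# Two symbols give 2**2 = 4 interpretations, as described above.
cases = list(interpretations(["p", "q"]))
print(len(cases))  # 4
```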
Propositional connective semantics
An interpretation assigns semantic values to atomic formulas directly. Molecular formulas are assigned a function of the value of their constituent atoms, according to the connective used; the connectives are defined in such a way that the truth-value of a sentence formed from atoms with connectives depends on the truth-values of the atoms that they're applied to, and only on those. This assumption is referred to by Colin Howson as the assumption of the truth-functionality of the connectives.
Semantics via truth tables
Since logical connectives are defined semantically only in terms of the truth values that they take when the propositional variables that they're applied to take either of the two possible truth values, the semantic definition of the connectives is usually represented as a truth table for each of the connectives, as seen below:
This table covers each of the main five logical connectives: conjunction (here notated p ∧ q), disjunction (p ∨ q), implication (p → q), biconditional (p ↔ q) and negation (¬p, or ¬q, as the case may be). It is sufficient for determining the semantics of each of these operators. For more truth tables for more different kinds of connectives, see the article "Truth table".
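A truth table of this kind can be generated mechanically; a sketch that prints the joint table of the five connectives (the T/F formatting and column order are our choices):

```python
from itertools import product

def fmt(b):
    """Render a Python boolean as the usual truth-table letter."""
    return "T" if b else "F"

print("p q | p∧q p∨q p→q p↔q ¬p")
for p, q in product([True, False], repeat=2):
    # One row per interpretation of the pair p, q.
    row = [p and q, p or q, (not p) or q, p == q, not p]
    print(fmt(p), fmt(q), "|", "   ".join(fmt(v) for v in row))
```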
Semantics via assignment expressions
Some authors (viz., all the authors cited in this subsection) write out the connective semantics using a list of statements instead of a table. In this format, where I(φ) is the interpretation of φ, the five connectives are defined as:
I(¬φ) = T if, and only if, I(φ) = F
I(φ ∧ ψ) = T if, and only if, I(φ) = T and I(ψ) = T
I(φ ∨ ψ) = T if, and only if, I(φ) = T or I(ψ) = T
I(φ → ψ) = T if, and only if, it is true that, if I(φ) = T, then I(ψ) = T
I(φ ↔ ψ) = T if, and only if, it is true that I(φ) = T if, and only if, I(ψ) = T
Instead of I(φ), the interpretation of φ may be written out as |φ|, or, for definitions such as the above, may be written simply as the English sentence "φ is given the value T". Yet other authors may prefer to speak of a Tarskian model M for the language, so that instead they'll use the notation M ⊨ φ, which is equivalent to saying I(φ) = T, where I is the interpretation function for M.
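The assignment clauses above amount to a recursive evaluation procedure: the value of a molecular formula is computed from the values of its parts. A minimal sketch over a nested-tuple representation of formulas (the representation and names are ours):

```python
def value(phi, v):
    """Evaluate formula phi under interpretation v, a dict mapping
    atom names to True/False. Formulas are nested tuples such as
    ("->", ("atom", "p"), ("atom", "q"))."""
    op = phi[0]
    if op == "atom":
        return v[phi[1]]               # base case: look up the atom
    if op == "~":
        return not value(phi[1], v)    # negation clause
    left, right = value(phi[1], v), value(phi[2], v)
    return {
        "&":   left and right,         # conjunction clause
        "|":   left or right,          # disjunction clause
        "->":  (not left) or right,    # implication clause
        "<->": left == right,          # biconditional clause
    }[op]

f = ("->", ("atom", "p"), ("atom", "q"))
print(value(f, {"p": True, "q": False}))  # False
```

This illustrates the truth-functionality assumption: the value of a compound depends only on the values of its immediate parts.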
Connective definition methods
Some of these connectives may be defined in terms of others: for instance, implication, φ → ψ, may be defined in terms of disjunction and negation, as ¬φ ∨ ψ; and disjunction may be defined in terms of negation and conjunction, as ¬(¬φ ∧ ¬ψ). In fact, a truth-functionally complete system, in the sense that all and only the classical propositional tautologies are theorems, may be derived using only disjunction and negation (as Russell, Whitehead, and Hilbert did), or using only implication and negation (as Frege did), or using only conjunction and negation, or even using only a single connective for "not and" (the Sheffer stroke), as Jean Nicod did. A joint denial connective (logical NOR) will also suffice, by itself, to define all other connectives. Besides NOR and NAND, no other connectives have this property.
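The claim that the Sheffer stroke suffices can be checked exhaustively: define the other connectives from NAND and compare them with the usual truth tables on all inputs. A sketch (helper names are ours):

```python
from itertools import product

def nand(p, q):
    """Sheffer stroke: "not and"."""
    return not (p and q)

def nand_not(p):      return nand(p, p)                    # ¬p
def nand_and(p, q):   return nand_not(nand(p, q))          # p ∧ q
def nand_or(p, q):    return nand(nand_not(p), nand_not(q))  # p ∨ q
def nand_impl(p, q):  return nand(p, nand_not(q))          # p → q

# Verify each definition against the standard truth function
# on every possible pair of inputs.
for p, q in product([True, False], repeat=2):
    assert nand_not(p) == (not p)
    assert nand_and(p, q) == (p and q)
    assert nand_or(p, q) == (p or q)
    assert nand_impl(p, q) == ((not p) or q)

print("all NAND definitions check out")
```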
Some authors, namely Howson and Cunningham, distinguish equivalence from the biconditional. (As to equivalence, Howson calls it "truth-functional equivalence", while Cunningham calls it "logical equivalence".) Equivalence is symbolized with ⇔ and is a metalanguage symbol, while a biconditional is symbolized with ↔ and is a logical connective in the object language L. Regardless, an equivalence or biconditional is true if, and only if, the formulas connected by it are assigned the same semantic value under every interpretation. Other authors often do not make this distinction, and may use the word "equivalence", and/or the symbol ⇔, to denote their object language's biconditional connective.
Semantic truth, validity, consequence
Given φ and ψ as formulas (or sentences) of a language L, and I as an interpretation (or case) of L, then the following definitions apply:
Truth-in-a-case: A sentence φ of L is true under an interpretation I if I assigns the truth value T to φ. If φ is true under I, then I is called a model of φ.
Falsity-in-a-case: φ is false under an interpretation I if, and only if, ¬φ is true under I. This is the "truth of negation" definition of falsity-in-a-case. Falsity-in-a-case may also be defined by the "complement" definition: φ is false under an interpretation I if, and only if, φ is not true under I. In classical logic, these definitions are equivalent, but in nonclassical logics, they are not.
Semantic consequence: A sentence ψ of L is a semantic consequence (φ ⊨ ψ) of a sentence φ if there is no interpretation under which φ is true and ψ is not true.
Valid formula (tautology): A sentence φ of L is logically valid (⊨ φ), or a tautology, if it is true under every interpretation, or true in every case.
Consistent sentence: A sentence φ of L is consistent if it is true under at least one interpretation. It is inconsistent if it is not consistent. An inconsistent formula is also called self-contradictory, and said to be a self-contradiction, or simply a contradiction, although this latter name is sometimes reserved specifically for statements of the form (φ ∧ ¬φ).
For interpretations (cases) I of L, these definitions are sometimes given:
Complete case: A case I is complete if, and only if, either φ is true-in-I or ¬φ is true-in-I, for any φ in L.
Consistent case: A case I is consistent if, and only if, there is no φ in L such that both φ and ¬φ are true-in-I.
For classical logic, which assumes that all cases are complete and consistent, the following theorems apply:
For any given interpretation, a given formula is either true or false under it.
No formula is both true and false under the same interpretation.
¬φ is true under I if, and only if, φ is false under I; φ is true under I if, and only if, ¬φ is not true under I.
If φ and (φ → ψ) are both true under I, then ψ is true under I.
If ⊨ φ and ⊨ (φ → ψ), then ⊨ ψ.
(φ → ψ) is true under I if, and only if, either φ is not true under I, or ψ is true under I.
φ ⊨ ψ if, and only if, (φ → ψ) is logically valid, that is, φ ⊨ ψ if, and only if, ⊨ (φ → ψ).
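The semantic notions just defined (tautology, consistency, semantic consequence) are all decidable for propositional logic by enumerating cases. A brute-force sketch, with formulas represented as Python predicates over interpretation dicts (all names are ours):

```python
from itertools import product

def cases(variables):
    """All truth-value assignments for the given symbols."""
    for values in product([True, False], repeat=len(variables)):
        yield dict(zip(variables, values))

def tautology(phi, variables):
    """Valid formula: true in every case."""
    return all(phi(v) for v in cases(variables))

def consistent(phi, variables):
    """Consistent sentence: true in at least one case."""
    return any(phi(v) for v in cases(variables))

def entails(premises, conclusion, variables):
    """Semantic consequence: no case with all premises true
    and the conclusion not true."""
    return all(conclusion(v) for v in cases(variables)
               if all(p(v) for p in premises))

p_or_not_p = lambda v: v["p"] or not v["p"]
print(tautology(p_or_not_p, ["p"]))                  # True
print(entails([lambda v: (not v["p"]) or v["q"],     # p -> q
               lambda v: v["p"]],                    # p
              lambda v: v["q"], ["p", "q"]))         # therefore q: True
```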
Proof systems
Proof systems in propositional logic can be broadly classified into semantic proof systems and syntactic proof systems, according to the kind of logical consequence that they rely on: semantic proof systems rely on semantic consequence (⊨), whereas syntactic proof systems rely on syntactic consequence (⊢). Semantic consequence deals with the truth values of propositions in all possible interpretations, whereas syntactic consequence concerns the derivation of conclusions from premises based on rules and axioms within a formal system. This section gives a very brief overview of the kinds of proof systems, with anchors to the relevant sections of this article on each one, as well as to the separate Wikipedia articles on each one.
Semantic proof systems
Semantic proof systems rely on the concept of semantic consequence, symbolized as φ ⊨ ψ, which indicates that if φ is true, then ψ must also be true in every possible interpretation.
Truth tables
A truth table is a semantic proof method used to determine the truth value of a propositional logic expression in every possible scenario. By exhaustively listing the truth values of its constituent atoms, a truth table can show whether a proposition is true, false, tautological, or contradictory. See the section on semantic proof via truth tables below.
Semantic tableaux
A semantic tableau is another semantic proof technique that systematically explores the truth of a proposition. It constructs a tree where each branch represents a possible interpretation of the propositions involved. If every branch leads to a contradiction, the original proposition is considered to be a contradiction, and its negation is considered a tautology. See the section on semantic proof via tableaux below.
Syntactic proof systems
Syntactic proof systems, in contrast, focus on the formal manipulation of symbols according to specific rules. The notion of syntactic consequence, φ ⊢ ψ, signifies that ψ can be derived from φ using the rules of the formal system.
Axiomatic systems
An axiomatic system is a set of axioms or assumptions from which other statements (theorems) are logically derived. In propositional logic, axiomatic systems define a base set of propositions considered to be self-evidently true, and theorems are proved by applying deduction rules to these axioms. See the section on syntactic proof via axioms below.
Natural deduction
Natural deduction is a syntactic method of proof that emphasizes the derivation of conclusions from premises through the use of intuitive rules reflecting ordinary reasoning. Each rule reflects a particular logical connective and shows how it can be introduced or eliminated. See the section on syntactic proof via natural deduction below.
Sequent calculus
The sequent calculus is a formal system that represents logical deductions as sequences or "sequents" of formulas. Developed by Gerhard Gentzen, this approach focuses on the structural properties of logical deductions and provides a powerful framework for proving statements within propositional logic.
Semantic proof via truth tables
Taking advantage of the semantic concept of validity (truth in every interpretation), it is possible to prove a formula's validity by using a truth table, which gives every possible interpretation (assignment of truth values to variables) of a formula. If, and only if, all the lines of a truth table come out true, the formula is semantically valid (true in every interpretation). Further, if (and only if) ¬φ is valid, then φ is inconsistent.
For instance, a truth table shows that a formula is not valid by exhibiting at least one line on which the formula comes out false; the computation of each line proceeds column by column, evaluating the innermost connectives first.
Further, using the theorem that φ ⊨ ψ if, and only if, (φ → ψ) is valid, we can use a truth table to prove that a formula is a semantic consequence of a set of formulas: φ1, ..., φn ⊨ ψ if, and only if, we can produce a truth table that comes out all true for the formula ((φ1 ∧ ... ∧ φn) → ψ) (that is, if ⊨ ((φ1 ∧ ... ∧ φn) → ψ)).
Semantic proof via tableaux
Since truth tables have 2^n lines for n variables, they can be tiresomely long for large values of n. Analytic tableaux are a more efficient, but nevertheless mechanical, semantic proof method; they take advantage of the fact that "we learn nothing about the validity of the inference from examining the truth-value distributions which make either the premises false or the conclusion true: the only relevant distributions when considering deductive validity are clearly just those which make the premises true or the conclusion false."
Analytic tableaux for propositional logic are fully specified by the rules that are stated in schematic form below. These rules use "signed formulas", where a signed formula is an expression T X or F X, where X is a (unsigned) formula of the language L. (Informally, T X is read "X is true", and F X is read "X is false".) Their formal semantic definition is that "under any interpretation, a signed formula T X is called true if X is true, and false if X is false, whereas a signed formula F X is called false if X is true, and true if X is false."
In this notation, rule 2 means that T(X ∧ Y) yields both T X and T Y, whereas F(X ∧ Y) branches into F X and F Y. The notation is to be understood analogously for rules 3 and 4. Often, in tableaux for classical logic, the signed formula notation is simplified so that T X is written simply as X, and F X as ¬X, which accounts for naming rule 1 the "Rule of Double Negation".
One constructs a tableau for a set of formulas by applying the rules to produce more lines and tree branches until every line has been used, producing a complete tableau. In some cases, a branch can come to contain both T X and F X for some formula X, which is to say, a contradiction. In that case, the branch is said to close. If every branch in a tree closes, the tree itself is said to close. In virtue of the rules for construction of tableaux, a closed tree is a proof that the original formula, or set of formulas, used to construct it was itself self-contradictory, and therefore false. Conversely, a tableau can also prove that a logical formula is tautologous: if a formula is tautologous, its negation is a contradiction, so a tableau built from its negation will close.
To construct a tableau for an argument ⟨{φ1, ..., φn}, ψ⟩, one first writes out the set of premise formulas, {φ1, ..., φn}, with one formula on each line, signed with T (that is, T φ for each φ in the set); and together with those formulas (the order is unimportant), one also writes out the conclusion, ψ, signed with F (that is, F ψ). One then produces a truth tree (analytic tableau) by using all those lines according to the rules. A closed tree will be proof that the argument was valid, in virtue of the fact that {φ1, ..., φn} ⊨ ψ if, and only if, the set {φ1, ..., φn, ¬ψ} is inconsistent.
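The tableau procedure can be sketched in code: signed formulas become (sign, formula) pairs, each rule returns the branches it creates, and an argument is declared valid when the tree grown from the T-signed premises and the F-signed conclusion closes. This is an illustrative sketch covering ¬, ∧, ∨ and → only (the nested-tuple representation and all names are ours):

```python
def expand(sign, phi):
    """Tableau rule for a signed formula: a list of branches, each a
    list of signed subformulas; None for signed atoms (no rule applies)."""
    op = phi[0]
    if op == "atom":
        return None
    if op == "~":
        return [[(not sign, phi[1])]]          # T(~X) -> F X, F(~X) -> T X
    a, b = phi[1], phi[2]
    table = {
        ("&", True):   [[(True, a), (True, b)]],    # T(X&Y): both on one branch
        ("&", False):  [[(False, a)], [(False, b)]],  # F(X&Y): branches
        ("|", True):   [[(True, a)], [(True, b)]],
        ("|", False):  [[(False, a), (False, b)]],
        ("->", True):  [[(False, a)], [(True, b)]],
        ("->", False): [[(True, a), (False, b)]],
    }
    return table[(op, sign)]

def closes(branch):
    """Expand until only signed atoms remain; the branch set closes if
    every branch contains both T p and F p for some atom p."""
    for i, (sign, phi) in enumerate(branch):
        rule = expand(sign, phi)
        if rule is not None:
            rest = branch[:i] + branch[i + 1:]
            return all(closes(rest + extra) for extra in rule)
    atoms_true = {phi[1] for sign, phi in branch if sign}
    atoms_false = {phi[1] for sign, phi in branch if not sign}
    return bool(atoms_true & atoms_false)

def valid(premises, conclusion):
    """Valid iff the tableau from T-premises plus F-conclusion closes."""
    return closes([(True, p) for p in premises] + [(False, conclusion)])

p, q = ("atom", "p"), ("atom", "q")
print(valid([("->", p, q), p], q))   # modus ponens: True
print(valid([("->", p, q), q], p))  # affirming the consequent: False
```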
List of classically valid argument forms
Using semantic checking methods, such as truth tables or semantic tableaux, to check for tautologies and semantic consequences, it can be shown that, in classical logic, the following classical argument forms are semantically valid, i.e., these tautologies and semantic consequences hold. We use φ ⟚ ψ to denote the equivalence of φ and ψ, that is, as an abbreviation for both φ ⊨ ψ and ψ ⊨ φ; as an aid to reading the symbols, a description of each formula is given. The description reads the symbol ⊧ (called the "double turnstile") as "therefore", which is a common reading of it, although many authors prefer to read it as "entails", or as "models".
Syntactic proof via natural deduction
Natural deduction, since it is a method of syntactical proof, is specified by providing inference rules (also called rules of proof) for a language with the typical set of connectives ¬, ∧, ∨, →, ↔; no axioms are used other than these rules. The rules are covered below, and a proof example is given afterwards.
Notation styles
Different authors vary to some extent regarding which inference rules they give, which will be noted. More striking to the look and feel of a proof, however, is the variation in notation styles. The Gentzen notation, which was covered earlier for a short argument, can actually be stacked to produce large tree-shaped natural deduction proofs—not to be confused with "truth trees", which is another name for analytic tableaux. There is also a style due to Stanisław Jaśkowski, where the formulas in the proof are written inside various nested boxes, and there is a simplification of Jaśkowski's style due to Fredric Fitch (Fitch notation), where the boxes are simplified to simple horizontal lines beneath the introductions of suppositions, and vertical lines to the left of the lines that are under the supposition. Lastly, there is the only notation style which will actually be used in this article, which is due to Patrick Suppes, but was much popularized by E.J. Lemmon and Benson Mates. This method has the advantage that, graphically, it is the least intensive to produce and display.
A proof, then, laid out in accordance with the Suppes–Lemmon notation style, is a sequence of lines containing sentences, where each sentence is either an assumption or the result of applying a rule of proof to earlier sentences in the sequence. Each line of proof is made up of a sentence of proof, together with its annotation, its assumption set, and the current line number. The assumption set lists the assumptions on which the given sentence of proof depends, which are referenced by their line numbers. The annotation specifies which rule of proof was applied, and to which earlier lines, to yield the current sentence. See the proof example below.
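The bookkeeping just described (each line carrying a sentence, an annotation, an assumption set, and a line number) can be modelled directly. The sketch below uses invented names (`ProofLine`, `assume`, `mpp` are not from any textbook); it shows how an assumption depends on itself and how MPP merges the assumption sets of its premises.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProofLine:
    assumptions: frozenset  # line numbers of the assumptions this line depends on
    number: int             # current line number
    sentence: str           # the sentence of proof
    annotation: str         # which rule was applied, and to which lines

def assume(n, sentence):
    """An assumption depends exactly on itself."""
    return ProofLine(frozenset({n}), n, sentence, "A")

def mpp(n, conditional, antecedent, consequent):
    """Modus ponendo ponens: the conclusion inherits both assumption sets."""
    return ProofLine(conditional.assumptions | antecedent.assumptions,
                     n, consequent,
                     f"{conditional.number},{antecedent.number} MPP")

l1 = assume(1, "P -> Q")
l2 = assume(2, "P")
l3 = mpp(3, l1, l2, "Q")
print(sorted(l3.assumptions), l3.annotation)  # prints: [1, 2] 1,2 MPP
```

Rules that discharge assumptions, such as conditional proof or RAA, would instead remove the discharged line number from the resulting assumption set.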
Inference rules
Natural deduction inference rules, due ultimately to Gentzen, are given below. There are ten primitive rules of proof, which are the rule of assumption, plus four pairs of introduction and elimination rules for the binary connectives, and the rule reductio ad absurdum. Disjunctive syllogism can be used as an easier alternative to the proper ∨-elimination, and modus tollendo tollens (MTT) and double negation (DN) are commonly given rules, although they are not primitive.
Natural deduction proof example
The proof below derives ¬P from P → Q and ¬Q using only MPP and RAA, which shows that MTT is not a primitive rule, since it can be derived from those two other rules.
Syntactic proof via axioms
It is possible to perform proofs axiomatically, which means that certain tautologies are taken as self-evident and various others are deduced from them using modus ponens as an inference rule, as well as a rule of substitution, which permits replacing any well-formed formula with any substitution instance of it. Alternatively, one uses axiom schemas instead of axioms, and then no rule of substitution is needed.
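The rule of substitution can be made concrete: uniform substitution replaces every occurrence of the mapped variables throughout a formula, and the result is a substitution instance. The nested-tuple representation below is an assumption for illustration only.

```python
# Formulas are nested tuples: ("var", name), ("not", f), ("imp", f, g).

def substitute(f, mapping):
    """Uniformly replace each mapped variable by its assigned formula."""
    if f[0] == "var":
        return mapping.get(f[1], f)
    return (f[0],) + tuple(substitute(g, mapping) for g in f[1:])

p, q = ("var", "p"), ("var", "q")
axiom = ("imp", p, ("imp", q, p))               # p -> (q -> p)
inst = substitute(axiom, {"q": ("imp", p, q)})  # p -> ((p -> q) -> p)
print(inst)
```

Because the replacement is uniform, every substitution instance of a tautology is again a tautology, which is what makes the rule sound.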
This section gives the axioms of some historically notable axiomatic systems for propositional logic. For more examples, as well as metalogical theorems that are specific to such axiomatic systems (such as their completeness and consistency), see the article Axiomatic system (logic).
Frege's Begriffsschrift
Although axiomatic proof has been used since the famous Ancient Greek textbook, Euclid's Elements of Geometry, in propositional logic it dates back to Gottlob Frege's 1879 Begriffsschrift. Frege's system used only implication and negation as connectives, and it had six axioms, which were the following:
Proposition 1: a → (b → a)
Proposition 2: (c → (b → a)) → ((c → b) → (c → a))
Proposition 8: (d → (b → a)) → (b → (d → a))
Proposition 28: (b → a) → (¬a → ¬b)
Proposition 31: ¬¬a → a
Proposition 41: a → ¬¬a
These were used by Frege together with modus ponens and a rule of substitution (which was used but never precisely stated) to yield a complete and consistent axiomatization of classical truth-functional propositional logic.
Łukasiewicz's P2
Jan Łukasiewicz showed that, in Frege's system, "the third axiom is superfluous since it can be derived from the preceding two axioms, and that the last three axioms can be replaced by the single sentence CCNpNqCpq". Taken out of Łukasiewicz's Polish notation into modern notation, this means (¬p → ¬q) → (q → p). Hence, Łukasiewicz is credited with this system of three axioms:
p → (q → p)
(p → (q → r)) → ((p → q) → (p → r))
(¬p → ¬q) → (q → p)
Just like Frege's system, this system uses a substitution rule and uses modus ponens as an inference rule. The exact same system was given (with an explicit substitution rule) by Alonzo Church, who referred to it as the system P2 and helped popularize it.
Schematic form of P2
One may avoid using the rule of substitution by giving the axioms in schematic form, using them to generate an infinite set of axioms. Hence, using Greek letters to represent schemata (metalogical variables that may stand for any well-formed formulas), the axioms are given as:
φ → (ψ → φ)
(φ → (ψ → χ)) → ((φ → ψ) → (φ → χ))
(¬φ → ¬ψ) → (ψ → φ)
The schematic version of P2 is attributed to John von Neumann, and is used in the Metamath "set.mm" formal proof database. It has also been attributed to Hilbert, and named in this context.
Proof example in P2
As an example, a proof of p → p in P2 is given below. First, the axioms are given names:
(A1) p → (q → p)
(A2) (p → (q → r)) → ((p → q) → (p → r))
(A3) (¬p → ¬q) → (q → p)
And the proof is as follows:
1. p → ((p → p) → p) (instance of (A1))
2. (p → ((p → p) → p)) → ((p → (p → p)) → (p → p)) (instance of (A2))
3. (p → (p → p)) → (p → p) (from (1) and (2) by modus ponens)
4. p → (p → p) (instance of (A1))
5. p → p (from (4) and (3) by modus ponens)
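A Hilbert-style proof of this kind can be verified mechanically: every line must be a stated axiom instance or follow from two earlier lines by modus ponens. The checker below is a hypothetical sketch (the formula representation and function names are invented), applied to the standard five-step derivation of p → p.

```python
# Formulas are tuples: ("var", name) or ("imp", f, g).

def check_proof(axiom_instances, lines):
    """Accept iff each line is an axiom instance or follows by modus ponens."""
    for i, f in enumerate(lines):
        if f in axiom_instances:
            continue
        # Look for earlier lines g and (g -> f); if found, f follows by MP.
        if any(("imp", g, f) in lines[:i] for g in lines[:i]):
            continue
        return False
    return True

p = ("var", "p")
def imp(a, b): return ("imp", a, b)

pp  = imp(p, p)                          # the goal: p -> p
a1a = imp(p, imp(pp, p))                 # (A1) with q := p -> p
a2a = imp(a1a, imp(imp(p, pp), pp))      # (A2) with q := p -> p, r := p
a1b = imp(p, pp)                         # (A1) with q := p

proof = [a1a, a2a, imp(imp(p, pp), pp), a1b, pp]
print(check_proof({a1a, a2a, a1b}, proof))  # True
```

A full verifier would also confirm that each claimed axiom instance really matches one of the schemas, which is a pattern-matching step omitted here for brevity.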
Solvers
One notable difference between propositional calculus and predicate calculus is that satisfiability of a propositional formula is decidable. Deciding satisfiability of propositional logic formulas is an NP-complete problem. However, practical methods exist (e.g., the DPLL algorithm, 1962; the Chaff algorithm, 2001) that are very fast for many useful cases. Recent work has extended SAT solver algorithms to work with propositions containing arithmetic expressions; these are the satisfiability modulo theories (SMT) solvers.
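The DPLL procedure mentioned above combines backtracking search with unit propagation. The following is a simplified sketch over CNF clause sets; the integer-literal encoding (a negative integer is a negated variable) is a common convention assumed here for illustration.

```python
# A formula in CNF is a list of clauses; each clause is a set of
# integer literals, where a negative integer denotes a negated variable.

def assign(clauses, lit):
    """Simplify the clause set under the assumption that lit is true."""
    out = []
    for c in clauses:
        if lit in c:
            continue            # clause satisfied, drop it
        out.append(c - {-lit})  # the falsified literal drops out
    return out

def dpll(clauses):
    if not clauses:
        return True             # every clause satisfied
    if any(not c for c in clauses):
        return False            # empty clause: conflict
    for c in clauses:           # unit propagation
        if len(c) == 1:
            (lit,) = c
            return dpll(assign(clauses, lit))
    lit = next(iter(clauses[0]))  # branch on an arbitrary literal
    return dpll(assign(clauses, lit)) or dpll(assign(clauses, -lit))

# (p or q) and (not p or q) and (not q)  ->  unsatisfiable
print(dpll([{1, 2}, {-1, 2}, {-2}]))  # False
print(dpll([{1, 2}, {-1}]))           # True
```

Production solvers add pure-literal elimination, conflict-driven clause learning, and branching heuristics; this version shows only the recursive core.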
See also
Higher logical levels
First-order logic
Second-order propositional logic
Second-order logic
Higher-order logic
Related topics
Boolean algebra (logic)
Boolean algebra (structure)
Boolean algebra topics
Boolean domain
Boolean function
Boolean-valued function
Categorical logic
Combinational logic
Combinatory logic
Conceptual graph
Disjunctive syllogism
Entitative graph
Equational logic
Existential graph
Implicational propositional calculus
Intuitionistic propositional calculus
Jean Buridan
Laws of Form
List of logic symbols
Logical graph
Logical NOR
Logical value
Mathematical logic
Operation (mathematics)
Paul of Venice
Peirce's law
Peter of Spain (author)
Propositional formula
Symmetric difference
Tautology (rule of inference)
Truth function
Truth table
Walter Burley
William of Sherwood
Notes
References
Further reading
Brown, Frank Markham (2003), Boolean Reasoning: The Logic of Boolean Equations, 1st edition, Kluwer Academic Publishers, Norwell, MA. 2nd edition, Dover Publications, Mineola, NY.
Chang, C.C. and Keisler, H.J. (1973), Model Theory, North-Holland, Amsterdam, Netherlands.
Kohavi, Zvi (1978), Switching and Finite Automata Theory, 1st edition, McGraw–Hill, 1970. 2nd edition, McGraw–Hill, 1978.
Korfhage, Robert R. (1974), Discrete Computational Structures, Academic Press, New York, NY.
Lambek, J. and Scott, P.J. (1986), Introduction to Higher Order Categorical Logic, Cambridge University Press, Cambridge, UK.
Mendelson, Elliot (1964), Introduction to Mathematical Logic, D. Van Nostrand Company.
Related works
External links
Klement, Kevin C. (2006), "Propositional Logic", in James Fieser and Bradley Dowden (eds.), Internet Encyclopedia of Philosophy, Eprint.
Formal Predicate Calculus, contains a systematic formal development with axiomatic proof
forall x: an introduction to formal logic, by P.D. Magnus, covers formal semantics and proof theory for sentential logic.
Chapter 2 / Propositional Logic from Logic In Action
Propositional sequent calculus prover on Project Nayuki. (note: implication can be input in the form !X|Y, and a sequent can be a single formula prefixed with > and having no commas)
Propositional Logic - A Generative Grammar
A Propositional Calculator that helps to understand simple expressions
Logical calculi
Boolean algebra
Classical logic
Analytic philosophy | Propositional calculus | Mathematics | 8,433 |
14,589,235 | https://en.wikipedia.org/wiki/Zetix | Zetix is a fabric invented by Auxetics Technologies, Ltd., a UK company. Zetix is an auxetic material strong enough to absorb and disperse shockwaves from explosions without breaking.
Usage
Zetix is used, under the trading name Xtegra™ Tegrabond™, in water-activated tape (also referred to as gummed paper tape), a popular form of carton sealing.
Zetix is also used in threads and ropes. Knots tied in it may hold more securely under tension because auxetic material expands when stretched. Outside of the United Kingdom it is known as Xtegra™ Auxetic Yarn with Kevlar®.
References
External links
Zetix manufacturer Auxetix
Advanced Fabric Technologies, LLC
Brand name materials | Zetix | Physics | 154 |
7,391,012 | https://en.wikipedia.org/wiki/Combes%20quinoline%20synthesis | The Combes quinoline synthesis is a chemical reaction, which was first reported by Combes in 1888. Further studies and reviews of the Combes quinoline synthesis and its variations have been published by Alyamkina et al., Bergstrom and Franklin, Born, and Johnson and Mathews.
The Combes quinoline synthesis is often used to prepare the 2,4-substituted quinoline backbone and is unique in that it uses a β-diketone substrate, which is different from other quinoline preparation methods, such as the Conrad-Limpach synthesis and the Doebner reaction.
It involves the condensation of unsubstituted anilines (1) with β-diketones (2) to form substituted quinolines (4) after an acid-catalyzed ring closure of an intermediate Schiff base (3).
Mechanism
The reaction mechanism undergoes three major steps, the first one being the protonation of the oxygen on the carbonyl in the β-diketone, which then undergoes a nucleophilic addition reaction with the aniline. An intramolecular proton transfer is followed by an E2 mechanism, which causes a molecule of water to leave. Deprotonation at the nitrogen atom generates a Schiff base, which tautomerizes to form an enamine that gets protonated via the acid catalyst, which is commonly concentrated sulfuric acid (H2SO4). The second major step, which is also the rate-determining step, is the annulation of the molecule. Immediately following the annulation, there is a proton transfer, which eliminates the positive formal charge on the nitrogen atom. The alcohol is then protonated, followed by the dehydration of the molecule, resulting in the end product of a substituted quinoline.
Regioselectivity
The formation of the quinoline product is influenced by the interaction of both steric and electronic effects. In a recent study, Sloop investigated how substituents influence the regioselectivity of the product, as well as the rate of reaction during the rate-determining step, in a modified Combes pathway that produced trifluoromethylquinoline as the product. Sloop focused specifically on the influence that substituted trifluoromethyl-β-diketones and substituted anilines have on the rate of quinoline formation. One modification to the generic Combes quinoline synthesis was the use of a mixture of polyphosphoric acid (PPA) and various alcohols (Sloop used ethanol in his experiment). The mixture produced a polyphosphoric ester (PPE) catalyst that proved more effective as the dehydrating agent than concentrated sulfuric acid (H2SO4), which is commonly used in the Combes quinoline synthesis. Using the modified Combes synthesis, two possible regioisomers were found: 2-CF3- and 4-CF3-quinolines. It was observed that the steric effects of the substituents play a more important role in the electrophilic aromatic annulation step, which is the rate-determining step, than in the initial nucleophilic addition of the aniline to the diketone. Increasing the bulk of the R group on the diketone and using methoxy-substituted anilines leads to the formation of 2-CF3-quinolines, whereas chloro- or fluoroanilines give the 4-CF3 regioisomer as the major product. The study concludes that the interplay of steric and electronic effects favors the formation of 2-CF3-quinolines, and it indicates how the Combes quinoline synthesis can be steered toward a desired regioisomer.
Importance of Quinoline Synthesis
There are multiple ways to synthesize quinoline, one of which is the Combes quinoline synthesis. The synthesis of quinoline derivatives has been prevalent in biomedical studies due to the efficiency of the synthetic methods as well as the relatively low cost of producing these compounds, which can also be made at large scale. Quinoline is an important heterocyclic derivative that serves as a building block for many pharmacological synthetic compounds. Quinoline and its derivatives are commonly used in antimalarial drugs, fungicides, antibiotics, dyes, and flavoring agents, and they also play important roles in other biological compounds involved in cardiovascular, anticancer, and anti-inflammatory activities. Additionally, researchers such as Luo Zai-gang et al. have recently examined the synthesis and use of quinoline derivatives as HIV-1 integrase inhibitors, including how substituent placement on the quinoline derivatives affects their primary anti-HIV inhibitory activity.
See also
Conrad-Limpach reaction
Doebner reaction
Doebner-Miller reaction
Skraup synthesis
References
Further reading
Bergstrom, F.W. and Franklin, E.C. Hexaacylic Compounds: Pyridine, Quinoline, and Isoquinoline In Heterocyclic Nitrogen Compounds. California: Department of Chemistry, Stanford University, 1944, 156.
Carbon-carbon bond forming reactions
Condensation reactions
Quinoline forming reactions
Name reactions | Combes quinoline synthesis | Chemistry | 1,122 |