id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
12,026,220 | https://en.wikipedia.org/wiki/Carcinogenic%20bacteria | Cancer bacteria are infectious bacterial organisms that are known or suspected to cause cancer. While cancer-associated bacteria have long been considered to be opportunistic (i.e., infecting healthy tissues after cancer has already established itself), there is some evidence that bacteria may be directly carcinogenic. There is also evidence associating specific stages of cancer development with pathogenic bacteria. The strongest evidence to date involves the bacterium H. pylori and its role in gastric cancer.
Oncoviruses are viral agents that are similarly suspected of causing cancer.
Known to cause cancer
Helicobacter pylori colonizes the human stomach and duodenum. It is described as a Class 1 carcinogen. In some cases it can cause stomach cancer and MALT lymphoma. Animal models have demonstrated Koch's third and fourth postulates for the role of Helicobacter pylori in the causation of stomach cancer. The mechanism by which H. pylori causes cancer may involve chronic inflammation, or the direct action of some of its virulence factors; for example, CagA has been implicated in carcinogenesis. Another bacterium in this genus, Helicobacter hepaticus, causes hepatitis and liver cancer in mice.
Chronic inflammation
Chronic inflammation contributes to the pathogenesis of several types of malignant diseases, but it is particularly important for H. pylori. Following an H. pylori infection, many circulating immune cells are recruited to the infection site, including neutrophils. To destroy the pathogens, neutrophils produce substances with antimicrobial activity such as oxidants, including reactive oxygen species (ROS) and reactive nitrogen species (RNS). H. pylori can survive the induced oxidative stress by producing antioxidant enzymes such as catalase. However, the overproduction of ROS and RNS induces various types of DNA damage in the infected gastric cells. At the same time, H. pylori is known to down-regulate major DNA repair pathways. As a result, genomic and mitochondrial mutations accumulate, leading to genomic instability, a well-known hallmark of cancer, in the gastric cells.
CagA
The virulence factor CagA in H. pylori has been linked to the development of gastric cancer. Once CagA is injected into the cytoplasm, it can change gastric cell signaling in both a phosphorylation-dependent and -independent manner. Phosphorylated CagA affects cell adhesion, spreading and migration, but can also induce the release of the proinflammatory chemokine IL-8. Additionally, interactions of the CRPIA motif in non-phosphorylated CagA have been shown to lead to persistent activation of the PI3K/Akt pathway, a pathway that is often overly active in many human cancers. This leads to the activation of the pro-inflammatory NF-κB and β-catenin pathways as well as increased gastric cell proliferation. Furthermore, CagA has been found to increase tumor suppressor gene hypermethylation, thereby silencing tumor suppressor genes. This is achieved by upregulating the methyltransferase DNMT1 via the AKT–NF-κB pathway. Lastly, CagA also induces the expression of the enzyme spermine oxidase (SMOX), which converts spermine to spermidine. As a by-product, H2O2 is produced, which causes ROS accumulation and contributes to the oxidative stress that the gastric cells experience during chronic inflammation.
Speculative links
A number of bacteria have associations with cancer, although their possible role in carcinogenesis is unclear.
Salmonella Typhi has been linked to gallbladder cancer, but may also be useful in delivering chemotherapeutic agents for the treatment of melanoma, colon and bladder cancer. Bacteria found in the gut may be related to colon cancer, but the relationship may be more complicated due to the role of chemoprotective probiotic bacteria. Microorganisms and their metabolic byproducts, or the impact of chronic inflammation, may also be linked to oral cancers.
The relationship between cancer and bacteria may be complicated by different individuals reacting in different ways to different cancers.
History
In 1890, the Scottish pathologist William Russell reported circumstantial evidence for the bacterial cause of cancer. In 1926, Canadian physician Thomas Glover reported that he could consistently isolate a specific bacterium from the neoplastic tissues of animals and humans. One review summarized Glover's report as follows:
Glover was asked to continue his work at the Public Health Service (later incorporated into the National Institutes of Health), completing his studies in 1929 and publishing his findings in 1930. He asserted that a vaccine or anti-serum manufactured from his bacterium could be used to treat cancer patients with varying degrees of success. According to historical accounts, scientists from the Public Health Service challenged Glover's claims and asked him to repeat his research to better establish quality control. Glover refused and opted to continue his research independently. Because he did not seek consensus, his claims and results led to controversy and are today not given serious merit.
In 1950, a Newark-based physician named Virginia Livingston published a paper claiming that a specific Mycobacterium was associated with neoplasia. Livingston continued to research the alleged bacterium throughout the 1950s, eventually proposing the name Progenitor cryptocides and developing a treatment protocol. Ultimately, her claim of a universal cancer bacterium was not supported in follow-up studies. In 1990 the National Cancer Institute published a review of Livingston's theories, concluding that her methods of classifying the cancer bacterium contained "remarkable errors" and that it was a case of misclassification: the bacterium was in fact Staphylococcus epidermidis.
Other researchers and clinicians who worked with the theory that bacteria could cause cancer, especially from the 1930s to the 1960s, included Eleanor Alexander-Jackson, William Coley, William Crofton, Gunther Enderlein, Franz Gerlach, Josef Issels, Elise L'Esperance, Milbank Johnson, Arthur Kendall, Royal Rife, Florence Seibert, Wilhelm von Brehmer, and Ernest Villequez. Alexander-Jackson and Seibert worked with Virginia Livingston. Some of the researchers published reports that also claimed to have found bacteria associated with different types of cancers.
See also
List of oncogenic bacteria
Infectious causes of cancer
List of human diseases associated with infectious pathogens
Oncovirus
References
Bacteria
Infectious causes of cancer | Carcinogenic bacteria | Biology | 1,368 |
24,137,478 | https://en.wikipedia.org/wiki/Bunker%20Hill%20Covered%20Bridge | The Bunker Hill Covered Bridge is one of two covered bridges left in North Carolina (the other being the Pisgah Covered Bridge in Randolph County), and is possibly the last wooden bridge in the United States with Haupt truss construction. It was built in 1895 by Andrew Loretz Ramsour (1817–1906) in Claremont, North Carolina, and crosses Lyle Creek.
The bridge was designated a National Historic Civil Engineering Landmark by the American Society of Civil Engineers in 2001 and is also listed on the National Register of Historic Places.
History and design
The project to build the bridge was started in 1894 when Catawba County Commissioners requested that the nearby owners of the Bunker Hill Farm build and maintain a bridge crossing Lyle Creek on the old Island Ford Road (a former Native American trail). According to local archives, Ramsour found the Haupt truss design in a book. The bridge was originally constructed as an open span; its roof was not added until 1900, and in 1921 the original wooden shingle roof was replaced with a tin roof. The bridge was owned by the Bolick family until 1985, when they donated it to the Catawba County Historical Association, which restored it in 1994.
See also
List of bridges documented by the Historic American Engineering Record in North Carolina
References
External links
Bridges completed in 1895
Buildings and structures in Catawba County, North Carolina
Covered bridges on the National Register of Historic Places in North Carolina
Historic Civil Engineering Landmarks
Wooden bridges in North Carolina
Transportation in Catawba County, North Carolina
Tourist attractions in Catawba County, North Carolina
Historic American Engineering Record in North Carolina
National Register of Historic Places in Catawba County, North Carolina
Road bridges on the National Register of Historic Places in North Carolina
1895 establishments in North Carolina | Bunker Hill Covered Bridge | Engineering | 354 |
48,634,138 | https://en.wikipedia.org/wiki/K2-24c | K2-24c, also known as EPIC 203771098 c, is an exoplanet orbiting the Sun-like star K2-24 every 42 days. It has a density far lower than that of Saturn, which indicates that the planet is clearly a gas giant.
References
Exoplanets discovered in 2015
Transiting exoplanets
Exoplanets discovered by K2
Scorpius | K2-24c | Astronomy | 86 |
14,225,958 | https://en.wikipedia.org/wiki/Q-function | In statistics, the Q-function is the tail distribution function of the standard normal distribution. In other words, $Q(x)$ is the probability that a normal (Gaussian) random variable will obtain a value larger than $x$ standard deviations above the mean. Equivalently, $Q(x)$ is the probability that a standard normal random variable takes a value larger than $x$.
If $Y$ is a Gaussian random variable with mean $\mu$ and variance $\sigma^2$, then $X = \frac{Y-\mu}{\sigma}$ is standard normal and
$$P(Y > y) = P(X > x) = Q(x),$$
where $x = \frac{y-\mu}{\sigma}$.
Other definitions of the Q-function, all of which are simple transformations of the normal cumulative distribution function, are also used occasionally.
Because of its relation to the cumulative distribution function of the normal distribution, the Q-function can also be expressed in terms of the error function, which is an important function in applied mathematics and physics.
Definition and basic properties
Formally, the Q-function is defined as
$$Q(x) = \frac{1}{\sqrt{2\pi}} \int_x^{\infty} \exp\left(-\frac{u^2}{2}\right)\,du.$$
Thus,
$$Q(x) = 1 - \Phi(x),$$
where $\Phi(x)$ is the cumulative distribution function of the standard normal Gaussian distribution.
The Q-function can be expressed in terms of the error function, or the complementary error function, as
$$Q(x) = \frac{1}{2}\operatorname{erfc}\left(\frac{x}{\sqrt{2}}\right) = \frac{1}{2} - \frac{1}{2}\operatorname{erf}\left(\frac{x}{\sqrt{2}}\right).$$
An alternative form of the Q-function known as Craig's formula, after its discoverer, is expressed as:
$$Q(x) = \frac{1}{\pi}\int_0^{\pi/2} \exp\left(-\frac{x^2}{2\sin^2\theta}\right)\,d\theta.$$
This expression is valid only for positive values of x, but it can be used in conjunction with Q(x) = 1 − Q(−x) to obtain Q(x) for negative values. This form is advantageous in that the range of integration is fixed and finite.
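As a quick numerical illustration (a minimal sketch using only the Python standard library; the function names are chosen here for illustration and are not part of any reference implementation), the Q-function can be evaluated through the complementary error function and checked against Craig's integral form:

```python
import math

def q_from_erfc(x: float) -> float:
    # Q(x) = (1/2) * erfc(x / sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def q_from_craig(x: float, steps: int = 100_000) -> float:
    # Craig's formula: Q(x) = (1/pi) * integral_0^{pi/2} exp(-x^2 / (2 sin^2 theta)) dtheta, x > 0
    # A midpoint rule suffices here because the integrand is smooth, bounded, and vanishes near 0.
    h = (math.pi / 2.0) / steps
    total = 0.0
    for k in range(steps):
        theta = (k + 0.5) * h
        total += math.exp(-x * x / (2.0 * math.sin(theta) ** 2))
    return total * h / math.pi

for x in (0.5, 1.0, 2.0, 3.0):
    print(x, q_from_erfc(x), q_from_craig(x))  # the two columns agree to several decimal places
```

For positive arguments the two evaluations agree closely, which is a convenient sanity check when implementing either form.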
Craig's formula was later extended by Behnad (2020) for the Q-function of the sum of two non-negative variables, as follows:
Bounds and approximations
The Q-function is not an elementary function. However, it can be upper and lower bounded as
$$\frac{x}{1+x^2}\,\phi(x) < Q(x) < \frac{\phi(x)}{x}, \qquad x > 0,$$
where $\phi(x)$ is the density function of the standard normal distribution, and the bounds become increasingly tight for large x.
Using the substitution $v = u^2/2$, the upper bound is derived as follows:
$$Q(x) = \int_x^{\infty}\phi(u)\,du < \int_x^{\infty}\frac{u}{x}\,\phi(u)\,du = \frac{1}{x\sqrt{2\pi}}\int_{x^2/2}^{\infty} e^{-v}\,dv = \frac{\phi(x)}{x}.$$
Similarly, using $\phi'(u) = -u\,\phi(u)$ and the quotient rule,
$$\frac{\phi(x)}{x} = \int_x^{\infty}\left(1+\frac{1}{u^2}\right)\phi(u)\,du \le \left(1+\frac{1}{x^2}\right)\int_x^{\infty}\phi(u)\,du = \left(1+\frac{1}{x^2}\right) Q(x).$$
Solving for Q(x) provides the lower bound.
The geometric mean of the upper and lower bound gives a suitable approximation for $Q(x)$:
$$Q(x) \approx \frac{\phi(x)}{\sqrt{1+x^2}}.$$
Tighter bounds and approximations of $Q(x)$ can also be obtained by optimizing the following expression
$$\tilde{Q}(x) = \frac{\phi(x)}{(1-a)x + a\sqrt{x^2+b}}.$$
For , the best upper bound is given by and with maximum absolute relative error of 0.44%. Likewise, the best approximation is given by and with maximum absolute relative error of 0.27%. Finally, the best lower bound is given by and with maximum absolute relative error of 1.17%.
The Chernoff bound of the Q-function is
$$Q(x) \le e^{-\frac{x^2}{2}}, \qquad x > 0.$$
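The tightness of such bounds is easy to inspect numerically. The sketch below (illustrative Python, standard library only; it uses the density-based bounds and the Chernoff bound quoted above, not any particular reference implementation) prints the lower bound, the exact value, the upper bound and the Chernoff bound for a few arguments:

```python
import math

def q_exact(x: float) -> float:
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def phi(x: float) -> float:
    # standard normal density
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

for x in (1.0, 2.0, 4.0, 6.0):
    lower = (x / (1.0 + x * x)) * phi(x)   # density-based lower bound
    upper = phi(x) / x                     # density-based upper bound
    chernoff = math.exp(-x * x / 2.0)      # Chernoff bound
    print(f"x={x}: {lower:.3e} <= Q={q_exact(x):.3e} <= {upper:.3e}; Chernoff={chernoff:.3e}")
```

Running this shows the density-based bounds closing in on the exact value as x grows, while the Chernoff bound remains much looser.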
Improved exponential bounds and a pure exponential approximation are
$$Q(x) \le \tfrac{1}{4}e^{-x^2} + \tfrac{1}{4}e^{-\frac{x^2}{2}} \le \tfrac{1}{2}e^{-\frac{x^2}{2}}, \qquad x > 0,$$
$$Q(x) \approx \tfrac{1}{12}e^{-\frac{x^2}{2}} + \tfrac{1}{4}e^{-\frac{2x^2}{3}}, \qquad x > 0.$$
The above were generalized by Tanash & Riihonen (2020), who showed that $Q(x)$ can be accurately approximated or bounded by
$$\tilde{Q}(x) = \sum_{n=1}^{N} a_n\, e^{-b_n x^2}.$$
In particular, they presented a systematic methodology to solve the numerical coefficients that yield a minimax approximation or bound: , , or for . With the example coefficients tabulated in the paper for , the relative and absolute approximation errors are less than and , respectively. The coefficients for many variations of the exponential approximations and bounds up to have been released to open access as a comprehensive dataset.
Another approximation of $Q(x)$ for $x \in [0, \infty)$ is given by Karagiannidis & Lioumpas (2007), who showed for the appropriate choice of parameters $\{A, B\}$ that
$$f(x; A, B) = \frac{\left(1 - e^{-A x}\right) e^{-\frac{x^2}{2}}}{B\sqrt{2\pi}\,x} \approx Q(x).$$
The absolute error between and over the range is minimized by evaluating
Using and numerically integrating, they found the minimum error occurred when which gave a good approximation for
Substituting these values and using the relationship between and from above gives
Alternative coefficients are also available for the above 'Karagiannidis–Lioumpas approximation' for tailoring accuracy for a specific application or transforming it into a tight bound.
A tighter and more tractable approximation of $Q(x)$ for positive arguments is given by López-Benítez & Casadevall (2011), based on a second-order exponential function:
$$Q(x) \approx e^{-a x^2 - b x - c}, \qquad x \ge 0.$$
The fitting coefficients $(a, b, c)$ can be optimized over any desired range of arguments in order to minimize the sum of square errors or to minimize the maximum absolute error. This approximation offers some benefits, such as a good trade-off between accuracy and analytical tractability (for example, the extension to any arbitrary power of $Q(x)$ is trivial and does not alter the algebraic form of the approximation).
Inverse Q
The inverse Q-function can be related to the inverse error functions:
$$Q^{-1}(y) = \sqrt{2}\;\operatorname{erf}^{-1}(1 - 2y) = \sqrt{2}\;\operatorname{erfc}^{-1}(2y).$$
The function $Q^{-1}(y)$ finds application in digital communications. It is usually expressed in dB and generally called Q-factor:
$$\mathrm{Q\text{-}factor} = 20\log_{10}\!\left(Q^{-1}(y)\right)\;\mathrm{dB},$$
where y is the bit-error rate (BER) of the digitally modulated signal under analysis. For instance, for quadrature phase-shift keying (QPSK) in additive white Gaussian noise, the Q-factor defined above coincides with the value in dB of the signal to noise ratio that yields a bit error rate equal to y.
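As an illustrative sketch (Python standard library only; the 20·log10 convention is the one stated above, and the printed values are approximate), the Q-factor corresponding to a given BER can be computed from the inverse CDF of the standard normal distribution:

```python
from math import log10
from statistics import NormalDist   # Python 3.8+

def q_inv(y: float) -> float:
    # Q^{-1}(y) via the inverse CDF: Q(x) = 1 - Phi(x)  =>  x = Phi^{-1}(1 - y)
    return NormalDist().inv_cdf(1.0 - y)

def q_factor_db(ber: float) -> float:
    # Q-factor in dB for a given bit-error rate, using the convention above
    return 20.0 * log10(q_inv(ber))

print(q_factor_db(1e-3))   # roughly 9.8 dB
print(q_factor_db(1e-9))   # roughly 15.6 dB
```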
Values
The Q-function is well tabulated and can be computed directly in most of the mathematical software packages such as R and those available in Python, MATLAB and Mathematica. Some values of the Q-function are given below for reference.
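For instance, in Python such tabulated values can be reproduced with the standard library's statistics module (a minimal sketch; scipy.stats.norm.sf(x) returns the same quantity):

```python
from statistics import NormalDist   # Python standard library (3.8+)
# from scipy.stats import norm      # alternative: norm.sf(x) is Q(x)

def q(x: float) -> float:
    # Q(x) = 1 - Phi(x), the standard normal survival function
    return 1.0 - NormalDist().cdf(x)

for x in (0.0, 1.0, 2.0, 3.0):
    print(x, q(x))   # 0.5, ~0.1587, ~0.02275, ~0.00135
```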
Generalization to high dimensions
The Q-function can be generalized to higher dimensions:
$$Q(\mathbf{x}) = P(\mathbf{X} \ge \mathbf{x}),$$
where $\mathbf{X} \sim \mathcal{N}(\mathbf{0}, \Sigma)$ follows the multivariate normal distribution with covariance $\Sigma$ and the threshold is of the form
$$\mathbf{x} = \gamma\,\mathbf{a}$$
for some positive vector $\mathbf{a} > \mathbf{0}$ and positive constant $\gamma > 0$. As in the one-dimensional case, there is no simple analytical formula for the Q-function. Nevertheless, the Q-function can be approximated arbitrarily well as $\gamma$ becomes larger and larger.
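Because no closed form is available, simple Monte Carlo estimation is one way to approximate the higher-dimensional Q-function. The sketch below (illustrative Python, standard library only; it assumes the event of interest is that every component exceeds its threshold, as in the formulation above) draws correlated Gaussian samples through a hand-rolled Cholesky factor:

```python
import random

def mvn_q_monte_carlo(cov, x, n_samples=200_000, seed=0):
    """Monte Carlo estimate of P(X_1 > x_1, ..., X_d > x_d) for X ~ N(0, cov).

    cov is a small positive-definite matrix given as a list of lists,
    x is the threshold vector; intended as an illustration, not an optimized routine.
    """
    d = len(x)
    # naive Cholesky decomposition (cov = L L^T)
    L = [[0.0] * d for _ in range(d)]
    for i in range(d):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = ((cov[i][i] - s) ** 0.5) if i == j else (cov[i][j] - s) / L[j][j]
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        z = [rng.gauss(0.0, 1.0) for _ in range(d)]
        sample = [sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(d)]
        if all(s > t for s, t in zip(sample, x)):
            hits += 1
    return hits / n_samples

# Example: bivariate normal with correlation 0.5 and threshold vector (1, 1)
print(mvn_q_monte_carlo([[1.0, 0.5], [0.5, 1.0]], [1.0, 1.0]))
```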
References
Normal distribution
Special functions
Functions related to probability distributions
Articles containing proofs | Q-function | Mathematics | 1,101 |
39,142,961 | https://en.wikipedia.org/wiki/Beach%20Erosion%20Board | The Beach Erosion Board (BEB) was a federal board organized under the US Government's War Department (later, the Department of Defense), U.S. Army, and was a part of the civil works program of the United States Army Corps of Engineers. The Board had seven members and a large staff. The life of the BEB spanned a period of 33 years, beginning with its establishment in July 1930. The BEB was abolished in November 1963.
The functions of the BEB pertained to review of reports of investigations made concerning erosion of the shores of coastal and lake waters, and the protection of those shores. Under its implementing legislation, Section 2 of Public Law 520, 71st Congress, approved on July 3, 1930, the Chief of Engineers, U.S. Army, was given authority to make, in cooperation with the appropriate agencies of the various coastal States, investigations and studies aimed at devising effective means of preventing erosion of the shores of coastal and lake waters by waves and currents. The funding provision for these investigations and studies was that no money should be expended under authority of the Act in any State which did not provide for cooperation with the agencies of the United States and contribute to the project such funds and/or services as the Secretary of War deemed appropriate and required.
Beginning during World War II, the BEB produced, maintained, and disseminated intelligence concerning potential landing beaches of interest to the US armed forces. The BEB continued to perform that function for the US armed forces for nearly 20 years. In late 1962, the International Division which handled the intelligence function was transferred to the newly created US Army Area Analysis Intelligence Agency (AAIA). The organization and resources of the AAIA is described in detail in the "Area Analysis Plan", which is available in the National Archives.
In 1963, certain of the BEB’s functions were transferred to the newly created US Army Coastal Engineering Research Center, while others were transferred to the Board of Engineers for Rivers and Harbors. Pursuant to the federal Water Resources Development Act of 1992, the duties and responsibilities of the Board of Engineers for Rivers and Harbors subsequently were transferred by the Secretary of the Army to other elements within the Department of the Army as the Secretary determined to be necessary.
Sources
"The History of the Beach Erosion Board, U.S. Army, Corps of Engineers, 1930-63," by Mary-Louise Quinn, Miscellaneous Report No. 77-9, August 1977
References
United States Army Corps of Engineers
1930 establishments in the United States
1963 disestablishments in the United States | Beach Erosion Board | Engineering | 526 |
1,614,482 | https://en.wikipedia.org/wiki/Metropolitan-Vickers | Metropolitan-Vickers, Metrovick, or Metrovicks, was a British heavy electrical engineering company of the early-to-mid 20th century formerly known as British Westinghouse. Highly diversified, it was particularly well known for its industrial electrical equipment such as generators, steam turbines, switchgear, transformers, electronics and railway traction equipment. Metrovick holds a place in history as the builders of the first commercial transistor computer, the Metrovick 950, and the first British axial-flow jet engine, the Metropolitan-Vickers F.2. Its factory in Trafford Park, Manchester, was for most of the 20th century one of the biggest and most important heavy engineering facilities in Britain and the world.
History
Metrovick started as a way to separate the existing British Westinghouse Electrical and Manufacturing Company factories from United States control, which had proven to be a hindrance to gaining government contracts during the First World War. In 1917 a holding company was formed to try to find financing to buy the company's properties.
In May 1917, control of the holding company was obtained jointly by the Metropolitan Carriage, Wagon and Finance Company, of Birmingham, chaired by Dudley Docker, and Vickers Limited, of Barrow-in-Furness (Gillham 1988, Chapter 2: The Manufacturers). On 15 March 1919, Docker agreed terms with Vickers for Vickers to purchase all the shares of the Metropolitan Carriage, Wagon and Finance Company for almost £13 million. On 8 September 1919, Vickers changed the name of the British Westinghouse Electrical and Manufacturing Company to Metropolitan Vickers Electrical Company.
The immediate post-war era was marked by low investment and continued labour unrest. Fortunes changed in 1926 with the formation of the Central Electricity Board which standardised electrical supply and led to a massive expansion of electrical distribution, installations, and appliance purchases. Sales shot up, and 1927 marked the company's best year to date.
On 15 November 1922 the BBC was registered and the BBC's Manchester station, 2ZY, was officially opened on 375 metres transmitting from the Metropolitan Vickers Electricity works in Old Trafford.
In 1921, they bought a site at Attercliffe Common in Sheffield, which was used to manufacture traction motors. By 1923, it had its own engineering department, and was making complete locomotives and electric delivery vehicles.
BTH merger and transition to AEI
In 1928 Metrovick merged with the rival British Thomson-Houston (BTH), a company of similar size and product lineup. Combined, they would be one of the few companies able to compete with Marconi or English Electric on an equal footing. In fact the merger was marked by poor communication and intense rivalry, and the two companies generally worked at cross purposes.
The next year the combined company was purchased by the Associated Electrical Industries (AEI) holding group, who also owned Edison Swan (Ediswan); and Ferguson, Pailin & Co, manufacturers of electrical switchgear in Openshaw, Manchester. The rivalry between Metrovick and BTH continued, and AEI was never able to exert effective control over the two competing subsidiary companies.
Problems worsened in 1929 with the start of the Great Depression, but Metrovick's overseas sales were able to pick up some of the slack, notably a major railway electrification project in Brazil. By 1933 world trade was growing again, but growth was nearly upset when six Metrovick engineers were arrested and found guilty of espionage and "wrecking" in Moscow after a number of turbines built by the company in and for the Soviet Union proved to be faulty. The British government intervened; the engineers were released and trade with Russia was resumed after a brief embargo.
During the 1930s Metropolitan Vickers produced two dozen very large diameter (3m/10 ft) three-phase AC traction motors for the Hungarian railway's V40 and V60 electric locomotives. The 1640 kW rated power machinery, designed by Kálmán Kandó, was paid for by British government economic aid.
In 1935 the company built a 105 MW steam turbogenerator, the largest in Europe at that time, for the Battersea Power Station.
In 1936 Metrovick started work with the Air Ministry on automatic pilot systems, eventually branching out to gunlaying systems and building radars the next year. In 1938 they reached an agreement with the Ministry to build a turboprop design developed at the Royal Aircraft Establishment (RAE) under the direction of Hayne Constant. It is somewhat ironic that BTH, its erstwhile partners, were at the same time working with Frank Whittle on his pioneering jet designs.
Wartime aircraft production
In mid-1938, MV was awarded a contract to build Avro Manchester twin-engined heavy bombers under licence from A.V. Roe. As this type of work was very different from its traditional heavy engineering activities, a new factory was built on the western side of Mosley Road and this was completed in stages through 1940. There were significant problems producing this aircraft, not least being the unreliability of the Rolls-Royce Vulture engine and that the first 13 Manchesters were destroyed in a Luftwaffe bombing raid on Trafford Park on 23 December. Despite this the firm went on to complete 43 examples. With the design of the much improved four-engined derivative, the Avro Lancaster, MV switched production to that famous type, supplied with Rolls-Royce Merlin engines from the Ford Trafford Park shadow factory. Three hangars were erected on the southside of Manchester's Ringway Airport for assembly and testing of its Lancasters, before a policy switch was made to assembling them in a hangar at Avro's Woodford airfield. By the end of the war, MV had built 1,080 Lancasters. These were followed by 79 Avro Lincoln derivatives before remaining orders were cancelled and MV's aircraft production ceased in December 1945.
In 1940 the turboprop effort was re-engineered as a pure jet engine after the successful run of Whittle's designs. The new design became the Metrovick F.2 and eventually flew in 1943 on a Gloster Meteor. Considered to be too complex to bother with, Metrovick then re-engineered the design once again to produce roughly double the power, while at the same time starting work on a much larger design, the Metrovick F.9 Sapphire. Although the F.9 proved to be a winner, the Ministry of Supply nevertheless forced the company to sell the jet division to Armstrong Siddeley in 1947 to reduce the number of companies in the business.
In addition to building aircraft, other wartime work included the manufacture of both Dowty and Messier undercarriages, automatic pilot units, searchlights and radar equipment. They also produced electric vans and lorries.
Metrovick postwar
The post-war era led to massive demand for electrical systems, leading to additional rivalries between Metrovick and BTH as each attempted to one-up the other in delivering ever-larger turbogenerator contracts. Metrovick also expanded its appliance division during this time, becoming a well known supplier of refrigerators and stoves.
The design and manufacture of sophisticated scientific instruments, such as electron microscopes, and mass spectrometers, became an important area of scientific research for the company.
In 1947, a Metrovick G.1 Gatric gas turbine was fitted to the Motor Gun Boat MGB 2009, making it the world's first gas turbine powered naval vessel. A subsequent marine gas turbine engine was the G.2 of 4,500 shp fitted to the Royal Navy Bold-class fast patrol boats Bold Pioneer and Bold Pathfinder'', which were built in 1953.
The Bluebird K7 jet-propelled 3-point hydroplane in which Donald Campbell broke the 200 mph water speed barrier was powered with a Metropolitan-Vickers Beryl jet engine producing of thrust. The K7 was unveiled in late 1954. Campbell succeeded on Ullswater on 23 July 1955, where he set a record of , beating the previous record by some held by Stanley Sayres.
Another major area of expansion was in the diesel locomotive market, where they combined their own generators and traction motors with third-party diesel engines to develop in 1950 the Western Australian Government Railways X class 2-Do-2 locomotive and in 1958 the type 2 Co-Bo, later re-classified under the TOPS system as the British Rail Class 28. This diesel-electric locomotive was unusual on two counts: its Co-Bo wheel arrangement and its Crossley 2-stroke diesel engine (evolved from a World War II marine engine). Intended as part of the British Railways Modernisation Plan, the twenty-strong fleet saw service between Scotland and England before being deemed unsuccessful and withdrawn in the late 1960s. Metrovick also produced the CIE 001 Class (originally 'A' Class) from 1955, the first production mainline diesels in Ireland.
Metropolitan Vickers also produced electrical equipment for the British Rail Class 76 (EM1), and British Rail Class 77 (EM2), 1.5 kV DC locomotives, built at Gorton Works for the electrification of the Woodhead Line in the early 1950s. Larger but broadly similar locomotives were also supplied to the New South Wales Government Railways as its 46 class. The company also designed the British Rail Class 82, 25 kV AC locomotives built by Beyer, Peacock & Company in Manchester using Metrovick electrical equipment. The company also supplied electrical equipment for the British Rail Class 303 electric multiple units.
In the 1950s, the company built a large power transformer works at Wythenshawe, Manchester. The factory opened in 1957, and was closed by GEC in 1971, after which it was sold to the American compressor manufacturer Ingersoll Rand.
In 1961, the Russian cosmonaut Yuri Gagarin was invited to the company's factory at Trafford Park as part of his tour of Manchester.
The rivalry between Metrovick and BTH was eventually ended in an unconvincing fashion when the AEI management eventually decided to rid themselves of both brands and be known as AEI universally, a change they made on 1 January 1960. This move was almost universally resented within both companies. Worse, the new brand name was utterly unknown to its customers, leading to a noticeable fall-off in sales and AEI's stock price.
General Electric Company (GEC) takeover
When AEI attempted to remove the doubled-up management structures, they found this task to be even more difficult. By the mid-1960s the company was struggling under the weight of two complete management hierarchies, and they appeared to be unable to control the company any more. This allowed AEI to be purchased by General Electric Company in 1967.
See also
:Category:Metropolitan-Vickers locomotives
Bowesfield Works
Metro-Vickers Affair
Metrovick electric vehicles
References
Bibliography
Further reading
External links
"Metropolitan-Vickers Electrical Co. Ltd. 1899-1949" by John Dummelow 250 pages of text and pictures. (This is a mirror of the original, which was accessed from https://web.archive.org/web/20050308065049/http://www.mvbook.org.uk/ )
Turbines
Locomotive manufacturers of the United Kingdom
Engineering companies of the United Kingdom
Electrical engineering companies of the United Kingdom
Defunct manufacturing companies of the United Kingdom
Associated Electrical Industries
Radar manufacturers
Defunct companies based in Manchester
Manufacturing companies based in Manchester
Manufacturing companies established in 1899
Manufacturing companies disestablished in 1960
Defunct aircraft engine manufacturers of the United Kingdom
Former defence companies of the United Kingdom
Science and technology in the United Kingdom
British companies established in 1899
British companies disestablished in 1960
1899 establishments in England
1960 disestablishments in England | Metropolitan-Vickers | Chemistry | 2,387 |
4,626,155 | https://en.wikipedia.org/wiki/Triacsin%20C | Triacsin C is an inhibitor of long-chain fatty acyl-CoA synthetase that has been isolated from Streptomyces aureofaciens. It blocks β-cell apoptosis induced by fatty acids (lipoapoptosis) in a rat model of obesity. In addition, it blocks the de novo synthesis of triglycerides, diglycerides, and cholesterol esters, thus interfering with lipid metabolism.
In addition, triacsin C is a vasodilator.
Inhibition of lipid metabolism reduces/removes lipid droplets from HuH7 cells. In hepatitis C–infected HuH7 cells, this reduction/removal of lipid droplets by triacsin C correlates with a reduction in virion assembly and infectivity.
General chemical description
Triacsin C belongs to a family of fungal metabolites all having an 11-carbon alkenyl chain with a common N-hydroxytriazene moiety at the terminus. Due to the N-hydroxytriazene group, triacsin C has acidic properties and may be considered a polyunsaturated fatty acid analog.
Triacsin C was discovered by Keizo Yoshida and other Japanese scientists in 1982 in a culture of the microbe Streptomyces aureofaciens. They identified it as a vasodilator.
See also
Fatty acid degradation § Activation and transport into mitochondria
Fatty acyl CoA synthetase
References
Ligase inhibitors
Nitrosamines
Hydrazones
Polyenes | Triacsin C | Chemistry | 321 |
25,719,815 | https://en.wikipedia.org/wiki/Loading%20screen | A loading screen is a screen shown by a computer program, very often a video game, while the program is loading (moving program data from the disk to RAM) or initializing.
In early video games, the loading screen was also a chance for graphic artists to be creative without the technical limitations often required for the in-game graphics. Drawing utilities were also limited during this period. Melbourne Draw, one of the few 8-bit screen utilities with a zoom function, was one program of choice for artists.
While loading screens remain commonplace in video games, background loading is now used in many games, especially open world titles, to eliminate loading screens while traversing normally through the game, making them appear only when "teleporting" further than the load distance (e.g. using warps or fast travel) or moving faster than the game can load.
Loading times
Loading screens that disguise the length of time a program takes to load were common when computer games were loaded from a cassette tape, a process which could take five minutes or more. Nowadays, most games are downloaded digitally and therefore loaded from the hard drive, which means faster load times. However, some games are still loaded from an optical disc, quicker than previous magnetic media, but loading screens are still included to disguise the amount of time taken to initialize the game in RAM.
Since the loading screen data itself needs to be read from the media, it actually can increase the overall loading time. For example, with a ZX Spectrum game, the screen data takes up 6 kilobytes, representing an increase in loading time of about 13% over the same game without a loading screen. Recently, however, more powerful hardware has significantly diminished this effect.
Variations
The loading screen does not need to be a static picture. Some loading screens display a progress bar or a timer countdown to show how much data has actually loaded. More recently, others are not a picture at all, but a short video, or have parts animated in real time.
Variations such as the progress bar are sometimes programmed to inaccurately reflect the passage of time, or are extended during loading, opting instead for artificial pauses or stutters. This can be done in games for a multitude of reasons, which include encouraging players to engage with exposition during time away from gameplay and providing the player with an immersive transition between scenes. One notable example of this practice is the real-time strategy game Age of Empires, where programmer Greg Street describes his method of timing visual loading cues with appropriate script cues when loading a randomly generated map. Other developers describe the necessity of an artificial loading timer, despite technical advancement making modern loading times near-instantaneous, to allow the player a smooth transition between gameplay segments. This technique is grounded in the way loading times shape the perceived performance of a game, and that perception can be altered by factors such as the movement of a progress bar.
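A minimal sketch of the idea (illustrative Python; the function name and timing constant are hypothetical, not taken from any particular engine) pads the loading phase so that the screen, and any progress animation on it, is displayed for at least a fixed minimum time:

```python
import time

MIN_DISPLAY_SECONDS = 1.5   # hypothetical floor so the transition does not flicker by

def load_with_minimum_display(load_assets):
    """Run the real loading work, then pad so the loading screen is shown
    for at least MIN_DISPLAY_SECONDS, giving a smoother perceived transition."""
    start = time.monotonic()
    assets = load_assets()                 # the actual (possibly near-instant) work
    elapsed = time.monotonic() - start
    if elapsed < MIN_DISPLAY_SECONDS:
        time.sleep(MIN_DISPLAY_SECONDS - elapsed)
    return assets

# Example usage with a stand-in loader
print(load_with_minimum_display(lambda: {"level": "demo"}))
```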
Other loading screens double as briefing screens, providing the user with information to read. This information may only be there for storytelling and/or entertainment or it can give the user information that is usable when the loading is complete, such as mission goals or useful gameplay tips. In fighting games, the loading screen is often a versus screen, which shows the fighters who will take part in the match.
Minigames
Some games have even included minigames in their loading screen, notably the 1983 Skyline Attack for the Commodore 64 and Joe Blade 2 on the ZX Spectrum.
One well-known loader game was Invade-a-Load. Another example is "the shop keeper's quiz" in Dota 2, which was shown on a game-finding screen rather than a loading screen.
Namco has used playable mini-games during loading screens. Examples include variations of their old arcade games like Galaxian or Rally-X as loading screens when first booting up many of their early PlayStation releases. Even many years later, their PlayStation 2 games, like Tekken 5, still used the games to keep people busy while the game initially boots up. Despite the Invade-a-Load prior art, Namco filed a patent in 1995 that prevented other companies from having playable mini-games on their loading screens, which expired in 2015.
EA Sports games have "warm up" sessions. For example, FIFA 11 has the player shooting free-kicks solo and NBA Live 10 has 2-player shootouts, while the game loads. NBA Live 08 features a 4-player general knowledge quiz. The PlayStation 3 and Xbox 360 versions of THQ's MX vs. ATV: Untamed lets the player partake in a free-ride session on the test course.
Cutscenes
Some games like a number of Call of Duty titles have cutscenes that give an introduction to the level while the game loads in the background. Normally, when the level is completely loaded, the remaining portion of the cutscene may be skipped. The video may not necessarily apply to what is happening in the level, as Red Faction: Guerrilla sometimes shows news reports foreshadowing events that will become important later on, or give tidbits about the game's universe.
Music
On the Commodore 64, tape loading screens would often have music in the form of a chiptune making use of the machine's advanced SID sound chip.
See also
Splash screen
Title screen
References
Graphical user interface elements
Video game design | Loading screen | Technology | 1,080 |
29,090 | https://en.wikipedia.org/wiki/Software%20testing | Software testing is the act of checking whether software satisfies expectations.
Software testing can provide objective, independent information about the quality of software and the risk of its failure to a user or sponsor.
Software testing can determine the correctness of software for specific scenarios but cannot determine correctness for all scenarios. It cannot find all bugs.
Based on the criteria for measuring correctness from an oracle, software testing employs principles and mechanisms that might recognize a problem. Examples of oracles include specifications, contracts, comparable products, past versions of the same product, inferences about intended or expected purpose, user or customer expectations, relevant standards, and applicable laws.
Software testing is often dynamic in nature; running the software to verify actual output matches expected. It can also be static in nature; reviewing code and its associated documentation.
Software testing is often used to answer the question: Does the software do what it is supposed to do and what it needs to do?
Information learned from software testing may be used to improve the process by which software is developed.
Software testing is often recommended to follow a "pyramid" approach, in which unit tests make up the largest proportion of tests, followed by integration tests, with end-to-end (e2e) tests having the smallest proportion.
Economics
A study conducted by NIST in 2002 reported that software bugs cost the U.S. economy $59.5 billion annually. More than a third of this cost could be avoided if better software testing was performed.
Outsourcing software testing because of costs is very common, with China, the Philippines, and India being preferred destinations.
History
Glenford J. Myers initially introduced the separation of debugging from testing in 1979. Although his attention was on breakage testing ("A successful test case is one that detects an as-yet undiscovered error."), it illustrated the desire of the software engineering community to separate fundamental development activities, such as debugging, from that of verification.
Goals
Software testing is typically goal driven.
Finding bugs
Software testing typically includes handling software bugs. A bug is a defect in the code that causes an undesirable result. Bugs generally slow testing progress and involve programmer assistance to debug and fix.
Not all defects cause a failure. For example, a defect in dead code will not be considered a failure.
A defect that does not cause failure at one point in time may later occur due to environmental changes. Examples of environment change include running on new computer hardware, changes in data, and interacting with different software.
A single defect may result in multiple failure symptoms.
Ensuring requirements are satisfied
Software testing may reveal a requirements gap: an omission of a requirement from the design. Requirement gaps often concern non-functional requirements such as testability, scalability, maintainability, performance, and security.
Code coverage
A fundamental limitation of software testing is that testing under all combinations of inputs and preconditions (initial state) is not feasible, even with a simple product.
Defects that manifest in unusual conditions are difficult to find in testing. Also, non-functional dimensions of quality (how it is supposed to be versus what it is supposed to do), such as usability, scalability, performance, compatibility, and reliability, can be subjective; something that constitutes sufficient value to one person may not to another.
Although testing for every possible input is not feasible, testing can use combinatorics to maximize coverage while minimizing tests.
Categorization
Testing can be categorized many ways.
Automated testing
Levels
Software testing can be categorized into levels based on how much of the software system is the focus of a test.
Unit testing
Integration testing
System testing
Static, dynamic, and passive testing
There are many approaches to software testing. Reviews, walkthroughs, or inspections are referred to as static testing, whereas executing programmed code with a given set of test cases is referred to as dynamic testing.
Static testing is often implicit, like proofreading, plus when programming tools/text editors check source code structure or compilers (pre-compilers) check syntax and data flow as static program analysis. Dynamic testing takes place when the program itself is run. Dynamic testing may begin before the program is 100% complete in order to test particular sections of code, and is applied to discrete functions or modules. Typical techniques for this are either using stubs/drivers or execution from a debugger environment.
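As an illustration of the stub/driver idea (a sketch with hypothetical names, using Python's built-in unittest and unittest.mock modules), a discrete function can be exercised dynamically before its real dependency exists by substituting a stub:

```python
import unittest
from unittest.mock import Mock

def order_total(get_price, items):
    """Function under test: sums prices looked up through an injected dependency."""
    return sum(get_price(item) for item in items)

class OrderTotalTest(unittest.TestCase):
    def test_total_uses_stubbed_prices(self):
        # The stub stands in for a price service that may not exist yet.
        price_stub = Mock(side_effect=lambda item: {"apple": 2, "pear": 3}[item])
        self.assertEqual(order_total(price_stub, ["apple", "pear"]), 5)

if __name__ == "__main__":
    unittest.main()
```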
Static testing involves verification, whereas dynamic testing also involves validation.
Passive testing means verifying the system's behavior without any interaction with the software product. Contrary to active testing, testers do not provide any test data but look at system logs and traces. They mine for patterns and specific behavior in order to make some kind of decisions. This is related to offline runtime verification and log analysis.
Exploratory
Preset testing vs adaptive testing
The type of testing strategy to be performed depends on whether the tests to be applied to the implementation under test (IUT) should be decided before the testing plan starts to be executed (preset testing) or whether each input to be applied to the IUT can be dynamically dependent on the outputs obtained during the application of the previous tests (adaptive testing).
Black/white box
Software testing can often be divided into white-box and black-box. These two approaches are used to describe the point of view that the tester takes when designing test cases. A hybrid approach called grey-box that includes aspects of both boxes may also be applied to software testing methodology.
White-box testing
White-box testing (also known as clear box testing, glass box testing, transparent box testing, and structural testing) verifies the internal structures or workings of a program, as opposed to the functionality exposed to the end-user. In white-box testing, an internal perspective of the system (the source code), as well as programming skills are used to design test cases. The tester chooses inputs to exercise paths through the code and determines the appropriate outputs. This is analogous to testing nodes in a circuit, e.g., in-circuit testing (ICT).
While white-box testing can be applied at the unit, integration, and system levels of the software testing process, it is usually done at the unit level. It can test paths within a unit, paths between units during integration, and between subsystems during a system–level test. Though this method of test design can uncover many errors or problems, it might not detect unimplemented parts of the specification or missing requirements.
Techniques used in white-box testing include:
API testing – testing of the application using public and private APIs (application programming interfaces)
Code coverage – creating tests to satisfy some criteria of code coverage (for example, the test designer can create tests to cause all statements in the program to be executed at least once)
Fault injection methods – intentionally introducing faults to gauge the efficacy of testing strategies
Mutation testing methods
Static testing methods
Code coverage tools can evaluate the completeness of a test suite that was created with any method, including black-box testing. This allows the software team to examine parts of a system that are rarely tested and ensures that the most important function points have been tested. Code coverage as a software metric can be reported as a percentage for:
Function coverage, which reports on functions executed
Statement coverage, which reports on the number of lines executed to complete the test
Decision coverage, which reports on whether both the True and the False branch of a given test has been executed
100% statement coverage ensures that every statement is executed at least once, but it does not guarantee that every path or branch outcome (in terms of control flow) has been exercised. Even full coverage is helpful in ensuring correct functionality, but not sufficient, since the same code may process different inputs correctly or incorrectly.
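A small illustration of the difference (hypothetical code, not tied to any particular coverage tool): the first test below already executes every statement of the function, yet only the second test exercises the False outcome of its decision.

```python
def classify(n):
    label = "small"
    if n > 100:           # the only decision point in this function
        label = "large"
    return label

def test_large():
    # n=150 executes every statement above: 100% statement coverage on its own,
    # yet only the True outcome of the decision has been exercised.
    assert classify(150) == "large"

def test_small():
    # n=5 exercises the False outcome; both tests together give full decision coverage.
    assert classify(5) == "small"
```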
Black-box testing
Black-box testing (also known as functional testing) describes designing test cases without knowledge of the implementation, without reading the source code. The testers are only aware of what the software is supposed to do, not how it does it. Black-box testing methods include: equivalence partitioning, boundary value analysis, all-pairs testing, state transition tables, decision table testing, fuzz testing, model-based testing, use case testing, exploratory testing, and specification-based testing.
Specification-based testing aims to test the functionality of software according to the applicable requirements. This level of testing usually requires thorough test cases to be provided to the tester, who then can simply verify that for a given input, the output value (or behavior), either "is" or "is not" the same as the expected value specified in the test case. Test cases are built around specifications and requirements, i.e., what the application is supposed to do. It uses external descriptions of the software, including specifications, requirements, and designs, to derive test cases. These tests can be functional or non-functional, though usually functional. Specification-based testing may be necessary to assure correct functionality, but it is insufficient to guard against complex or high-risk situations.
Black-box testing can be used at any level of testing, although usually not at the unit level.
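As a black-box sketch (hypothetical requirement and function name, written here with pytest): if the specification states that ages 18 to 65 are accepted, equivalence partitioning and boundary value analysis suggest the following test inputs, chosen without looking at the implementation.

```python
import pytest   # assumes pytest is available

def is_eligible(age: int) -> bool:
    """Hypothetical implementation under test; the tests below rely only on its specification."""
    return 18 <= age <= 65

# One representative per equivalence class plus the boundary values.
@pytest.mark.parametrize("age, expected", [
    (17, False),  # just below the lower boundary
    (18, True),   # lower boundary
    (40, True),   # interior of the valid partition
    (65, True),   # upper boundary
    (66, False),  # just above the upper boundary
])
def test_is_eligible(age, expected):
    assert is_eligible(age) == expected
```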
Component interface testing
Component interface testing is a variation of black-box testing, with the focus on the data values beyond just the related actions of a subsystem component. The practice of component interface testing can be used to check the handling of data passed between various units, or subsystem components, beyond full integration testing between those units. The data being passed can be considered as "message packets" and the range or data types can be checked for data generated from one unit and tested for validity before being passed into another unit. One option for interface testing is to keep a separate log file of data items being passed, often with a timestamp logged to allow analysis of thousands of cases of data passed between units for days or weeks. Tests can include checking the handling of some extreme data values while other interface variables are passed as normal values. Unusual data values in an interface can help explain unexpected performance in the next unit.
Visual testing
The aim of visual testing is to provide developers with the ability to examine what was happening at the point of software failure by presenting the data in such a way that the developer can easily find the information he or she requires, and the information is expressed clearly.
At the core of visual testing is the idea that showing someone a problem (or a test failure), rather than just describing it, greatly increases clarity and understanding. Visual testing, therefore, requires the recording of the entire test process – capturing everything that occurs on the test system in video format. Output videos are supplemented by real-time tester input via picture-in-a-picture webcam and audio commentary from microphones.
Visual testing provides a number of advantages. The quality of communication is increased drastically because testers can show the problem (and the events leading up to it) to the developer as opposed to just describing it, and the need to replicate test failures will cease to exist in many cases. The developer will have all the evidence he or she requires of a test failure and can instead focus on the cause of the fault and how it should be fixed.
Ad hoc testing and exploratory testing are important methodologies for checking software integrity because they require less preparation time to implement, while the important bugs can be found quickly. In ad hoc testing, where testing takes place in an improvised impromptu way, the ability of the tester(s) to base testing off documented methods and then improvise variations of those tests can result in a more rigorous examination of defect fixes. However, unless strict documentation of the procedures is maintained, one of the limits of ad hoc testing is lack of repeatability.
Grey-box testing
Grey-box testing (American spelling: gray-box testing) involves using knowledge of internal data structures and algorithms for purposes of designing tests while executing those tests at the user, or black-box level. The tester will often have access to both "the source code and the executable binary." Grey-box testing may also include reverse engineering (using dynamic code analysis) to determine, for instance, boundary values or error messages. Manipulating input data and formatting output do not qualify as grey-box, as the input and output are clearly outside of the "black box" that we are calling the system under test. This distinction is particularly important when conducting integration testing between two modules of code written by two different developers, where only the interfaces are exposed for the test.
By knowing the underlying concepts of how the software works, the tester makes better-informed testing choices while testing the software from outside. Typically, a grey-box tester will be permitted to set up an isolated testing environment with activities, such as seeding a database. The tester can observe the state of the product being tested after performing certain actions such as executing SQL statements against the database and then executing queries to ensure that the expected changes have been reflected. Grey-box testing implements intelligent test scenarios based on limited information. This will particularly apply to data type handling, exception handling, and so on.
With the concept of grey-box testing, this "arbitrary distinction" between black- and white-box testing has faded somewhat.
Installation testing
Compatibility testing
A common cause of software failure (real or perceived) is a lack of its compatibility with other application software, operating systems (or operating system versions, old or new), or target environments that differ greatly from the original (such as a terminal or GUI application intended to be run on the desktop now being required to become a Web application, which must render in a Web browser). For example, in the case of a lack of backward compatibility, this can occur because the programmers develop and test software only on the latest version of the target environment, which not all users may be running. This results in the unintended consequence that the latest work may not function on earlier versions of the target environment, or on older hardware that earlier versions of the target environment were capable of using. Sometimes such issues can be fixed by proactively abstracting operating system functionality into a separate program module or library.
Smoke and sanity testing
Sanity testing determines whether it is reasonable to proceed with further testing.
Smoke testing consists of minimal attempts to operate the software, designed to determine whether there are any basic problems that will prevent it from working at all. Such tests can be used as a build verification test.
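A build verification smoke test can be as small as checking that the application starts and answers a trivial request; the following sketch uses hypothetical module and method names purely for illustration.

```python
def test_smoke_application_boots():
    """Minimal smoke test: the build is rejected outright if the app cannot even start."""
    from myapp import create_app          # hypothetical application factory
    app = create_app(config="testing")
    assert app is not None
    assert app.health_check() == "ok"     # hypothetical trivial operation
```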
Regression testing
Regression testing focuses on finding defects after a major code change has occurred. Specifically, it seeks to uncover software regressions, as degraded or lost features, including old bugs that have come back. Such regressions occur whenever software functionality that was previously working correctly, stops working as intended. Typically, regressions occur as an unintended consequence of program changes, when the newly developed part of the software collides with the previously existing code. Regression testing is typically the largest test effort in commercial software development, due to checking numerous details in prior software features, and even new software can be developed while using some old test cases to test parts of the new design to ensure prior functionality is still supported.
Common methods of regression testing include re-running previous sets of test cases and checking whether previously fixed faults have re-emerged. The depth of testing depends on the phase in the release process and the risk of the added features. They can either be complete, for changes added late in the release or deemed to be risky, or be very shallow, consisting of positive tests on each feature, if the changes are early in the release or deemed to be of low risk.
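In practice, a regression suite often pins previously fixed faults so that they cannot silently return; the sketch below (hypothetical function and defect numbers, written with pytest) re-runs one input per past defect report.

```python
import pytest

from myproject.parser import parse_quantity   # hypothetical function that once had bugs

# Each case reproduces an input from a previously reported and fixed defect.
REGRESSION_CASES = [
    ("0", 0),        # bug #101: zero was rejected
    ("007", 7),      # bug #114: leading zeros mis-parsed
    (" 42 ", 42),    # bug #152: surrounding whitespace caused a crash
]

@pytest.mark.parametrize("text, expected", REGRESSION_CASES)
def test_previously_fixed_defects_stay_fixed(text, expected):
    assert parse_quantity(text) == expected
```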
Acceptance testing
Acceptance testing is system-level testing to ensure the software meets customer expectations.
Acceptance testing may be performed as part of the hand-off process between any two phases of development.
Tests are frequently grouped into these levels by where they are performed in the software development process, or by the level of specificity of the test.
User acceptance testing (UAT)
Operational acceptance testing (OAT)
Contractual and regulatory acceptance testing
Alpha and beta testing
Sometimes, UAT is performed by the customer, in their environment and on their own hardware.
OAT is used to conduct operational readiness (pre-release) of a product, service or system as part of a quality management system. OAT is a common type of non-functional software testing, used mainly in software development and software maintenance projects. This type of testing focuses on the operational readiness of the system to be supported, or to become part of the production environment. Hence, it is also known as operational readiness testing (ORT) or operations readiness and assurance (OR&A) testing. Functional testing within OAT is limited to those tests that are required to verify the non-functional aspects of the system.
In addition, software testing should ensure that the portability of the system, as well as its working as expected, does not also damage or partially corrupt its operating environment or cause other processes within that environment to become inoperative.
Contractual acceptance testing is performed based on the contract's acceptance criteria defined during the agreement of the contract, while regulatory acceptance testing is performed based on the relevant regulations to the software product. Both of these two tests can be performed by users or independent testers. Regulation acceptance testing sometimes involves the regulatory agencies auditing the test results.
Alpha testing
Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers' site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing before the software goes to beta testing.
Beta testing
Beta testing comes after alpha testing and can be considered a form of external user acceptance testing. Versions of the software, known as beta versions, are released to a limited audience outside of the programming team known as beta testers. The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Beta versions can be made available to the open public to increase the feedback field to a maximal number of future users and to deliver value earlier, for an extended or even indefinite period of time (perpetual beta).
Functional vs non-functional testing
Functional testing refers to activities that verify a specific action or function of the code. These are usually found in the code requirements documentation, although some development methodologies work from use cases or user stories. Functional tests tend to answer the question of "can the user do this" or "does this particular feature work."
Non-functional testing refers to aspects of the software that may not be related to a specific function or user action, such as scalability or other performance, behavior under certain constraints, or security. Testing will determine the breaking point, the point at which extremes of scalability or performance leads to unstable execution. Non-functional requirements tend to be those that reflect the quality of the product, particularly in the context of the suitability perspective of its users.
Continuous testing
Continuous testing is the process of executing automated tests as part of the software delivery pipeline to obtain immediate feedback on the business risks associated with a software release candidate. Continuous testing includes the validation of both functional requirements and non-functional requirements; the scope of testing extends from validating bottom-up requirements or user stories to assessing the system requirements associated with overarching business goals.
Destructive testing
Destructive testing attempts to cause the software or a sub-system to fail. It verifies that the software functions properly even when it receives invalid or unexpected inputs, thereby establishing the robustness of input validation and error-management routines. Software fault injection, in the form of fuzzing, is an example of failure testing. Various commercial non-functional testing tools are linked from the software fault injection page; there are also numerous open-source and free software tools available that perform destructive testing.
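A minimal sketch of fuzzing-style destructive testing, here aimed at Python's standard json parser: random byte sequences are fed to the parser, and anything other than the documented rejection (ValueError) is treated as a robustness defect. The iteration count and seed are arbitrary choices for the example.

```python
import json
import random

def fuzz_json_parser(iterations=1000, seed=0):
    """Naive random fuzzing: feed garbage to json.loads and check that it
    either parses or fails with the documented ValueError, never anything else."""
    rng = random.Random(seed)
    for _ in range(iterations):
        length = rng.randint(0, 64)
        blob = bytes(rng.randrange(256) for _ in range(length))
        try:
            json.loads(blob.decode("utf-8", errors="replace"))
        except ValueError:
            # json.JSONDecodeError is a subclass of ValueError: expected rejection.
            pass
        # Any other exception propagates and fails the test, exposing a
        # weakness in input validation or error management.

def test_fuzz_json_parser():
    fuzz_json_parser()
```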
Software performance testing
Performance testing is generally executed to determine how a system or sub-system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate or verify other quality attributes of the system, such as scalability, reliability and resource usage.
Load testing is primarily concerned with testing that the system can continue to operate under a specific load, whether that be large quantities of data or a large number of users. This is generally referred to as software scalability. The related load testing activity, when performed as a non-functional activity, is often referred to as endurance testing. Volume testing is a way to test software functions even when certain components (for example a file or database) increase radically in size. Stress testing is a way to test reliability under unexpected or rare workloads. Stability testing (often referred to as load or endurance testing) checks to see if the software can continuously function well for or beyond an acceptable period.
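A minimal load/volume-style check, assuming a hypothetical index_records function as the system under test and an arbitrary per-record time budget. Real load tests would typically drive a deployed system with many concurrent users, but the shape of the assertion is the same: increase the volume and verify the system still operates within its budget.

```python
import time

def index_records(records):
    """Stand-in for the system under test: build a lookup table."""
    return {rec["id"]: rec for rec in records}

def test_load_indexing_scales():
    # Load/volume-style test: radically increase the data volume and check
    # that processing time per record stays within the assumed budget.
    for volume in (1_000, 10_000, 100_000):
        records = [{"id": i, "payload": "x" * 32} for i in range(volume)]
        start = time.perf_counter()
        index_records(records)
        elapsed = time.perf_counter() - start
        per_record = elapsed / volume
        assert per_record < 1e-4, f"too slow at volume {volume}: {per_record:.2e}s/record"
```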
There is little agreement on what the specific goals of performance testing are. The terms load testing, performance testing, scalability testing, and volume testing, are often used interchangeably.
Real-time software systems have strict timing constraints. To test if timing constraints are met, real-time testing is used.
Usability testing
Usability testing checks whether the user interface is easy to use and understand. It is concerned mainly with the use of the application. This is not a kind of testing that can be automated; actual human users are needed, monitored by skilled UI designers.
Accessibility testing
Accessibility testing is done to ensure that the software is accessible to persons with disabilities. Some of the common web accessibility tests are
Ensuring that the color contrast between the font and the background color is appropriate
Font size
Alternate text for multimedia content
Ability to use the system using the computer keyboard in addition to the mouse.
Common standards for compliance
Americans with Disabilities Act of 1990
Section 508 Amendment to the Rehabilitation Act of 1973
Web Accessibility Initiative (WAI) of the World Wide Web Consortium (W3C)
Security testing
Security testing is essential for software that processes confidential data to prevent system intrusion by hackers.
The International Organization for Standardization (ISO) defines this as a "type of testing conducted to evaluate the degree to which a test item, and associated data and information, are protected so that unauthorised persons or systems cannot use, read or modify them, and authorized persons or systems are not denied access to them."
Internationalization and localization
Testing for internationalization and localization validates that the software can be used with different languages and geographic regions. The process of pseudolocalization is used to test the ability of an application to be translated to another language, and make it easier to identify when the localization process may introduce new bugs into the product.
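A sketch of pseudolocalization with an illustrative (not standard) character mapping: strings are accented to reveal hard-coded text, padded to simulate the longer strings common in target languages, and bracketed so that truncation is visible.

```python
ACCENTED = str.maketrans({
    "a": "á", "e": "é", "i": "í", "o": "ó", "u": "ú",
    "A": "Å", "E": "É", "O": "Ö", "U": "Ü",
})

def pseudolocalize(message, expansion=0.4):
    """Return a pseudolocalized version of a UI string.

    Accents reveal hard-coded (untranslatable) text, padding simulates the
    expansion of translated strings, and the brackets make truncation visible.
    """
    translated = message.translate(ACCENTED)
    padding = "~" * int(len(message) * expansion)
    return f"[{translated}{padding}]"

print(pseudolocalize("Save changes"))  # e.g. [Sávé chángés~~~~]
```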
Globalization testing verifies that the software is adapted for a new culture, such as different currencies or time zones.
Actual translation to human languages must be tested, too. Possible localization and globalization failures include:
Some messages may be untranslated.
Software is often localized by translating a list of strings out of context, and the translator may choose the wrong translation for an ambiguous source string.
Technical terminology may become inconsistent, if the project is translated by several people without proper coordination or if the translator is imprudent.
Literal word-for-word translations may sound inappropriate, artificial or too technical in the target language.
Untranslated messages in the original language may be hard coded in the source code, and thus untranslatable.
Some messages may be created automatically at run time and the resulting string may be ungrammatical, functionally incorrect, misleading or confusing.
Software may use a keyboard shortcut that has no function on the source language's keyboard layout, but is used for typing characters in the layout of the target language.
Software may lack support for the character encoding of the target language.
Fonts and font sizes that are appropriate in the source language may be inappropriate in the target language; for example, CJK characters may become unreadable if the font is too small.
A string in the target language may be longer than the software can handle. This may make the string partly invisible to the user or cause the software to crash or malfunction.
Software may lack proper support for reading or writing bi-directional text.
Software may display images with text that was not localized.
Localized operating systems may have differently named system configuration files and environment variables and different formats for date and currency.
Development testing
Development testing is a software development process that involves the synchronized application of a broad spectrum of defect prevention and detection strategies in order to reduce software development risks, time, and costs. It is performed by the software developer or engineer during the construction phase of the software development lifecycle. Development testing aims to eliminate construction errors before code is promoted to other testing; this strategy is intended to increase the quality of the resulting software as well as the efficiency of the overall development process.
Depending on the organization's expectations for software development, development testing might include static code analysis, data flow analysis, metrics analysis, peer code reviews, unit testing, code coverage analysis, traceability, and other software testing practices.
A/B testing
A/B testing is a method of running a controlled experiment to determine if a proposed change is more effective than the current approach. Customers are routed to either a current version (control) of a feature, or to a modified version (treatment) and data is collected to determine which version is better at achieving the desired outcome.
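A sketch of the mechanics, with invented bucket names, experiment id and counts: users are deterministically routed to control or treatment, and the resulting conversion rates are compared with a pooled two-proportion z-score. A real experimentation platform would add sample-size planning and explicit significance thresholds.

```python
import hashlib
import math

def assign_variant(user_id, experiment="checkout-button"):
    """Deterministically route a user to 'control' or 'treatment' so the
    same user always sees the same version."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "treatment" if int(digest, 16) % 2 else "control"

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Pooled two-proportion z-score for comparing conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Example with made-up counts: 480/5000 control vs 540/5000 treatment conversions.
print(assign_variant("user-42"))
print(round(two_proportion_z(480, 5000, 540, 5000), 2))
```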
Concurrent testing
Concurrent or concurrency testing assesses the behaviour and performance of software and systems that use concurrent computing, generally under normal usage conditions. Typical problems this type of testing will expose are deadlocks, race conditions and problems with shared memory/resource handling.
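A sketch of a concurrency test around a deliberately broken counter (the UnsafeCounter class is invented for the example): several threads perform an unsynchronized read-modify-write, and the test asserts the invariant that a correctly synchronized implementation must satisfy. Because thread scheduling is nondeterministic, the failure is intermittent, which is typical of the defects this kind of testing is meant to expose.

```python
import threading

class UnsafeCounter:
    """Deliberately broken: the read-modify-write below is not atomic."""
    def __init__(self):
        self.value = 0

    def increment(self):
        current = self.value      # read
        # A thread switch here lets another thread read the same value,
        # so one of the two increments is lost (a race condition).
        self.value = current + 1  # modify and write back

def test_counter_under_concurrency():
    counter = UnsafeCounter()

    def worker():
        for _ in range(10_000):
            counter.increment()

    threads = [threading.Thread(target=worker) for _ in range(8)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # A correctly synchronized counter always satisfies this; the unsafe
    # version can fail intermittently due to lost updates.
    assert counter.value == 8 * 10_000
```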
Conformance testing or type testing
In software testing, conformance testing verifies that a product performs according to its specified standards. Compilers, for instance, are extensively tested to determine whether they meet the recognized standard for that language.
Output comparison testing
Creating a display of expected output, whether as data comparison of text or screenshots of the UI, is sometimes called snapshot testing or Golden Master Testing. Unlike many other forms of testing, this cannot detect failures automatically and instead requires that a human evaluate the output for inconsistencies.
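A pytest-style sketch of golden-master comparison; the report format, file name, and use of the tmp_path fixture are illustrative. The first run records the output as the golden file, later runs diff against it, and a human decides whether a mismatch is a regression or an intentional change that should update the golden master.

```python
from pathlib import Path

def render_report(data):
    """Stand-in for the code under test: produce a textual report."""
    lines = [f"{name}: {value}" for name, value in sorted(data.items())]
    return "\n".join(lines) + "\n"

def check_against_golden(output, golden_path):
    golden = Path(golden_path)
    if not golden.exists():
        # First run: record the current output as the golden master.
        golden.write_text(output, encoding="utf-8")
        return
    expected = golden.read_text(encoding="utf-8")
    # A mismatch does not say which side is right; a human must review the
    # diff and either fix the code or update the golden file.
    assert output == expected, f"output differs from golden master {golden_path}"

def test_report_snapshot(tmp_path):
    output = render_report({"passed": 12, "failed": 1})
    check_against_golden(output, tmp_path / "report.golden.txt")
```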
Property testing
Property testing is a testing technique where, instead of asserting that specific inputs produce specific expected outputs, the practitioner randomly generates many inputs, runs the program on all of them, and asserts the truth of some "property" that should be true for every pair of input and output. For example, every output from a serialization function should be accepted by the corresponding deserialization function, and every output from a sort function should be a monotonically increasing list containing exactly the same elements as its input.
Property testing libraries allow the user to control the strategy by which random inputs are constructed, to ensure coverage of degenerate cases, or inputs featuring specific patterns that are needed to fully exercise aspects of the implementation under test.
Property testing is also sometimes known as "generative testing" or "QuickCheck testing" since it was introduced and popularized by the Haskell library QuickCheck.
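A sketch of the two properties mentioned above using the Hypothesis library, a Python analogue of QuickCheck (assuming it is installed): a serialize/deserialize round trip and the sortedness-plus-same-elements property of sorting. Hypothesis generates many random inputs and shrinks any counterexample it finds.

```python
import json
from collections import Counter

from hypothesis import given, strategies as st

@given(st.dictionaries(st.text(), st.integers()))
def test_json_round_trip(data):
    # Every output of the serializer must be accepted by the deserializer
    # and decode back to the original value.
    assert json.loads(json.dumps(data)) == data

@given(st.lists(st.integers()))
def test_sort_properties(values):
    result = sorted(values)
    # Output is in non-decreasing order...
    assert all(a <= b for a, b in zip(result, result[1:]))
    # ...and contains exactly the same elements as the input.
    assert Counter(result) == Counter(values)
```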
Metamorphic testing
Metamorphic testing (MT) is a property-based software testing technique, which can be an effective approach for addressing the test oracle problem and test case generation problem. The test oracle problem is the difficulty of determining the expected outcomes of selected test cases or to determine whether the actual outputs agree with the expected outcomes.
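A minimal sketch of a metamorphic relation: even without an oracle for the exact value of sin(x) at an arbitrary point, the identity sin(x) = sin(π − x) must relate pairs of executions. The sample count and tolerance are arbitrary choices for the example.

```python
import math
import random

def test_sine_metamorphic_relation():
    # No oracle tells us what sin(x) "should" be at a random point, but the
    # metamorphic relation sin(x) == sin(pi - x) must hold between runs.
    rng = random.Random(1)
    for _ in range(1000):
        x = rng.uniform(-10.0, 10.0)
        assert math.isclose(math.sin(x), math.sin(math.pi - x), abs_tol=1e-12)
```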
VCR testing
VCR testing, also known as "playback testing" or "record/replay" testing, is a testing technique for increasing the reliability and speed of regression tests that involve a component that is slow or unreliable to communicate with, often a third-party API outside of the tester's control. It involves making a recording ("cassette") of the system's interactions with the external component, and then replaying the recorded interactions as a substitute for communicating with the external system on subsequent runs of the test.
The technique was popularized in web development by the Ruby library vcr.
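A sketch using vcrpy, a Python port of the technique (assuming vcrpy and requests are installed); the URL and cassette path are illustrative. On the first run the HTTP exchange is recorded into the cassette; on later runs it is replayed, so the test no longer depends on the speed or availability of the third-party service.

```python
import requests
import vcr

@vcr.use_cassette("fixtures/cassettes/httpbin_get.yaml")
def test_external_api_with_cassette():
    # First run: the real HTTP interaction is recorded to the cassette.
    # Subsequent runs: the recorded interaction is replayed, making the
    # regression test fast and deterministic.
    response = requests.get("https://httpbin.org/get", params={"q": "demo"})
    assert response.status_code == 200
    assert response.json()["args"] == {"q": "demo"}
```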
Teamwork
Roles
In an organization, testers may be in a separate team from the rest of the software development team or they may be integrated into one team. Software testing can also be performed by non-dedicated software testers.
In the 1980s, the term software tester started to be used to denote a separate profession.
Notable software testing roles and titles include: test manager, test lead, test analyst, test designer, tester, automation developer, and test administrator.
Processes
Organizations that develop software perform testing differently, but there are common patterns.
Waterfall development
In waterfall development, testing is generally performed after the code is completed, but before the product is shipped to the customer. This practice often results in the testing phase being used as a project buffer to compensate for project delays, thereby compromising the time devoted to testing.
Some contend that the waterfall process allows for testing to start when the development project starts and to be a continuous process until the project finishes.
Agile development
Agile software development commonly involves testing while the code is being written and organizing teams with both programmers and testers and with team members performing both programming and testing.
One agile practice, test-driven software development (TDD), is a way of unit testing such that unit-level testing is performed while writing the product code. Test code is updated as new features are added and failure conditions are discovered (bugs fixed). Commonly, the unit test code is maintained with the project code, integrated in the build process, and run on each build and as part of regression testing. The goal of this continuous integration is to support development and reduce defects.
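A minimal sketch of a unit test kept alongside the product code and run on every build; the slugify function and its expected behaviour are invented for the example. In TDD the test cases come first and the implementation evolves until they pass.

```python
import re
import unittest

def slugify(title):
    """Product code under test: turn a title into a URL-friendly slug."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

class SlugifyTest(unittest.TestCase):
    # In TDD these cases are written first and the implementation evolves
    # until they pass; new cases are added as bugs are found.
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_punctuation_is_dropped(self):
        self.assertEqual(slugify("Testing: Levels & Types!"), "testing-levels-types")

if __name__ == "__main__":
    unittest.main()
```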
Even in organizations that separate teams by programming and testing functions, many often have the programmers perform unit testing.
Sample process
The sample below is common for waterfall development. The same activities are commonly found in other development models, but might be described differently.
Requirements analysis: testing should begin in the requirements phase of the software development life cycle. During the design phase, testers work to determine what aspects of a design are testable and with what parameters those tests work.
Test planning: test strategy, test plan, testbed creation. Since many activities will be carried out during testing, a plan is needed.
Test development: test procedures, test scenarios, test cases, test datasets, test scripts to use in testing software.
Test execution: testers execute the software based on the plans and test documents, then report any errors found to the development team. This part can be complex when tests must be run by people who lack programming knowledge.
Test reporting: once testing is completed, testers generate metrics and make final reports on their test effort and whether or not the software tested is ready for release.
Test result analysis: or defect analysis, is done by the development team usually along with the client, in order to decide what defects should be assigned, fixed, rejected (i.e. found software working properly) or deferred to be dealt with later.
Defect retesting: once a defect has been dealt with by the development team, it is retested by the testing team.
Regression testing: it is common to have a small test program built of a subset of tests, for each integration of new, modified, or fixed software, in order to ensure that the latest delivery has not ruined anything and that the software product as a whole is still working correctly.
Test closure: once the test meets the exit criteria, the activities such as capturing the key outputs, lessons learned, results, logs, documents related to the project are archived and used as a reference for future projects.
Quality
Software verification and validation
Software testing is used in association with verification and validation:
Verification: Have we built the software right? (i.e., does it implement the requirements).
Validation: Have we built the right software? (i.e., do the deliverables satisfy the customer).
The terms verification and validation are commonly used interchangeably in the industry; it is also common to see these two terms defined with contradictory definitions. According to the IEEE Standard Glossary of Software Engineering Terminology:
Verification is the process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase.
Validation is the process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified requirements.
And, according to the ISO 9000 standard:
Verification is confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled.
Validation is confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled.
The contradiction is caused by the use of the concepts of requirements and specified requirements but with different meanings.
In the case of IEEE standards, the specified requirements, mentioned in the definition of validation, are the set of problems, needs and wants of the stakeholders that the software must solve and satisfy. Such requirements are documented in a Software Requirements Specification (SRS). And, the products mentioned in the definition of verification, are the output artifacts of every phase of the software development process. These products are, in fact, specifications such as Architectural Design Specification, Detailed Design Specification, etc. The SRS is also a specification, but it cannot be verified (at least not in the sense used here, more on this subject below).
But, for the ISO 9000, the specified requirements are the set of specifications, as just mentioned above, that must be verified. A specification, as previously explained, is the product of a software development process phase that receives another specification as input. A specification is verified successfully when it correctly implements its input specification. All the specifications can be verified except the SRS because it is the first one (it can be validated, though). Examples: The Design Specification must implement the SRS; and, the Construction phase artifacts must implement the Design Specification.
So, when these words are defined in common terms, the apparent contradiction disappears.
Both the SRS and the software must be validated. The SRS can be validated statically by consulting with the stakeholders. Nevertheless, running some partial implementation of the software or a prototype of any kind (dynamic testing) and obtaining positive feedback from them can further increase the certainty that the SRS is correctly formulated. On the other hand, the software, as a final and running product (not its artifacts and documents, including the source code) must be validated dynamically with the stakeholders by executing the software and having them try it.
Some might argue that, for SRS, the input is the words of stakeholders and, therefore, SRS validation is the same as SRS verification. Thinking this way is not advisable as it only causes more confusion. It is better to think of verification as a process involving a formal and technical input document.
Software quality assurance
In some organizations, software testing is part of a software quality assurance (SQA) process. In SQA, software process specialists and auditors are concerned with the software development process rather than just the artifacts such as documentation, code and systems. They examine and change the software engineering process itself to reduce the number of faults that end up in the delivered software: the so-called defect rate. What constitutes an acceptable defect rate depends on the nature of the software; a flight simulator video game would have much higher defect tolerance than software for an actual airplane. Although there are close links with SQA, testing departments often exist independently, and there may be no SQA function in some companies.
Software testing is an activity to investigate software under test in order to provide quality-related information to stakeholders. By contrast, QA (quality assurance) is the implementation of policies and procedures intended to prevent defects from reaching customers.
Measures
Quality measures include such topics as correctness, completeness, security and ISO/IEC 9126 requirements such as capability, reliability, efficiency, portability, maintainability, compatibility, and usability.
There are a number of frequently used software metrics, or measures, which are used to assist in determining the state of the software or the adequacy of the testing.
Artifacts
A software testing process can produce several artifacts. The actual artifacts produced are a factor of the software development model used, stakeholder and organisational needs.
Test plan
A test plan is a document detailing the approach that will be taken for intended test activities. The plan may include aspects such as objectives, scope, processes and procedures, personnel requirements, and contingency plans. The test plan could come in the form of a single plan that includes all test types (like an acceptance or system test plan) and planning considerations, or it may be issued as a master test plan that provides an overview of more than one detailed test plan (a plan of a plan). A test plan can be, in some cases, part of a wide "test strategy" which documents overall testing approaches, which may itself be a master test plan or even a separate artifact.
Traceability matrix
Test case
A test case normally consists of a unique identifier, requirement references from a design specification, preconditions, events, a series of steps (also known as actions) to follow, input, output, expected result, and the actual result. Clinically defined, a test case is an input and an expected result. This can be as terse as "for condition x your derived result is y", although normally test cases describe in more detail the input scenario and what results might be expected. It can occasionally be a series of steps (but often steps are contained in a separate test procedure that can be exercised against multiple test cases, as a matter of economy) but with one expected result or expected outcome. The optional fields are a test case ID, test step, or order of execution number, related requirement(s), depth, test category, author, and check boxes for whether the test is automatable and has been automated. Larger test cases may also contain prerequisite states or steps, and descriptions. A test case should also contain a place for the actual result. These steps can be stored in a word processor document, spreadsheet, database, or other common repositories. In a database system, you may also be able to see past test results, who generated the results, and what system configuration was used to generate those results. These past results would usually be stored in a separate table.
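A sketch of such a test case as a data structure, with field names chosen for the example rather than taken from any particular test-management tool:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TestCase:
    """One row of a test-case repository, mirroring the fields described above."""
    identifier: str
    requirement_refs: List[str]
    preconditions: List[str]
    steps: List[str]
    input_data: str
    expected_result: str
    actual_result: str = ""   # filled in during execution
    automatable: bool = False
    automated: bool = False

login_case = TestCase(
    identifier="TC-042",
    requirement_refs=["SRS-3.1.4"],
    preconditions=["user account exists"],
    steps=["open login page", "enter credentials", "submit form"],
    input_data="user=alice, password=correct-horse",
    expected_result="dashboard is displayed",
)
```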
Test script
A test script is a procedure or programming code that replicates user actions. Initially, the term was derived from the product of work created by automated regression test tools. A test case will be a baseline to create test scripts using a tool or a program.
Test suite
Test fixture or test data
In most cases, multiple sets of values or data are used to test the same functionality of a particular feature. All the test values and changeable environmental components are collected in separate files and stored as test data. It is also useful to provide this data to the client along with the product or project. There are techniques to generate test data.
Test harness
The software, tools, samples of data input and output, and configurations are all referred to collectively as a test harness.
Test run
A test run is a collection of test cases or test suites that the user is executing and comparing the expected with the actual results. Once complete, a report on all executed tests may be generated.
Certifications
Several certification programs exist to support the professional aspirations of software testers and quality assurance specialists. A few practitioners argue that the testing field is not ready for certification, as mentioned in the controversy section.
Controversy
Some of the major software testing controversies include:
Agile vs. traditional: Should testers learn to work under conditions of uncertainty and constant change or should they aim at process "maturity"? The agile testing movement has grown in popularity since the early 2000s, mainly in commercial circles, whereas government and military software providers use this methodology but also the traditional test-last models (e.g., in the Waterfall model).
Manual vs. automated testing: Some writers believe that test automation is so expensive relative to its value that it should be used sparingly. Test automation can then be considered as a way to capture and implement the requirements. As a general rule, the larger the system and the greater the complexity, the greater the ROI in test automation. Also, the investment in tools and expertise can be amortized over multiple projects with the right level of knowledge sharing within an organization.
Is the existence of the ISO 29119 software testing standard justified? Significant opposition has formed out of the ranks of the context-driven school of software testing about the ISO 29119 standard. Professional testing associations, such as the International Society for Software Testing, have attempted to have the standard withdrawn.
Some practitioners declare that the testing field is not ready for certification: No certification now offered actually requires the applicant to show their ability to test software. No certification is based on a widely accepted body of knowledge. Certification itself cannot measure an individual's productivity, their skill, or practical knowledge, and cannot guarantee their competence or professionalism as a tester.
Studies used to show the relative expense of fixing defects: There are opposing views on the applicability of studies used to show the relative expense of fixing defects depending on their introduction and detection. For example:
It is commonly believed that the earlier a defect is found, the cheaper it is to fix it. The following table shows the cost of fixing the defect depending on the stage it was found. For example, if a problem in the requirements is found only post-release, then it would cost 10–100 times more to fix than if it had already been found by the requirements review. With the advent of modern continuous deployment practices and cloud-based services, the cost of re-deployment and maintenance may lessen over time.
The data from which this table is extrapolated is scant. Laurent Bossavit says in his analysis:
The "smaller projects" curve turns out to be from only two teams of first-year students, a sample size so small that extrapolating to "smaller projects in general" is totally indefensible. The GTE study does not explain its data, other than to say it came from two projects, one large and one small. The paper cited for the Bell Labs "Safeguard" project specifically disclaims having collected the fine-grained data that Boehm's data points suggest. The IBM study (Fagan's paper) contains claims that seem to contradict Boehm's graph and no numerical results that clearly correspond to his data points.
Boehm doesn't even cite a paper for the TRW data, except when writing for "Making Software" in 2010, and there he cited the original 1976 article. There exists a large study conducted at TRW at the right time for Boehm to cite it, but that paper doesn't contain the sort of data that would support Boehm's claims.
See also
Database testing, testing of databases
References
Further reading
External links
"Software that makes Software better" Economist.com
Software engineering terminology | Software testing | Technology,Engineering | 8,701 |
5,735,510 | https://en.wikipedia.org/wiki/Hydrostatic%20stress | In continuum mechanics, hydrostatic stress, also known as isotropic stress or volumetric stress, is a component of stress which contains uniaxial stresses, but not shear stresses. A specialized case of hydrostatic stress contains isotropic compressive stress, which changes only in volume, but not in shape. Pure hydrostatic stress can be experienced by a point in a fluid such as water. It is often used interchangeably with "mechanical pressure" and is also known as confining stress, particularly in the field of geomechanics.
Hydrostatic stress is equivalent to the average of the uniaxial stresses along three orthogonal axes, so it is one third of the first invariant of the stress tensor (i.e. the trace of the stress tensor): σ_h = (1/3) I_1 = (1/3) tr(σ).
For example, in Cartesian coordinates (x, y, z) the hydrostatic stress is simply: σ_h = (σ_xx + σ_yy + σ_zz)/3.
Hydrostatic stress and thermodynamic pressure
In the particular case of an incompressible fluid,
the thermodynamic pressure coincides with the mechanical pressure (i.e. the opposite of the hydrostatic stress): p = −σ_h.
In the general case of a compressible fluid, the thermodynamic pressure p is no longer proportional to the isotropic stress term (the mechanical pressure), since there is an additional term dependent on the trace of the strain rate tensor: p̄ = p − ζ (∇·u),
where the coefficient ζ is the bulk viscosity. The trace of the strain rate tensor corresponds to the flow compression (the divergence of the flow velocity): tr(ε̇) = ∇·u.
So the expression for the thermodynamic pressure is usually expressed as: p = p̄ + ζ (∇·u),
where the mechanical pressure has been denoted with p̄.
In some cases, the second viscosity ζ can be assumed to be constant, in which case the effect of the volume viscosity is that the mechanical pressure is not equivalent to the thermodynamic pressure, as stated above.
However, this difference is usually neglected most of the time (that is, whenever we are not dealing with processes such as sound absorption and attenuation of shock waves, where the second viscosity coefficient becomes important) by explicitly assuming ζ = 0. The assumption of setting ζ = 0 is called the Stokes hypothesis. The validity of the Stokes hypothesis can be demonstrated for monatomic gases both experimentally and from the kinetic theory; for other gases and liquids, the Stokes hypothesis is generally incorrect.
Potential external field in a fluid
Its magnitude in a fluid, σ_h, can be given by Stevin's Law:
σ_h = Σ_i ρ_i g h_i
where
i is an index denoting each distinct layer of material above the point of interest;
ρ_i is the density of each layer;
g is the gravitational acceleration (assumed constant here; this can be substituted with any acceleration that is important in defining weight);
h_i is the height (or thickness) of each given layer of material.
For example, the magnitude of the hydrostatic stress felt at a point under ten meters of fresh water would be
σ_h = ρ_w g h_w = 1000 kg/m³ · 9.8 m/s² · 10 m ≈ 98 kPa,
where the index w indicates "water".
Because the hydrostatic stress is isotropic, it acts equally in all directions. In tensor form, the hydrostatic stress is equal to σ_h I_3,
where I_3 is the 3-by-3 identity matrix.
Hydrostatic compressive stress is used for the determination of the bulk modulus for materials.
Notes
References
Stress tensor
Volumetric strain
Deviatoric stress tensor
Flow velocity
Pressure
Bulk viscosity
Isotropy
Continuum mechanics
Orientation (geometry) | Hydrostatic stress | Physics,Mathematics | 676 |
52,297,611 | https://en.wikipedia.org/wiki/Multi%20leaf%20spring | Multi-leaf springs are widely used for the suspension of cars, trucks and railway wagons. A multi-leaf spring consists of a series of flat plates, usually of semi-elliptical shape. The flat plates are called leaves of the spring. The leaf at the top has maximum length. The length gradually decreases from the top leaf to the bottom leaf. The longest leaf at the top is called the master leaf. It is bent at both ends to form the spring eyes. Two bolts are inserted through these eyes to fix the leaf spring to the automobile body. The leaves are held together by means of two U-bolts and a center clip. Rebound clips are provided to keep the leaves in alignment and prevent lateral shifting of the leaves during operation. At the center, the leaf spring is supported on the axle. Multi-leaf springs are provided with one or two extra full-length leaves in addition to the master leaf and the graduated length leaves. The extra full-length leaves are provided to support the transverse shear force.
A multi-leaf spring is not just a bunch of pieces of steel put together. It is an engineered system designed to provide support, stability and safety to a vehicle. In a multi-leaf spring the length and the make-up of each leaf is important because each leaf is designed to carry a proportionate amount of load and stress.
Also each leaf is designed to provide support to the leaf above and below it.
Analysis
For this purpose, the leaves are divided into two groups namely, master leaf along with graduated-length leaves forming one group and extra full-length leaves forming the other.
The group of graduated-length leaves along with master leaf can be treated as triangular plate. It is assumed that individual leaves are separated and the master leaf placed at the center, then the second leaf is cut longitudinally into two halves.
References
Vehicle design | Multi leaf spring | Engineering | 371 |
51,648,390 | https://en.wikipedia.org/wiki/List%20of%20music%20sequencers | Music sequencers are hardware devices or application software that can record, edit, or play back music, by handling note and performance information.
Hardware sequencers
Many synthesizers, and by definition all music workstations, groove machines and drum machines, contain their own sequencers.
The following are specifically designed to function primarily as the music sequencers:
Rotating object with pins or holes
Barrel or cylinder with pins (since 9th or 14th century) — utilized on barrel organs, carillons, music boxes
Metal disc with punched holes (late 18th century) — utilized on several music boxes such as Polyphon, Regina, Symphonion, Ariston, Graphonola (early version), etc.
Punched paper
Book music (since 1890) for pneumatics system — utilized on several mechanical organs
Music roll for pneumatics system — utilized on player pianos (using piano rolls), Orchestrions, several mechanical organs, etc.
Punch tape system for earliest studio synthesizers
RCA Mark II Sound Synthesizer by Herbert Belar and Harry Olson at RCA, a room-filling device built in 1957 for half a million dollars. Included a 4-polyphony synth with 12 oscillators, a sequencer fed with wide paper tape, with output recorded by a disc cutting lathe.
Siemens Synthesizer (1959) at Siemens-Studio für elektronische Musik
Sound-on-film
Variophone (1930) by Evgeny Sholpo—on the earliest version, hand-drawn waves on film or disc were used to synthesize sound; later versions were intended to experiment with musical intonations and the temporal characteristics of live music performance, but were never finished. The Variophone is often referred to as a forerunner of drawn-sound systems including the ANS synthesizer and Oramics.
Composer-Tron (1953) by Osmond Kendal—rhythmical sequences were controlled via marking cue on film, while timbre of note or envelope-shape of sound were defined via hand drawn shapes on a surface of a CRT input device, drawn with a grease pencil.
ANS synthesizer (1938-1958) by Evgeny Murzin—an earliest realtime additive synthesizer using 720 microtonal sine waves (1/6 semitones × 10 octaves) generated by five glass discs. Composers could control the time evolution of amplitudes of each microtone via scratches on a glass plate user interface covered with black mastic.
Oramics (1957) by Daphne Oram—hand-drawn contours on a set of ten sprocketed, synchronized strips of 35 mm film were used to control various parameters of a monophonic sound generator (frequency, timbre, amplitude and duration). Polyphonic sounds were obtained using a multitrack recording technique.
Electro-mechanical sequencers
Wall of Sound (mid-1940s–1950s) by Raymond Scott—early electro-mechanical sequencer developed by Raymond Scott to produce rhythmic patterns, consistent with stepping relays, solenoids, and tone generators
Circle Machine (1959) by Raymond Scott—electro-optical rotary sequencer developed by Raymond Scott to generate arbitrary waveforms, consistent with dimmer bulbs arranged in a ring, and a rotating arm with photocell scanning over the ring
Wurlitzer Sideman (1959)—first commercial drum machine; rhythm patterns were electro-mechanically generated by rotating disk switches, and drum sounds were electronically generated by vacuum-tube circuits
Analog sequencers
Analog sequencers with CV/Gate interface
Buchla 100's sequencer modules (1964/1966–)
One of the earliest analog sequencers of the modular synthesizer era beginning in the 1960s. Robert Moog later admired Buchla's unique designs, including this sequencer
Moog 960 Sequential Controller / 961 Interface / 962 Sequential Switch (c.1968)
A popular analog sequencer module for the Moog modular synthesizer system, following the earliest Buchla sequencer
Aries AR334 (module)
ARP 1601 and 1027 (module)
Buchla 245, 246
Doepfer Dark Time
Electro Harmonix Sequencer
EML 400
ETI 603 (DIY project)
genoQs Octopus-digital midi
genoQs Nemo-digital midi
Korg SQ-10
MFB Urzwerg / MFB Urzwerg Pro—CV/Gate step sequencer with 8steps/4tracks or 16steps/2tracks; also synchronizable with MIDI sequencer
Oberheim Mini Sequencer MS1A
PAiA 4780
Polyfusion AS1, AS1R and 2040/2041/2042/2043 modules
PPG 313, 314
Roland 104, 182, 717A
Sequential Circuits Model 600
Serge Modular TKB, SQP, SEQ8
Steiner Parker 151
Synthesizers.com Q119
Synthesizers.com Q960—reissue of Moog 960
WMS 1020A
Yamaha CS30 (1977)—monophonic synthesizer keyboard with built-in 8-step analog sequencer
Analog-style step sequencers
Analog-style MIDI step sequencers
Since the analog synthesizer revivals of the 1990s, newly designed MIDI sequencers with a series of knobs or sliders similar to analog sequencers have appeared. These often provide CV/Gate and DIN sync interfaces along with MIDI, and even patch memory for multiple sequence patterns and possibly song sequences. These analog-digital hybrid machines are often called "analogue-style MIDI step sequencers" or "MIDI analogue sequencers", etc.
Doepfer MAQ 16/3—MIDI analog sequencer, designed in cooperation with Kraftwerk
Doepfer Regelwerk—MIDI analog sequencer with MIDI controller
Frostwave Fat Controller
Infection Music Phaedra
Infection Music Zeit
Latronic Notron
Manikin Schrittmacher
Quasimidi Polymorph (1999)—Four-part multitimbral tabletop synthesizer, with an analogue-like step sequencer
Roland EF-303—Multiple effects unit with 16-step modulation, also usable as the analog-style MIDI step sequencer
Sequentix P3
Analog-style MIDI pattern sequencers
Several machines also provide "song mode" to play the sequence of memorised patterns in specified order, as per drum machines.
Doepfer Schaltwerk—MIDI pattern sequencer
Step sequencers (supported on)
Typical step sequencers are integrated into drum machines, bass machines, groove machines, music production machines, and their software versions. Often, these also support a semi-realtime recording mode.
MFB Step 64—Standalone step sequencer dedicated for drum patterns (16 steps/4 tracks or 64 steps/1 track, 118 programs×4 banks, 16 song sequences, each with up to 128 sequences)
Embedded self-contained step sequencers
Several tiny keyboards provide a step sequencer combined with an independent timing mode for recording and performance:
Casio VL-Tone VL-1 (1979), Casiotone MT-70 (c.1984), Sampletone SK-1 (1986), etc.—Timings of musical notes stored on the step sequencer can be designated by the two trigger buttons labeled "One Key Play", near the right-hand position
Embedded CV/Gate step sequencers
Several machines have white and black chromatic keypads, to enter the musical phrases.
Multivox / Firstman SQ-01 (1980)—a forerunner of TB-303
Roland TB-303 (1981)
Roland SH-101 (1982)—monophonic keytar synthesizer with sequencer
Roland MC-202 (1983)—monophonic tabletop synthesizer with sequencer, similar to SH-101
Embedded MIDI step sequencers
Groovebox-type machines with white and black chromatic keypads, often support step recording mode along with realtime recording mode:
Korg Electribe / Electribe 2 series
Roland Corporation MC series: MC-09 / MC-303 / MC-307 / MC-505 / MC-808 / MC-909
Yamaha RM1x
Yamaha RS7000—Music Production Studio
Other groovebox-type machines (including several music production machines) also often support step recording mode, of course:
Linn 9000 (1984)
Sequential Circuits Studio 440 (1986)
E-mu SP-12 (1986)
E-mu SP-1200 (1987)
Akai MPC series (1988–)
Akai MPC Renaissance / Studio / Fly (2012)—Software with control surfaces
Native Instruments Maschine (2009)—Software with control surface
Roland MV-30
Roland MV-8000—Production Studio
Button-grid-style step sequencers
Recently emerging button-grid-style interfaces/instruments naturally support step sequencing. On these machines, one axis of the grid selects the pitch or sample to play, and the other axis represents the timing of notes.
Akai APC40—interface for Ableton Live
Arduinome—interface
Bliptronics 5000—instrument
Monome—interface
Novation Launchpad—interface for Ableton Live
Yamaha Tenori-on—instrument
Synthstrom Deluge - Piano-roll-style sequencing on 128 pads (16×8)
In addition, newly designed hardware MIDI sequencers equipped with a series of knobs/sliders similar to analog sequencers have appeared. For details, see #Analog-style MIDI step sequencers.
Digital sequencers
CV/Gate
Also often support Gate clock and DIN sync interfaces.
EDP Spider (late 1970s)—supported LINK and CV/Gate
EMS Sequencer series (1971)
Max Mathews GROOVE system (1970)
Multivox MX-8100 / Firstman SQ-10 (1979/1980)—supported V/Oct. and Hz/V
Oberheim DS-2 (1974)
Roland CSQ-100
Roland CSQ-600 (1980)—it memories 600 notes for individual 4 tracks, a buddy of TR-808
Roland MC-4 Microcomposer (1981)
Roland MC-8 Microcomposer (1977)—also supporting DCB via OP-8
Sequential Circuits Model 800 (1977)
Proprietary digital interfaces (pre MIDI era)
NED Synclavier series—CV/Gate interface and MIDI retrofit kit were available on Synclavier II. Also MIDI became standard feature on Synclavier PSMT
Fairlight CMI series—CV/Gate interface was optionally available on Series II, and MIDI was supported on Series IIx and later models
Oberheim DSX (Oberheim Parallel Bus)
PPG Wave family (PPG Bus)
Rhodes Chroma (Chroma Computer Interface)
Roland JSQ-60 (Roland Digital Control Bus (DCB))
Sequential Circuits PolySequencer 1005 (SCI Serial Bus)
Yamaha CS70M (Key Code Interface)
Hardware MIDI sequencers
Standalone MIDI sequencers
Akai ASQ10
Alesis MMT-8—a buddy of HR-16 drum machine
Korg SQD-1
Korg SQD-8
Kawai Q-80
Roland MC-327
Roland MC series: MC-50/MC-50MkII/MC-80/MC-300/MC-500 Microcomposer
Roland MSQ-100 (1985)
Roland MSQ-700 (1984)—one of the earliest multitrack MIDI sequencer (8tr), a buddy of TR-909
Roland SB-55—SMF recorder
Yamaha QX series: QX1/QX3/QX5/QX7/QX21
MIDI phrase sequencers
Zyklus MPS
Embedded MIDI sequencers
Sequential Circuits Six-Track (1984), MultiTrak (1985), Split-8 / Pro-8 (1985)
MIDI sequencers with embedded sound module
Yamaha TQ5—desktop version of EOS YS200 FM workstation
Yamaha QY10—with embedded GM tone generator (1990)
Yamaha QY20—with embedded GM tone generator (1992)
Yamaha QY300—with embedded GM tone generator (1994)
Yamaha QY22—with embedded GM tone generator (1995)
Yamaha QY700—with embedded XG tone generator (1996)
Yamaha QY70—with embedded XG tone generator (1997)
Yamaha QY100—with embedded XG tone generator (2000)
Palmtop MIDI sequencers
Korg SQ-8—palmtop sequencer
Philips Micro Composer PMC100
Roland PMA-5—palmtop sequencer with touch screen
Yamaha Walkstation series: QY8/QY10/QY20/QY22/QY70/QY100—palmtop sequencer with embedded sound module
Accompaniment machines
Boss DR-5 Dr.Rhythm Section
Yamaha QR10 Musical Accompaniment Player
Open-source hardware
MIDIbox Sequencer modules—Analog-style MIDI step sequencer/MIDI effect processor modules of MIDIbox project
oTTo Sampler, Sequencer, Multi-engine synth and effects - in a box.
Software sequencers and DAWs with sequencing features
Free, open source
Scorewriters
MuseScore—Linux, Windows, OS X
DAW with MIDI sequencers
Ardour—Linux, OS X, FreeBSD, Windows
LMMS—Linux, Windows
MusE—Linux
Qtractor—Linux
Rosegarden—Linux
Auxy: Beat Studio—iOS 7
Drum machines
Hydrogen—Linux, OS X
Commercial
Scorewriters
Aegis Sonix—Amiga
Software MIDI sequencers
B-Step Sequencer from Monoplugs
Fugue Machine from Alexandernaut
Master Tracks Pro from Passport Music Software
Bars & Pipes Professional from Blue Ribbon Software [improved by Alfred Faust] at http://bnp.hansfaust.de/indexeng.html
Loop-oriented DAWs with MIDI sequencers
ACID Pro and Cinescore from Sony Creative Software
Live from Ableton
GarageBand from Apple
REAPER from Cockos
Tracktion from Mackie
Tracker-oriented DAWs with MIDI sequencers
Renoise
DAWs with MIDI sequencers
Ableton Live from Ableton
Audition from Adobe (removed since Version 4 CS5.5)
Bitwig Studio from Bitwig
Cubase and Nuendo from Steinberg
Digital Performer from MOTU
REAPER from Cockos
FL Studio from Image Line Software
Logic Pro and Logic Express from Apple
Mixcraft from Acoustica
Mixbus from Harrison
MuLab from MUTools
MultitrackStudio from Bremmers Audio Design
n-Track Studio from n-Track Software
Pro Tools from Avid
Samplitude, Sequoia, Music Maker and Music Studio from Magix
Sonar, Music Creator and Home Studio from Cakewalk
Studio One from PreSonus
Podium from Zynewave (gratis)
Z-Maestro from Z-Systems
Integrated software studio environments
Reason and Record from Propellerhead
Storm from Arturia
See also
List of music software
References
Music sequencers | List of music sequencers | Engineering | 2,996 |
25,723,035 | https://en.wikipedia.org/wiki/Yang%E2%80%93Mills%E2%80%93Higgs%20equations | In mathematics, the Yang–Mills–Higgs equations are a set of non-linear partial differential equations for a Yang–Mills field, given by a connection, and a Higgs field, given by a section of a vector bundle (specifically, the adjoint bundle). These equations are
with a boundary condition
where
A is a connection on a vector bundle,
D is the exterior covariant derivative,
F is the curvature of that connection,
Φ is a section of that vector bundle,
∗ is the Hodge star, and
[·,·] is the natural, graded bracket.
These equations are named after Chen Ning Yang, Robert Mills, and Peter Higgs. They are very closely related to the Ginzburg–Landau equations, when these are expressed in a general geometric setting.
M.V. Goganov and L.V. Kapitanskii have shown that the Cauchy problem for hyperbolic Yang–Mills–Higgs equations in Hamiltonian gauge on 4-dimensional Minkowski space has a unique global solution with no restrictions at the spatial infinity. Furthermore, the solution has the finite propagation speed property.
Lagrangian
The equations arise as the equations of motion of the Lagrangian density
where ⟨·,·⟩ is an invariant symmetric bilinear form on the adjoint bundle. This is sometimes written with a trace, due to the fact that such a form can arise from the trace on the Lie algebra under some representation; in particular here we are concerned with the adjoint representation, and the trace on this representation is the Killing form.
For the particular form of the Yang–Mills–Higgs equations given above, the potential is vanishing. Another common choice is a quadratic potential in Φ (a mass term), corresponding to a massive Higgs field.
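As a schematic reminder of the shape of the theory (signs and constant factors vary between references, so the normalization below is an assumption rather than a quotation of any one source), the Lagrangian density combines the curvature term, the covariant derivative of the Higgs field, and the potential; for vanishing potential the Euler–Lagrange equations couple the Yang–Mills equation with a Φ-dependent source to a covariant Laplace-type equation for Φ:

```latex
% Schematic only: signs and constant factors depend on the chosen conventions.
\mathcal{L}(A,\Phi) = \langle F_A, F_A\rangle + \langle d_A\Phi,\, d_A\Phi\rangle + V(\Phi)
% For V = 0 the Euler--Lagrange equations take the form
d_A{\star}F_A + [\Phi,\, {\star}\,d_A\Phi] = 0, \qquad d_A{\star}\,d_A\Phi = 0
```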
This theory is a particular case of scalar chromodynamics where the Higgs field is valued in the adjoint representation as opposed to a general representation.
See also
Yang–Mills equations
Stable Yang–Mills–Higgs pair
Scalar chromodynamics
References
M.V. Goganov and L.V. Kapitanskii, "Global solvability of the initial problem for Yang–Mills–Higgs equations", Zapiski LOMI 147, 18–48 (1985); J. Sov. Math. 37, 802–822 (1987).
Partial differential equations
Quantum field theory | Yang–Mills–Higgs equations | Physics | 480 |
168,680 | https://en.wikipedia.org/wiki/Teradata | Teradata Corporation is an American software company that provides cloud database and analytics-related software, products, and services. The company was formed in 1979 in Brentwood, California, as a collaboration between researchers at Caltech and Citibank's advanced technology group.
Overview
Teradata is an enterprise software company that develops and sells database analytics software. The company provides three main services: business analytics, cloud products, and consulting. It operates in North and Latin America, Europe, the Middle East, Africa, and Asia.
Teradata is headquartered in San Diego, California and has additional major U.S. locations in Atlanta and San Francisco, where its data center research and development is housed. It is publicly traded on the New York Stock Exchange (NYSE) under the stock symbol TDC. Steve McMillan has served as the company's president and chief executive officer since 2020. The company reported $1.836 billion in revenue, a net income of $129 million, and 8,535 employees globally, as of February 9, 2020.
History
The concept of Teradata grew from research at the California Institute of Technology and from the discussions of Citibank's advanced technology group in the 1970s. In 1979, the company was incorporated in Brentwood, California by Jack E. Shemer, Philip M. Neches, Walter E. Muir, Jerold R. Modes, William P. Worth, Carroll Reed and David Hartke. Teradata released its DBC/1012 database machine in 1984. In 1990, the company acquired Sharebase, originally named Britton Lee. In September 1991, AT&T Corporation acquired NCR Corporation, which announced the acquisition of Teradata for about $250 million in December. Teradata built the first system over 1 terabyte for Wal-Mart in 1992.
NCR acquired Strategic Technologies & Systems in 1999 and appointed Stephen Brobst as chief technology officer of Teradata Solutions Group. In 2000, NCR acquired Ceres Integrated Solutions and its customer relationship management software for $90 million, as well as Stirling Douglas Group and its demand chain management software. Teradata acquired financial management software from DecisionPoint in 2005. In January 2007, NCR announced Teradata would become an independent public company, led by Michael F. Koehler. The new company's shares started trading in October.
In April 2016, a hardware product line called IntelliFlex was announced. Victor L. Lund became the chief executive on May 5, 2016.
In October 2018, Teradata started promoting its cloud analytics software called Vantage (which evolved from the Teradata Database).
On May 7, 2020, Teradata announced the appointment of Steve McMillan as president and chief executive officer, effective June 8, 2020.
In December 2024, Teradata successfully appealed a California federal judge's decision in favor of SAP SE in a case involving allegations of trade secret misappropriation and antitrust violations. The lawsuit accused SAP of using Teradata's trade secrets to address technical issues in its software and of violating antitrust laws by bundling products and requiring customers to purchase them together. A federal appeals court reversed the earlier ruling, reviving Teradata's claims.
Acquisitions and divestitures
Teradata has acquired several companies since becoming an independent public company in 2008. In March 2008, Teradata acquired professional services company Claraview, which previously had spun out software provider Clarabridge. Teradata acquired column-oriented DBMS vendor Kickfire in August 2010, followed by the marketing software company Aprimo for about $550 million in December. In March 2011, the company acquired Aster Data Systems for about $263 million. Teradata acquired Software-as-a-service digital marketing company eCircle in May 2012, which was merged into the Aprimo business.
In 2014, Teradata acquired the assets of Revelytix, a provider of information management products and for a reported $50 million. In September, Teradata acquired Hadoop service firm Think Big Analytics. In December, Teradata acquired RainStor, a company specializing in online data archiving on Hadoop. Teradata acquired Appoxxee, a mobile marketing software as a service provider, for about $20 million in January 2015, followed by the Netherlands-based digital marketing company FLXone in September. That same year Teradata acquired a small Business Intelligence firm, MiaPearl.
In July 2016, the marketing applications division, using the Aprimo brand, was sold to private equity firm Marlin Equity Partners for about $90 million, with Aprimo, under CEO John Stammen, moving its headquarters to Chicago and absorbing Revenew Inc., which Marlin had also bought.
Teradata acquired Big Data Partnership, a service company based in the UK, on July 21, 2016. In July 2017, Teradata acquired StackIQ, maker of the Stacki cluster manager software.
Technology and products
Teradata offers three primary services to its customers: cloud and hardware-based data warehousing, business analytics, and consulting services. In September 2016, the company launched Teradata Everywhere, which allows users to submit queries against public and private databases. The service uses massively parallel processing across both its physical data warehouse and cloud storage, including managed environments such as Amazon Web Services, Microsoft Azure, VMware, and Teradata's Managed Cloud and IntelliFlex. Teradata offers customers both hybrid cloud and multi-cloud storage. In March 2017, Teradata introduced Teradata IntelliCloud, a secure managed cloud for data and analytic software as a service. IntelliCloud is compatible with Teradata's data warehouse platform, IntelliFlex. The Teradata Analytics Platform was unveiled in 2017.
Big data
Teradata began to use the term "big data" in 2010. CTO Stephen Brobst attributed the rise of big data to "new media sources, such as social media." The increase in semi-structured and unstructured data gathered from online interactions prompted Teradata to form the "Petabyte club" in 2011 for its heaviest big data users.
The rise of big data resulted in many traditional data warehousing companies updating their products and technology. For Teradata, big data prompted the acquisition of Aster Data Systems in 2011 for the company's MapReduce capabilities and ability to store and analyze semi-structured data.
Old hardware line end of support
To focus on its cloud database and analytics software business, Teradata plans to end support for its old hardware platforms; Teradata will provide remedial maintenance services for six (6) years from its Platform Sales Discontinuation Date. Platform Support Discontinuation is the end of the support date for a particular Teradata hardware platform. Teradata hardware platforms saw support discontinuation start in 2017, with the 2580 generation of their data warehouse appliance, with only their IntelliFlex 2.5 and R740 hardware platforms being listed as "current product". All other platform models/classes have a published support discontinuation date. Teradata customers may request extended support for a Teradata platform that has reached its end of support life. The extended support for a platform requires approval from the local Area Director and will be delivered on a "best effort" basis, as spare parts for older platforms may no longer be available.
Teradata Vantage analytics
In October 2018, Teradata started calling the cloud analytics software product line Vantage. Vantage is composed of various analytics engines on a core relational database, including the Aster graph database and a machine learning engine. The capability to leverage open-source analytic engines such as Spark and TensorFlow was released.
Vantage can be deployed across public clouds, on-premises, and commodity infrastructure. Vantage provides storage and analysis for multi-structured data formats.
References
External links
Companies listed on the New York Stock Exchange
Data warehousing products
Companies based in San Diego
Software companies based in California
NCR Corporation
Big data companies
Computer companies of the United States
Computer companies established in 1979
Software companies established in 1979
American companies established in 1979
1979 establishments in California
Corporate spin-offs
Software companies of the United States
2007 initial public offerings
Computer hardware companies
Companies in the S&P 400 | Teradata | Technology | 1,708 |
46,775,463 | https://en.wikipedia.org/wiki/Diphasiastrum%20%C3%97%20sabinifolium | Diphasiastrum × sabinifolium, the savinleaf groundpine or savin leaf club moss, is a hybrid of D. sitchense and D. tristachyum. It can be found in North America from Labrador and Newfoundland to Ontario, and south to Pennsylvania and Michigan. Erect stems can reach 20 centimeters high, and branch dichotomously. The sterile branches are flattened, and the leaves are 4-ranked. Peduncles are 1-8 centimeters long. In many disturbed sites, it can be found growing alongside D. sitchense, and can be distinguished by flattened branchlets split into four ranks, as opposed to those of D. sitchense, which generally are rounded and split into five ranks.
References
sabinifolium
Hybrid plants | Diphasiastrum × sabinifolium | Biology | 161 |
20,530,144 | https://en.wikipedia.org/wiki/Tripod%20%28surveying%29 | A surveyor's tripod is a device used to support any one of a number of surveying instruments, such as theodolites, total stations, levels or transits.
History
The modern sturdy, but portable, tripod stand with three leg pairs hinged to a triangular metal head was invented and first manufactured for sale by Sir Francis Ronalds in the late 1820s in Croydon. He sold 140 of the stands in the decade 1830-40 and his design was soon imitated by others.
Older surveying tripods had slightly different features compared to modern ones. For example, on some older tripods, the instrument had its own footplate and did not need to move laterally relative to the tripod head. For this reason, the head of the tripod was not a flat footplate but was simply a large diameter fitting. Threads on the outside of the head engaged threads on the instrument's footplate. No other mounting screw was used.
Fixed length legs were also seen on older instruments. Instrument height was adjusted by changing the angle of the legs. Widely spaced tripod feet resulted in a lower instrument while closely spaced legs raised the instrument. This was considerably less convenient than having variable length legs.
Materials for older tripods were predominantly wood and brass, with some steel for high wear items like the feet or foot points.
Usage
The tripod is placed in the location where it is needed. The surveyor will press down on the legs' platforms to securely anchor the legs in soil or to force the feet to a low position on uneven, pock-marked pavement. Leg lengths are adjusted to bring the tripod head to a convenient height and make it roughly level.
Once the tripod is positioned and secure, the instrument is placed on the head. The mounting screw is pushed up under the instrument to engage the instrument's base and screwed tight when the instrument is in the correct position. The flat surface of the tripod head is called the foot plate and is used to support the adjustable feet of the instrument.
Positioning the tripod and instrument precisely over an indicated mark on the ground or benchmark requires intricate techniques.
Construction
Many modern tripods are constructed of aluminum, though wood is still used for legs. The feet are either aluminum tipped with a steel point or steel. The mounting screw is often brass or brass and plastic. The mounting screw is hollow and has two lateral holes to attach a plumb bob to center the instrument e.g. over a corner or other mark on the ground. After the instrument is centered within a few cm over the mark, the plumb bob is removed and a viewer (using a prism) in the instrument is used to exactly center it.
The top is typically threaded with a 5/8" x 11 tpi screw thread. The mounting screw is held to the underside of the tripod head by a movable arm. This permits the screw to be moved anywhere within the head's opening. The legs are attached to the head with adjustable screws that are usually kept tight enough to allow the legs to be moved with a bit of resistance. The legs are two part, with the lower part capable of telescoping to adjust the length of the leg to suit the terrain. Aluminum or steel slip joints with a tightening screw are at the bottom of the upper leg to hold the bottom part in place and fix the length. A shoulder strap is often affixed to the tripod to allow for ease of carrying the equipment over areas to be surveyed.
See also
Tribrach (instrument)
References
Raymond Davis, Francis Foote, Joe Kelly, Surveying, Theory and Practice, McGraw-Hill Book Company, 1966 LC 64-66263
Construction surveying
Surveying instruments | Tripod (surveying) | Engineering | 748 |
3,687,460 | https://en.wikipedia.org/wiki/System%20usability%20scale | In systems engineering, the system usability scale (SUS) is a simple, ten-item attitude Likert scale giving a global view of subjective assessments of usability. It was developed by John Brooke at Digital Equipment Corporation in the UK in 1986 as a tool to be used in usability engineering of electronic office systems.
The usability of a system, as defined by the ISO standard ISO 9241 Part 11, can be measured only by taking into account the context of use of the system—i.e., who is using the system, what they are using it for, and the environment in which they are using it. Furthermore, measurements of usability have several different aspects:
effectiveness (can users successfully achieve their objectives)
efficiency (how much effort and resource is expended in achieving those objectives)
satisfaction (was the experience satisfactory)
Measures of effectiveness and efficiency are also context specific. Effectiveness in using a system for controlling a continuous industrial process would generally be measured in very different terms to, say, effectiveness in using a text editor. Thus, it can be difficult, if not impossible, to answer the question "is system A more usable than system B", because the measures of effectiveness and efficiency may be very different. However, it can be argued that given a sufficiently high-level definition of subjective assessments of usability, comparisons can be made between systems.
The final SUS score is computed by first converting the raw item scores: for each odd-numbered item, the score contribution is the raw score minus 1, and for each even-numbered item, it is 5 minus the raw score. The contributions are then summed and multiplied by 2.5, yielding a score between 0 and 100.
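A minimal sketch of this scoring procedure in Python follows; the function name and the example responses are illustrative and not part of the SUS specification.

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items are positively worded, so each contributes
    (response - 1); even-numbered items are negatively worded, so each
    contributes (5 - response).  The summed contributions are scaled by 2.5
    to give a score between 0 and 100.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected ten responses, each between 1 and 5")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return 2.5 * total

# Example: a fairly positive questionnaire result
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0
```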
SUS has generally been seen as providing this type of high-level subjective view of usability and is thus often used in carrying out comparisons of usability between systems. Because it yields a single score on a scale of 0–100, it can be used to compare even systems that are outwardly dissimilar. This one-dimensional aspect of the SUS is both a benefit and a drawback, because the questionnaire is necessarily quite general.
Recently, Lewis and Sauro suggested a two-factor orthogonal structure, which practitioners may use to score the SUS on independent Usability and Learnability dimensions. At the same time, Borsci, Federici and Lauriola, in an independent analysis, confirmed the two-factor structure of the SUS, also showing that those factors (Usability and Learnability) are correlated.
The SUS has been widely used in the evaluation of a range of systems. Bangor, Kortum and Miller have used the scale extensively over a ten-year period and have produced normative data that allow SUS ratings to be positioned relative to other systems. They propose an extension to SUS to provide an adjective rating that correlates with a given score. Based on a review of hundreds of usability studies, Sauro and Lewis proposed a curved grading scale for mean SUS scores.
References
Further reading
External links
System Usability Scale (SUS) Analysis Toolkit
System Usability Scale (SUS) Score Calculator
Computer-related introductions in 1986
User interfaces
Systems engineering
Human–computer interaction
Qualitative research
Survey methodology
Usability | System usability scale | Technology,Engineering | 628 |
43,269,014 | https://en.wikipedia.org/wiki/G%C3%B6del%20logic | In mathematical logic, a Gödel logic, sometimes referred to as Dummett logic or Gödel–Dummett logic, is a member of a family of finite- or infinite-valued logics in which the sets of truth values V are closed subsets of the unit interval [0,1] containing both 0 and 1. Different such sets V in general determine different Gödel logics. The concept is named after Kurt Gödel.
In 1959, Michael Dummett showed that infinite-valued propositional Gödel logic can be axiomatised by adding the axiom schema (the linearity axiom)
(A → B) ∨ (B → A)
to intuitionistic propositional logic.
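The standard real-valued semantics for this logic evaluates conjunction as the minimum of the truth values, disjunction as the maximum, and implication as 1 whenever the antecedent's value does not exceed the consequent's (and as the consequent's value otherwise). The following small Python sketch of these truth functions is offered only as an illustration; the helper names are chosen here for convenience.

```python
def g_and(a, b):
    """Goedel conjunction: the minimum of the two truth values."""
    return min(a, b)

def g_or(a, b):
    """Goedel disjunction: the maximum of the two truth values."""
    return max(a, b)

def g_imp(a, b):
    """Goedel implication: 1 if a <= b, otherwise the value of the consequent."""
    return 1.0 if a <= b else b

def g_not(a):
    """Negation, defined as a -> 0."""
    return g_imp(a, 0.0)

# The Dummett axiom (A -> B) v (B -> A) evaluates to 1 for any truth values,
# which is why adding it to intuitionistic logic is sound for these semantics.
for a, b in [(0.3, 0.7), (0.9, 0.2), (0.5, 0.5)]:
    assert g_or(g_imp(a, b), g_imp(b, a)) == 1.0
```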
See also
Intermediate logic
References
Set theory
Mathematical logic
Formal methods | Gödel logic | Mathematics,Engineering | 140 |
12,967,579 | https://en.wikipedia.org/wiki/Bicycloundecane | Bicycloundecane is an organic compound with molecular formula C11H20. It is essentially the spherical form of the ring cycloundecane. In cycloundecane, the eleven carbon atoms are joined together in a chain that meets at the ends to form a ring. In bicycloundecane, the eleven carbon atoms are arranged nearly spherically as two groups of four carbon atoms with a third group of three carbon atoms acting as a bridge. Each non-bridgehead carbon atom is attached to two hydrogen atoms making bicycloundecane a saturated compound. It is a bicycloalkane. Other related bicycloalkanes are bicyclooctane and bicyclononane.
Bicycloundecane is anisotropically conductive. It is one of several bridged hydrocarbon residues that are formed during the thermohardening of polymerizable conductive adhesives based on di(metha)acroyloxymethyl-tricyclodecane, organic peroxide, and thermoplastic resin; such adhesives have applications in electronics.
References
Hydrocarbons
Bicyclic compounds | Bicycloundecane | Chemistry | 249 |
7,667,067 | https://en.wikipedia.org/wiki/Cross-domain%20solution | A cross-domain solution (CDS) is an integrated information assurance system composed of specialized software or hardware that provides a controlled interface to manually or automatically enable and/or restrict the access or transfer of information between two or more security domains based on a predetermined security policy. CDSs are designed to enforce domain separation and typically include some form of content filtering, which is used to designate information that is unauthorized for transfer between security domains or levels of classification, such as between different military divisions, intelligence agencies, or other operations which depend on the timely sharing of potentially sensitive information.
The goal of a CDS is to allow a trusted network domain to exchange information with other domains, either one-way or bi-directionally, without introducing the potential for security threats. CDS development, assessment, and deployment are based on comprehensive risk management. Every aspect of an accredited CDS is usually evaluated under what is known as a Lab-Based Security Assessment (LBSA) to reduce potential vulnerabilities and risks. The evaluation and accreditation of CDSs in the United States are primarily under the authority of the National Cross Domain Strategy and Management Office (NCDSMO) within the National Security Agency (NSA).
A CDS typically filters for viruses and malware, provides content examination utilities and, for high-to-low security transfers, audited human review. A CDS sometimes also has a security-hardened operating system, role-based administration access, redundant hardware, and so on.
The acceptance criteria for information transfer across domains or cross-domain interoperability is based on the security policy implemented within the solution. This policy may be simple (e.g., antivirus scanning and whitelist (also known as an "allowlist") check before transfer between peer networks) or complex (e.g., multiple content filters and a human reviewer must examine, redact, and approve a document before release from a high-security domain). Unidirectional networks are often used to move information from low-security domains to secret enclaves while assuring that information cannot escape. Cross-domain solutions often include a High Assurance Guard.
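As a purely illustrative sketch (not a description of any accredited product), a simple transfer policy of the "antivirus scan plus allowlist" kind mentioned above might be modelled as follows; the scan_clean helper is a hypothetical stand-in for a real content-filtering engine.

```python
# Hypothetical, simplified policy check: release a file only if its type is
# on an allowlist and it passes a (stubbed) antivirus / content-filter scan.
ALLOWED_EXTENSIONS = {".txt", ".csv", ".pdf"}

def scan_clean(path):
    """Hypothetical placeholder for an accredited antivirus/content filter."""
    return True  # a real CDS would invoke its filtering engines here

def may_transfer(path):
    dot = path.rfind(".")
    suffix = path[dot:].lower() if dot != -1 else ""
    return suffix in ALLOWED_EXTENSIONS and scan_clean(path)

print(may_transfer("report.pdf"))   # True
print(may_transfer("payload.exe"))  # False
```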
Though cross-domain solutions have, as of 2019, historically been most typical in military, intelligence, and law enforcement environments, an example from outside those settings is the separation of the flight control and infotainment systems on an airliner.
Types
There are three types of cross-domain solutions (CDS) according to Department of Defense Instruction (DoDI) 854001p: Access, Transfer, and Multi-level solutions (MLS). All must be included in the cross-domain baseline list before Department of Defense-specific site implementations.
Access Solution: "An access solution describes a user’s ability to view and manipulate information from domains of differing security levels and caveats. In theory, the ideal solution respects separation requirements between domains by preventing overlapping data between domains, which ensures data of different classifications cannot ‘leak’ (i.e. data spill) between networks at any host layer of the OSI/TCP model. In practice, however, data spills are an ever-present concern that system designers attempt to mitigate within acceptable risk levels. For this reason, data transfer is addressed as a separate CDS".
Transfer Solution: offers the ability to move information between security domains that are of a different classification level or a different caveat of the same classification level.
Multi-level Solution (MLS): "Access and transfer solutions rely on multiple security levels (MSL) approaches that maintain the separation of domains; this architecture is considered multiple single levels. A multi-level solution (MLS) differs from MSL architecture by storing all data in a single domain. The solution uses trusted labeling and integrated Mandatory Access Control (MAC) schema as a basis to mediate data flow and access according to user credentials and clearance to authenticate read and write privileges. In this manner, an MLS is considered an all-in-one CDS, encompassing both access and data transfer capabilities."
Unintended consequences
In previous decades, multilevel security (MLS) technologies were developed that enforced mandatory access control (MAC) with near certainty. Automated information systems sometimes share information contrary to the need to avoid sharing secrets with adversaries. When the balance is decided at the discretion of users, the access control is called discretionary access control (DAC), which is more tolerant of actions that manage risk, whereas MAC requires risk avoidance.
These documents provide standards guidance on risk management:
NIST SP 800-53 Rev. 3
CNSS Instruction No. 1253
References
Computer security software | Cross-domain solution | Engineering | 913 |
25,213,349 | https://en.wikipedia.org/wiki/Steel%20fibre-reinforced%20shotcrete | Steel fibre-reinforced shotcrete (SFRS) is shotcrete (spray concrete) with steel fibres added. It has higher tensile strength than unreinforced shotcrete and is quicker to apply than weldmesh reinforcement. It has often been used for tunnels.
Advantages
The primary advantages of fibre-reinforced shotcrete are:
Addition of steel fibers to the concrete improves its crack resistance (ductility). Traditional rebars are generally used to improve the tensile strength of the concrete in a particular direction, whereas steel fibers provide multidirectional reinforcement. This is one of the reasons why steel fiber reinforced concrete, in shotcrete form, successfully replaced weldmesh in lining tunnels.
Less labour is required.
Less construction time is required.
Applications and types
SFRS has various types, which are applicable to differing situations. Primary uses are:
Tunnels – uses short steel fibers
Industrial floorings – uses long steel fibers
See also
Fibre reinforced concrete
References
Building materials
Concrete
Fibre-reinforced cementitious materials | Steel fibre-reinforced shotcrete | Physics,Engineering | 218 |
932,711 | https://en.wikipedia.org/wiki/Carmichael%27s%20theorem | In number theory, Carmichael's theorem, named after the American mathematician R. D. Carmichael,
states that, for any nondegenerate Lucas sequence of the first kind Un(P, Q) with relatively prime parameters P, Q and positive discriminant, an element Un with n ≠ 1, 2, 6 has at least one prime divisor that does not divide any earlier one except the 12th Fibonacci number F(12) = U12(1, −1) = 144 and its equivalent U12(−1, −1) = −144.
In particular, for n greater than 12, the nth Fibonacci number F(n) has at least one prime divisor that does not divide any earlier Fibonacci number.
Carmichael (1913, Theorem 21) proved this theorem. Recently, Yabuta (2001) gave a simple proof. Bilu, Hanrot, Voutier and Mignotte (2001) extended it to the case of negative discriminants (where it is true for all n > 30).
Statement
Given two relatively prime integers P and Q such that the discriminant D = P² − 4Q is positive, let Un(P, Q) be the nondegenerate Lucas sequence of the first kind defined by U0 = 0, U1 = 1, and Un = P·Un−1 − Q·Un−2 for n ≥ 2.
Then, for n ≠ 1, 2, 6, Un(P, Q) has at least one prime divisor that does not divide any Um(P, Q) with m < n, except U12(±1, −1) = ±F(12) = ±144.
Such a prime p is called a characteristic factor or a primitive prime divisor of Un(P, Q).
Indeed, Carmichael showed a slightly stronger theorem: For n ≠ 1, 2, 6, Un(P, Q) has at least one primitive prime divisor not dividing D except U3(±1, −2) = 3, U5(±1, −1) = F(5) = 5, or U12(1, −1) = −U12(−1, −1) = F(12) = 144.
In Carmichael's theorem, D should be greater than 0; thus the cases U13(1, 2), U18(1, 2) and U30(1, 2), etc. are not included, since in this case D = −7 < 0.
Fibonacci and Pell cases
The only exceptions in Fibonacci case for n up to 12 are:
F(1) = 1 and F(2) = 1, which have no prime divisors
F(6) = 8, whose only prime divisor is 2 (which is F(3))
F(12) = 144, whose only prime divisors are 2 (which is F(3)) and 3 (which is F(4))
The smallest primitive prime divisors of F(n) are
1, 1, 2, 3, 5, 1, 13, 7, 17, 11, 89, 1, 233, 29, 61, 47, 1597, 19, 37, 41, 421, 199, 28657, 23, 3001, 521, 53, 281, 514229, 31, 557, 2207, 19801, 3571, 141961, 107, 73, 9349, 135721, 2161, 2789, 211, 433494437, 43, 109441, ...
Carmichael's theorem says that every Fibonacci number, apart from the exceptions listed above, has at least one primitive prime divisor.
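The list of smallest primitive prime divisors above can be reproduced with a short brute-force computation; the following Python sketch (with illustrative function names) uses trial-division factorization.

```python
def prime_factors(n):
    """Return the set of prime divisors of a positive integer (trial division)."""
    factors, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    return factors

def smallest_primitive_prime_divisors(count):
    """For n = 1..count, report the smallest prime dividing F(n) but no earlier
    Fibonacci number, or 1 when no such prime exists."""
    a, b = 0, 1          # F(0), F(1)
    seen = set()         # primes dividing some earlier Fibonacci number
    result = []
    for _ in range(count):
        a, b = b, a + b  # advance so that a is the next Fibonacci number
        primes = prime_factors(a) if a > 1 else set()
        new = sorted(primes - seen)
        result.append(new[0] if new else 1)
        seen |= primes
    return result

print(smallest_primitive_prime_divisors(12))
# [1, 1, 2, 3, 5, 1, 13, 7, 17, 11, 89, 1]
```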
If n > 1, then the nth Pell number has at least one prime divisor that does not divide any earlier Pell number. The smallest primitive prime divisors of the nth Pell number are
1, 2, 5, 3, 29, 7, 13, 17, 197, 41, 5741, 11, 33461, 239, 269, 577, 137, 199, 37, 19, 45697, 23, 229, 1153, 1549, 79, 53, 113, 44560482149, 31, 61, 665857, 52734529, 103, 1800193921, 73, 593, 9369319, 389, 241, ...
See also
Zsigmondy's theorem
References
Fibonacci numbers
Theorems in number theory | Carmichael's theorem | Mathematics | 930 |
66,510,961 | https://en.wikipedia.org/wiki/Tabix | Tabix is a bioinformatics software utility for indexing large genomic data files. Tabix is free software under the MIT license.
Benefits
Speed: Without an index, extracting specific regions from large files would require scanning through the entire file. Tabix avoids this by jumping directly to the region of interest.
Storage Efficiency: Tabix compresses the data using BGZF, which helps reduce storage requirements while still allowing for fast random access.
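For example, assuming the widely used pysam Python bindings are installed and that a BGZF-compressed, tabix-indexed file named example.vcf.gz (a placeholder name) is available alongside its .tbi index, a region query might look like the following sketch.

```python
import pysam

# Open a BGZF-compressed file that has a tabix index alongside it.
tbx = pysam.TabixFile("example.vcf.gz")

# Jump directly to records overlapping chr1:10,000-20,000 without scanning
# the whole file; each record is returned as a tab-delimited text line.
for row in tbx.fetch("chr1", 10_000, 20_000):
    print(row)

tbx.close()
```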
References
External links
File format specification
Command-line utility manual page
Free bioinformatics software | Tabix | Chemistry,Biology | 113 |
60,446,868 | https://en.wikipedia.org/wiki/NGC%204302 | NGC 4302 is an edge-on spiral galaxy located about 55 million light-years away in the constellation Coma Berenices. It was discovered by astronomer William Herschel on April 8, 1784 and is a member of the Virgo Cluster.
It is classified as a Seyfert galaxy and as a LINER galaxy. It also has a prominent, extended dust lane.
Physical characteristics
The disk of NGC 4302 contains extraplanar dust that is organized into filamentary structures and large dust complexes. The apparent bending of many of the large complexes toward the north of the galaxy appears to be due to an interaction with the intracluster medium caused by the motion of NGC 4302 as it falls into the Virgo Cluster.
The dense, dusty matter in the disk of NGC 4302 appears to be largely tracing matter ejected from the disk by energetic feedback from massive stars.
Extraplanar Diffuse Ionized Gas
First detected by Pildis et al., NGC 4302 has a faint but prominent layer of extraplanar diffuse ionized gas (DIG) that extends out to a galactocentric radius of ~ and a height of ~ above the plane of the galaxy.
The DIG appears to have been ionized through photoionization by OB stars.
Box/Peanut Bulge
The presence of a boxy/peanut bulge in NGC 4302 suggests that the galaxy contains a thickened bar that is viewed edge-on.
HI Disk
The HI disk of NGC 4302 is truncated to within the optical disk to the south of the galaxy. This truncation appears to be the result of ram pressure.
Tidal Bridge
Kantharia et al. and Zschaechner et al. both independently detected a tidal bridge between NGC 4302 and NGC 4298. The bridge is the result of a tidal interaction between the two galaxies.
HI tail
First identified by Chung et al., NGC 4302 has a ~ tail of neutral atomic hydrogen (HI) that extends to the north of the galaxy. The tail appears to be a result of ram pressure or of a tidal interaction with NGC 4298. However, NGC 4302 appears relatively undisturbed, favoring ram pressure as the cause of the tail.
The HI tail is pointed away from M87 which suggests that NGC 4302 is falling into the center of the Virgo Cluster on a highly radial orbit.
SN 1986E
NGC 4302 has hosted one supernova, a Type IIL supernova designated as SN 1986E. The supernova was discovered by G. Candeo at the Asiago Observatory on April 13, 1986, with an apparent magnitude of 14.5.
See also
List of NGC objects (4001–5000)
References
External links
4302
39974
Coma Berenices
Astronomical objects discovered in 1784
Spiral galaxies
Interacting galaxies
7418
Seyfert galaxies
LINER galaxies
Virgo Cluster
Discoveries by William Herschel | NGC 4302 | Astronomy | 593 |
17,347,053 | https://en.wikipedia.org/wiki/Society%20of%20Mexican%20American%20Engineers%20and%20Scientists | MAES: Latinos in Science and Engineering, Inc. (MAES), originally the Mexican American Engineering Society, was founded in 1974. It organizes an annual symposium and career fair.
History
MAES was founded in Los Angeles in 1974 to increase the number of Mexican Americans and other Hispanics in the technical and scientific fields.
The idea to establish a professional society for Mexican American engineers originated with Robert Von Hatten, an aerospace electronics engineer with TRW Defense Space Systems in Redondo Beach, California. Mr. Von Hatten had for several years served as volunteer for programs directed at combating the alarming number of high school dropouts. He envisioned a national organization that would serve as a source for role models, address the needs of its members, and become a resource for industry and students.
In mid–1974, Mr. Von Hatten contacted Manuel Castro to join him in the campaign to form the professional organization. During a subsequent series of meetings, a cohort of individuals banded together to lay out the foundation for the “Mexican American Engineering Society.” The founders, listed below, drafted the articles of incorporation and the first bylaws of the society:
Oscar Buttner – Rockwell International
Sam Buttner – Southern California Edison
Manuel Castro – Bechtel Power
Clifford Maldonado – Northrop Corporation
Sam Mendoza – California State University, Fullerton
Frank Serna – Northrop Corporation
Robert Von Hatten – TRW Defense Space Systems
The society filed incorporation papers as a nonprofit, tax exempt organization with the California Secretary of State in October 1974, and it received its charter on March 28, 1975. The Internal Revenue Service granted the society a federal tax–exemption letter and employer identification number on January 4, 1979. Ten years later, to reflect its broader technical membership, the organization filed to change its name to the “Society of Mexican American Engineers and Scientists, Inc.” This change was granted on July 19, 1989.
MAES is one of several membership–based organizations that represent Latinos in engineering and science. As a mature organization with over 30 years of experience addressing the concerns of Latinos, MAES is a source of expertise on barriers to and methods for improving educational access and attainment. The society recognizes the importance of encouraging more youth to pursue careers in science, technology, engineering, and mathematics as a means for economic advancement and workforce development.
Many of its programs, with the financial help of members, companies, and government agencies are directed at increasing the number of students at all grade levels who will study, prepare, enter, and excel in the technical professions.
References
External links
Society of Mexican American Engineers and Scientists, Inc. National organization's site.
California State University, Long Beach Chapter
California State University, Fullerton MAES
San Antonio MAES
San Antonio College MAES
University of Houston MAES
UTEP MAES/SHPE_Mission
Engineering organizations
Organizations based in Houston
Scientific organizations established in 1974
Hispanic and Latino American professional organizations
1974 establishments in California | Society of Mexican American Engineers and Scientists | Engineering | 598 |
26,802,549 | https://en.wikipedia.org/wiki/Lean%20enterprise | Lean enterprise is a practice focused on value creation for the end customer with minimal waste and processes. Its principles derive from lean manufacturing and Six Sigma (or Lean Six Sigma). The lean principles were popularized by Toyota in the automobile manufacturing industry, and subsequently in the electronics and internet software industries.
Principles and variants
Principles for lean enterprise derive from lean manufacturing and Six Sigma principles:
There are five principles, originating from lean manufacturing, outlined by James Womack and Daniel Jones:
Value: Understand clearly what value the customer wants for the product or service.
Value Stream: The entire flow of a product's or service's life cycle. In other words, from raw materials, production of the product or service, customer delivery, customer use, and final disposal.
Flow: Keep the value stream moving. If it's not moving, it's creating waste and less value for the customer.
Pull: Do not make anything until the customer orders it.
Perfection: Systematically and continuously remove root causes of poor quality from production processes.
There are key lean enterprise principles originating from Lean Six Sigma principles. These principles focus on eliminating 8 varieties of waste (Muda) and form the acronym DOWNTIME:
Defects
Overproduction
Waiting
Non-Utilized Talent
Transportation
Inventory
Motion
Extra-Processing
These 8 varieties of waste are derivative from the original 7 wastes as defined in the Toyota Production System (TPS). They are:
Transportation
Inventory
Motion
Waiting
Overproduction
Over-processing
Defects
The 8th waste of non-utilized talent was not recognized until post-Americanization of the Toyota Production System (TPS).
The lean startup principles, developed in 2008 from lean manufacturing, also now contribute to understanding of lean enterprise:
Eliminate wasteful practices
Increase value producing practices
Customer feedback during product development
Build what customers want
KPIs
Continuous deployment process
History
The term lean enterprise has historically been associated with lean manufacturing and Six Sigma (or Lean Six Sigma), because lean principles were popularized by Toyota in the automobile manufacturing industry and subsequently in the electronics and internet software industries.
Early 1900s: Ford, GM & Toyota Systems
Henry Ford developed a process called assembly line production, a manufacturing process in which the assembly moves from work station to work station and parts are added in sequence until the final assembly is produced.
Alfred Sloan of General Motors further developed the concept of assembly line production by building a process called mass production that allowed scale and variety. This process enabled large amounts of standardized products to run through assembly lines while still being able to produce more variety and compete against Ford's single offering.
Kiichiro Toyoda studied the Ford production system and adapted the process to suit smaller production quantities. Together with Taiichi Ohno, he built a production system for Toyota called just-in-time manufacturing. Kaizen, the process of continuous improvement, was also developed in the 1950s by Eiji Toyoda alongside the Toyota Production System (TPS).
1980s & 1990s: Motorola
New innovations in lean enterprise moved away from machine technology to electronic technology.
Another development was a set of management techniques from Motorola commonly referred to as Six Sigma. These techniques were built on mass production principles with a greater focus on minimizing variability. Applying Six Sigma principles led to reduced cycle time, reduced pollution, reduced costs, increased customer satisfaction, and increased profits.
1990s & 2000s: Internet companies
New innovations in lean enterprise moved away from electronic manufacturing to internet and software technology. Before, during, and after the dot-com bubble, internet and software enterprises originally did not place emphasis on lean enterprise principles for efficient usage and allocation of capital and labor due to accessible funds from venture capital and capital markets. The idea of "build it and they will come" became common practice as a result.
After the dot-com bubble, inspired by the Agile Manifesto, internet and software companies began operating under agile software development practices such as Extreme programming. Along with the agile software movement, companies (especially startups) applied both lean enterprise and agile software principles together in order to develop new products or even new companies more efficiently and based on validated customer demand. Very early practices of lean enterprise and agile software principles was commonly referred to as lean startup.
After 2010, more and more enterprises have adopted this new branch of lean enterprise (lean startup), since it provides principles and methodologies for non-internet enterprises to enter new markets or offer goods and services in new form factors with less time, labor, and capital. For traditional internet and software enterprises, the lean startup variant of lean enterprise enables them to remain competitive with new technologies and services that rapidly come to market, without resorting exclusively to mergers and acquisitions, and lets them retain competency in their internal innovation ecosystems.
See also
Lean manufacturing
Lean Six Sigma
Toyota Production System
Lean startup
Management fad
References
Lean manufacturing | Lean enterprise | Engineering | 983 |
15,515,301 | https://en.wikipedia.org/wiki/Foundations%20of%20statistics | The Foundations of Statistics are the mathematical and philosophical bases for statistical methods. These bases are the theoretical frameworks that ground and justify methods of statistical inference, estimation, hypothesis testing, uncertainty quantification, and the interpretation of statistical conclusions. Further, a foundation can be used to explain statistical paradoxes, provide descriptions of statistical laws, and guide the application of statistics to real-world problems.
Different statistical foundations may provide different, contrasting perspectives on the analysis and interpretation of data, and some of these contrasts have been subject to centuries of debate. Examples include Bayesian inference versus frequentist inference, the distinction between Fisher's significance testing and Neyman–Pearson hypothesis testing, and the question of whether the likelihood principle holds.
Certain frameworks may be preferred for specific applications, such as the use of Bayesian methods in fitting complex ecological models.
Bandyopadhyay & Forster identify four statistical paradigms: classical statistics (error statistics), Bayesian statistics, likelihood-based statistics, and information-based statistics using the Akaike Information Criterion. More recently, Judea Pearl developed a formal mathematical framework for causality in statistical systems that addresses fundamental limitations of both the Bayesian and Neyman-Pearson methods, as discussed in his book Causality.
Fisher's "significance testing" vs. Neyman–Pearson "hypothesis testing"
During the 20th century, the development of classical statistics led to the emergence of two competing foundations for inductive statistical testing. The merits of these models were extensively debated. Although a hybrid approach combining elements of both methods is commonly taught and utilized, the philosophical questions raised during the debate still remain unresolved.
Significance testing
Publications by Fisher, like "Statistical Methods for Research Workers" in 1925 and "The Design of Experiments" in 1935, contributed to the popularity of significance testing, which is a probabilistic approach to deductive inference. In practice, a statistic is computed based on the experimental data and the probability of obtaining a value greater than that statistic under a default or "null" model is compared to a predetermined threshold. This threshold represents the level of discord required (typically established by convention). One common application of this method is to determine whether a treatment has a noticeable effect based on a comparative experiment. In this case, the null hypothesis corresponds to the absence of a treatment effect, implying that the treated group and the control group are drawn from the same population. Statistical significance measures probability and does not address practical significance. It can be viewed as a criterion for the statistical signal-to-noise ratio. It is important to note that the test cannot prove the hypothesis (of no treatment effect), but it can provide evidence against it.
The Fisher significance test involves a single hypothesis, but the choice of the test statistic requires an understanding of relevant directions of deviation from the hypothesized model.
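As a concrete illustration of the procedure (a sketch only, with invented data and assuming SciPy is available), a comparative experiment might be analysed as follows.

```python
from scipy import stats

# Invented measurements for a treated group and a control group.
treated = [5.1, 5.6, 6.2, 5.9, 6.4, 5.8]
control = [4.8, 5.0, 5.3, 4.9, 5.2, 5.1]

# Under the null model the two samples come from the same population;
# the p-value is the probability of a test statistic at least this extreme
# under that model.
t_stat, p_value = stats.ttest_ind(treated, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# The result is compared with a conventional threshold such as 0.05.
if p_value < 0.05:
    print("Evidence against the null hypothesis at the 5% level")
```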
Hypothesis testing
Neyman and Pearson collaborated on the problem of selecting the most appropriate hypothesis based solely on experimental evidence, which differed from significance testing. Their most renowned joint paper, published in 1933, introduced the Neyman-Pearson lemma, which states that a ratio of probabilities serves as an effective criterion for hypothesis selection (with the choice of the threshold being arbitrary). The paper demonstrated the optimality of the Student's t-test, one of the significance tests. Neyman believed that hypothesis testing represented a generalization and improvement of significance testing. The rationale for their methods can be found in their collaborative papers.
Hypothesis testing involves considering multiple hypotheses and selecting one among them, akin to making a multiple-choice decision. The absence of evidence is not an immediate factor to be taken into account. The method is grounded in the assumption of repeated sampling from the same population (the classical frequentist assumption), although Fisher criticized this assumption.
Grounds of disagreement
The duration of the dispute allowed for a comprehensive discussion of various fundamental issues in the field of statistics.
An example exchange from 1955–1956
Fisher's attack
Repeated sampling of the same population: such sampling is the basis of frequentist probability, whereas Fisher preferred fiducial inference.
Type II errors: these result from an alternative hypothesis.
Inductive behavior: as opposed to inductive reasoning.
Neyman's rebuttal
Fisher's theory of fiducial inference is flawed, and paradoxes are common.
A purely probabilistic theory of tests requires an alternative hypothesis. Fisher's attacks on Type II errors have faded with time; in the intervening years, statistics has separated the exploratory from the confirmatory, and the concept of Type II errors is now used in power calculations to determine the sample size of confirmatory hypothesis tests.
Discussion
Fisher's attack based on frequentist probability failed but was not without result. He identified a specific case (2×2 table) where the two schools of testing reached different results. This case is one of several that are still troubling. Commentators believe that the "right" answer is context-dependent. Fiducial probability has not fared well, being virtually without advocates, while frequentist probability remains a mainstream interpretation.
Fisher's attack on inductive behavior has been largely successful because he selected the field of battle. While operational decisions are routinely made on a variety of criteria (such as cost), scientific conclusions from experimentation are typically made based on probability alone.
During this exchange, Fisher also discussed the requirements for inductive inference, specifically criticizing cost functions that penalize erroneous judgments. Neyman countered by mentioning the use of such functions by Gauss and Laplace. These arguments occurred 15 years after textbooks began teaching a hybrid theory of statistical testing.
Fisher and Neyman held different perspectives on the foundations of statistics (though they both opposed the Bayesian viewpoint):
The interpretation of probability
The disagreement between Fisher's inductive reasoning and Neyman's inductive behavior reflected the Bayesian-Frequentist divide. Fisher was willing to revise his opinion (reaching a provisional conclusion) based on calculated probability, while Neyman was more inclined to adjust his observable behavior (making a decision) based on computed costs.
The appropriate formulation of scientific questions, with a particular focus on modelling
Whether it is justifiable to reject a hypothesis based on a low probability without knowing the probability of an alternative
Whether a hypothesis could ever be accepted based solely on data
In mathematics, deduction proves, while counter-examples disprove.
In the Popperian philosophy of science, progress is made when theories are disproven.
Subjectivity: Although Fisher and Neyman endeavored to minimize subjectivity, they both acknowledged the significance of "good judgment." Each accused the other of subjectivity.
Fisher subjectively selected the null hypothesis.
Neyman-Pearson subjectively determined the criterion for selection (which was not limited to probability).
Both subjectively established numeric thresholds.
Fisher and Neyman diverged in their attitudes and, perhaps, their language. Fisher was a scientist and an intuitive mathematician, and inductive reasoning came naturally to him. Neyman, on the other hand, was a rigorous mathematician who relied on deductive reasoning rather than probability calculations based on experiments. Hence, there was an inherent clash between applied and theoretical approaches (between science and mathematics).
Related history
In 1938, Neyman relocated to the West Coast of the United States of America, effectively ending his collaboration with Pearson and their work on hypothesis testing. Subsequent developments in the field were carried out by other researchers.
By 1940, textbooks began presenting a hybrid approach that combined elements of significance testing and hypothesis testing. However, none of the main contributors were directly involved in the further development of the hybrid approach currently taught in introductory statistics.
Statistics subsequently branched out into various directions, including decision theory, Bayesian statistics, exploratory data analysis, robust statistics, and non-parametric statistics. Neyman-Pearson hypothesis testing made significant contributions to decision theory, which is widely employed, particularly in statistical quality control. Hypothesis testing also extended its applicability to incorporate prior probabilities, giving it a Bayesian character. While Neyman-Pearson hypothesis testing has evolved into an abstract mathematical subject taught at the post-graduate level, much of what is taught and used in undergraduate education under the umbrella of hypothesis testing can be attributed to Fisher.
Contemporary opinion
There have been no major conflicts between the two classical schools of testing in recent decades, although occasional criticism and disputes persist. However, it is highly unlikely that one theory of statistical testing will completely supplant the other in the foreseeable future.
The hybrid approach, which combines elements from both competing schools of testing, can be interpreted in different ways. Some view it as an amalgamation of two mathematically complementary ideas, while others see it as a flawed union of philosophically incompatible concepts. Fisher's approach had certain philosophical advantages, while Neyman and Pearson emphasized rigorous mathematics. Hypothesis testing remains a subject of controversy for some users, but the most widely accepted alternative method, confidence intervals, is based on the same mathematical principles.
Due to the historical development of testing, there is no single authoritative source that fully encompasses the hybrid theory as it is commonly practiced in statistics. Additionally, the terminology used in this context may lack consistency. Empirical evidence indicates that individuals, including students and instructors in introductory statistics courses, often have a limited understanding of the meaning of hypothesis testing.
Summary
The interpretation of probability remains unresolved, although fiducial probability is not widely embraced.
Neither of the test methods has been completely abandoned, as they are extensively utilized for different objectives.
Textbooks have integrated both test methods into the framework of hypothesis testing.
Some mathematicians argue that, with a few exceptions, significance tests can be considered a specific instance of hypothesis tests.
On the other hand, some perceive these problems and methods as separate or incompatible.
The ongoing dispute has harmed statistical education.
Bayesian inference versus frequentist inference
Two distinct interpretations of probability have existed for a long time, one based on objective evidence and the other on subjective degrees of belief. The debate between Gauss and Laplace could have taken place more than 200 years ago, giving rise to two competing schools of statistics. Classical inferential statistics emerged primarily during the second quarter of the 20th century, largely in response to the controversial principle of indifference used in Bayesian probability at that time. The resurgence of Bayesian inference was a reaction to the limitations of frequentist probability, leading to further developments and reactions.
While the philosophical interpretations have a long history, the specific statistical terminology is relatively recent. The terms "Bayesian" and "frequentist" became standardized in the second half of the 20th century. However, the terminology can be confusing, as the "classical" interpretation of probability aligns with Bayesian principles, while "classical" statistics follow the frequentist approach. Moreover, even within the term "frequentist," there are variations in interpretation, differing between philosophy and physics.
The intricate details of philosophical probability interpretations are explored elsewhere. In the field of statistics, these alternative interpretations allow for the analysis of different datasets using distinct methods based on various models, aiming to achieve slightly different objectives. When comparing the competing schools of thought in statistics, pragmatic criteria beyond philosophical considerations are taken into account.
Major contributors
Fisher and Neyman were significant figures in the development of frequentist (classical) methods. While Fisher had a unique interpretation of probability that differed from Bayesian principles, Neyman adhered strictly to the frequentist approach. In the realm of Bayesian statistical philosophy, mathematics, and methods, de Finetti, Jeffreys, and Savage emerged as notable contributors during the 20th century. Savage played a crucial role in popularizing de Finetti's ideas in English-speaking regions and establishing rigorous Bayesian mathematics. In 1965, Dennis Lindley's two-volume work titled "Introduction to Probability and Statistics from a Bayesian Viewpoint" played a vital role in introducing Bayesian methods to a wide audience. For three generations, statistics have progressed significantly, and the views of early contributors are not necessarily considered authoritative in present times.
Contrasting approaches
Frequentist inference
The earlier description briefly highlights frequentist inference, which encompasses Fisher's "significance testing" and Neyman-Pearson's "hypothesis testing." Frequentist inference incorporates various perspectives and allows for scientific conclusions, operational decisions, and parameter estimation with or without confidence intervals.
Bayesian inference
A classical frequency distribution provides information about the probability of the observed data. By applying Bayes' theorem, a more abstract concept is introduced, which involves estimating the probability of a hypothesis (associated with a theory) given the data. This concept, formerly referred to as "inverse probability," is realized through Bayesian inference. Bayesian inference involves updating the probability estimate for a hypothesis as new evidence becomes available. It explicitly considers both the evidence and prior beliefs, enabling the incorporation of multiple sets of evidence.
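A minimal sketch of such updating for a simple proportion, using a conjugate Beta prior (the prior parameters and the observed counts are invented for illustration):

```python
# Beta-Binomial updating: the posterior for a success probability is again
# a Beta distribution whose parameters add the observed counts to the prior.
prior_alpha, prior_beta = 2.0, 2.0      # prior belief (invented)
successes, failures = 7, 3              # newly observed evidence (invented)

post_alpha = prior_alpha + successes
post_beta = prior_beta + failures

posterior_mean = post_alpha / (post_alpha + post_beta)
print(f"Posterior: Beta({post_alpha:.0f}, {post_beta:.0f}), mean = {posterior_mean:.3f}")

# Further evidence can be folded in the same way, so multiple sets of
# evidence are combined by repeated application of Bayes' theorem.
```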
Comparisons of characteristics
Frequentists and Bayesians employ distinct probability models. Frequentists typically view parameters as fixed but unknown, whereas Bayesians assign probability distributions to these parameters. As a result, Bayesians discuss probabilities that frequentists do not acknowledge. Bayesians consider the probability of a theory, whereas true frequentists can only assess the evidence's consistency with the theory. For instance, a frequentist does not claim a 95% probability that the true value of a parameter falls within a confidence interval; rather, they state that 95% of confidence intervals encompass the true value.
Mathematical results
Both the frequentist and Bayesian schools are subject to mathematical critique, and neither readily embraces such criticism. For instance, Stein's paradox highlights the intricacy of determining a "flat" or "uninformative" prior probability distribution in high-dimensional spaces. While Bayesians perceive this as tangential to their fundamental philosophy, they find frequentism plagued with inconsistencies, paradoxes, and unfavorable mathematical behavior. Frequentists, for their part, can account for most of these issues. Certain "problematic" scenarios, like estimating the weight variability of a herd of elephants based on a single measurement (Basu's elephants), exemplify extreme cases that defy statistical estimation. The likelihood principle has been a contentious area of debate.
Statistical results
Both the frequentist and Bayesian schools have demonstrated notable accomplishments in addressing practical challenges. Classical statistics, with its reliance on mechanical calculators and specialized printed tables, boasts a longer history of obtaining results. Bayesian methods, on the other hand, have shown remarkable efficacy in analyzing sequentially sampled information, such as radar and sonar data. Several Bayesian techniques, as well as certain recent frequentist methods like the bootstrap, necessitate the computational capabilities that have become widely accessible in the past few decades. There is an ongoing discourse regarding the integration of Bayesian and frequentist approaches, although concerns have been raised regarding the interpretation of results and the potential diminishment of methodological diversity.
Philosophical results
Bayesians share a common stance against the limitations of frequentism, but they are divided into various philosophical camps (empirical, hierarchical, objective, personal, and subjective), each emphasizing different aspects. A philosopher of statistics from the frequentist perspective has observed a shift from the statistical domain to philosophical interpretations of probability over the past two generations. Some perceive that the successes achieved with Bayesian applications do not sufficiently justify the associated philosophical framework. Bayesian methods often develop practical models that deviate from traditional inference and have minimal reliance on philosophy. Neither the frequentist nor the Bayesian philosophical interpretations of probability can be considered entirely robust. The frequentist view is criticized for being overly rigid and restrictive, while the Bayesian view can encompass both objective and subjective elements, among others.
Illustrative quotations
"Carefully used, the frequentist approach yields broadly applicable if sometimes clumsy answers"
"To insist on unbiased [frequent] techniques may lead to negative (but unbiased) estimates of variance; the use of p-values in multiple tests may lead to blatant contradictions; conventional 0.95 confidence regions may consist of the whole real line. No wonder that mathematicians find it often difficult to believe that conventional statistical methods are a branch of mathematics."
"Bayesianism is a neat and fully principled philosophy, while frequentist is a grab-bag of opportunistic, individually optimal, methods."
"In multiparameter problems flat priors can yield very bad answers"
"Bayes' rule says there is a simple, elegant way to combine current information with prior experience to state how much is known. It implies that sufficiently good data will bring previously disparate observers to an agreement. It makes full use of available information, and it produces decisions having the least possible error rate."
"Bayesian statistics is about making probability statements, frequentist statistics is about evaluating probability statements."
"Statisticians are often put in a setting reminiscent of Arrow’s paradox, where we are asked to provide estimates that are informative and unbiased and confidence statements that are correct conditional on the data and also on the underlying true parameter." (These are conflicting requirements.)
"Formal inferential aspects are often a relatively small part of statistical analysis"
"The two philosophies, Bayesian and frequent, are more orthogonal than antithetical."
"A hypothesis that may be true is rejected because it has failed to predict observable results that have not occurred. This seems a remarkable procedure."
Summary
Bayesian theory has a mathematical advantage.
Frequentist probability has existence and consistency problems.
But finding good priors to apply Bayesian theory remains (very?) difficult.
Both theories have impressive records of successful application.
Neither the philosophical interpretation of probability nor its support is robust.
There is increasing scepticism about the connection between application and philosophy.
Some statisticians are recommending active collaboration (beyond a cease-fire).
The likelihood principle
In common usage, likelihood is often considered synonymous with probability. However, according to statistics, this is not the case. In statistics, probability refers to variable data given a fixed hypothesis, whereas likelihood refers to variable hypotheses given a fixed set of data. For instance, when making repeated measurements with a ruler under fixed conditions, each set of observations corresponds to a probability distribution, and the observations can be seen as a sample from that distribution, following the frequentist interpretation of probability. On the other hand, a set of observations can also arise from sampling various distributions based on different observational conditions. The probabilistic relationship between a fixed sample and a variable distribution stemming from a variable hypothesis is referred to as likelihood, representing the Bayesian view of probability. For instance, a set of length measurements may represent readings taken by observers with specific characteristics and conditions.
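To make the distinction concrete, the following sketch (with invented numbers) treats 7 successes in 10 trials as fixed data and evaluates the binomial likelihood as a function of the hypothesised success probability.

```python
from math import comb

def binomial_likelihood(p, k=7, n=10):
    """Likelihood of success probability p given k successes in n fixed trials."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

# Probability varies the data for a fixed hypothesis; likelihood varies the
# hypothesis for fixed data, and need not sum to one over hypotheses.
for p in (0.3, 0.5, 0.7, 0.9):
    print(f"L(p={p}) = {binomial_likelihood(p):.4f}")
```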
Likelihood is a concept that was introduced and developed by Fisher over a span of more than 40 years, although earlier references to the concept exist and Fisher's support for it was not wholehearted. The concept was subsequently accepted and substantially revised by Jeffreys. In 1962, Birnbaum "proved" the likelihood principle based on premises that were widely accepted among statisticians, although his proof has been subject to dispute by statisticians and philosophers. Notably, by 1970, Birnbaum had rejected one of these premises (the conditionality principle) and had also abandoned the likelihood principle due to their incompatibility with the frequentist "confidence concept of statistical evidence." The likelihood principle asserts that all the information in a sample is contained within the likelihood function, which is considered a valid probability distribution by Bayesians but not by frequentists.
Certain significance tests employed by frequentists are not consistent with the likelihood principle. Bayesians, on the other hand, embrace the principle as it aligns with their philosophical standpoint (perhaps in response to frequentist discomfort). The likelihood approach is compatible with Bayesian statistical inference, where the posterior Bayes distribution for a parameter is derived by multiplying the prior distribution by the likelihood function using Bayes' Theorem. Frequentists interpret the likelihood principle unfavourably, as it suggests a lack of concern for the reliability of evidence. The likelihood principle, according to Bayesian statistics, implies that information about the experimental design used to collect evidence does not factor into the statistical analysis of the data. Some Bayesians, including Savage, acknowledge this implication as a vulnerability.
The likelihood principle's staunchest proponents argue that it provides a more solid foundation for statistics compared to the alternatives presented by Bayesian and frequentist approaches. These supporters include some statisticians and philosophers of science. While Bayesians recognize the importance of likelihood for calculations, they contend that the posterior probability distribution serves as the appropriate basis for inference.
Modelling
Inferential statistics relies on statistical models. Classical hypothesis testing, for instance, has often relied on the assumption of data normality. To reduce reliance on this assumption, robust and nonparametric statistics have been developed. Bayesian statistics, on the other hand, interpret new observations based on prior knowledge, assuming continuity between the past and present. The experimental design assumes some knowledge of the factors to be controlled, varied, randomized, and observed. Statisticians are aware of the challenges in establishing causation, often stating that "correlation does not imply causation," which is more of a limitation in modelling than a mathematical constraint.
As statistics and data sets have become more complex, questions have arisen regarding the validity of models and the inferences drawn from them. There is a wide range of conflicting opinions on modelling.
Models can be based on scientific theory or ad hoc data analysis, each employing different methods. Advocates exist for each approach. Model complexity is a trade-off and less subjective approaches such as the Akaike information criterion and Bayesian information criterion aim to strike a balance.
Concerns have been raised even about simple regression models used in the social sciences, as a multitude of assumptions underlying model validity are often neither mentioned nor verified. In some cases, a favorable comparison between observations and the model is considered sufficient.
Traditional observation-based models often fall short in addressing many significant problems, requiring the utilization of a broader range of models, including algorithmic ones. "If the model is a poor emulation of nature, the conclusions may be wrong."
Modelling is frequently carried out inadequately, with improper methods employed, and the reporting of models is often subpar.
Given the lack of a strong consensus on the philosophical review of statistical modeling, many statisticians adhere to the cautionary words of George Box: "All models are wrong, but some are useful."
Other reading
For a concise introduction to the fundamentals of statistics, refer to Stuart, A.; Ord, J.K. (1994). "Ch. 8 – Probability and statistical inference" in Kendall's Advanced Theory of Statistics, Volume I: Distribution Theory (6th ed.), published by Edward Arnold.
In his book Statistics as Principled Argument, Robert P. Abelson presents the perspective that statistics serve as a standardized method for resolving disagreements among scientists, who could otherwise engage in endless debates about the merits of their respective positions. From this standpoint, statistics can be seen as a form of rhetoric. However, the effectiveness of statistical methods depends on the consensus among all involved parties regarding the chosen approach.
See also
Philosophy of statistics
History of statistics
Philosophy of probability
Philosophy of mathematics
Philosophy of science
Evidence
Likelihoodist statistics
Probability interpretations
Founders of statistics
Footnotes
Citations
References
Further reading
External links
Philosophy of statistics | Foundations of statistics | Mathematics | 4,956 |
27,058,073 | https://en.wikipedia.org/wiki/Lax%E2%80%93Wendroff%20theorem | In computational mathematics, the Lax–Wendroff theorem, named after Peter Lax and Burton Wendroff, states that if a conservative numerical scheme for a hyperbolic system of conservation laws converges, then it converges towards a weak solution.
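For orientation, a conservative scheme in one space dimension advances cell averages using a numerical flux function. The notation below is generic rather than taken from the reference listed here: u_i^n denotes the approximation in cell i at time level n, and F is a numerical flux consistent with the physical flux f of the conservation law u_t + f(u)_x = 0, meaning F(u, u) = f(u):

u_i^{n+1} = u_i^n - (Δt/Δx) [ F(u_i^n, u_{i+1}^n) - F(u_{i-1}^n, u_i^n) ]

The theorem states that if the approximations produced by such a consistent, conservative scheme converge as Δt, Δx → 0, then the limit is a weak solution of the conservation law; it does not by itself guarantee that the scheme converges, nor that the limit satisfies an entropy condition.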
See also
Lax–Wendroff method
Godunov's scheme
References
Randall J. LeVeque, Numerical methods for conservation laws, Birkhäuser, 1992
Numerical differential equations
Computational fluid dynamics
Theorems in analysis | Lax–Wendroff theorem | Physics,Chemistry,Mathematics | 95 |
48,483,838 | https://en.wikipedia.org/wiki/Encapsulin%20nanocompartment | Encapsulin nanocompartments, or encapsulin protein cages, are spherical bacterial organelle-like compartments roughly 25-30 nm in diameter that are involved in various aspects of metabolism, in particular protecting bacteria from oxidative stress. Encapsulin nanocompartments are structurally similar to the HK97 bacteriophage and their function depends on the proteins loaded into the nanocompartment. The sphere is formed from 60 (for a 25 nm sphere) or 180 (for a 30 nm sphere) copies of a single protomer, termed encapsulin. Their structure has been studied in great detail using X-ray crystallography and cryo-electron microscopy.
A number of different types of proteins have been identified as being loaded into encapsulin nanocompartments. Peroxidases or proteins similar to ferritins are the two most common types of cargo proteins. While most encapsulin nanocompartments contain only one type of cargo protein, in some species two or three types of cargo proteins are loaded.
Encapsulins purified from Rhodococcus jostii can be assembled and disassembled with changes in pH. In the assembled state, the compartment enhances the activity of its cargo, a peroxidase enzyme.
Use as a platform for bioengineering
Recently, encapsulin nanocompartments have begun to receive considerable interest from bioengineers because of their potential to allow the targeted delivery of drugs, proteins, and mRNAs to specific cells of interest.
References
Metabolism
Cell biology
Biological engineering
Bacterial proteins | Encapsulin nanocompartment | Chemistry,Engineering,Biology | 338 |
60,293,208 | https://en.wikipedia.org/wiki/2019%20Kim%20Kim%20River%20toxic%20pollution | The 2019 Kim Kim River toxic pollution is a water pollution incident that occurred on 7 March 2019 caused by illegal chemical waste dumping at the Kim Kim River in Pasir Gudang of Johor in Malaysia. The illegal dumping released toxic fumes, affecting 6,000 people and hospitalising 2,775. Most of the victims were school students—110 schools located near the river were subsequently closed.
Background of pollution
The incident started on 7 March 2019 after several students and canteen workers from two schools near the river began to fall ill and complain of breathing difficulties. Both schools were ordered to shut down and all the victims were sent to Sultan Ismail Hospital while state health authorities investigated the cause. Twenty-one people were warded at the hospital, with some admitted to the emergency unit and intensive care unit (ICU). Some of the students brought to the hospital had already fainted and showed symptoms such as vomiting. Those who were not seriously affected were given outpatient treatment and allowed to return home. During recovery, some of the seriously affected victims recounted suddenly falling ill after inhaling an unpleasant odour in their school compound. The number of victims hospitalised over the toxic fumes rose to 76 the following day, and by 9 March 2019 five police reports had been made on the issue and police had begun investigating the case.
Further spread of toxic fumes and water pollution
On 11 March, a second wave of air poisoning took effect, with a further 106–207 victims hospitalised before the total escalated to more than 1,000 victims, eight of whom were admitted to the ICU. The spread of the toxic fumes was aided by hot weather combined with strong winds, which made more people sick. The Malaysian Fire and Rescue Department director-general Mohammad Hamdan Wahid explained that the further spread of toxic fumes could have been prevented if the illegally dumped chemicals found earlier had been removed immediately. However, the authorities did not dispose of the chemicals after concluding they were no longer reactive, allegedly due to the costs involved. By 19 March, a further 76 police reports had been made. On 20 June, about three months after the pollution was first discovered, a number of students from schools in the Pasir Gudang area began complaining of nausea, dizziness and vomiting, which eventually led to the temporary closure of schools in the area. The authorities later confirmed this as a third wave of air poisoning resulting from the river pollution, which had not been fully cleared.
In August 2019, residents of Acheh's Well Village near the Daing and Kopok rivers (tributaries of the Kim Kim River) complained that the water in both rivers had turned black and oily with an unbearable foul stench, which was believed to have spread from the chemical pollution of the Kim Kim River. A resident interviewed on the issue said the rivers were once home to various crabs, freshwater fish and shrimps, and children used to swim in them, but everything had been damaged since the pollution worsened in April.
Investigation, clearance works and arrest of perpetrators
Through investigations, it was believed that the chemical wastes had been dumped from a lorry tanker into the Kim Kim River in the early morning hours of the same day on which the victims fell ill. Agencies dispatched for the clean-up of the polluted river collected 2.43 tonnes of chemical waste on the day the incident was reported. The cleaning works, however, worsened the chemical reaction, as the contractor engaged was not experienced in dealing with chemical wastes. A chemical, biological, radiological and nuclear (CBRN) team from the 12th Squadron of the Royal Army Engineers Regiment of the Malaysian Armed Forces was later dispatched to assist in the chemical cleaning efforts together with a Hazmat team.
The Johor Department of Environment (DOE) arrested the owner of a chemical factory in Kulai on 10 March, followed the next day, after a series of investigations, by the arrest of a shredded-waste factory owner and one of his workers in Taman Pasir Puteh. With the arrests, the DOE completed its investigation papers, which were sent to the public prosecutor for further action; the investigators also identified the illegally dumped chemical as marine oil that emitted flammable methane and benzene fumes. The oil is categorised as a scheduled waste and requires proper disposal due to its hazardous nature. On 17 March, nine more people were arrested by the police in connection with the case; two were arrested in Johor Bahru and seven outside the Johor Bahru area. Two key suspects, believed to be instrumental in arranging the transportation of the toxic substances, were arrested on 19 March, bringing the total arrests to 11, with one suspect later released on bail after he was shown to be unrelated to the case. The cleaning operation of the 1.5-kilometre stretch of the affected river was completed on the same day, by which point a total of 900 tonnes of soil and 1,500 tonnes of polluted water had been cleaned up.
Several other toxic gases identified as being emitted (following the interaction of the chemicals concerned with water and air) include acrolein, acrylonitrile, ethylbenzene, hydrogen chloride, D-limonene, toluene and xylene, which if inhaled can cause headache, nausea, fainting and breathing difficulty. Two main suspects (a Singaporean and a Malaysian) were charged at the Sessions Court in Johor on 25 March for illegal disposal of chemicals into the river, and their company, P Tech Resources, was slapped with 15 charges, to which they pleaded not guilty. Both had been charged earlier in the same court with conspiring with a lorry driver to dispose of the scheduled wastes into the river.
Government and health authorities response
Johor's Sultan Ibrahim Ismail urged immediate action against the perpetrators of the pollution, which had endangered public lives, while expressing his appreciation for the medical teams that had been working tirelessly to treat the victims in hospital. The Sultan pledged a total of RM1 million (around US$250,000) to help rescue agencies and authorities gather the means and equipment needed to resolve the matter, and expressed his view that the incident showed the need for a government hospital to be built in Pasir Gudang. Prime Minister Mahathir Mohamad and Deputy Prime Minister Wan Azizah Wan Ismail visited victims of the pollution at the hospital in Johor Bahru on 14 March; they said that the situation was "under control", that residents did not need to be evacuated from the area, and that there was a need to review the country's Environmental Quality Act 1974 in light of the serious pollution. The federal government approved an allocation of RM8 million for river purification works and ordered various agencies, including the police, military and Hazmat teams, to support operations in the affected area. It explained that no request for a state of emergency had been received from the state government of Johor.
Johor's Menteri Besar Osman Sapian was of the opinion that the situation was under control without the need to declare a state of emergency in the area, and the state government approved an emergency allocation of RM6.4 million for cleaning up the affected river. Malaysia's Environment Minister Yeo Bee Yin stressed that an investigation would be carried out to bring those responsible to justice and explained that the RM6.4 million was mainly being used to clear the 1.5-kilometre stretch of the affected river, with the cost expected to balloon to over RM10 million. The state government also dismissed claims that its agencies had been slow to react to the incident. The State Health Department also warned the public not to circulate fake news, including a circulating claim that deaths had resulted from the pollution. On 1 June, Malaysia's Health Ministry formed a medical team consisting of officers from the Institute for Medical Research and the Johor Health Department to examine a total of 6,000 victims affected by the pollution. Malaysia's Water, Land and Natural Resources Minister Xavier Jayakumar Arulanandam urged every state government to take serious measures against river pollution, as climate change could cause the country to experience long periods of drought in the future. The ministry also drafted a Water Resources Bill to clamp down on water pollution.
Neighbouring authorities in Singapore continued to monitor the situation following reports of more illegal waste dumping sites found in Pasir Gudang. Various Singapore agencies conducted regular checks, with a minister explaining that the matter was being taken very seriously as what happened in Malaysia could affect Singapore significantly.
Criticism of government response and lawsuits
Johor's Crown Prince Tunku Ismail Idris took to Twitter to express his opinion that the government should instead have declared a state of emergency on the day the incident first occurred and relocated residents to a temporary place until there was a guarantee that the area was safe. Malaysian Chinese Association (MCA) Deputy President Mah Hang Soon said that incompetent preventive measures had escalated the hazard levels in the affected area. In July 2019, a boy was reported to have developed myokymia, a Parkinson's-like condition, after being exposed to the pollution, although this was denied by Malaysia's Deputy Health Minister Lee Boon Chye, who said that the boy had been born premature and had a history of fits since he was four. A group of 160 victims of the pollution then filed a suit against the Johor Menteri Besar and the state government, seeking monetary compensation for the boy and for other damages caused by the illegal dumping of toxic chemicals.
Other responses
The chemical company Lotte Chemical Titan Holdings Bhd denied rumours that it was involved in the pollution of the Kim Kim River. In a statement to Bursa Malaysia, Lotte Chemical Titan stated: "The company hereby denies rumours and wishes to announce that it has no involvement with the incident". Malaysian singer Indah Ruhaila expressed concern over the incident as her parents live in the affected area; she persuaded her parents to leave their home, worried that the incident would recur.
See also
Water pollution
Chernobyl disaster
Bhopal disaster
Further reading
References
External links
Illegal chemical disposal case at Kim Kim River, Pasir Gudang – Press release from the Department of Environment, Malaysia
Pasir Gudang Municipal Council Press Release – Recent information on the Kim Kim River pollution from the Government of Johor
2019 in Malaysia
Environmental issues in Malaysia
Water in Malaysia
Water pollution
Health disasters in Malaysia
Man-made disasters in Malaysia
March 2019 crimes in Asia
2019 disasters in Malaysia
Waste disposal incidents | 2019 Kim Kim River toxic pollution | Chemistry,Environmental_science | 2,113 |
38,407,940 | https://en.wikipedia.org/wiki/Jakarta%20Planetarium%20and%20Observatory | Jakarta Planetarium and Observatory (Indonesian: Planetarium dan Observatorium Jakarta) is a public planetarium and an observatory, part of the Taman Ismail Marzuki Art and Science Complex in Jakarta, Indonesia. The planetarium is the oldest of the three planetaria in Indonesia. The second planetarium is located in Surabaya, East Java. The third planetarium is located in Kutai, East Kalimantan.
History
Construction of the planetarium was an initiative of President Sukarno, who in the early 1960s envisaged a monumental large-scale planetarium project. By the latter half of the 1960s, however, the design had been made more modest. The construction of the Jakarta Planetarium and Observatory began in 1964 as part of the construction of the Taman Ismail Marzuki art complex.
The construction was funded by the Indonesian government and the Indonesian Batik Cooperatives Association (Gabungan Koperasi Batik Indonesia or GKBI).
The 22-meter dome of the planetarium was completed in 1968. On November 10, 1968, the building was officially inaugurated by the Governor of Jakarta Ali Sadikin together with the Taman Ismail Marzuki art complex. The planetarium was opened to the public on March 1, 1969; that day was made the official birthday of the planetarium. The planetarium made use of a Carl Zeiss Universal planetarium projector.
In 1975, a coudé telescope that had been a property of the institution since 1964 was installed in a two-storey building not far from the planetarium. In 1982, the coudé telescope was moved closer to the present observatory because the land where the telescope had stood belonged to another owner.
In 1984, Jakarta Planetarium became officially the Jakarta Planetarium and Observatory. In 1991, the building was extended and facilities such as classrooms, were added. In 1994, a 31 cm star telescope was acquired to replace the older telescope.
Major renovation and technological upgrade of the planetarium was done in 1996. The previous Universal Projector was replaced with the computerized Universarium VIII Projector. The material for the domed screen was replaced and the diameter of the dome was reduced from 23 meters to 22 meters. The floor was elevated and terraced. The previous central-facing seating configuration was reorganized into a south-facing configuration, and the number of seats was reduced from 500 to 320.
In 2010, a Mobile Observatory unit was acquired: a minibus that transports several telescopes e.g. the LUNT 80 mm solar telescope, Vixen VC200 telescope, and a 120 mm refractor.
Facility
The Jakarta Planetarium and Observatory features an exhibition hall for astronomy.
The planetarium features nine movies, each with a duration of 60 minutes.
See also
List of astronomical observatories
List of planetariums
List of museums and cultural institutions in Indonesia
References
Website Planetarium & Observatorium Jakarta
Kompas, Friday, 05/01/2007 – "Mengajak Keluarga Menjadi Pengamat Bintang di Jakarta", by Neli Triana
Cited works
External links
Official site (2013)
Planetaria
Museums in Jakarta
1968 establishments in Indonesia | Jakarta Planetarium and Observatory | Astronomy | 647 |
58,710,893 | https://en.wikipedia.org/wiki/Notion%20%28productivity%20software%29 | Notion is a productivity and note-taking web application developed by Notion Labs, Inc. It is an online-only organizational tool with options for both free and paid subscriptions. It is headquartered in San Francisco, California, United States, with offices in New York, Dublin, Hyderabad, Seoul, Sydney, and Tokyo.
Software
Notion is a collaboration platform with Markdown support that incorporates kanban boards, tasks, wikis and databases. It is a workspace for note-taking, knowledge and data management, as well as project and task management. It offers file management in a single workspace, allowing users to comment on ongoing projects, participate in discussions, and receive feedback. It can be accessed via cross-platform apps and most web browsers.
It includes a "clipper" for capturing content from webpages. Users can schedule tasks, manage files, save documents, set reminders, keep agendas, and organize their work. LaTeX support allows writing and pasting equations in block or inline form.
History
Notion Labs, Inc. was created as a startup in San Francisco, California, founded in 2013 by Ivan Zhao, Chris Prucha, Jessica Lam, Simon Last, and Toby Schachman.
In August 2016, Notion 1.0 was released on Product Hunt and was nominated for Golden Kitty 2016 in desktop product.
In March 2018, Notion 2.0 was released. At that point, the company had fewer than 10 employees.
In June 2018, an Android app was released.
In September 2019, the company announced it had reached 1 million users.
In January 2020, Notion had $50 million in investments from Index Ventures and others. In April 2020, it was valued at two billion dollars.
On September 7, 2021, Notion acquired Automate.io, a Hyderabad-based startup. In October of that year, a new round of funding led by Coatue Management and Sequoia Capital helped Notion raise $275 million. The investment valued Notion at $10 billion, and the company had 20 million users.
In 2022, Notion launched the Notion Certified Program, an accreditation for users to expand their use of the platform. It also joined the Security First Initiative, a group of tech companies that pledged to share security information with their customers.
In June 2022, Notion acquired the calendar software Cron.
In July 2022, Notion acquired FlowDash.
In November 2022, Notion announced its official Japanese release.
In February 2023, Notion released the "Notion AI" service that can be used on the workspace.
In April 2023, Notion released multi-factor authentication for its users.
In November 2023, Notion released 'Q&A', an AI feature allowing users to ask questions directly to AI and receive answers based on information stored in the workspace.
On January 17, 2024, Notion released their second product 'Notion Calendar', a fully-featured calendar application with integrations to Notion pages and databases.
On February 9, 2024, Notion acquired the email service Skiff.
On October 24, 2024, at the Make with Notion conference, Notion announced Notion Mail, a new email application, as well as Forms, Layouts, and new Automation workflows including Formulas in Automations.
Features
It uses AI and a library of free and fee-based templates. With the Notion AI functionality, users can write and improve content, summarize existing notes, generate daily standups, adjust tone, and translate or proofread text. Security features include Security Assertion Markup Language (SAML) single sign-on and private team spaces for the Business and Enterprise tiers.
Notion enables its users to integrate with more than 70 other SaaS tools, such as Slack, GitHub, GitLab, Zoom, Jira, Cisco Webex, Zapier, and Typeform.
Blocks
Notion is made up of blocks (these blocks are similar to elements in HTML). This allows users to customize a page by adding and moving blocks in various ways. In June 2021, Notion released the synced block, a block type that can be linked and displayed across multiple pages, keeping the content contained within the synced block in sync between copies.
Databases
One of Notion's features is its databases. Databases are used for storing information and can hold any number of rows and columns. By default, each row will have two pre-populated properties: 'Name' and 'Tags'. Users can add more properties, such as date, checkbox, multi-select, URL, and more. When creating databases, users can choose to either create it 'inline', within an already existing page, or as its own page.
Formulas
One property type that can be added to Databases is a Formula property. Formula properties can leverage Notion Formulas code, a JavaScript-like language. Formula properties receive a row in a database as context. The language can be utilized to write code that outputs custom data based on the data and relations in the database row.
The following example shows a Formula-based property that sorts a list of related "Routines" by a numeric "Checked" property in a related database and then displays the output with a custom format.

prop("Routines")
  /**
   * First take the related "Routine" pages and sort them by the "Checked"
   * number property. Multiplying by -1 sorts them in reverse order.
   */
  .sort(current.prop("Checked") * -1)
  /**
   * Then map over each related page and return a formatted display of the
   * data, building a new list of text lines.
   */
  .map(
    current.format() + ": " +
    current.prop("Checked").format().style("blue") +
    " checked"
  )
  /* Finally, join the list with a newline character. */
  .join("\n")
Templates
Notion users can make and use templates. Notion hosts its own template gallery, where users can browse through templates made by other Notion creators. However, not all of these templates are free to use. Some creators profit from selling Notion templates. Jason Ruiyi Chen, from Singapore, made $239,000 by selling his Notion templates to his Twitter audience. Thomas Frank, a YouTuber with 2.8 million subscribers as of February 2023, made $1 million in 2022.
Notion AI
Notion AI uses artificial intelligence to predict results, generate text and solve arithmetic. It can also work with other datasets in response to user input. The AI can automate some tasks and assist the user in producing a paragraph or outline from a single line of text. Notion AI is an extra cost on top of the subscription. The service is powered by Anthropic's Claude as well as OpenAI's GPT-4.
Notion Calendar
Notion Calendar integrates time management with the workspaces and databases within Notion. It allows users to see and manage professional and personal events in one application, syncing with Google Calendar for consolidated scheduling. With Notion Calendar, users can link database entries to calendar events, enabling efficient planning and task tracking. To get started, users download the app, connect their calendar accounts, and can then integrate their Notion workspaces and databases for a unified view of tasks and schedules.
Notion Partnerships
Notion also offers different types of partnerships, with various focuses:
Technology Partner Program: combines partner products with the Notion platform to create new tools for Notion users.
Channel Partner Program: supports the needs of Notion customers while growing the partner's business.
Startup Partner Program: offers startup networks free Notion access and resources.
Affiliate Partner Program: rewards participants for referring Notion to friends and followers.
Some of Notion's current partners include: Zoom, Slack, Github, Google Drive, AWS, Asana, Figma, and Typeform.
Pricing
Notion has a four-tiered subscription model: Free, Plus, Business, and Enterprise. Users can also earn credit via referrals. As of May 2020, the company changed the Personal plan to allow unlimited blocks. Notion also offers a free student plan called "Notion for education".
See also
Collaborative real-time editor
Document collaboration
Obsidian (software)
Comparison of note-taking software
References
External links
Note-taking software
Collaborative real-time editors
Collaborative software
Proprietary wiki software
Android (operating system) software
IOS software
Software companies based in California
Business software
Applications of artificial intelligence | Notion (productivity software) | Technology | 1,730 |
2,496,547 | https://en.wikipedia.org/wiki/Wiley%20Post%E2%80%93Will%20Rogers%20Memorial%20Airport | Wiley Post–Will Rogers Memorial Airport, often referred to as Post/Rogers Memorial, is a public airport located in Utqiaġvik (formerly Barrow), the largest city and borough seat of the North Slope Borough of the U.S. state of Alaska. The airport is owned by the Alaska Department of Transportation & Public Facilities. Situated on the Chukchi Sea at a latitude of 71.29°N, it is the northernmost airport in United States territory.
The airport is named after American humorist Will Rogers and aviator Wiley Post, both of whom died about away at Point Barrow in a 1935 airplane crash.
Facilities and aircraft
Wiley Post–Will Rogers Memorial Airport has one asphalt paved runway (8/26) measuring .
For the 12-month period ending 11 January 2011, the airport had 12,010 aircraft operations, an average of 33 per day: 50% air taxi, 37% general aviation, 12% scheduled commercial and fewer than 1% military. At that time there were eight aircraft based at this airport: one jet, three helicopters, one multi-engine, and three single-engine.
Airlines and destinations
Passenger
Prior to its bankruptcy and cessation of all operations, Ravn Alaska served the airport from multiple locations.
Statistics
Top destinations
See also
List of airports in Alaska
References
External links
FAA Alaska airport diagram (GIF)
National Weather Service Barrow, Alaska
WAAS reference stations
Airports in North Slope Borough, Alaska
Utqiagvik, Alaska | Wiley Post–Will Rogers Memorial Airport | Technology | 296 |
1,004,764 | https://en.wikipedia.org/wiki/Gap%20penalty | A Gap penalty is a method of scoring alignments of two or more sequences. When aligning sequences, introducing gaps in the sequences can allow an alignment algorithm to match more terms than a gap-less alignment can. However, minimizing gaps in an alignment is important to create a useful alignment. Too many gaps can cause an alignment to become meaningless. Gap penalties are used to adjust alignment scores based on the number and length of gaps. The five main types of gap penalties are constant, linear, affine, convex, and profile-based.
Applications
Genetic sequence alignment - In bioinformatics, gaps are used to account for genetic mutations occurring from insertions or deletions in the sequence, sometimes referred to as indels. Insertions or deletions can occur due to single mutations, unbalanced crossover in meiosis, slipped strand mispairing, and chromosomal translocation. The notion of a gap in an alignment is important in many biological applications, since the insertions or deletions comprise an entire sub-sequence and often occur from a single mutational event. Furthermore, single mutational events can create gaps of different sizes. Therefore, when scoring, gaps need to be scored as a whole when aligning two sequences of DNA. Considering multiple gap positions in a sequence as one larger single gap avoids assigning a high cost to such mutations. For instance, two protein sequences may be relatively similar but differ at certain intervals, as one protein may have a different subunit compared to the other. Representing these differing sub-sequences as gaps allows such cases to be treated as "good matches" even though there are long consecutive runs of indel operations in the sequence. Therefore, using a good gap penalty model will avoid low scores in alignments and improve the chances of finding a true alignment. In genetic sequence alignments, gaps are represented as dashes (-) in a protein/DNA sequence alignment.
Unix diff function - computes the minimal difference between two files similarly to plagiarism detection.
Spell checking - Gap penalties can help find correctly spelled words with the shortest edit distance to a misspelled word. Gaps can indicate a missing letter in the incorrectly spelled word.
Plagiarism detection - Gap penalties allow algorithms to detect where sections of a document are plagiarized by placing gaps in original sections and matching what is identical. The gap penalty for a certain document quantifies how much of a given document is probably original or plagiarized.
Bioinformatics applications
Global alignment
A global alignment performs an end-to-end alignment of the query sequence with the reference sequence. Ideally, this alignment technique is most suitable for closely related sequences of similar lengths. The Needleman-Wunsch algorithm is a dynamic programming technique used to conduct global alignment. Essentially, the algorithm divides the problem into a set of sub-problems, then uses the results of the sub-problems to reconstruct a solution to the original query.
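As an illustrative sketch rather than a reference implementation, the following Python function fills the Needleman-Wunsch dynamic-programming matrix for two sequences using a simple match/mismatch score and a linear gap penalty; the scoring values are arbitrary placeholders chosen only for the example.

def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score with a linear gap penalty (illustrative only)."""
    n, m = len(a), len(b)
    # score[i][j] holds the best score for aligning a[:i] with b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):       # leading gaps in b
        score[i][0] = i * gap
    for j in range(1, m + 1):       # leading gaps in a
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            up = score[i - 1][j] + gap      # gap in b
            left = score[i][j - 1] + gap    # gap in a
            score[i][j] = max(diag, up, left)
    return score[n][m]

# The two short DNA sequences used in the examples later in this article:
# 7 matches minus a 3-base gap under the linear penalty gives a score of 4.
print(needleman_wunsch("ATTGACCTGA", "ATCCTGA"))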
Semi-global alignment
The use of semi-global alignment exists to find a particular match within a large sequence. An example includes seeking promoters within a DNA sequence. Unlike global alignment, it does not penalize end gaps in one or both sequences. If end gaps are penalized in sequence 1 but not in sequence 2, this produces an alignment that contains sequence 2 within sequence 1.
Local alignment
A local sequence alignment matches a contiguous sub-section of one sequence with a contiguous sub-section of another. The Smith-Waterman algorithm is motivated by giving scores for matches and mismatches. Matches increase the overall score of an alignment whereas mismatches decrease the score. A good alignment then has a positive score and a poor alignment has a negative score. The local algorithm finds an alignment with the highest score by considering only alignments that score positives and picking the best one from those. The algorithm is a dynamic programming algorithm. When comparing proteins, one uses a similarity matrix which assigns a score to each possible residue pair. The score should be positive for similar residues and negative for dissimilar residue pairs. Gaps are usually penalized using a linear gap function that assigns an initial penalty for a gap opening, and an additional penalty for gap extensions, increasing the gap length.
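A minimal sketch of the corresponding local recurrence is shown below, again with placeholder scoring values rather than a biologically calibrated scheme; the structural differences from the global case are that cell values are clamped at zero and the best score is taken over the whole matrix rather than read from the final cell.

def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Best local alignment score with a linear gap penalty (illustrative only)."""
    n, m = len(a), len(b)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    best = 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(0,                       # start a new local alignment
                              diag,                    # align a[i-1] with b[j-1]
                              score[i - 1][j] + gap,   # gap in b
                              score[i][j - 1] + gap)   # gap in a
            best = max(best, score[i][j])
    return best

print(smith_waterman("TTGACCT", "GGGACCTAA"))   # dominated by the shared run "GACCT"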
Scoring matrix
Substitution matrices such as BLOSUM are used for sequence alignment of proteins. A Substitution matrix assigns a score for aligning any possible pair of residues. In general, different substitution matrices are tailored to detecting similarities among sequences that are diverged by differing degrees. A single matrix may be reasonably efficient over a relatively broad range of evolutionary change.
The BLOSUM-62 matrix is one of the best substitution matrices for detecting weak protein similarities. BLOSUM matrices with high numbers are designed for comparing closely related sequences, while those with low numbers are designed for comparing distant related sequences. For example, BLOSUM-80 is used for alignments that are more similar in sequence, and BLOSUM-45 is used for alignments that have diverged from each other. For particularly long and weak alignments, the BLOSUM-45 matrix may provide the best results. Short alignments are more easily detected using a matrix with a higher "relative entropy" than that of BLOSUM-62. The BLOSUM series does not include any matrices with relative entropies suitable for the shortest queries.
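As an aside, matrices of this family ship with common bioinformatics toolkits. Assuming Biopython is installed (and noting that the exact API can differ between Biopython versions), a BLOSUM matrix can be loaded and queried roughly as follows:

from Bio.Align import substitution_matrices

# Load the bundled BLOSUM62 matrix and look up two residue-pair scores.
blosum62 = substitution_matrices.load("BLOSUM62")
print(blosum62["W", "W"], blosum62["W", "A"])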
Indels
During DNA replication, the cellular replication machinery is prone to making two types of errors while duplicating the DNA. These two replication errors are insertions and deletions of single DNA bases from the DNA strand (indels). Indels can have severe biological consequences by causing mutations in the DNA strand that could result in the inactivation or over-activation of the target protein. For example, if a one- or two-nucleotide indel occurs in a coding sequence, the result is a shift in the reading frame, or frameshift mutation, that may render the protein inactive. The biological consequences of indels are often deleterious and are frequently associated with pathologies such as cancer. However, not all indels are frameshift mutations. If indels occur in multiples of three nucleotides, the reading frame is preserved and the result is an extension of the protein sequence that may also have implications for protein function.
Types
Constant
This is the simplest type of gap penalty: a fixed negative score is given to every gap, regardless of its length. This encourages the algorithm to make fewer, larger, gaps leaving larger contiguous sections.
ATTGACCTGA
|| |||||
AT---CCTGA
Aligning two short DNA sequences, with '-' depicting a gap of one base pair. If each match was worth 1 point and the whole gap -1, the total score: 7 − 1 = 6.
Linear
Compared to the constant gap penalty, the linear gap penalty takes into account the length (L) of each insertion/deletion in the gap. Therefore, if the penalty for each inserted/deleted element is B and the length of the gap is L, the total gap penalty is the product BL. This method favors shorter gaps, with the total score decreasing with each additional gap position.
ATTGACCTGA
|| |||||
AT---CCTGA
Unlike the constant gap penalty, the size of the gap is considered. With each match scoring 1 and each gap position scoring −1, the score here is 7 − 3 = 4.
Affine
The most widely used gap penalty function is the affine gap penalty. The affine gap penalty combines the components of both the constant and linear gap penalties, taking the form A + B·(L − 1). This introduces new terms: A is known as the gap opening penalty, B the gap extension penalty and L the length of the gap. Gap opening refers to the cost required to open a gap of any length, and gap extension the cost to extend the length of an existing gap by 1. Often it is unclear what the values A and B should be, as they differ according to purpose. In general, if the interest is in finding closely related matches (e.g. removal of vector sequence during genome sequencing), a higher gap penalty should be used to reduce gap openings. On the other hand, the gap penalty should be lowered when the interest is in finding a more distant match. The relationship between A and B also has an effect on gap size. If the size of the gap is important, a small A and large B (more costly to extend a gap) is used, and vice versa. Only the ratio A/B is important, as multiplying both by the same positive constant k simply scales every gap penalty to k(A + B·(L − 1)), which does not change the relative penalty between different alignments.
Convex
Using the affine gap penalty requires the assigning of fixed penalty values for both opening and extending a gap. This can be too rigid for use in a biological context.
The logarithmic gap penalty takes the form A + C·ln(L) and was proposed because studies had shown that the distribution of indel sizes obeys a power law. Another proposed issue with the use of affine gaps is their favoring of alignments with shorter gaps. The logarithmic gap penalty was invented to modify the affine gap penalty so that long gaps are penalized less heavily. However, in contrast to this, logarithmic models have been found to produce poorer alignments than affine models.
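To make the functional forms above concrete, the short sketch below evaluates each of them for a range of gap lengths. The numeric parameter values are arbitrary choices for illustration only, and the affine form follows the opening-plus-extension convention described above.

import math

def constant_penalty(L, B=4):
    return B if L > 0 else 0                 # fixed cost, regardless of length

def linear_penalty(L, B=1):
    return B * L                             # cost grows with every gap position

def affine_penalty(L, A=3, B=1):
    return A + B * (L - 1) if L > 0 else 0   # open once, then extend

def convex_penalty(L, A=3, C=2):
    # Logarithmic (convex) form: each extra position costs less than the last.
    return A + C * math.log(L) if L > 0 else 0

for L in (1, 2, 5, 10):
    print(L, constant_penalty(L), linear_penalty(L),
          affine_penalty(L), round(convex_penalty(L), 2))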
Profile-based
Profile–profile alignment algorithms are powerful tools for detecting protein homology relationships with improved alignment accuracy. Profile–profile alignments are based on the statistical indel frequency profiles from multiple sequence alignments generated by PSI-BLAST searches. Rather than using substitution matrices to measure the similarity of amino acid pairs, profile–profile alignment methods require a profile-based scoring function to measure the similarity of profile vector pairs. Profile–profile alignments employ gap penalty functions. The gap information is usually used in the form of indel frequency profiles, which is more specific to the sequences to be aligned. ClustalW and MAFFT adopted this kind of gap penalty determination for their multiple sequence alignments. Alignment accuracies can be improved using this model, especially for proteins with low sequence identity. Some profile–profile alignment algorithms also include secondary structure information as one term in their scoring functions, which improves alignment accuracy.
Comparing time complexities
The use of alignment in computational biology often involves sequences of varying lengths. It is important to pick a model that runs efficiently at the expected input size. The time an algorithm takes to run as a function of input size is known as its time complexity.
Challenges
There are a few challenges when it comes to working with gaps. When working with popular algorithms, there seems to be little theoretical basis for the form of the gap penalty functions. Consequently, for any alignment situation gap placement must be empirically determined. Also, pairwise alignment gap penalties, such as the affine gap penalty, are often implemented independently of the amino acid types in the inserted or deleted fragment or at the broken ends, despite evidence that specific residue types are preferred in gap regions. Finally, alignment of sequences implies alignment of the corresponding structures, but the relationships between structural features of gaps in proteins and their corresponding sequences are only imperfectly known. Because of this, incorporating structural information into gap penalties is difficult. Some algorithms use predicted or actual structural information to bias the placement of gaps. However, only a minority of sequences have known structures, and most alignment problems involve sequences of unknown secondary and tertiary structure.
References
Further reading
Computational phylogenetics | Gap penalty | Biology | 2,328 |
56,007,376 | https://en.wikipedia.org/wiki/Aspergillus%20fresenii | Aspergillus fresenii is a species of fungus in the genus Aspergillus. Aspergillus fresenii produces ochratoxin A, ochratoxin B, ochratoxin C, aspochracins, mellamides, orthosporins, radarins, secopenitrems, sulphinines, xanthomegnins.
References
Further reading
fresenii
Fungi described in 1971
Fungus species | Aspergillus fresenii | Biology | 100 |
331,579 | https://en.wikipedia.org/wiki/Digital%20microfluidics | Digital microfluidics (DMF) is a platform for lab-on-a-chip systems that is based upon the manipulation of microdroplets. Droplets are dispensed, moved, stored, mixed, reacted, or analyzed on a platform with a set of insulated electrodes. Digital microfluidics can be used together with analytical analysis procedures such as mass spectrometry, colorimetry, electrochemical, and electrochemiluminescense.
Overview
In analogy to digital microelectronics, digital microfluidic operations can be combined and reused within hierarchical design structures so that complex procedures (e.g. chemical synthesis or biological assays) can be built up step-by-step. And in contrast to continuous-flow microfluidics, digital microfluidics works much the same way as traditional bench-top protocols, only with much smaller volumes and much higher automation. Thus a wide range of established chemical procedures and protocols can be seamlessly transferred to a nanoliter droplet format. Electrowetting, dielectrophoresis, and immiscible-fluid flows are the three most commonly used principles, which have been used to generate and manipulate microdroplets in a digital microfluidic device.
A digital microfluidic (DMF) device set-up depends on the substrates used, the electrodes, the configuration of those electrodes, the use of a dielectric material, the thickness of that dielectric material, the hydrophobic layers, and the applied voltage.
A common substrate used in this type of system is glass. Depending on whether the system is open or closed, there will be either one or two layers of glass. The bottom layer of the device contains a patterned array of individually controllable electrodes. In a closed system, there is usually a continuous ground electrode in the top layer, made usually of indium tin oxide (ITO). The dielectric layer surrounds the electrodes in the bottom layer of the device and is important for building up charges and electrical field gradients on the device. A hydrophobic layer is applied to the top layer of the system to decrease the surface energy of the surface the droplet will actually be in contact with. The applied voltage activates the electrodes and allows changes in the wettability of the droplet on the device's surface. In order to move a droplet, a control voltage is applied to an electrode adjacent to the droplet, and at the same time, the electrode just under the droplet is deactivated. By varying the electric potential along a linear array of electrodes, electrowetting can be used to move droplets along this line of electrodes.
Modifications to this foundation can also be fabricated into the basic design structure. One example of this is the addition of electrochemiluminescence detectors within the indium tin oxide layer (the ground electrode in a closed system) which aid in the detection of luminophores in droplets. In general, different materials may also be used to replace basic components of a DMF system such as the use of PDMS instead of glass for the substrate. Liquid materials can be added, such as oil or another substance, to a closed system to prevent evaporation of materials and decrease surface contamination. Also, DMF systems can be compatible with ionic liquid droplets with the use of an oil in a closed device or with the use of a catena (a suspended wire) over an open DMF device.
Digital microfluidics can be light-activated. Optoelectrowetting can be used to transport sessile droplets around a surface containing patterned photoconductors. The photoelectrowetting effect can also be used to achieve droplet transport on a silicon wafer without the necessity of patterned electrodes.
Working principle
Droplets are formed using the surface tension properties of a liquid. For example, water placed on a hydrophobic surface such as wax paper will form spherical droplets to minimize its contact with the surface. Differences in surface hydrophobicity affect a liquid’s ability to spread and ‘wet’ a surface by changing the contact angle. As the hydrophobicity of a surface increases, the contact angle increases, and the ability of the droplet to wet the surface decreases. The change in contact angle, and therefore wetting, is regulated by the Young-Lippmann equation.
cos θ(V) = cos θ₀ + (ε_r ε₀ / (2 γ t)) V²

where θ(V) is the contact angle with an applied voltage V; θ₀ is the contact angle with no voltage; ε_r is the relative permittivity of the dielectric; ε₀ is the permittivity of free space; γ is the liquid/filler media surface tension; t is the dielectric thickness.
In some cases, the hydrophobicity of a substrate can be controlled by using electrical fields. This refers to the phenomenon of electrowetting on dielectric (EWOD).[3][4] For example, when no electric field is applied to an electrode, the surface remains hydrophobic and a liquid droplet forms a more spherical droplet with a greater contact angle. When an electric field is applied, a polarized, hydrophilic surface is created. The water droplet then becomes flattened and the contact angle decreases. By controlling the localization of this polarization, an interfacial tension gradient can be created that allows controlled displacement of the droplet across the surface of the DMF device.
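As a rough numerical illustration of the Young-Lippmann equation above, the sketch below computes the actuated contact angle for a thin polymer dielectric. All parameter values are assumed, order-of-magnitude figures chosen for the example rather than measurements from any particular device.

import math

eps0 = 8.854e-12              # permittivity of free space, F/m
eps_r = 3.0                   # assumed relative permittivity of the dielectric
gamma = 0.072                 # assumed droplet/filler surface tension, N/m
t = 1e-6                      # assumed dielectric thickness, 1 micrometre
theta0 = math.radians(110.0)  # assumed zero-voltage contact angle on the coating

def contact_angle(volts):
    """Young-Lippmann prediction of the contact angle at a given voltage (illustrative)."""
    cos_theta = math.cos(theta0) + (eps_r * eps0 * volts ** 2) / (2 * gamma * t)
    cos_theta = min(cos_theta, 1.0)   # crude guard; real devices saturate well before this
    return math.degrees(math.acos(cos_theta))

for v in (0, 25, 50):
    print(v, round(contact_angle(v), 1))   # the angle drops as the voltage rises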
Droplet formation
There are two ways to make new droplets with a digital microfluidic device. Either an existing droplet can be split in two, or a new droplet can be made from a reservoir of material. Both processes are only known to work in closed devices, though this often is not a problem as the top plates of DMF devices are typically removable, so an open device can be made temporarily closed should droplet formation be necessary.
From an existing droplet
A droplet can be split by charging two electrodes on opposite sides of a droplet sitting on an uncharged electrode. In the same way that a droplet on an uncharged electrode will move towards an adjacent, charged electrode, this droplet will move towards both active electrodes. Liquid moves to either side, which causes the middle of the droplet to neck. For a droplet of the same size as the electrodes, splitting occurs approximately when the neck is at its thinnest; the shape of the neck is described by two radii of curvature, the radius of curvature of the menisci at the neck, which is negative for a concave curve, and the radius of curvature of the menisci at the elongated ends of the droplet. This process is simple and consistently results in two droplets of equal volume.
The conventional method of splitting an existing droplet by simply turning the splitting electrodes on and off produces new droplets of relatively equal volume. However, the new droplets formed by the conventional method show considerable difference in volume. This difference is caused by local perturbations due to the rapid mass transport. Even though the difference is negligible in some applications, it can still pose a problem in applications that are highly sensitive to variations in volume, such as immunoassays and DNA amplification. To overcome the limitation of the conventional method, an existing droplet can be split by gradually changing the potential of the electrodes at the splitting region instead of simply switching them on and off. Using this method, a noticeable improvement in droplet volume variation, from around 10% variation in volume to less than 1% variation in volume, has been reported.
From a reservoir
Creating a new droplet from a reservoir of liquid can be done in a similar fashion to splitting a droplet. In this case, the reservoir remains stationary while a sequence of electrodes are used to draw liquid out of the reservoir. This drawn liquid and the reservoir form a neck of liquid, akin to the neck of a splitting droplet but longer, and the collapsing of this neck forms a dispensed droplet from the drawn liquid. In contrast to splitting, though, dispensing droplets in this manner is inconsistent in scale and results. There is no reliable distance liquid will need to be pulled from the reservoir for the neck to collapse, if it even collapses at all. Because this distance varies, the volumes of dispensed droplets will also vary within the same device.
Due to these inconsistencies, alternative techniques for dispensing droplets have been used and proposed, including drawing liquid out of reservoirs in geometries that force a thinner neck, using a continuous and replenishable electrowetting channel, and moving reservoirs into corners so as to cut the reservoir down the middle. Multiple iterations of the latter can produce droplets of more manageable sizes.
Droplet manipulation
Droplet merging
As an existing droplet can be split to form discrete droplets using electrodes (see From an existing droplet), droplets can be merged into one droplet by electrodes as well. Utilizing the same concept applied for creating new droplets through splitting an existing droplet with electrodes, an aqueous droplet resting on an uncharged electrode can move towards a charged electrode where droplets will join and merge into one droplet. However, the merged droplet might not always form a circular shape even after the merging process is over due to surface tension. This problem can be solved by implementing a superhydrophobic surface between the droplets and the electrodes. Oil droplets can be merged in the same way as well, but oil droplets will move towards uncharged electrodes unlike aqueous droplets.
Droplet transportation
Discrete droplets can be transported in a highly controlled way using an array of electrodes. In the same way droplets move from an uncharged electrode to a charged electrode, or vice versa, droplets can be continuously transported along the electrodes by sequentially energizing the electrodes. Since droplet transportation involves an array of electrodes, multiple electrodes can be programmed to selectively apply a voltage to each electrode for a better control over transporting multiple droplets.
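The sequencing logic itself can be expressed in a few lines of code. The sketch below is purely illustrative: set_electrode is a hypothetical placeholder for whatever driver call a real controller would expose, and the dwell time is an arbitrary assumed value.

import time

def set_electrode(index, on):
    # Hypothetical stand-in for a device-driver call; here it only logs the action.
    print(f"electrode {index} -> {'ON' if on else 'OFF'}")

def move_droplet(path, dwell_s=0.1):
    """Walk a droplet along a list of adjacent electrode indices."""
    for current, nxt in zip(path, path[1:]):
        set_electrode(nxt, True)       # attract the droplet onto the next pad
        set_electrode(current, False)  # release the pad it is currently sitting on
        time.sleep(dwell_s)            # give the droplet time to settle before the next step

move_droplet([0, 1, 2, 3])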
Displacement by electrostatic actuation
Three-dimensional droplet actuation has been made possible by implementing a closed system; this system contains a μL-sized droplet in an immiscible fluid medium. The droplet and medium are sandwiched between two parallel electrode plates, creating an electric field between the plates. The purpose of this method is to transfer the droplet from a lower planar surface to an upper parallel planar surface and back down via electrostatic forces. The physics behind such particle actuation and perpendicular movement can be understood from the early works of N. N. Lebedev and I. P. Skal'skaya. In their research, they modelled the Maxwell electrical charge acquired by a perfectly round conducting particle in the presence of a uniform electric field produced by a perfectly conducting, infinitely extended surface. Their model helps to predict the Z-direction motion of the microdroplets within the device, as it gives the magnitude and direction of the forces acting upon a microdroplet. This can be used to help accurately predict and correct for unwanted and uncontrollable particle movement. The model explains why failing to employ a dielectric coating on one of the two surfaces causes a reversal of charge within the droplet upon contact with each electrode, which in turn causes the droplets to bounce uncontrollably between the electrodes.
Digital microfluidics (DMF) has already been readily adopted in many biological fields. By enabling three-dimensional movement within DMF, the technology can be used even more extensively in biological applications, as it can more accurately mimic 3-D microenvironments. A large benefit of this method is that it allows the droplet to access two different environments, which can be exploited by splitting the microfluidic tasks between the two surfaces. For example, while the lower plane can be used to move droplets, the upper plate can carry out the necessary chemical and/or biological processes. This advantage can be translated into practical experimental protocols in the biological community, such as coupling with DNA amplification. It also allows the chip to be smaller and gives researchers more freedom in designing platforms for microdroplet analysis.
All-terrain droplet actuation (ATDA)
All-terrain microfluidics is a method used to transport liquid droplets over non-traditional surface types. Unlike traditional microfluidic platforms, which are generally restricted to planar and horizontal surfaces, ATDA enables droplet manipulation over curved, non-horizontal, and inverted surfaces. This is made possible by incorporating flexible thin sheets of copper and polyimide into the surface via a rapid prototyping method. This device works very well with many liquids, including aqueous buffers, solutions of proteins and DNA, and undiluted bovine serum. ATDA is compatible with silicone oil or pluronic additives, such as F-68, which reduce non-specific adsorption and biofouling when dealing with biological fluids such as proteins, biological serums, and DNA. A drawback of a setup like this is accelerated droplet evaporation. ATDA is a form of open digital microfluidics, and as such the device needs to be encapsulated in a humidified environment in order to minimize droplet evaporation.
Implementation
In one of various embodiments of EWOD-based microfluidic biochips, investigated first by Cytonix in 1987 and subsequently commercialized by Advanced Liquid Logic, there are two parallel glass plates. The bottom plate contains a patterned array of individually controllable electrodes and the top plate is coated with a continuous grounding electrode. A dielectric insulator coated with a hydrophobic layer is added to the plates to decrease the wettability of the surface and to add capacitance between the droplet and the control electrode. The droplet containing biochemical samples and the filler medium, such as silicone oil, a fluorinated oil, or air, are sandwiched between the plates, and the droplets travel inside the filler medium. In order to move a droplet, a control voltage is applied to an electrode adjacent to the droplet, and at the same time, the electrode just under the droplet is deactivated. By varying the electric potential along a linear array of electrodes, electrowetting can be used to move droplets along this line of electrodes.
Applications
Laboratory automation
In research fields such as synthetic biology, where highly iterative experimentation is common, considerable efforts have been made to automate workflows. Digital microfluidics is often touted as a laboratory automation solution, with a number of advantages over alternative solutions such as pipetting robots and droplet microfluidics. These stated advantages often include a reduction in the required volume of experimental reagents, a reduction in the likelihood of contamination and cross-contamination, potential improvements in reproducibility, increased throughput, individual droplet addressability, and the ability to integrate with sensor and detector modules to perform end-to-end or even closed loop workflow automation.
Reduced experimental footprint
One of the core advantages of digital microfluidics, and of microfluidics in general, is the use and actuation of picoliter to microliter scale volumes. Workflows adapted from the bench to a DMF system are miniaturized, meaning working volumes are reduced to fractions of what is normally required for conventional methods. For example, Thaitrong et al. developed a DMF system with a capillary electrophoresis (CE) module with the purpose of automating the process of next generation sequencing (NGS) library characterization. Compared to an Agilent BioAnalyzer (an instrument commonly used to measure sequencing library size distribution), the DMF-CE system consumed ten-fold less sample volume. Reducing volumes for a workflow can be especially beneficial if the reagents are expensive or when manipulating rare samples such as circulating tumor cells and prenatal samples. Miniaturization also means a reduction in waste product volumes.
Reduced probability of contamination
DMF-based workflows, particularly those using a closed configuration with a top-plate ground electrode, have been shown to be less susceptible to outside contamination compared to some conventional laboratory workflows. This can be attributed to minimal user interaction during automated steps, and the fact that the smaller volumes are less exposed to environmental contaminants than larger volumes which would need to be exposed to open air during mixing. Ruan et al. observed minimal contamination from exogenous nonhuman DNA and no cross-contamination between samples while using their DMF-based digital whole genome sequencing system.
Improved reproducibility
Overcoming issues of reproducibility has become a topic of growing concern across scientific disciplines. Reproducibility can be especially salient when multiple iterations of the same experimental protocol need to be repeated. Using liquid handling robots that can minimize volume loss between experimental steps are often used to reduce error rates and improve reproducibility. An automated DMF system for CRISPR-Cas9 genome editing was described by Sinha et al, and was used to culture and genetically modify H1299 lung cancer cells. The authors noted that no variation in knockout efficiencies across loci was observed when cells were cultured on the DMF device, whereas cells cultured in well-plates showed variability in upstream loci knockout efficiencies. This reduction in variability was attributed to culturing on a DMF device being more homogenous and reproducible compared with well plate methods.
Increased throughput
While DMF systems cannot match the same throughput achieved by some liquid handling pipetting robots, or by some droplet-based microfluidic systems, there are still throughput advantages when compared to conventional methods carried out manually.
Individual droplet addressability
DMF allows for droplet level addressability, meaning individual droplets can be treated as spatially distinct microreactors. This level of droplet control is important for workflows where reactions are sensitive to the order of reagent mixing and incubation times, but where the optimal values of these parameters may still need to be determined. These types of workflows are common in cell-free biology, and Liu et al. were able to demonstrate a proof-of-concept DMF-based strategy for carrying out remote-controlled cell-free protein expression on an OpenDrop chip.
Detector module integration for end-to-end and closed-loop automation
An often cited advantage DMF platforms have is their potential to integrate with on-chip sensors and off-chip detector modules. In theory, real-time and end-point data can be used in conjunction with machine learning methods to automate the process of parameter optimization.
Separation and extraction
Digital microfluidics can be used for separation and extraction of target analytes. These methods include the use of magnetic particles, liquid-liquid extraction, optical tweezers, and hydrodynamic effects.
Magnetic particles
For magnetic particle separations, a droplet of solution containing the analyte of interest is placed on a digital microfluidics electrode array and moved by changing the charges on the electrodes. The droplet is moved to an electrode that has a magnet on one side of the array and carries magnetic particles functionalized to bind the analyte. Once the droplet is over that electrode, the magnetic field is removed and the particles are suspended in the droplet. The droplet is swirled on the electrode array to ensure mixing. The magnet is then reintroduced to immobilize the particles while the droplet is moved away. This process is repeated with wash and elution buffers to extract the analyte.
Magnetic particles coated with anti-human serum albumin antibodies have been used to isolate human serum albumin, as proof-of-concept work for immunoprecipitation using digital microfluidics. DNA extraction from a whole blood sample has also been performed with digital microfluidics. The procedure follows the same general methodology as the magnetic particle separations, but includes pre-treatment on the digital microfluidic platform to lyse the cells prior to DNA extraction.
Liquid-liquid extraction
Liquid-liquid extractions can be carried out on a digital microfluidic device by taking advantage of immiscible liquids. Two droplets, one containing the analyte in the aqueous phase and the other an immiscible ionic liquid, are present on the electrode array. The two droplets are mixed, the ionic liquid extracts the analyte, and the droplets are then easily separable.
Optical tweezers
Optical tweezers have also been used to separate cells in droplets. Two droplets are mixed on an electrode array, one containing the cells, and the other with nutrients or drugs. The droplets are mixed and then optical tweezers are used to move the cells to one side of the larger droplet before it is split. For a more detailed explanation on the underlying principles, see Optical tweezers.
Hydrodynamic separation
Particles have also been separated without magnets, using hydrodynamic forces to draw particles out of the bulk of a droplet. This is performed on electrode arrays with a central electrode and 'slices' of electrodes surrounding it. Droplets are added onto the array and swirled in a circular pattern, and the hydrodynamic forces from the swirling cause the particles to aggregate onto the central electrode.
Chemical synthesis
Digital microfluidics (DMF) allows for precise manipulation and coordination in small-scale chemical synthesis reactions due to its ability to control microscale volumes of liquid reagents, allowing for less overall reagent use and waste. This technology can be used in the synthesis of compounds such as peptidomimetics and PET tracers. PET tracers require only nanogram quantities, and as such, DMF allows for automated and rapid synthesis of tracers with 90–95% efficiency compared to conventional macro-scale techniques.
Organic reagents are not commonly used in DMF because they tend to wet the DMF device and cause flooding; however, synthesis involving organic reagents can be achieved by carrying the organic reagents through an ionic liquid droplet, thus preventing the organic reagent from flooding the DMF device. Droplets are combined by inducing opposite charges in them, thus attracting them to each other. This allows for automated mixing of droplets. Mixing of droplets is also used to deposit MOF crystals for printing by delivering reagents into wells and evaporating the solutions for crystal deposition. This method of MOF crystal deposition is relatively cheap and does not require extensive robotic equipment.
Chemical synthesis using digital microfluidics (DMF) has been applied to many noteworthy biological reactions. These include the polymerase chain reaction (PCR), as well as the formation of DNA and peptides. Reduction, alkylation, and enzymatic digestion have also shown robustness and reproducibility using DMF, indicating potential in the synthesis and manipulation of proteomics. Spectra obtained from the products of these reactions are often identical to their library spectra, while only utilizing a small fraction of bench-scale reactants. Thus, conducting these syntheses on the microscale has the benefit of limiting the money spent on reagents and the waste produced, while yielding desirable experimental results. However, numerous challenges need to be overcome to push these reactions to completion through DMF. There have been reports of reduced efficiency in chemical reactions as compared to bench-scale versions of the same syntheses, as lower product yields have been observed. Furthermore, since picoliter- and nanoliter-size samples must be analyzed, any instrument used in analysis needs to have high sensitivity. In addition, system setup is often difficult due to the extensive amounts of wiring and pumps required to operate microchannels and reservoirs. Finally, samples are often subject to solvent evaporation, which leads to changes in the volume and concentration of reactants, and in some cases reactions do not go to completion.
The composition and purity of molecules synthesized by DMF are often determined using classic analytical techniques. Nuclear magnetic resonance (NMR) spectroscopy has been successfully applied to analyze corresponding intermediates, products, and reaction kinetics. A potential issue that arises through the use of NMR is low mass sensitivity; however, this can be corrected for by employing microcoils that assist in distinguishing molecules of differing masses. This is necessary since the signal-to-noise ratio of sample sizes in the microliter to nanoliter range is dramatically reduced compared to bench-scale sample sizes, and microcoils have been shown to resolve this issue. Mass spectrometry (MS) and high-performance liquid chromatography (HPLC) have also been used to overcome this challenge. Although MS is an attractive analytical technique for distinguishing the products of reactions accomplished through DMF, it poses its own weaknesses. Matrix-assisted laser desorption ionization (MALDI) and electrospray ionization (ESI) MS have recently been paired with the analysis of microfluidic chemical reactions. However, the crystallization and dilution associated with these methods often lead to unfavorable side effects, such as sample loss and side reactions. The use of MS in DMF is discussed in more detail in a later section.
Cell culture
World-to-chip interfaces, which connect the DMF chip to samples and reagents for use in the field, have been accomplished by means of manual pumps and reservoirs that deliver microbes, cells, and media to the device. The lack of extensive pumps and valves allows elaborate multi-step applications involving cells to be performed in a simple and compact system. In one application, microbial cultures were transferred onto the chip and allowed to grow, using sterile procedures and the temperatures required for microbial incubation. To validate that this was a viable space for microbial growth, a transformation assay was carried out in the device. This involves exposing E. coli to a vector and heat-shocking the bacteria until they take up the DNA. This is then followed by running a DNA gel to confirm that the desired vector was taken up by the bacteria. This study found that the DNA was indeed taken up by the bacteria and expressed as predicted.
Human cells have also been manipulated in Digital Microfluidic Immunocytochemistry in Single Cells (DISC), where DMF platforms were used to culture cells and to label phosphorylated proteins in the cells with antibodies. Cultured cells are then removed and taken off chip for screening. Another technique synthesizes hydrogels within DMF platforms. This process uses electrodes to deliver reagents to produce the hydrogel, and to deliver cell culture reagents for absorption into the gel. The hydrogels are an improvement over 2D cell culture because 3D cell cultures have increased cell-cell interactions and cell-extracellular matrix interactions. Spherical cell cultures are another method developed around the ability of DMF to deliver droplets to cells. Application of an electric potential allows for automation of droplet transfer directly to the hanging cell culture. This is beneficial as three-dimensional cell cultures and spheroids better mimic in vivo tissue by allowing for more biologically relevant cultures in which cells grow in an extracellular matrix resembling that in the human body. Another use of DMF platforms in cell culture is the ability to conduct in vitro cell-free cloning using single-molecule PCR inside droplets. PCR-amplified products are then validated by transfection into yeast cells and Western blot protein identification.
Problems arising from cell culture applications using DMF include protein adsorption to the device floor and cytotoxicity to cells. To prevent adsorption of protein to the platform's floor, a surfactant-stabilized silicone oil or hexane was used to coat the surface of the device, and droplets were manipulated atop the oil or hexane. The hexane was later rapidly evaporated from cultures to prevent a toxic effect on cell cultures. Another approach to solving protein adhesion is the addition of Pluronic additives to droplets in the device. Pluronic additives are generally not cytotoxic, but some have been shown to be harmful to cell cultures.
Biocompatibility of the device setup is important for biological analyses. Along with finding Pluronic additives that are not cytotoxic, devices were designed so that the voltages and disruptive motion involved would not affect cell viability. Through the readout of live/dead assays, it was shown that neither the voltage required to move droplets nor the motion of moving cultures affected cell viability.
Biological extraction
Biological separations usually involve low-concentration, high-volume samples. This can pose an issue for digital microfluidics because of the small sample volumes required. Digital microfluidic systems can be combined with a macrofluidic system designed to decrease sample volume, in turn increasing analyte concentration. It follows the same principles as the magnetic particle separation, but includes pumping of the droplet to cycle a larger volume of fluid around the magnetic particles.
Extraction of drug analytes from dried urine samples has also been reported. A droplet of extraction solvent, in this case methanol, is repeatedly flowed over a dried urine sample and then moved to a final electrode, where the liquid is extracted through a capillary and analyzed using mass spectrometry.
Immunoassays
The advanced fluid handling capabilities of digital microfluidics (DMF) allows for the adoption of DMF as an immunoassay platform as DMF devices can precisely manipulate small quantities of liquid reagents. Both heterogeneous immunoassays (antigens interacting with immobilized antibodies) and homogeneous immunoassays (antigens interacting with antibodies in solution) have been developed using a DMF platform. With regards to heterogeneous immunoassays, DMF can simplify the extended and intensive procedural steps by performing all delivery, mixing, incubation, and washing steps on the surface of the device (on-chip). Further, existing immunoassay techniques and methods, such as magnetic bead-based assays, ELISAs, and electrochemical detection, have been incorporated onto DMF immunoassay platforms.
The incorporation of magnetic bead-based assays onto a DMF immunoassay platform has been demonstrated for the detection of multiple analytes, such as human insulin, IL-6, the cardiac marker troponin I (cTnI), thyroid-stimulating hormone (TSH), sTNF-RI, and 17β-estradiol. For example, a magnetic bead-based approach has been used for the detection of cTnI from whole blood in less than 8 minutes. Briefly, magnetic beads containing primary antibodies were mixed with labeled secondary antibodies, incubated, and immobilized with a magnet for the washing steps. The droplet was then mixed with a chemiluminescent reagent, and detection of the accompanying enzymatic reaction was measured on-chip with a photomultiplier tube.
The ELISA template, commonly used for performing immunoassays and other enzyme-based biochemical assays, has been adapted for use with the DMF platform for the detection of analytes such as IgE and IgG. In one example, a series of bioassays were conducted to establish the quantification capabilities of DMF devices, including an ELISA-based immunoassay for the detection of IgE. Superparamagnetic nanoparticles were immobilized with anti-IgE antibodies and fluorescently labeled aptamers to quantify IgE using an ELISA template. Similarly, for the detection of IgG, IgG can be immobilized onto a DMF chip, conjugated with horseradish-peroxidase (HRP)-labeled IgG, and then quantified through measurement of the color change associated with product formation of the reaction between HRP and tetramethylbenzidine.
To further expand the capabilities and applications of DMF immunoassays beyond colorimetric detection (i.e., ELISA, magnetic bead-based assays), electrochemical detection tools (e.g., microelectrodes) have been incorporated into DMF chips for the detection of analytes such as TSH and rubella virus. For example, Rackus et al. integrated microelectrodes onto a DMF chip surface and substituted a previously reported chemiluminescent IgG immunoassay with an electroactive species, enabling detection of rubella virus. They coated magnetic beads with rubella virus, anti-rubella IgG, and anti-human IgG coupled with alkaline phosphatase, which in turn catalyzed an electron transfer reaction that was detected by the on-chip microelectrodes.
Mass spectrometry
The coupling of digital microfluidics (DMF) and mass spectrometry can largely be categorized into indirect off-line analysis, direct off-line analysis, and in-line analysis. The main advantages of this coupling are decreased solvent and reagent use, as well as decreased analysis times.
Indirect off-line analysis is the usage of DMF devices to combine reactants and isolate products, which are then removed and manually transferred to a mass spectrometer. This approach takes advantage of DMF for the sample preparation step but also introduces opportunities for contamination as manual intervention is required to transfer the sample. In one example of this technique, a Grieco three-component condensation was carried out on chip and was taken off the chip by micropipette for quenching and further analysis.
Direct off-line analysis is the usage of DMF devices that have been fabricated and incorporated partially or totally into a mass spectrometer. This process is still considered off-line, however, as some post-reaction procedures may be carried out manually (but on chip), without the use of the digital capabilities of the device. Such devices are most often used in conjunction with MALDI-MS. In MALDI-based direct off-line devices, the droplet must be dried and recrystallized along with matrix – operations that often require vacuum chambers. The chip with crystallized analyte is then placed into the MALDI-MS for analysis. One issue raised with MALDI-MS coupling to DMF is that the matrix necessary for MALDI-MS can be highly acidic, which may interfere with the on-chip reactions.
Inline analysis is the usage of devices that feed directly into mass spectrometers, thereby eliminating any manual manipulation. Inline analysis may require specially fabricated devices and connecting hardware between the device and the mass spectrometer. Inline analysis is often coupled with electrospray ionization. In one example, a DMF chip was fabricated with a hole that led to a microchannel. This microchannel was, in turn, connected to an electrospray ionizer that emitted directly into a mass spectrometer. Integration of ambient ionization techniques, where ions are formed outside of the mass spectrometer with little or no sample treatment, pairs well with the open or semi-open microfluidic nature of DMF and allows easy inline coupling between DMF and MS systems. Ambient ionization techniques such as surface acoustic wave (SAW) ionization generate surface waves on a flat piezoelectric surface that impart enough acoustic energy on the liquid interface to overcome surface tension and desorb ions off the chip into the mass analyzer. Some couplings utilize an external high-voltage pulse source at the physical inlet to the mass spectrometer, but the true role of such additions is uncertain.
A significant barrier to the widespread integration of DMF with mass spectrometry is biological contamination, often termed bio-fouling. High-throughput analysis is a significant advantage of DMF systems, but it also makes them particularly susceptible to cross-contamination between experiments. As a result, the coupling of DMF with mass spectrometry often requires the integration of a variety of methods to prevent cross-contamination, such as multiple washing steps, biologically compatible surfactants, and/or superhydrophobic surfaces that prevent droplet adsorption. In one example, reducing the cross-contaminant signal during the characterization of an amino acid required 4–5 wash steps between each sample droplet for the contamination intensity to fall below the limit of detection.
Miniature Mass Spectrometers
Conventional mass spectrometers are often large, as well as prohibitively expensive and complex in their operation, which has led to the increased attractiveness of miniature mass spectrometers (MMS) for a variety of applications. MMS are optimized for affordability and simple operation, often forgoing the need for experienced technicians, having a low cost of manufacture, and being small enough to allow data collection to move from the laboratory into the field. These advantages often come at the cost of reduced performance, where MMS resolution, as well as the limits of detection and quantitation, are often barely adequate to perform specialized tasks. The integration of DMF with MMS has the potential to significantly improve MMS systems by increasing throughput, resolution, and automation while decreasing solvent cost, enabling lab-grade analysis at a much reduced cost. In one example, the use of a custom DMF system for urine drug testing enabled the creation of an instrument weighing only 25 kg with performance comparable to standard laboratory analysis.
Nuclear magnetic resonance spectroscopy
Nuclear magnetic resonance (NMR) spectroscopy can be used in conjunction with digital microfluidics (DMF) through the use of NMR microcoils, which are electromagnetic conducting coils that are less than 1 mm in size. Due to their size, these microcoils have several limitations, directly influencing the sensitivity of the machinery they operate within.
Microchannel/microcoil interfaces that preceded digital microfluidics had several drawbacks, such as creating large amounts of solvent waste and being easily contaminated. In this respect, the use of digital microfluidics and its capability to manipulate single droplets is promising.
The interface between digital microfluidics and NMR relaxometry has led to the creation of systems such as those used to detect and quantify the concentrations of specific molecules on the microscale, with some such systems using two-step processes in which DMF devices guide droplets to the NMR detection site. Introductory systems of high-field NMR and 2D NMR in conjunction with microfluidics have also been developed. These systems use single-plate DMF devices with NMR microcoils in place of the second plate. Recently, a further modified version of this interface included pulsed field gradient (PFG) units that enable the platform to perform more sophisticated NMR measurements (e.g., NMR diffusometry, gradient-encoded pulse measurements). This system has been successfully applied to monitoring rapid organic reactions.
References
Biotechnology
Microfluidics | Digital microfluidics | Materials_science,Biology | 8,074 |
37,720,522 | https://en.wikipedia.org/wiki/Binning%20%28metagenomics%29 | In metagenomics, binning is the process of grouping reads or contigs and assigning them to individual genomes. Binning methods can be based on compositional features, on alignment (similarity), or on both.
Introduction
Metagenomic samples can contain reads from a huge number of organisms. For example, in a single gram of soil, there can be up to 18,000 different types of organisms, each with its own genome. Metagenomic studies sample DNA from the whole community and make it available as nucleotide sequences of a certain length. In most cases, the incomplete nature of the obtained sequences makes it hard to assemble individual genes, much less recover the full genomes of each organism. Thus, binning techniques represent a "best effort" to group reads or contigs into bins corresponding to individual genomes, known as metagenome-assembled genomes (MAGs). The taxonomy of MAGs can be inferred through placement into a reference phylogenetic tree using algorithms like GTDB-Tk.
The first studies that sampled DNA from multiple organisms used specific genes to assess the diversity and origin of each sample. These marker genes had been previously sequenced from clonal cultures of known organisms, so whenever one of these genes appeared in a read or contig from the metagenomic sample, that read could be assigned to a known species or to the OTU of that species. The problem with this method was that only a tiny fraction of the sequences carried a marker gene, leaving most of the data unassigned.
Modern binning techniques use both previously available information independent of the sample and intrinsic information present in the sample. Depending on the diversity and complexity of the sample, their degree of success varies: in some cases they can resolve the sequences down to individual species, while in others the sequences can be identified at best with very broad taxonomic groups.
Binning of metagenomic data from various habitats might significantly extend the tree of life. One such approach applied to globally available metagenomes binned 52,515 individual microbial genomes and extended the known diversity of bacteria and archaea by 44%.
Algorithms
Binning algorithms can employ previous information, and thus act as supervised classifiers, or they can try to find new groups, acting as unsupervised classifiers. Many, of course, do both. The classifiers exploit previously known sequences by performing alignments against databases, and try to separate sequences based on organism-specific characteristics of the DNA, such as GC-content.
Some prominent binning algorithms for metagenomic datasets obtained through shotgun sequencing include TETRA, MEGAN, Phylopythia, SOrt-ITEMS, and DiScRIBinATE, among others.
TETRA
TETRA is a statistical classifier that uses tetranucleotide usage patterns in genomic fragments. There are four possible nucleotides in DNA, therefore there can be 4^4 = 256 different fragments of four consecutive nucleotides; these fragments are called tetramers. TETRA works by tabulating the frequencies of each tetramer for a given sequence. From these frequencies, z-scores are then calculated, which indicate how over- or under-represented each tetramer is compared with what would be expected from the individual nucleotide composition. The z-scores for each tetramer are assembled into a vector, and the vectors corresponding to different sequences are compared pairwise to yield a measure of how similar different sequences from the sample are. It is expected that the most similar sequences belong to organisms in the same OTU.
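A minimal sketch of this tetramer-profiling idea is shown below; the expected counts come from a simple mononucleotide null model and the pairwise comparison uses Pearson correlation, which simplifies the published TETRA procedure, so the code is illustrative rather than a reimplementation of the tool.

    # Illustrative tetranucleotide z-score profiling (simplified relative to TETRA).
    from itertools import product
    from math import sqrt

    TETRAMERS = ["".join(t) for t in product("ACGT", repeat=4)]  # 4^4 = 256 tetramers

    def tetramer_zscores(seq):
        n = len(seq) - 3                              # number of overlapping 4-mer windows
        counts = {t: 0 for t in TETRAMERS}
        for i in range(n):
            window = seq[i:i + 4]
            if window in counts:
                counts[window] += 1
        base_freq = {b: seq.count(b) / len(seq) for b in "ACGT"}
        zs = []
        for t in TETRAMERS:
            expected = n * base_freq[t[0]] * base_freq[t[1]] * base_freq[t[2]] * base_freq[t[3]]
            sd = sqrt(expected) if expected > 0 else 1.0
            zs.append((counts[t] - expected) / sd)    # over-/under-representation
        return zs

    def pearson(x, y):
        mx, my = sum(x) / len(x), sum(y) / len(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        return cov / (sqrt(sum((a - mx) ** 2 for a in x)) * sqrt(sum((b - my) ** 2 for b in y)))

    # Fragments with similar tetramer usage give a correlation close to 1.
    print(pearson(tetramer_zscores("ACGT" * 200), tetramer_zscores("ACGT" * 150)))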
MEGAN
In the DIAMOND+MEGAN approach, all reads are first aligned against a protein reference database, such as NCBI-nr, and then the resulting alignments are analyzed using the naive LCA algorithm, which places a read on the lowest taxonomic node in the NCBI taxonomy that lies above all taxa to which the read has a significant alignment. Here, an alignment is usually deemed "significant", if its bit score lies above a given threshold (which depends on the length of the reads) and is within 10%, say, of the best score seen for that read. The rationale of using protein reference sequences, rather than DNA reference sequences, is that current DNA reference databases only cover a small fraction of the true diversity of genomes that exist in the environment.
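The naive LCA placement described above can be sketched as follows; the toy taxonomy, the bit-score cutoff, and the 10% score window are illustrative stand-ins for the NCBI taxonomy and MEGAN's configurable parameters.

    # Naive lowest-common-ancestor (LCA) placement of one read from its alignments.
    parent = {"E. coli": "Escherichia", "Escherichia": "Enterobacteriaceae",
              "Salmonella enterica": "Salmonella", "Salmonella": "Enterobacteriaceae",
              "Enterobacteriaceae": "Bacteria", "Bacteria": "root"}

    def lineage(taxon):
        path = [taxon]
        while path[-1] != "root":
            path.append(parent[path[-1]])
        return path

    def naive_lca(alignments, min_bitscore=50, top_percent=0.10):
        """alignments: list of (taxon, bit score) hits for a single read."""
        best = max(score for _, score in alignments)
        kept = [t for t, s in alignments
                if s >= min_bitscore and s >= (1 - top_percent) * best]
        if not kept:
            return None
        common = set(lineage(kept[0]))
        for taxon in kept[1:]:
            common &= set(lineage(taxon))                   # nodes shared by all significant hits
        return max(common, key=lambda n: len(lineage(n)))   # deepest shared node

    hits = [("E. coli", 200), ("Salmonella enterica", 190), ("Bacteria", 60)]
    print(naive_lca(hits))  # -> Enterobacteriaceae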
Phylopythia
Phylopythia is a supervised classifier developed by researchers at IBM labs; it is essentially a support vector machine trained with DNA k-mers from known sequences.
SOrt-ITEMS
SOrt-ITEMS is an alignment-based binning algorithm developed by Innovations Labs of Tata Consultancy Services (TCS) Ltd., India. Users need to perform a similarity search of the input metagenomic sequences (reads) against the nr protein database using BLASTx search. The generated BLASTx output is then taken as input by the SOrt-ITEMS program. The method uses a range of BLAST alignment parameter thresholds to first identify an appropriate taxonomic level (or rank) where the read can be assigned. An orthology-based approach is then adopted for the final assignment of the metagenomic read. Other alignment-based binning algorithms developed by the Innovation Labs of Tata Consultancy Services (TCS) include DiScRIBinATE, ProViDE and SPHINX. The methodologies of these algorithms are summarized below.
DiScRIBinATE
DiScRIBinATE is an alignment-based binning algorithm developed by the Innovations Labs of Tata Consultancy Services (TCS) Ltd., India. DiScRIBinATE replaces the orthology approach of SOrt-ITEMS with a quicker 'alignment-free' approach. Incorporating this alternate strategy was observed to reduce the binning time by half without any significant loss in the accuracy and specificity of assignments. In addition, a novel reclassification strategy incorporated in DiScRIBinATE was seen to reduce the overall misclassification rate.
ProViDE
ProViDE is an alignment-based binning approach developed by the Innovation Labs of Tata Consultancy Services (TCS) Ltd. for the estimation of viral diversity in metagenomic samples. ProViDE adopts a reverse orthology-based approach, similar to SOrt-ITEMS, for the taxonomic classification of metagenomic sequences obtained from virome datasets. It uses a customized set of BLAST parameter thresholds, specifically suited for viral metagenomic sequences. These thresholds capture the pattern of sequence divergence and the non-uniform taxonomic hierarchy observed within and across various taxonomic groups of the viral kingdom.
PCAHIER
PCAHIER, another binning algorithm, developed by the Georgia Institute of Technology, employs n-mer oligonucleotide frequencies as features and adopts a hierarchical classifier for binning short metagenomic fragments. Principal component analysis is used to reduce the high dimensionality of the feature space. The effectiveness of PCAHIER was demonstrated through comparisons against a non-hierarchical classifier and two existing binning algorithms (TETRA and Phylopythia).
SPHINX
SPHINX, another binning algorithm developed by the Innovation Labs of Tata Consultancy Services (TCS) Ltd., adopts a hybrid strategy that achieves high binning efficiency by utilizing the principles of both 'composition'- and 'alignment'-based binning algorithms. The approach was designed with the objective of analyzing metagenomic datasets as rapidly as composition-based approaches, but nevertheless with the accuracy and specificity of alignment-based algorithms. SPHINX was observed to classify metagenomic sequences as rapidly as composition-based algorithms. In addition, the binning efficiency (in terms of accuracy and specificity of assignments) of SPHINX was observed to be comparable with results obtained using alignment-based algorithms.
INDUS and TWARIT
These represent other composition-based binning algorithms developed by the Innovation Labs of Tata Consultancy Services (TCS) Ltd. They utilize a range of oligonucleotide compositional (as well as statistical) parameters to improve binning time while maintaining the accuracy and specificity of taxonomic assignments.
References
Metagenomics
Bioinformatics algorithms | Binning (metagenomics) | Biology | 1,628 |
25,744,542 | https://en.wikipedia.org/wiki/Berlekamp%E2%80%93Zassenhaus%20algorithm | In mathematics, in particular in computational algebra, the Berlekamp–Zassenhaus algorithm is an algorithm for factoring polynomials over the integers, named after Elwyn Berlekamp and Hans Zassenhaus. As a consequence of Gauss's lemma, this amounts to solving the problem also over the rationals.
The algorithm starts by finding a factorization over a suitable finite field, then uses Hensel's lemma to lift this factorization from modulo a prime p to a factorization modulo a convenient power of p. After this, the true factors over the integers are found as products of subsets of these modular factors.
The worst case of this algorithm is exponential in the number of factors.
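The worst case can be illustrated with the classic example x^4 + 1, which is irreducible over the integers yet splits modulo every prime, so the modular factorizations offer many spurious subsets to test. The snippet below uses SymPy only to display those factorizations; it is not an implementation of the Berlekamp–Zassenhaus algorithm itself.

    # x**4 + 1 is irreducible over the integers but factors modulo every prime.
    from sympy import symbols, factor_list

    x = symbols('x')
    f = x**4 + 1

    print(factor_list(f))                    # over the integers: a single irreducible factor
    for p in (2, 3, 5, 7, 11, 13):
        print(p, factor_list(f, modulus=p))  # always at least two factors mod p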
Van Hoeij (2002) improved this algorithm by using the LLL algorithm, substantially reducing the time needed to choose the right subsets of mod p factors.
See also
Berlekamp's algorithm
References
External links
Computer algebra | Berlekamp–Zassenhaus algorithm | Mathematics,Technology | 177 |
8,732,281 | https://en.wikipedia.org/wiki/GOR%20method | The GOR method (short for Garnier–Osguthorpe–Robson) is an information theory-based method for the prediction of secondary structures in proteins. It was developed in the late 1970s shortly after the simpler Chou–Fasman method. Like Chou–Fasman, the GOR method is based on probability parameters derived from empirical studies of known protein tertiary structures solved by X-ray crystallography. However, unlike Chou–Fasman, the GOR method takes into account not only the propensities of individual amino acids to form particular secondary structures, but also the conditional probability of the amino acid to form a secondary structure given that its immediate neighbors have already formed that structure. The method is therefore essentially Bayesian in its analysis.
Method
The GOR method analyzes sequences to predict alpha helix, beta sheet, turn, or random coil secondary structure at each position based on 17-amino-acid sequence windows. The original description of the method included four scoring matrices of size 17×20, where the columns correspond to the log-odds score, which reflects the probability of finding a given amino acid at each position in the 17-residue sequence. The four matrices reflect the probabilities of the central, ninth amino acid being in a helical, sheet, turn, or coil conformation. In subsequent revisions to the method, the turn matrix was eliminated due to the high variability of sequences in turn regions (particularly over such a large window). The method requires at least four contiguous residues scoring as alpha helix to classify a region as helical, and at least two contiguous residues to classify a region as beta sheet.
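The window-based scoring can be sketched as follows; the matrices are filled with random placeholder values rather than the published GOR parameters, and only helix, sheet, and coil states are kept, so the output is illustrative only.

    # Illustrative GOR-style scoring over a 17-residue sliding window
    # (placeholder parameters, not the published matrices).
    import random

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
    STATES = ("helix", "sheet", "coil")
    random.seed(0)
    matrices = {s: [[random.uniform(-1, 1) for _ in AMINO_ACIDS] for _ in range(17)]
                for s in STATES}

    def predict(sequence, half_window=8):
        prediction = []
        for i in range(len(sequence)):
            scores = dict.fromkeys(STATES, 0.0)
            for offset in range(-half_window, half_window + 1):
                j = i + offset
                if 0 <= j < len(sequence):
                    col = AMINO_ACIDS.index(sequence[j])
                    for s in STATES:
                        scores[s] += matrices[s][offset + half_window][col]
            prediction.append(max(scores, key=scores.get))  # state of the central residue
        return prediction

    print(predict("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"))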
Algorithm
The mathematics and algorithm of the GOR method were based on an earlier series of studies by Robson and colleagues reported mainly in the Journal of Molecular Biology and The Biochemical Journal. The latter describes the information theoretic expansions in terms of conditional information measures. The use of the word "simple" in the title of the GOR paper reflected the fact that the above earlier methods provided proofs and techniques somewhat daunting by being rather unfamiliar in protein science in the early 1970s; even Bayes methods were then unfamiliar and controversial. An important feature of these early studies, which survived in the GOR method, was the treatment of the sparse protein sequence data of the early 1970s by expected information measures. That is, expectations on a Bayesian basis considering the distribution of plausible information measure values given the actual frequencies (numbers of observations). The expectation measures resulting from integration over this and similar distributions may now be seen as composed of "incomplete" or extended zeta functions, e.g. z(s, observed frequency) − z(s, expected frequency), with incomplete zeta function z(s, n) = 1 + (1/2)^s + (1/3)^s + (1/4)^s + … + (1/n)^s. The GOR method used s = 1. Also, in the GOR method and the earlier methods, the measure for the contrary state to e.g. helix H, i.e. ~H, was subtracted from that for H, and similarly for beta sheet, turns, and coil or loop. Thus the method can be seen as employing a zeta function estimate of log predictive odds. An adjustable decision constant could also be applied, which thus implies a decision theory approach; the GOR method allowed the option to use decision constants to optimize predictions for different classes of protein. The expected information measure used as a basis for the information expansion was less important by the time of publication of the GOR method because protein sequence data became more plentiful, at least for the terms considered at that time. Then, for s = 1, the expression z(s, observed frequency) − z(s, expected frequency) approaches the natural logarithm of (observed frequency / expected frequency) as frequencies increase. However, this measure (including use of other values of s) remains important in later more general applications with high-dimensional data, where data for more complex terms in the information expansion are inevitably sparse.
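A quick numerical check of that convergence claim (purely illustrative):

    # z(s, n) = 1 + (1/2)**s + ... + (1/n)**s; for s = 1 the difference
    # z(1, observed) - z(1, expected) approaches ln(observed/expected) as counts grow.
    from math import log

    def z(s, n):
        return sum(1 / k**s for k in range(1, n + 1))

    for observed, expected in ((3, 2), (30, 20), (300, 200)):
        print(observed, expected, z(1, observed) - z(1, expected), log(observed / expected))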
See also
List of protein structure prediction software
References
Bioinformatics
Protein methods
Applications of Bayesian inference | GOR method | Chemistry,Engineering,Biology | 864 |
58,465,819 | https://en.wikipedia.org/wiki/Aspergillus%20thermomutatus | Aspergillus thermomutatus (also called Neosartorya pseudofischeri) is a species of fungus in the genus Aspergillus. It is from the Fumigati section. The species was first described in 1992. It has been reported to produce asperfuran, cytochalasin-like compounds, fiscalin-like compounds, pyripyropens, and gliotoxin.
Growth and morphology
A. thermomutatus has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates.
References
thermomutatus
Fungi described in 1992
Fungus species | Aspergillus thermomutatus | Biology | 163 |
71,507,666 | https://en.wikipedia.org/wiki/Path%20space%20%28algebraic%20topology%29 | In algebraic topology, a branch of mathematics, the based path space PX of a pointed space (X, x_0) is the space that consists of all maps f from the interval I = [0, 1] to X such that f(0) = x_0, called based paths. In other words, it is the mapping space from (I, 0) to (X, x_0).
The space of all maps from I to X, with no distinguished point for the start of the paths, is called the free path space of X and is written X^I. The maps from I to X are called free paths. The based path space PX is then the pullback of the evaluation map X^I → X, f ↦ f(0), along the inclusion of the basepoint {x_0} → X.
The natural map PX → X sending a based path f to its endpoint f(1) is a fibration, called the path space fibration.
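In symbols, using standard notation (the formulas in the article text above were lost in extraction, so the following is the usual textbook statement rather than a quotation):

    PX = \{\, f : [0,1] \to X \mid f(0) = x_0 \,\} = \{x_0\} \times_X X^{I}
        \quad \text{(the pullback of } \operatorname{ev}_0 : X^I \to X \text{ along } \{x_0\} \hookrightarrow X\text{)},

    p : PX \to X, \qquad p(f) = f(1) \qquad \text{(the path space fibration).}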
See also
mapping space
References
Further reading
https://ncatlab.org/nlab/show/path+space
Algebraic topology | Path space (algebraic topology) | Mathematics | 144 |
56,116,830 | https://en.wikipedia.org/wiki/Year-riddle | The year-riddle is one of the most widespread, and apparently most ancient, international riddle-types in Eurasia. This type of riddle is first attested in the Vedic tradition, in texts thought to originate in the second millennium BCE.
Research
Studies have surveyed the exceptionally wide attestation of this riddle type. The riddle is conventionally thought to have been eastern in origin, though this may simply reflect the early date of writing in the east. A variety of guiding metaphors appear in the year-riddle, and it can be helpful to analyse its variants along these lines. It has been argued that 'versions usually express the conventional tropes of a given culture or society and indicate regional sources'.
Year-riddles are numbered 984, 1037 and 1038 in Archer Taylor's English Riddles from Oral Tradition. As a folktale motif, the riddle is motif H721 in the classificatory system established by Stith Thompson's Motif-Index of Folk-Literature.
Examples
Most year-riddles are guided by one (or more) of four metaphors: wheels, trees, animals (particularly human families), and artefacts (particularly architecture).
The first three all make an appearance in the year-riddle that appears in several versions of the Tale of Ahikar. This story seems to have originated in Aramaic around the late seventh or early sixth century BCE. The relevant passage is lost from the earliest surviving versions, but the following instance, from a later Syriac version, is thought likely to represent the early form of the text:
[The king] said to me, “Aḥiḳar, expound to me this riddle: A pillar has on its head twelve cedars; in every cedar there are thirty wheels, and in every wheel two cables, one white and one black.” And I answered and said to him, “(...) The pillar of which thou hast spoken to me is the year: the twelve cedars are the twelve months of the year; the thirty wheels are the thirty days of the month; the two cables, one white and one black, are the day and the night.
Wheels and wheeled vehicles
Ancient Indian sources afford the earliest attestations of the year-riddle: examples in the Rigveda are thought to originate c. 1500×1200 BCE. Their use of metaphors of wheels reflects the prominence of the concept of the wheel of time in Asian culture.
Hymn 164 of the first book of the Rigveda can be understood to comprise a series of riddles or enigmas. Many are now obscure, but may have been an enigmatic exposition of the pravargya ritual. Rigveda 1.164.11 runs:
In modern English translation:
With twelve spokes—for it does not become old—
the wheel of the truthful order turns on and on around the sky.
Sons, in pairs, o Agni,
seven hundred and twenty, are standing.
Meanwhile, 1.164.48 gives:
In modern English translation:
The felly-pieces are twelve, the wheel is one,
the nave-pieces three; who has understood this?
On it are placed, as it were, 360
pegs that do not wobble.
Likewise, in the Mahabharata, the riddles posed to Ashtavakra by King Janaka in the third book begin with the year-riddle: what has six naves, twelve axles, twenty-four joints, and three hundred and sixty spokes?
A later Indic example using a similar metaphor is from the riddles ascribed to Amir Khusrau (1253–1325):
Here the three hundred and fifty-five days of the year correspond to the twelve lunar cycles of the Islamic calendar year.
Trees and other plants
A large number of variants (numbered 1037-38 in Archer Taylor's English Riddles from Oral Tradition) compare the year to a tree; indeed, 'riddlers rarely use the scene of a tree and its branches for other subjects than the year and the months or the sun and its rays'. The tree is also the metaphor usually found in Arabic folktales. The popularity of tree-imagery is thought to echo the widespread popularity of trees as a metaphor for the world or life in Eurasian traditions.
Most year-riddles of this kind can be thought of as containing some or all of the following components: a tree standing for the year, branches for the months, and, in some versions, nests and birds standing for the weeks and days.
Versions containing only parts of the tree occur across Eurasia, whereas the versions with the nests occur only in Europe.
Examples include this verse from the early eleventh-century Shahnameh:
What are the dozen cypresses erect
In all their bravery and loveliness,
Each one of them with thirty boughs bedeckt--
In Persia never more and never less?
To which Zāl responds
First as to those twelve cypresses which rear
Themselves, with thirty boughs upon each tree:
They are the twelve new moons of every year,
Like new-made monarchs, throned in majesty.
Upon the thirtieth day its course is done
For each; thus our revolving periods run.
Later examples include:
Yonder stands a tree of honor,
twelve limbs grow upon her,
every limb a different name.
It would take a wise man to tell you the same. [English]
A tree with twelve branches and thirty leaves, of which fifteen are black and fifteen white, strewn with open flowers of whitish yellow [Malay]
There stands a tree with twelve branches, on each branch are four nests, in each nest are six young, and the seventh is the mother. [Lithuanian]
More elaborate variations on this theme are also found. One Icelandic example is preserved in three early-modern manuscripts of the riddle-contest in the medieval Heiðreks saga and runs, in translation: 'what is that assembly which has twelve flowers, and in each flower are four nests? And in each nest are seven birds and each has its own name? How are they guessed? 12 months, 4 weeks, 7 days'.
Animals
Human families
Human families provide a guiding metaphor for a number of year riddles (numbered 984 in Archer Taylor's English Riddles from Oral Tradition), though almost never in English-language tradition. Although only first attested in the roughly tenth-century CE Greek Anthology, the following Greek riddle is there ascribed to Cleobulus (fl. C6 BCE); even if it is not that ancient, the attribution to him would suggest that the riddle was thought of in Ancient Greek culture as itself being very old:
Another example of the family metaphor is this French riddle, first published in 1811:
The only English-language example of this kind of year-riddle found by Taylor was from Saba Island in the Antilles: 'Twelve months of the year, de second one February'.
Other animals
Eastern Europe and Asia exhibit a range of other animal metaphors for the year, usually involving a team of twelve oxen pulling one plough or gatherings of different species of birds.
Artefacts
A number of year-riddles use man-made objects as their controlling metaphor. For example, a modern Greek riddle invokes a cask made with twelve staves, and a Parsi riddle the contents of a chest. But most often these riddles draw on architecture, as in the following mid-twentieth-century example from central Myanmar:
Ein-daw-thar-lan:
set-nhit khan:
ta khan: thone gyait
ah phay eit
phwiṇ hlet ta gar:
lay: paut htar:
win bu twet bu bar yẹ lar:
Hnit
It is a beautiful house with twelve rooms and thirty people can sleep in each room. There are four doors left open. Have you ever passed through these doors?
A year.
References
Riddles
Chronology | Year-riddle | Physics | 1,597 |
20,325,453 | https://en.wikipedia.org/wiki/Sodium%20methanethiolate | Sodium methanethiolate or sodium thiomethoxide (CH3SNa, MeSNa) is the sodium conjugate base of methanethiol. This compound is commercially available as a white solid. It is a powerful nucleophile that can be used to prepare methyl thioethers from organic compounds such as ethyl bromide. Its hydrolysis in moist air produces methanethiol, which has a low odor threshold and a noxious fecal smell.
References
Thiolates
Organic sodium salts | Sodium methanethiolate | Chemistry | 105 |
23,648,859 | https://en.wikipedia.org/wiki/Fucosylation | Fucosylation is the process of adding fucose sugar units to a molecule. It is a type of glycosylation.
It is important clinically, and high levels of fucosylation have been reported in cancer. In cancer and inflammation there are significant changes in the expression of fucosylated molecules. Therefore, antibodies and lectins that are able to recognize cancer associated fucosylated oligosaccharides have been used as tumor markers in oncology.
It is performed by fucosyltransferase enzymes.
Fucosylation has been observed in vertebrates, invertebrates, plants, bacteria, and fungi. It has a role in cellular adhesion and immune regulation. Inhibition of fucosylation is being explored for a range of clinical applications, including some associated with sickle cell disease, rheumatoid arthritis, tumor inhibition, and chemotherapy improvement. Recent studies on melanoma patient specimens indicated that melanoma fucosylation and fucosylated HLA-DRB1 are associated with anti-programmed cell death protein 1 (PD1) responder status, pointing to the potential use of melanoma fucosylation as a method for immunotherapy patient stratification. Moreover, it has been reported that fucosylation is an important regulator of anti-tumor immunity and L-fucose can be used as a potent tool for increasing immunotherapy efficacy in melanoma.
Fucosylation can help with immune response when a foreign pathogen is introduced in the body. Rapid fucosylation can occur in the epithelial lining of the small intestine as a protective mechanism to support the body’s symbiotic gut bacteria. This may regulate the bacterial genes responsible for quorum sensing or virulence, thus resulting in an increased tolerance of the infection.
References
Carbohydrate chemistry | Fucosylation | Chemistry | 394 |
6,390,507 | https://en.wikipedia.org/wiki/Multivalued%20dependency | In database theory, a multivalued dependency is a full constraint between two sets of attributes in a relation.
In contrast to the functional dependency, the multivalued dependency requires that certain tuples be present in a relation. Therefore, a multivalued dependency is a special case of tuple-generating dependency. The multivalued dependency plays a role in the 4NF database normalization.
A multivalued dependency is a special case of a join dependency, with only two sets of values involved, i.e. it is a binary join dependency.
A multivalued dependency exists when there are at least three attributes (like X, Y and Z) in a relation, and for a value of X there is a well-defined set of values of Y and a well-defined set of values of Z. However, the set of values of Y is independent of the set of values of Z and vice versa.
Formal definition
The formal definition is as follows:
Let R be a relation schema and let X ⊆ R and Y ⊆ R be sets of attributes. The multivalued dependency X ↠ Y ("X multidetermines Y") holds on R if, for any legal relation r(R) and all pairs of tuples t and u in r such that t[X] = u[X], there exist tuples v and w in r such that: t[X] = u[X] = v[X] = w[X]; v[Y] = t[Y] and v[R − Y] = u[R − Y]; w[Y] = u[Y] and w[R − Y] = t[R − Y].
Informally, if one denotes by (x, y, z) the tuple having values for X, Y, and R − X − Y collectively equal to x, y, z, then whenever the tuples (a, b, c) and (a, d, e) exist in r, the tuples (a, b, e) and (a, d, c) should also exist in r.
Example
Consider this example of a relation of university courses, the books recommended for the course, and the lecturers who will be teaching the course:
Because the lecturers attached to the course and the books attached to the course are independent of each other, this database design has a multivalued dependency; if we were to add a new book to the AHA course, we would have to add one record for each of the lecturers on that course, and vice versa.
Put formally, there are two multivalued dependencies in this relation: {course} ↠ {book} and equivalently {course} ↠ {lecturer}.
Databases with multivalued dependencies thus exhibit redundancy. In database normalization, fourth normal form requires that for every nontrivial multivalued dependency X ↠ Y, X is a superkey. A multivalued dependency X ↠ Y is trivial if Y is a subset of X, or if X ∪ Y is the whole set of attributes of the relation.
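As an illustration of the definition, the sketch below checks whether an MVD holds on a small relation; the course/book/lecturer rows are made up for the example and are not the table from the original article.

    # Check whether the multivalued dependency X ->> Y holds in a relation, given as a
    # list of dicts: for every pair of tuples agreeing on X, the tuple combining the
    # first tuple's Y-values with the second tuple's remaining values must also exist.
    from itertools import product

    def mvd_holds(relation, X, Y):
        R = set(relation[0].keys())
        rest = R - set(X) - set(Y)
        rows = {tuple(sorted(t.items())) for t in relation}
        for t, u in product(relation, repeat=2):
            if all(t[a] == u[a] for a in X):
                required = {**{a: t[a] for a in X},
                            **{a: t[a] for a in Y},
                            **{a: u[a] for a in rest}}
                if tuple(sorted(required.items())) not in rows:
                    return False
        return True

    courses = [
        {"course": "AHA", "book": "Silberschatz", "lecturer": "John D"},
        {"course": "AHA", "book": "Nederpelt",    "lecturer": "John D"},
        {"course": "AHA", "book": "Silberschatz", "lecturer": "William M"},
        {"course": "AHA", "book": "Nederpelt",    "lecturer": "William M"},
    ]
    print(mvd_holds(courses, ["course"], ["book"]))       # True
    print(mvd_holds(courses[:-1], ["course"], ["book"]))  # False: a required tuple is missing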
Properties
If X ↠ Y, then X ↠ R − Y
If X ↠ Y and Z ⊆ W, then XW ↠ YZ
If X ↠ Y and Y ↠ Z, then X ↠ Z − Y
The following also involve functional dependencies:
If X → Y, then X ↠ Y
If X ↠ Y and Y → Z, then X → Z − Y
The above rules are sound and complete.
A decomposition of R into (X, Y) and (X, R − Y) is a lossless-join decomposition if and only if X ↠ Y holds in R.
Every FD is an MVD because if X → Y, then swapping Y's between tuples that agree on X doesn't create new tuples.
Splitting does not hold: as with FDs, we cannot generally split the left side of an MVD. But unlike FDs, we cannot split the right side either; sometimes several attributes have to be left on the right side.
Closure of a set of MVDs is the set of all MVDs that can be inferred using the following rules (Armstrong's axioms):
Complementation: If X ↠ Y, then X ↠ R − Y
Augmentation: If X ↠ Y and Z ⊆ W, then XW ↠ YZ
Transitivity: If X ↠ Y and Y ↠ Z, then X ↠ Z − Y
Replication: If X → Y, then X ↠ Y
Coalescence: If X ↠ Y and there exists W such that W ∩ Y = ∅, W → Z, and Z ⊆ Y, then X → Z
Definitions
Full constraint A constraint which expresses something about all attributes in a database. (In contrast to an embedded constraint.) That a multivalued dependency is a full constraint follows from its definition, as it says something about the attributes R − Y as well as about Y.
Tuple-generating dependency A dependency which explicitly requires certain tuples to be present in the relation.
Trivial multivalued dependency 1 A multivalued dependency which involves all the attributes of a relation, i.e. R = X ∪ Y. A trivial multivalued dependency implies, for tuples t and u, tuples v and w which are equal to t and u.
Trivial multivalued dependency 2 A multivalued dependency for which Y ⊆ X.
References
External links
Multivalued dependencies and a new Normal form for Relational Databases (PDF) - Ronald Fagin, IBM Research Lab
On the Structure of Armstrong Relations for Functional Dependencies (PDF) - CATRIEL BEERI (The Hebrew University), MARTIN DOWD (Rutgers University), RONALD FAGIN (IBM Research Laboratory) AND RICHARD STATMAN (Rutgers University)
On a problem of Fagin concerning multivalued dependencies in relational databases (PDF) - Sven Hartmann, Massey University
Data modeling
Database constraints | Multivalued dependency | Engineering | 1,010 |
40,597,241 | https://en.wikipedia.org/wiki/Vinorine | Vinorine is an indole alkaloid isolated from Alstonia.
References
Alkaloids found in Apocynaceae
Tryptamine alkaloids
Quinolizidine alkaloids
Acetate esters
Heterocyclic compounds with 6 rings | Vinorine | Chemistry | 55 |
2,139,612 | https://en.wikipedia.org/wiki/A%20Course%20of%20Modern%20Analysis | A Course of Modern Analysis: an introduction to the general theory of infinite processes and of analytic functions; with an account of the principal transcendental functions (colloquially known as Whittaker and Watson) is a landmark textbook on mathematical analysis written by Edmund T. Whittaker and George N. Watson, first published by Cambridge University Press in 1902. The first edition was Whittaker's alone, but later editions were co-authored with Watson.
History
Its first, second, third, and fourth editions were published in 1902, 1915, 1920, and 1927, respectively. Since then, it has been continuously reprinted and is still in print today. A revised, expanded and digitally reset fifth edition, edited by Victor H. Moll, was published in 2021.
The book is notable for being the standard reference and textbook for a generation of Cambridge mathematicians including Littlewood and Godfrey H. Hardy. Mary L. Cartwright studied it as preparation for her final honours on the advice of fellow student Vernon C. Morton, later Professor of Mathematics at Aberystwyth University. But its reach was much further than just the Cambridge school; André Weil in his obituary of the French mathematician Jean Delsarte noted that Delsarte always had a copy on his desk. In 1941, the book was included among a "selected list" of mathematical analysis books for use in universities in an article for that purpose published by American Mathematical Monthly.
Notable features
Some idiosyncratic but interesting problems from an older era of the Cambridge Mathematical Tripos are in the exercises.
The book was one of the earliest to use decimal numbering for its sections, an innovation the authors attribute to Giuseppe Peano.
Contents
Below are the contents of the fourth edition:
Part I. The Process of Analysis
Part II. The Transcendental Functions
Reception
Reviews of the first edition
George B. Mathews, in a 1903 review article published in The Mathematical Gazette opens by saying the book is "sure of a favorable reception" because of its "attractive account of some of the most valuable and interesting results of recent analysis". He notes that Part I deals mainly with infinite series, focusing on power series and Fourier expansions while including the "elements of" complex integration and the theory of residues. Part II, in contrast, has chapters on the gamma function, Legendre functions, the hypergeometric series, Bessel functions, elliptic functions, and mathematical physics.
Arthur S. Hathaway, in another 1903 review published in the Journal of the American Chemical Society, notes that the book centers around complex analysis, but that topics such as infinite series are "considered in all their phases" along with "all those important series and functions" developed by mathematicians such as Joseph Fourier, Friedrich Bessel, Joseph-Louis Lagrange, Adrien-Marie Legendre, Pierre-Simon Laplace, Carl Friedrich Gauss, Niels Henrik Abel, and others in their respective studies of "practice problems". He goes on to say it "is a useful book for those who wish to make use of the most advanced developments of mathematical analysis in theoretical investigations of physical and chemical questions."
In a third review of the first edition, Maxime Bôcher, in a 1904 review published in the Bulletin of the American Mathematical Society notes that while the book falls short of the "rigor" of French, German, and Italian writers, it is a "gratifying sign of progress to find in an English book such an attempt at rigorous treatment as is here made". He notes that important parts of the book were otherwise non-existent in the English language.
See also
Bateman Manuscript Project
References
Further reading
1902 non-fiction books
Cambridge University Press books
Mathematics textbooks
Mathematical analysis
Complex analysis
Books by E. T. Whittaker | A Course of Modern Analysis | Mathematics | 814 |
52,975,808 | https://en.wikipedia.org/wiki/Slosh%20baffle | A slosh baffle is a device used to dampen the adverse effects of liquid slosh in a tank. Slosh baffles have been implemented in a variety of applications including tanker trucks, and liquid rockets, although any moving tank containing liquid may employ them.
Baffle rings
Baffle rings are rigid rings placed within the inside of a tank to retard the flow of liquid between sections. The location and orifice size of the rings yield varying performance for a given application.
See also
Baffle blocks
References
Fluid dynamics | Slosh baffle | Chemistry,Engineering | 108 |
5,847,044 | https://en.wikipedia.org/wiki/Stanford%20Web%20Credibility%20Project | The Stanford Web Credibility Project, which involves assessments of website credibility conducted by the Stanford University Persuasive Technology Lab, is an investigative examination of what leads people to believe in the veracity of content found on the Web. The goal of the project is to enhance website design and to promote further research on the credibility of Web resources.
Origins
The Web has become an important channel for exchanging information and services, resulting in a greater need for methods to ascertain the credibility of websites. In response, since 1998, the Stanford Persuasive Technology Lab (SPTL) has investigated what causes people to believe, or not, what they find online. SPTL provides insight into how computers can be designed to change what people think and do, an area called captology. Directed by experimental psychologist B.J. Fogg, the Stanford team includes social scientists, designers, and technologists who research and design interactive products that motivate and influence their users.
Objectives
The ongoing research of the Stanford Web Credibility Project includes:
Performing quantitative research on Web credibility
Collecting all public information on Web credibility
Acting as a clearinghouse for this information
Facilitating research and discussion about Web credibility
Collaborating with academic and industry research groups
How Do People Evaluate a Web Site's Credibility?
A study by the Stanford Web Credibility Project, How Do People Evaluate a Web Site's Credibility? Results from a Large Study, published in 2002, invited 2,684 "average people" to rate the credibility of websites in ten content areas. The study evaluated the credibility of two live websites randomly assigned from one of ten content categories: e-commerce, entertainment, finance, health, news, nonprofit, opinion or review, search engines, sports, and travel. A total of one hundred sites were assessed.
This study was launched jointly with a parallel, expert-focused project conducted by Sliced Bread Design, LLC. In their study, Experts vs. Online Consumers: A Comparative Credibility Study of Health and Finance Web Sites, fifteen health and finance experts were asked to assess the credibility of the same industry-specific sites as those reviewed by the Stanford PTL consumers. The Sliced Bread Design study revealed that health and finance experts were far less concerned about the surface aspects of these industry-specific types of sites and more concerned about the breadth, depth, and quality of a site's information. Similarly, Consumer Reports WebWatch, which commissioned the study, has the goal to investigate, inform, and improve the credibility of information published on the World Wide Web. Consumer Reports had plans for a similar investigation into whether consumers actually perform the necessary credibility checks while online, and had already conducted a national poll concerning consumer awareness of privacy policies.
The common goals of the three organizations led to a collaborative research effort that may represent the largest web credibility project ever conducted. The project, based on three years of research that included over 4,500 people, enabled the lab to publish Stanford Guidelines for Web Credibility, which established ten guidelines for building the credibility of a website.
Findings
The study found that when people assessed a real website's credibility, they did not use rigorous criteria, a contrast to earlier national survey findings by Consumer Reports WebWatch, A Matter of Trust: What Users Want From Web Sites (April 16, 2002). The data showed that the average consumer paid far more attention to the superficial aspects of a site, such as visual cues, than to its content. For example, nearly half of all consumers (or 46.1%) in the study assessed the credibility of sites based in part on the appeal of the overall visual design of a site, including layout, typography, font size and color schemes.
This reliance on a site's overall visual appeal to gauge credibility occurred more often with some categories of sites than others. Consumer credibility-related comments about visual design issues occurred more frequently with websites dedicated to finance (54.6%), search engines (52.6%), travel (50.5%), and e-commerce (46.2%), and less frequently when assessing health (41.8%), news (39.6%), and nonprofit (39.4%) sites.
"I would like to think that when people go on the Web they're very tough integrators of information, they compare sources, they think really hard," says Fogg, "but the truth of the matter--and I didn't want to find this in the research but it's very clear--is that people do judge a Web site by how it looks. That's the first test of the Web site. And if it doesn't look credible or it doesn't look like what they expect it to be, they go elsewhere. It doesn't get a second test. And it's not so different from other things in life. It's the way we judge automobiles and politicians.
Recommended guidelines
See also
Persuasive technology
Web literacy (Credibility)
External links
credibility.stanford.edu - Stanford Web Credibility Project website
WebCredibility.org (discontinued around 2007-2008, this domain appears to be an alias for the content still available at credibility.stanford.edu) - 'Stanford Guidelines for Web Credibility: How can you boost your web site's credibility? We have compiled 10 guidelines for building the credibility of a website. These guidelines are based on three years of research that included over 4,500 people.', Stanford University
Stanford.edu - 'What's Captology?' The Stanford Persuasive Technology Lab
- 'How Do People Evaluate a Web Site's Credibility? Results from a Large Study (October 29, 2002)
EContentInstitute.org - 'A good first impression: Stanford's Web credibility project reveals that the secret to repeat Web site traffic may be in the first click', Sue Bowness (January/February 2004)
Media studies
Web Credibility Project
Software projects | Stanford Web Credibility Project | Technology,Engineering | 1,200 |
57,655,879 | https://en.wikipedia.org/wiki/VS%20%28nerve%20agent%29 | VS is a nerve agent of the V-series. Its chemical structure is very similar to the VX nerve agent, but the methyl group on the phosphorus atom is replaced by an ethyl group.
See also
VX (nerve agent)
References
V-series nerve agents
Acetylcholinesterase inhibitors
Diisopropylamino compounds
Ethyl esters | VS (nerve agent) | Chemistry | 74 |
419,495 | https://en.wikipedia.org/wiki/Oka%20%28mass%29 | The oka, okka, or oke was an Ottoman measure of mass, equal to 400 dirhems (Ottoman drams). Its value varied, but it was standardized in the late empire as 1.2829 kilograms. 'Oka' is the most usual spelling today; 'oke' was the usual contemporary English spelling; 'okka' is the modern Turkish spelling, and is usually used in academic work about the Ottoman Empire.
In Turkey, the traditional unit is now called the eski okka 'old oka' or kara okka 'black okka'; the yeni okka 'new okka' is the kilogram.
In Greece, the oka was standardized at 1.282 kg and remained in use until traditional units were abolished on March 31, 1953 (the metric system had been adopted in 1876, but the older units remained in use).
In Cyprus, the oka was equal to 1.270058636 kg or 4 onjas, each weighing 100 drams, and it remained in use until 1986, when Cyprus adopted the metric system.
In Egypt, the monetary oka weighed 1.23536 kg. In Tripolitania, it weighed 1.2208 kg, equal to 2½ artals.
The oka was also used as a unit of volume. In Wallachia, it was 1.283 litres of liquid and 1.537 litres of grain (dry measure). In Greece, an oka of oil was 1.280 kg, which would have translated to about 1.40 litres (at 0.916 kg/l).
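As a quick check of the conversion, the following minimal sketch uses only the figures quoted above (the density value is the one given in the text):

```python
oka_of_oil_kg = 1.280            # mass of a Greek oka of oil, as quoted above
olive_oil_kg_per_litre = 0.916   # density figure quoted above

print(oka_of_oil_kg / olive_oil_kg_per_litre)   # ≈ 1.40 litres
```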
References
General references
A.D. Alderson and Fahir İz, The Concise Oxford Turkish Dictionary, 1959
Γ. Μπαμπινιώτης (Babiniotis), Λεξικό της Νέας Ελληνικής Γλώσσας, Athens, 1998
Encyclopædia Britannica Eleventh Edition, 1911
La Grande Encyclopédie
Diran Kélékian, Dictionnaire Turc-Français, Constantinople: Imprimerie Mihran, 1911
OED
Obsolete units of measurement
Units of mass
Units of volume
Ottoman units of measurement | Oka (mass) | Physics,Mathematics | 474 |
34,299,760 | https://en.wikipedia.org/wiki/Parasite%20experiment | In experimental physics, and particularly in high energy and nuclear physics, a parasite experiment or parasitic experiment is an experiment performed using a big particle accelerator or other large facility, without interfering with the scheduled experiments of that facility. This allows the experimenters to proceed without the usual competitive time scheduling procedure. These experiments may be instrument tests or experiments whose scientific interest has not been clearly established.
Further reading
Experimental particle physics | Parasite experiment | Physics | 82 |
65,540,554 | https://en.wikipedia.org/wiki/Benadryl%20challenge | The Benadryl challenge is an internet challenge that emerged in 2020, revolving around the deliberate consumption, excessive use and overdose of the antihistamine medicine diphenhydramine (commonly sold in the United States under the brand name Benadryl), which acts as a deliriant in high doses. The challenge, which reportedly spread via the social media platform TikTok, instructs participants to film themselves consuming large doses of Benadryl and documenting the effect of tripping or hallucinating.
Numerous authorities have advised against the challenge, as deliberate overconsumption of diphenhydramine can lead to adverse effects, including confusion, delirium, psychosis, organ damage, hyperthermia, convulsions, coma, and death. On September 24, 2020, the FDA formally released a statement advising parents and medical practitioners to be aware of the challenge's prevalence and its risks.
The recreational use of diphenhydramine and addiction is well-reported in medical literature, and overdoses are treatable with correct intervention. Its psychoactive effects at high dosages, which are a symptom of anticholinergic poisoning, are also well documented. In severe cases, the overdose of diphenhydramine and other anticholinergic medicines can lead to a phenomenon referred to as an anticholinergic toxidrome, which can affect organ systems throughout the body, including the nervous system and cardiovascular system.
Several participants have been hospitalized as a result of the challenge, including three teenagers admitted to the Cook Children's Medical Center after consuming at least 14 diphenhydramine tablets, and a 15-year-old Oklahoman teen who died from an overdose after attempting to take part. TikTok said it had not seen such "content trend" but proceeded to block the search term to prevent copycats.
Attention towards the challenge was renewed in 2023 when Jacob Stevens, 13, a teenager from Columbus, Ohio, died after six days in intensive care. Stevens had his friends film him as he consumed over a dozen Benadryl tablets, and began convulsing shortly afterwards. Upon admission to an intensive care unit, it was found that he had suffered critical brain damage, and he died following six days of mechanical ventilation. TikTok expressed sympathy for the family and reiterated that this type of content is prohibited on the platform. Hashtags such as "Benadryl" and "BenadrylChallenge" have been disabled, and the challenge does not appear to be widespread. Although searching for "Benadryl" has been blocked since 2020, it can still result in suggestions such as "bena challenge" or "benary changle" and videos related to the original challenge.
See also
Consumption of Tide Pods
Recreational use of dextromethorphan
Milk crate challenge, a risky physical challenge that gained popularity on TikTok in 2021
Blackout challenge
Skullbreaker challenge
Notes
References
Internet memes introduced in 2020
Substance-related disorders
2020s fads and trends
Eating behaviors of humans
TikTok trends | Benadryl challenge | Biology | 624 |
7,280,707 | https://en.wikipedia.org/wiki/Variable-length%20code | In coding theory, a variable-length code is a code which maps source symbols to a variable number of bits. The equivalent concept in computer science is bit string.
Variable-length codes can allow sources to be compressed and decompressed with zero error (lossless data compression) and still be read back symbol by symbol. With the right coding strategy, an independent and identically-distributed source may be compressed almost arbitrarily close to its entropy. This is in contrast to fixed-length coding methods, for which data compression is only possible for large blocks of data, and any compression beyond the logarithm of the total number of possibilities comes with a finite (though perhaps arbitrarily small) probability of failure.
Some examples of well-known variable-length coding strategies are Huffman coding, Lempel–Ziv coding, arithmetic coding, and context-adaptive variable-length coding.
Codes and their extensions
The extension of a code is the mapping of finite length source sequences to finite length bit strings, that is obtained by concatenating for each symbol of the source sequence the corresponding codeword produced by the original code.
Using terms from formal language theory, the precise mathematical definition is as follows: Let S and T be two finite sets, called the source and target alphabets, respectively. A code C : S → T* is a total function mapping each symbol from S to a sequence of symbols over T, and the extension of C to a homomorphism of S* into T*, which naturally maps each sequence of source symbols to a sequence of target symbols, is referred to as its extension.
Classes of variable-length codes
Variable-length codes can be strictly nested in order of decreasing generality as non-singular codes, uniquely decodable codes, and prefix codes. Prefix codes are always uniquely decodable, and these in turn are always non-singular:
Non-singular codes
A code is non-singular if each source symbol is mapped to a different non-empty bit string; that is, the mapping from source symbols to bit strings is injective.
For example, a mapping in which both "a" and "b" map to the same bit string "0" is not non-singular; any extension of this mapping will generate a lossy (non-lossless) coding. Such singular coding may still be useful when some loss of information is acceptable (for example, when such code is used in audio or video compression, where a lossy coding becomes equivalent to source quantization).
However, the mapping in which a, b, c, d, and e are encoded as 1, 011, 01110, 1110, and 10011, respectively (revisited in the section on uniquely decodable codes below), is non-singular; its extension will generate a lossless coding, which will be useful for general data transmission (but this feature is not always required). It is not necessary for the non-singular code to be more compact than the source (and in many applications, a larger code is useful, for example as a way to detect or recover from encoding or transmission errors, or in security applications to protect a source from undetectable tampering).
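Non-singularity is simply injectivity of the symbol-to-codeword map, so it can be checked directly. The sketch below (illustrative only) uses the two mappings just described; the third entry of the singular mapping is an added assumption so that the example has more than two symbols.

```python
# Non-singularity check: no two source symbols may share a codeword,
# and every codeword must be a non-empty bit string.
def is_non_singular(code: dict) -> bool:
    codewords = list(code.values())
    return all(codewords) and len(set(codewords)) == len(codewords)

# "a" and "b" collide on "0" as described above; the entry for "c" is illustrative.
singular_code = {"a": "0", "b": "0", "c": "1"}
# The non-singular mapping revisited in the next section.
non_singular_code = {"a": "1", "b": "011", "c": "01110", "d": "1110", "e": "10011"}

print(is_non_singular(singular_code))      # False
print(is_non_singular(non_singular_code))  # True
```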
Uniquely decodable codes
A code is uniquely decodable if its extension is non-singular. Whether a given code is uniquely decodable can be decided with the Sardinas–Patterson algorithm.
A mapping such as one encoding a, b, and c as 0, 01, and 011, respectively, is uniquely decodable (this can be demonstrated by looking at the follow-set after each target bit string in the map, because each bit string is terminated as soon as we see a 0 bit, which cannot follow any existing code to create a longer valid code in the map but unambiguously starts a new code).
Consider again the code from the previous section. This code is not uniquely decodable, since the string 011101110011 can be interpreted as the sequence of codewords 01110 – 1110 – 011, but also as the sequence of codewords 011 – 1 – 011 – 10011. Two possible decodings of this encoded string are thus given by cdb and babe. However, such a code is useful when the set of all possible source symbols is completely known and finite, or when there are restrictions (such as a formal syntax) that determine if source elements of this extension are acceptable. Such restrictions permit the decoding of the original message by checking which of the possible source symbols mapped to the same symbol are valid under those restrictions.
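The ambiguity above can also be detected mechanically with the Sardinas–Patterson test. The sketch below is one standard way to phrase that test; the symbol-to-codeword assignment is inferred from the two decodings quoted above ("cdb" and "babe") rather than stated in full.

```python
def residuals(A, B):
    """Non-empty w such that a + w = b for some a in A and b in B."""
    return {b[len(a):] for a in A for b in B
            if b.startswith(a) and len(b) > len(a)}

def is_uniquely_decodable(codewords) -> bool:
    """Sardinas-Patterson test: track dangling suffixes until a codeword
    reappears (not uniquely decodable) or the suffix sets repeat or empty out."""
    C = set(codewords)
    dangling = residuals(C, C)
    seen = set()
    while dangling:
        if dangling & C:                 # a dangling suffix is itself a codeword
            return False
        if frozenset(dangling) in seen:  # cycle without a collision: decodable
            return True
        seen.add(frozenset(dangling))
        dangling = residuals(C, dangling) | residuals(dangling, C)
    return True

# Codewords inferred from the decodings "cdb" and "babe" quoted above.
code = {"a": "1", "b": "011", "c": "01110", "d": "1110", "e": "10011"}
print(is_uniquely_decodable(code.values()))               # False
print(is_uniquely_decodable(["0", "10", "110", "111"]))   # True (a prefix code)
```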
Prefix codes
A code is a prefix code if no target bit string in the mapping is a prefix of the target bit string of a different source symbol in the same mapping. This means that symbols can be decoded instantaneously after their entire codeword is received. Other commonly used names for this concept are prefix-free code, instantaneous code, or context-free code.
The example mapping above is not a prefix code because we do not know after reading the bit string "0" whether it encodes an "a" source symbol, or if it is the prefix of the encodings of the "b" or "c" symbols.
An example of a prefix code is shown below, in which the source symbols a, b, c, and d are encoded as 0, 10, 110, and 111, respectively.
Example of encoding and decoding:
aabacdab → 00100110111010 → |0|0|10|0|110|111|0|10| → aabacdab
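A minimal sketch of instantaneous decoding with this code follows: because no codeword is a prefix of another, the decoder can emit a symbol the moment its codeword completes, with no look-ahead.

```python
PREFIX_CODE = {"a": "0", "b": "10", "c": "110", "d": "111"}
DECODE = {bits: sym for sym, bits in PREFIX_CODE.items()}

def encode(text: str) -> str:
    return "".join(PREFIX_CODE[ch] for ch in text)

def decode(bits: str) -> str:
    out, buffer = [], ""
    for bit in bits:
        buffer += bit
        if buffer in DECODE:      # a complete codeword: decode it immediately
            out.append(DECODE[buffer])
            buffer = ""
    if buffer:
        raise ValueError("input ends inside a codeword")
    return "".join(out)

encoded = encode("aabacdab")
print(encoded)           # 00100110111010
print(decode(encoded))   # aabacdab
```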
A special case of prefix codes is the class of block codes, in which all codewords have the same length. Block codes are not very useful in the context of source coding, but often serve as forward error correction in the context of channel coding.
Another special case is formed by LEB128 and variable-length quantity (VLQ) codes, which encode arbitrarily large integers as a sequence of octets; i.e., every codeword is a multiple of 8 bits.
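For illustration, the sketch below implements unsigned LEB128, one common octet-aligned variable-length encoding; it is a simplified reference version rather than the implementation of any particular library.

```python
def leb128_encode(n: int) -> bytes:
    """Unsigned LEB128: 7 payload bits per byte; the high bit of every byte
    except the last is set to signal that more bytes follow."""
    if n < 0:
        raise ValueError("unsigned LEB128 only")
    out = bytearray()
    while True:
        byte, n = n & 0x7F, n >> 7
        out.append(byte | 0x80 if n else byte)
        if not n:
            return bytes(out)

def leb128_decode(data: bytes) -> int:
    result, shift = 0, 0
    for byte in data:
        result |= (byte & 0x7F) << shift
        shift += 7
        if not byte & 0x80:        # continuation bit clear: last byte
            break
    return result

print(leb128_encode(624485).hex())              # e58e26
print(leb128_decode(bytes.fromhex("e58e26")))   # 624485
```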
Advantages
The advantage of a variable-length code is that unlikely source symbols can be assigned longer codewords and likely source symbols can be assigned shorter codewords, thus giving a low expected codeword length. For the above example, if the probabilities of (a, b, c, d) were (1/2, 1/4, 1/8, 1/8), the expected number of bits used to represent a source symbol using the code above would be:
1 × 1/2 + 2 × 1/4 + 3 × 1/8 + 3 × 1/8 = 1.75 bits.
As the entropy of this source is 1.75 bits per symbol, this code compresses the source as much as possible so that the source can be recovered with zero error.
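This can be checked directly. The sketch below uses the codeword lengths of the prefix code above and the probabilities assumed above, and confirms that the expected codeword length meets the entropy bound exactly.

```python
from math import log2

lengths = {"a": 1, "b": 2, "c": 3, "d": 3}          # codeword lengths from above
probs = {"a": 1/2, "b": 1/4, "c": 1/8, "d": 1/8}    # assumed symbol probabilities

expected_length = sum(probs[s] * lengths[s] for s in probs)
entropy = -sum(p * log2(p) for p in probs.values())

print(expected_length)   # 1.75 bits per symbol
print(entropy)           # 1.75 bits per symbol: the code meets the entropy bound
```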
See also
Golomb code
Kruskal count
Variable-length instruction sets in computing
References
Further reading
Coding theory
Entropy coding
Data compression | Variable-length code | Mathematics | 1,295 |
2,475,030 | https://en.wikipedia.org/wiki/Tianeptine | Tianeptine, sold under the brand names Stablon, Tatinol, and Coaxil among others, is an atypical tricyclic antidepressant which is used mainly in the treatment of major depressive disorder, although it may also be used to treat anxiety, asthma, and irritable bowel syndrome.
Tianeptine has antidepressant and anxiolytic effects with a relative lack of sedative, anticholinergic, and cardiovascular side effects. It has been found to act as an atypical agonist of the μ-opioid receptor with clinically negligible effects on the δ- and κ-opioid receptors. This may explain part of its antidepressant and anxiolytic effects; however, it is thought that tianeptine also modulates glutamate receptors, and this may also explain tianeptine's antidepressant/anxiolytic effects.
Tianeptine was discovered and patented by the French Society of Medical Research in the 1960s. It was introduced for medical use in France in 1983. Currently, tianeptine is approved in France and manufactured and marketed by Laboratories Servier SA; it is also marketed in a number of other European countries under the trade name Coaxil as well as in Asia (including Singapore) and Latin America as Stablon and Tatinol but it is not available in Australia, Canada, New Zealand, or the United Kingdom.
In the US, it is an unregulated drug sold under several names and some of these products have been found to be adulterated with other recreational drugs. It is commonly known by the nickname 'gas station heroin'.
Medical uses
Depression and anxiety
Tianeptine shows efficacy against serious depressive episodes (major depression), comparable to amitriptyline, imipramine and fluoxetine, but with significantly fewer side effects. It was shown to be more effective than maprotiline in a group of people with co-existing depression and anxiety. Tianeptine also displays significant anxiolytic properties and is useful in treating a spectrum of anxiety disorders including panic disorder, as evidenced by a study in which those administered 35% CO2 gas (carbogen) on paroxetine or tianeptine therapy showed equivalent panic-blocking effects. Like many antidepressants (including bupropion, the selective serotonin reuptake inhibitors, the serotonin-norepinephrine reuptake inhibitors, moclobemide and numerous others) it may also have a beneficial effect on cognition in people with depression-induced cognitive dysfunction.
A 2005 study in Egypt showed tianeptine to be effective in men with depression and erectile dysfunction.
Tianeptine has been found to be effective in depression, in people with Parkinson's disease, and with post-traumatic stress disorder for which it was as safe and effective as fluoxetine and moclobemide.
Other uses
A clinical trial comparing its efficacy and tolerability with amitriptyline in the treatment of irritable bowel syndrome showed that tianeptine was at least as effective as amitriptyline and produced fewer prominent adverse effects, such as dry mouth and constipation.
Tianeptine has been reported to be very effective for asthma. In August 1998, Dr. Fuad Lechin and colleagues at the Central University of Venezuela Institute of Experimental Medicine in Caracas published the results of a 52-week randomized controlled trial of asthmatic children; the children in the groups who received tianeptine had a sharp decrease in clinical rating and increased lung function. Two years earlier, they had found a close, positive association between free serotonin in plasma and severity of asthma in symptomatic persons. As tianeptine was the only agent known to both reduce free serotonin in plasma and enhance uptake in platelets, they decided to use it to see if reducing free serotonin levels in plasma would help. By November 2004, there had been two double-blind placebo-controlled crossover trials and an under-25,000 person open-label study lasting over seven years, all showing effectiveness.
Tianeptine also has anticonvulsant and analgesic effects, and a clinical trial in Spain that ended in January 2007 has shown that tianeptine is effective in treating pain due to fibromyalgia. Tianeptine has been shown to have efficacy with minimal side effects in the treatment of attention-deficit hyperactivity disorder.
Contraindications
Known contraindications include the following:
Hypersensitivity to tianeptine or any of the tablet's excipients.
Side effects
Compared to other tricyclic antidepressants, it produces significantly fewer cardiovascular, anticholinergic (like dry mouth or constipation), sedative and appetite-stimulating effects. Unlike other tricyclic antidepressants, tianeptine does not affect heart function.
μ-Opioid receptor agonists can sometimes induce euphoria, as does tianeptine, occasionally, at high doses well above the normal therapeutic range (see below). Tianeptine can also cause severe withdrawal symptoms after prolonged use at high doses, which should prompt extreme caution.
By frequency
Sources:
Common (>1% frequency)
Headache (up to 18%)
Dizziness (up to 10%)
Insomnia/nightmares (up to 20%)
Drowsiness (up to 10%)
Dry mouth (up to 20%)
Constipation (up to 15%)
Nausea
Abdominal pain
Weight gain (~3%)
Agitation
Anxiety/irritability
Uncommon (0.1–1% frequency)
Bitter taste
Flatulence
Gastralgia
Blurred vision
Muscle aches
Premature ventricular contractions
Micturition disturbances
Palpitations
Orthostatic hypotension
Hot flushes
Tremor
Rare (<0.1% frequency)
Hepatitis
Hypomania
Euphoria
ECG changes
Pruritus/allergic-type skin reactions
Protracted muscle aches
General fatigue
Pharmacology
Pharmacodynamics
Atypical μ-opioid receptor agonist
In 2014, tianeptine was found to be a μ-opioid receptor (MOR) full agonist using human proteins. It was also found to act as a full agonist of the δ-opioid receptor (DOR), although with approximately 200-fold lower potency. The same researchers subsequently found that the MOR is required for the acute and chronic antidepressant-like behavioral effects of tianeptine in mice and that its primary metabolite had similar activity as a MOR agonist but with a much longer elimination half-life. Moreover, in mice, although tianeptine produced other opioid-like behavioral effects such as analgesia and reward, it did not result in tolerance or withdrawal. The authors suggested that tianeptine may be acting as a biased agonist of the MOR and that this may be responsible for its atypical profile as a MOR agonist. However, there are reports that suggest that withdrawal effects resembling those of other typical opioid drugs (including but not limited to depression, insomnia, and cold/flu-like symptoms) do manifest following prolonged use at dosages far beyond the medical range. In addition to its therapeutic effects, activation of the MOR is likely to also be responsible for the abuse potential of tianeptine at high doses that are well above the normal therapeutic range and efficacy threshold.
In rats, when co-administered with morphine, tianeptine prevents morphine-induced respiratory depression without impairing analgesia. In humans, however, tianeptine was found to increase respiratory depression when administered in conjunction with the potent opioid remifentanil.
Glutamatergic, neurotrophic, and neuroplastic modulation
Research suggests that tianeptine produces its antidepressant effects through indirect alteration and inhibition of glutamate receptor activity (i.e., AMPA receptors and NMDA receptors) and release of brain-derived neurotrophic factor (BDNF), in turn affecting neural plasticity. Some researchers hypothesize that tianeptine has a protective effect against stress-induced neuronal remodeling. There is also action on the NMDA and AMPA receptors. In animal models, tianeptine inhibits the pathological stress-induced changes in glutamatergic neurotransmission in the amygdala and hippocampus. It may also facilitate signal transduction at the CA3 commissural associational synapse by altering the phosphorylation state of glutamate receptors. With the discovery of the rapid and novel antidepressant effects of drugs such as ketamine, many believe the efficacy of antidepressants is related to promotion of synaptic plasticity. This may be achieved by regulating the excitatory amino acid systems that are responsible for changes in the strength of synaptic connections as well as enhancing BDNF expression, although these findings are based largely on preclinical studies.
Serotonin reuptake enhancer
Tianeptine is no longer labelled a selective serotonin reuptake enhancer (SSRE) antidepressant.
Tianeptine had been found to bind to the same allosteric site on the serotonin transporter (SERT) as conventional TCAs. However, whereas conventional TCAs inhibit serotonin reuptake by the SERT, tianeptine appeared to enhance it. This seems to be because of the unique C3 amino heptanoic acid side chain of tianeptine, which, in contrast to other TCAs, is thought to lock the SERT in a conformation that increases affinity for and reuptake (Vmax) of serotonin. As such, tianeptine was thought to act as a positive allosteric modulator of the SERT, or as a "serotonin reuptake enhancer".
Although tianeptine was originally found to have no effect in vitro on monoamine reuptake, release, or receptor binding, upon acute and repeated administration, tianeptine decreased the extracellular levels of serotonin in rat brain without a decrease in serotonin release, leading to a theory of tianeptine enhancing serotonin reuptake. The (−)-enantiomer is more active in this sense than the (+)-enantiomer. However, more recent studies found that long-term administration of tianeptine does not elicit any marked alterations (neither increases nor decreases) in extracellular levels of serotonin in rats. However, coadministration of tianeptine and the selective serotonin reuptake inhibitor fluoxetine inhibited the effect of tianeptine on long-term potentiation in hippocampal CA1 area. This is considered an argument for the opposite effects of tianeptine and fluoxetine on serotonin uptake, although it has been shown that fluoxetine can be partially substituted for tianeptine in animal studies. In any case, the collective research suggests that direct modulation of the serotonin system is unlikely to be the mechanism of action underlying the antidepressant effects of tianeptine.
Other actions
Tianeptine modestly enhances the mesolimbic release of dopamine and potentiates CNS D2 and D3 receptors. Tianeptine has no affinity for the dopamine transporter or the dopamine receptors. CREB-TF (CREB, cAMP response element-binding protein) is a cellular transcription factor. It binds to certain DNA sequences called cAMP response elements (CRE), thereby increasing or decreasing the transcription of the genes. CREB has a well-documented role in neuronal plasticity and long-term memory formation in the brain. Cocaine- and amphetamine-regulated transcript, also known as CART, is a neuropeptide protein that in humans is encoded by the CARTPT gene. CART appears to have roles in reward, feeding, stress, and it has the functional properties of an endogenous psychostimulant. Taking into account that CART production is upregulated by CREB, it could be hypothesized that due to tianeptine's central role in BDNF and neuronal plasticity, this CREB may be the transcription cascade through which this drug enhances mesolimbic release of dopamine.
Research indicates possible anticonvulsant (anti-seizure) and analgesic (painkilling) activity of tianeptine via downstream modulation of adenosine A1 receptors (as the effects could be experimentally blocked by antagonists of this receptor). Tianeptine is also a weak histone deacetylase inhibitor, and analogs with increased potency and selectivity have been developed.
Tianeptine has been shown to be a high-efficacy agonist of PPAR-delta, a nuclear receptor.
Pharmacokinetics
The bioavailability of tianeptine is approximately 99%. Its plasma protein binding is about 95%. The metabolism of tianeptine is hepatic, via β-oxidation. CYP enzymes are not involved, which limits the potential for drug-drug interactions. Maximal concentration is reached in about an hour and the elimination half-life is 2.5 to 3 hours. The elimination half-life has been found to be increased to 4 to 9 hours in the elderly. Tianeptine is usually packaged as a sodium salt but can also be found as tianeptine sulfate, a slower-releasing formulation patented by Janssen in 2012. In 2022 Tonix Pharmaceuticals received permission from the US FDA to conduct phase II clinical trials on tianeptine hemioxalate extended-release tablets designed for once-daily use. The project was discontinued in late 2023 because of disappointing results in clinical trials.
Tianeptine has two active metabolites, MC5 (a pentanoic acid derivative of the parent compound) and MC3 (a propionic acid derivative). MC5 has a longer elimination half-life of approximately 7.6 hours, and takes about a week to reach steady-state concentration under daily-dosing. MC5 is a mu-opioid agonist but not delta-opioid agonist, with EC50 at the mu-opioid receptor of 0.545 μM (vs 0.194 μM for tianeptine). MC3 is a very weak mu-opioid agonist, with an EC50 of 16 μM. Tianeptine is excreted 65% in the urine and 15% in feces.
Chemistry
In terms of chemical structure, it is similar to tricyclic antidepressants (TCAs), but it has significantly different pharmacology and important structural differences, so it is not usually grouped with them.
Analogues
Although several related compounds are disclosed in the original patent, no activity data are provided and it is unclear whether these share tianeptine's unique pharmacological effects. More recent structure-activity relationship studies have since been conducted, providing some further insight on μ-opioid, δ-opioid, and pharmacokinetic activity. Derivatives where the aromatic chlorine substituent is replaced by bromine, iodine or methylthio, and/or the heptanoic acid tail is varied in length or replaced with other groups such as 3-methoxypropyl, show similar or increased opioid receptor activity relative to tianeptine itself. Amineptine, the most closely related drug to have been widely studied, is a dopamine reuptake inhibitor with no significant effect on serotonin levels, nor opioid agonist activity. Tianeptinaline, an analog of tianeptine, is a notable class I HDAC inhibitor.
History
Tianeptine was introduced for medical use in France under the brand name Stablon in 1983.
Society and culture
Approval and brand names
Brand names include:
Coaxil (BG, CR, CZ, EE, HU, LT, LV, PL, RO, RU, SK, UA)
Salymbra (EE)
Stablon (AR, AT, BR, FR, HK, IN, ID, MY, MX, PK, PT, SG, SK, TH, TT, TR, VE)
Tatinol (CN)
Tianeurax (DE)
Tynept (IN)
Zinosal (ES)
Tianesal (PL)
Development
Under the code names JNJ-39823277 and TPI-1062, tianeptine was previously under development for the treatment of major depressive disorder in the United States and Belgium. Phase I clinical trials were completed in Belgium and the United States in May and June 2009, respectively. For unclear reasons development of tianeptine was discontinued in both countries in January 2012. In October 2023, Tonix Pharmaceuticals announced that it had discontinued its development of tianeptine as a monotherapy for major depressive disorder after disappointing phase-2 clinical trial results. An ongoing clinical trial, sponsored by the New York Psychiatric Institute, is examining tianeptine's use in treatment-resistant depression.
U.S. National Poison Data System data on tianeptine showed a nationwide increase in tianeptine exposure calls and calls related to abuse and misuse during 2014–2017.
Recreational use
As a μ-opioid agonist, tianeptine in large doses has high abuse potential. In 2001, Singapore's Ministry of Health restricted tianeptine prescribing to psychiatrists due to its recreational potential.
Between 1989 and 2004, 141 cases of recreational use were identified in France, corresponding to an incidence of 1 to 3 cases per 1,000 persons treated with tianeptine; a further 45 cases were identified between 2006 and 2011. According to Servier, stopping treatment with tianeptine is difficult, due to the possibility of withdrawal symptoms. The severity of the withdrawal is dependent on the daily dose, with high doses being extremely difficult to quit. An official DEA statement notes that withdrawal symptoms in humans typically include agitation, nausea, vomiting, tachycardia, hypertension, diarrhea, tremor, and diaphoresis, similar to other opioid drugs.
In 2007, according to French Health Products Safety Agency, tianeptine's manufacturer Servier agreed to modify the drug's label, following problems with dependency.
Tianeptine has been intravenously injected by drug users in Russia. This method of administration reportedly causes an opioid-like effect and is sometimes used in an attempt to lessen opioid withdrawal symptoms. Tianeptine tablets contain silica and do not dissolve completely. Often the solution is not filtered well, so particles in the injected fluid block capillaries, leading to thrombosis and then severe necrosis. For this reason, tianeptine is a controlled substance in Russia (see Legality below).
The Centers for Disease Control and Prevention (CDC) has expressed concern that tianeptine may be an "emerging public health risk", citing an increase in exposure-related calls to poison control centers in the United States. Sold retail as a dietary supplement and touted as a mood-booster and an aid for concentration, it is colloquially known as "gas-station heroin". In the US, it is an unregulated drug sold under several product names and has been found to be adulterated with synthetic cannabinoid receptor agonists (SCRAs) or other drugs.
A literature review conducted in 2018 found 25 articles involving 65 patients with tianeptine abuse or dependence. Limited data showed that a majority of patients were male and that age ranged from 19 to 67. Routes of intake included oral, intravenous, and insufflation entry. In the 15 cases of overdose, 8 combined ingestion with at least one other substance, of which 3 resulted in death. Six additional deaths are reported involving tianeptine (making it 9 in total). In this report, the amount of tianeptine used ranged from 50 mg/day to 10 g/day orally.
Legality
In 2003, Bahrain classified tianeptine a controlled substance due to increasing reports of misuse and recreational use.
In Russia, tianeptine (sold under the brand name "Coaxil") is a schedule III controlled substance in the same list as the majority of benzodiazepines and barbiturates.
On March 13, 2020, with a decree approved by the Minister of Health, Italy became the first European country to outlaw tianeptine considering it a Class I controlled substance.
United States
In the US, tianeptine is not considered by the Drug Enforcement Administration as a controlled substance or analogue thereof. However, its use in dietary supplements and food is unlawful. The Food and Drug Administration (FDA) has issued warnings, as recently as January 2024, about the dangers of recreational tianeptine use and the risks posed by adulterated dietary supplements containing undeclared tianeptine.
On 6 April 2018 Michigan became the first US state to outlaw tianeptine sodium, classifying it as a schedule II controlled substance. The scheduling of tianeptine sodium is effective 4 July 2018.
On March 15, 2021, Alabama outlawed tianeptine, initially classifying it as a schedule II controlled substance. It was later reclassified as a schedule I controlled substance on November 14, 2021.
On July 1, 2022, Tennessee outlawed tianeptine, classifying it as a schedule II controlled substance; the law also covers "any salt, sulfate, free acid, or other preparation of tianeptine, and any salt, sulfate, free acid, compound, derivative, precursor, or preparation thereof that is substantially chemically equivalent or identical with tianeptine".
On December 22, 2022, Ohio outlawed tianeptine, classifying it as a schedule I controlled substance with Ohio Governor Mike DeWine referencing the widespread availability of the chemical there as "gas-station heroin".
On March 23, 2023, Kentucky outlawed tianeptine, classifying it as a schedule I substance by an order of the Governor of Kentucky.
On September 20, 2023, Florida outlawed tianeptine, classifying it as a schedule I substance by an administrative edict issued by the Florida Attorney General.
See also
Amineptine
List of antidepressants
List of investigational anxiolytics
Tianeptine/naloxone
References
External links
Tianeptine – David Pearce – The Good Drug Guide
Amines
Anxiolytics
Chloroarenes
Euphoriants
Laboratoires Servier
Mu-opioid receptor agonists
Synthetic opioids
Tricyclic antidepressants
Sultams
Sulfur heterocycles
Nitrogen heterocycles
Drugs with unknown mechanisms of action | Tianeptine | Chemistry | 4,761 |
52,747,022 | https://en.wikipedia.org/wiki/List%20of%20corticosteroid%20esters | This is a list of corticosteroid esters, including esters of steroidal glucocorticoids and mineralocorticoids.
Esters of natural corticosteroids
Desoxycortone esters
Desoxycortone acetate (deoxycortone acetate; desoxycorticosterone acetate)
Desoxycortone cypionate (deoxycortone cypionate; desoxycorticosterone cypionate)
Desoxycortone enanthate (deoxycortone enanthate; deoxycorticosterone enanthate)
Desoxycortone glucoside (deoxycortone glucoside; deoxycorticosterone glucoside)
Desoxycortone pivalate (deoxycortone pivalate; deoxycorticosterone pivalate)
Hydrocortisone esters
Benzodrocortisone (hydrocortisone 17-benzoate)
Hydrocortamate (hydrocortisone 21-(diethylamino)acetate)
Hydrocortisone aceponate (hydrocortisone 21-acetate 17α-propionate)
Hydrocortisone acetate
Hydrocortisone bendazac
Hydrocortisone buteprate (hydrocortisone 17α-butyrate 21-propionate)
Hydrocortisone butyrate (hydrocortisone 17α-butyrate)
Hydrocortisone 21-butyrate
Hydrocortisone cypionate (hydrocortisone cyclopentanepropionate)
Hydrocortisone phosphate
Hydrocortisone succinate (hydrocortisone hemisuccinate)
Hydrocortisone tebutate
Hydrocortisone valerate
Hydrocortisone xanthogenic acid
Esters of other natural corticosteroids
11-Dehydrocorticosterone acetate
Cortifen (cortodoxone chlorphenacyl ester)
Cortisone acetate
Corticosterone acetate
Corticosterone benzoate
Cortodoxone acetate
Esters of synthetic corticosteroids
Beclometasone esters
Beclometasone dipropionate (beclomethasone dipropionate)
Beclometasone salicylate
Beclometasone valeroacetate
Betamethasone esters
Betamethasone acetate
Betamethasone acibutate (betamethasone 21-acetate 17α-isobutyrate)
Betamethasone adamantoate
Betamethasone benzoate
Betamethasone dipropionate
Betamethasone divalerate
Betamethasone phosphate (betamethasone phosphate)
Betamethasone succinate
Betamethasone valerate
Betamethasone valeroacetate (betamethasone 21-acetate 17α-valerate)
Cortobenzolone (betamethasone salicylate)
Clocortolone esters
Clocortolone acetate
Clocortolone caproate
Clocortolone pivalate
Dexamethasone esters
Dexamethasone acefurate
Dexamethasone acetate (flumeprednisolone)
Dexamethasone cipecilate
Dexamethasone diethylaminoacetate
Dexamethasone dipropionate
Dexamethasone isonicotinate
Dexamethasone linoleate
Dexamethasone metasulphobenzoate
Dexamethasone palmitate
Dexamethasone phosphate
Dexamethasone pivalate
Dexamethasone succinate
Dexamethasone sulfate
Dexamethasone tebutate (dexamethasone tert-butylacetate)
Dexamethasone troxundate
Dexamethasone valerate
Fluocinolone acetonide esters
Ciprocinonide (fluocinolone acetonide cyclopropylcarboxylate)
Fluocinonide (fluocinolone acetonide 21-acetate)
Procinonide (fluocinolone acetonide propionate)
Fluocortolone esters
Fluocortin (fluocortolone-21-carboxylate)
Fluocortin butyl (fluocortolone-21-carboxylate 21-butylate)
Fluocortolone caproate
Fluocortolone pivalate
Fluprednisolone esters
Fluprednisolone acetate
Fluprednisolone succinate (fluprednisolone hemisuccinate)
Fluprednisolone valerate
Methylprednisolone esters
Methylprednisolone aceponate
Methylprednisolone acetate
Methylprednisolone cyclopentylpropionate
Methylprednisolone phosphate
Methylprednisolone succinate (methylprednisolone hemisuccinate)
Methylprednisolone suleptanate
Prednisolone esters
Prednazate (prednisolone succinate and perphenazine compound)
Prednazoline (prednisolone phosphate and fenoxazoline compound)
Prednicarbate (prednisolone 17-(ethyl carbonate) 21-propionate)
Prednimustine (prednisolone chlorambucil ester)
Prednisolamate (prednisolone diethylaminoacetate)
Prednisolone acetate
Prednisolone hexanoate
Prednisolone metasulphobenzoate (prednisolone 21-(3-sulfobenzoate))
Prednisolone palmitate
Prednisolone phosphate
Prednisolone piperidinoacetate
Prednisolone pivalate
Prednisolone steaglate (prednisolone stearoyl-glycolate)
Prednisolone stearoylglycolate
Prednisolone succinate (prednisolone hemisuccinate)
Prednisolone sulfate
Prednisolone tebutate (prednisolone tert-butylacetate)
Prednisolone tetrahydrophthalate
Prednisolone valerate
Prednisolone valeroacetate
Prednisone esters
Prednisone acetate
Prednisone palmitate
Prednisone succinate
Tixocortol esters
Butixocort (tixocortol butyrate)
Butixocort propionate (tixocortol butyrate propionate)
Tixocortol pivalate
Triamcinolone acetonide esters
Flupamesone (triamcinolone acetonide metembonate)
Triamcinolone acetonide phosphate
Triamcinolone acetonide succinate (triamcinolone acetonide hemisuccinate)
Triamcinolone benetonide (triamcinolone acetonide 21-(benzoyl-β-aminoisobutyrate))
Triamcinolone furetonide (triamcinolone acetonide 21-(2-benzofurancarboxylate))
Triamcinolone hexacetonide (triamcinolone acetonide 21-(tert-butylacetate))
Esters of other synthetic corticosteroids
Δ7-Prednisolone 21-acetate
Alclometasone dipropionate
Amcinonide (triamcinolone acetate cyclopentanonide)
Chloroprednisone acetate
Ciclometasone (a corticosteroid 21-[4-[(acetylamino)methyl]cyclohexyl]carboxylate ester)
Clobetasol propionate
Clobetasone butyrate
Cloprednol acetate
Cormetasone acetate
Cortivazol (a corticosteroid 21-acetate ester)
Cloticasone propionate
Deflazacort (a corticosteroid 21-acetate ester)
Deprodone propionate
Desonide phosphate
Desonide pivalate
Dichlorisone acetate
Dichlorisone diacetate
Diflorasone diacetate
Diflucortolone pivalate
Diflucortolone valerate
Difluprednate (difluoroprednisolone butyrate acetate)
Dimesone acetate
Drocinonide phosphate
Etiprednol dicloacetate
Fluazacort (a corticosteroid 21-acetate ester)
Fludrocortisone acetate
Flumetasone acetate
Flumetasone pivalate
Flunisolide acetate
Fluorometholone acetate
Fluperolone acetate
Fluprednidene acetate
Fluticasone furoate
Fluticasone propionate
Formocortal (a corticosteroid 21-acetate ester)
Halopredone acetate (halopredone diacetate)
Icometasone enbutate
Isoflupredone acetate
Locicortolone dicibate
Loteprednol etabonate
Meclorisone dibutyrate
Meprednisone acetate
Meprednisone succinate (meprednisone hemisuccinate)
Mometasone furoate
Nicocortonide (a corticosteroid 21-isonicotinate ester)
Nicocortonide acetate
Paramethasone acetate
Paramethasone phosphate
Prebediolone acetate
Prednylidene diethylaminoacetate
Rofleponide palmitate
Ticabesone propionate
Timobesone acetate
Triamcinolone aminobenzal benzamidoisobutyrate
Triamcinolone diacetate
Ulobetasol propionate (halobetasol propionate)
See also
Steroid ester
List of corticosteroid cyclic ketals
List of corticosteroids
List of steroid esters
References
Corticosteroids
Steroid esters
Glucocorticoids
Prodrugs | List of corticosteroid esters | Chemistry | 2,224 |
9,142,511 | https://en.wikipedia.org/wiki/Archaeamphora | Archaeamphora longicervia is a fossil plant species, the only member of the hypothetical genus Archaeamphora. Fossil material assigned to this taxon originates from the Yixian Formation of northeastern China, dated to the Early Cretaceous (around 125 million years ago).
The species was originally described as a pitcher plant with close affinities to extant members of the family Sarraceniaceae. This would make it the earliest known carnivorous plant and the only known fossil record of Sarraceniaceae, or the New World pitcher plant family. Archaeamphora is also one of the three oldest known genera of angiosperms (flowering plants). Li (2005) wrote that "the existence of a so highly derived Angiosperm in the Early Cretaceous suggests that Angiosperms should have originated much earlier, maybe back to 280 mya as the molecular clock studies suggested".
Subsequent authors have questioned the identification of Archaeamphora as a pitcher plant and a taxon of angiosperm at all. The fossils more probably represent leaves (needles) of the coniferous Liaoningocladus boii deformed by insect galls.
Etymology
The generic name Archaeamphora is derived from the Greek αρχαίος, ("ancient"; combining form in Latin: -), and ἀμφορεύς, ("pitcher"). The specific epithet longicervia is derived from the Latin ("long") and ("with a neck"), in reference to the characteristic constriction in the pitcher-like structures of this species.
Fossil material
All known fossil material of A. longicervia originates from the Jianshangou Formation in Beipiao, western Liaoning, China. These Early Cretaceous beds constitute the lower part of the Yixian Formation, which is dated at 124.6 million years old. Nine specimens of A. longicervia have been found, including holotype CBO0220 and paratype CBO0754.
Description
Archaeamphora longicervia was supposed to be a herbaceous plant growing to around in height. The stem, at least long by wide, bore distinctive vertical ridges and grooves. The pitcher-like structures were ascidiate in form and long. Mature pitchers and underdeveloped pitchers or phyllodia-like leaves were arranged spirally around the stem. Pitchers consisted of a tubular base, expanded middle section, constriction around the mouth, and a vertical, spoon-shaped lid. A single wing ran down the adaxial side of each pitcher. Three to five parallel major veins were present on the pitchers, along with a few intercostal veins and numerous small veinlets.
Two unusual bag-like structures were present on each pitcher, one on either side of the central wing. Similar but semi-circular structures were found on the margin of the lid. These structures exhibited strong yellow-green intrinsic fluorescence when exposed to visible light with a wavelength of 500 nm (blue-green).
Tiny glands, approximately 4 μm in diameter, were found on the inner surface of the pitchers and partially embedded in the grooves along the veins. These also showed very strong golden-yellow fluorescence.
A single seed was found intimately associated with the fossil material of A. longicervia and is presumed to belong to the same species. It is winged and reticulate-tuberculate in morphology, closely resembling the seeds of Sarraceniaceae taxa. The seed is oval-shaped, covered with black-brown warts, and measures .
Taxonomy
The fossil material of A. longicervia was subjected to chemical analysis for oleanane, considered a key marker differentiating angiosperms from gymnosperms. Oleanane was detected in these specimens, suggesting that they belong to the angiosperms.
Pitcher plant interpretation
According to Li (2005), several morphological features of A. longicervia indicate a close relationship to Sarraceniaceae: both taxa exhibit one or two pitcher wings, a smooth peristome, and pitchers that extend vertically from the top of a short petiole.
Li (2005) suggests that A. longicervia is morphologically similar to modern Sarracenia purpurea. It shares with this species the spiral arrangement of its pitchers and phyllodia-like tubular leaves with parallel major veins. Archaeamphora longicervia also shows a resemblance to species of the genus Heliamphora in having pitchers with a long neck and upright lid. Of particular note is the similarity between the thick semi-circular structures on the lid of A. longicervia and the large nectar-secreting "bubble" present on the upper posterior portion of Heliamphora exappendiculata pitchers.
Li (2005) mentions the discovery of another type of "pitcher plant" from the same formation. This variety differs from the type material of A. longicervia in having pitchers that lack any constriction before the mouth, instead gradually expanding from the petiole into a hollow trumpet-like shape. He suggests that it "should be a different species" from A. longicervia. An intermediate form with a wider neck is also reported, suggesting that these plants were already a diversified group in the Early Cretaceous.
Current understanding
Heřmanová & Kvaček (2010) opined that the pitcher plant interpretation of Archaeamphora is "problematic and the fossil is in need of revision".
In their 2011 book, Sarraceniaceae of South America, McPherson et al. summarised current thinking on Archaeamphora as follows:
Serious doubt is emerging that reduces the likelihood that Archaeamphora longicervia belongs in the Sarraceniaceae lineage, or was even a pitcher plant at all. [...] Although Archaeamphora might well be a representative of the earliest flowering plants on Earth [...] it is very unlikely that it represents an ancestor of Sarraceniaceae since it is much too old to be part of the advanced "crown group" of Ericales to which Sarraceniaceae belong. [...] Another contradiction is that except for Archaeamphora, there is no other evidence to suggest that Sarraceniaceae evolved outside the New World, to which all extant members of the family are endemic.
Wong et al. (2015) put forward a new perspective as follows:
Archaeamphora longicervia H. Q. Li was described as an herbaceous, Sarraceniaceae-like pitcher plant from the mid Early Cretaceous Yixian Formation of Liaoning Province, northeastern China. Here, a re-investigation of A. longicervia specimens from the Yixian Formation provides new insights into its identity and the morphology of pitcher plants claimed by Li. We demonstrate that putative pitchers of Archaeamphora are insect-induced leaf galls that consist of three components: (1) an innermost larval chamber; (2) an intermediate zone of nutritive tissue; and (3) an outermost wall of sclerenchyma. Archaeamphora is not a carnivorous, Sarraceniaceae-like angiosperm, but represents insect-galled leaves of the previously reported gymnosperm Liaoningocladus boii G. Sun et al. from the Yixian Formation.
Habitat
The area inhabited by A. longicervia is thought to have experienced significant climatic fluctuations during the Early Cretaceous, ranging from arid or semi-arid to more humid conditions. The substrate in the region was mostly composed of lacustrine sediments and volcanic rocks.
See also
Cephalotus follicularis, an Australian carnivorous plant whose pitcher traps are convergently similar to those of Nepenthes
Notes
References
Sarraceniaceae
Prehistoric gymnosperm genera
Early Cretaceous plants
Extinct carnivorous plants
Cretaceous angiosperms
Fossil taxa described in 2005
Pinales
Controversial plant taxa | Archaeamphora | Biology | 1,638 |
6,250,598 | https://en.wikipedia.org/wiki/Oph%20162225-240515 | Oph 162225-240515, often abbreviated Oph 1622, and also known as Oph 11, is a pair of brown dwarfs that have been reported as orbiting each other. The bodies are located in the constellation Scorpius and are about 400 light years away. Mass estimates of the two objects are uncertain, but they are probably each higher than the brown-dwarf/planet dividing line of 13 Jupiter masses. Oph1622B is located 1.94 arcseconds from Oph1622A, at a position angle of 182°.
The discovery of the pair was announced in a 2006 Science article by Ray Jayawardhana and Valentin D. Ivanov. The objects were discovered using the European Southern Observatory's New Technology Telescope at La Silla, Chile. The masses were originally reported to be lower, at 14 and 7 Jupiter masses, which would have made the smaller object a planetary-mass object, or planemo. The system was announced as the first reported binary system of objects this small. However, later observations and calculations have revised the masses upward. Close et al. estimate the mass of the primary, designated Oph1622A, as 12–21 times that of Jupiter, and 9–20 Jupiter masses for the less massive Oph1622B, while Luhman et al. assign values of 57 and 20 Jupiter masses. Despite the acronym "Oph" appearing in its name (implying that it may belong to the young Ophiuchus molecular cloud), Oph1622 is likely to be older than its originally adopted age of 1 million years. More likely, it is a member of the Upper Scorpius subgroup of the Scorpius–Centaurus association, which has an age of 11 million years. For an adopted age of 11 million years, the system has inferred masses of 53 and 21 Jupiter masses, similar to the values derived by Luhman et al.
The distance between the two is approximately 240 AU—a distance so great that Space.com wrote that "their connection is so tenuous ... that a passing star or brown dwarf could permanently separate the two objects." As such, the discovery was reported as casting doubt on the theory that such free-floating planet-like objects have been ejected from a stellar system, such an event being too violent to leave them in such a wide orbit around each other. Given their wide separation and high masses, the system is best thought of as a wide brown dwarf binary rather than a binary planetary system.
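To get a rough sense of how weakly bound such a pair is, Kepler's third law can be applied. The sketch below treats the 240 AU projected separation as the semi-major axis and adopts the Luhman et al. mass estimates; both are simplifying assumptions made only for this illustration.

```python
# Kepler's third law in solar units: P^2 = a^3 / M,
# with P in years, a in AU, and M in solar masses.
M_JUP_IN_M_SUN = 1 / 1047.6                     # approximate conversion factor

a_au = 240.0                                    # projected separation, taken as a
total_mass_msun = (57 + 20) * M_JUP_IN_M_SUN    # Luhman et al. mass estimates

period_years = (a_au**3 / total_mass_msun) ** 0.5
print(round(period_years))                      # ≈ 13,700 years with these inputs
```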
See also
2M1101AB
UScoCTIO 108
SDSS J1416+1348
Binary brown dwarfs
References
External links
https://www.sciencedaily.com/releases/2006/08/060804084105.htm
http://www.spacedaily.com/reports/Astronomers_Discover_Twin_Planemos_999.html
http://www.space.com/scienceastronomy/060803_planemo_twins.html
http://news.bbc.co.uk/1/hi/sci/tech/5241774.stm
Scorpius
Brown dwarfs
Upper Scorpius | Oph 162225-240515 | Astronomy | 660 |
27,996,219 | https://en.wikipedia.org/wiki/List%20of%20landslides | This list of landslides is a list of notable landslides and mudflows divided into sections by date and type. This list may be incomplete, as there is no central catalogue for landslides, although catalogues do exist for some individual countries or areas. Volumes of landslides are recorded in the scientific literature using cubic kilometres (km3) for the largest and millions of cubic metres (MCM) for most events.
Prehistoric landslides
Note: km3 = cubic kilometre(s)
Submarine landslides
Note: MCM = million cubic metres; km3 = cubic kilometre(s)
Pre-20th-century historic landslides
Note: km3 = cubic kilometre(s); MCM = million cubic metres
20th-century landslides
1901–1950
Note: km3 = cubic kilometre(s); MCM = million cubic metres
1951–1975
Note: km3 = cubic kilometre(s); MCM = million cubic metres
1976–2000
Note: MCM = million cubic metres
21st-century landslides
2001–2010
Note: m3 = cubic metre(s); MCM = million cubic metres
2011–2020
Note: MCM = million cubic metres
2021–present
Note: MCM = million cubic metres
Ongoing landslides
Note: MCM = million cubic metres
See also
List of avalanches by death toll
References
External links
United States Geological Survey site
Landslides | List of landslides | Environmental_science | 276 |
2,172,717 | https://en.wikipedia.org/wiki/Kingdom%20of%20Crystal | The Kingdom of Crystal (Swedish: Glasriket, "the glass realm") is a geographical area today containing a total of 14 glassworks in the municipalities of Emmaboda, Nybro, Uppvidinge, and Lessebo in southern Sweden. The municipalities of Emmaboda and Nybro belong to Kalmar County, while Lessebo and Uppvidinge belong to Kronoberg County. The area is part of the province of Småland, and Nybro is considered the capital of the Kingdom of Crystal area. The Kingdom of Crystal is known for its handblown glass, with a continuous history since 1742. The glassworks have become part of the culture of Sweden; examples can be found in many Swedish homes, recognisable by a small sticker at the bottom with the name Orrefors, Kosta Boda, etc. Glass production peaked at the end of the 19th century, when 77 glass factories were established, more than half of them situated in Småland.
When touring the forested province of Småland in Sweden, it is common to visit at least one of the glassworks. The larger ones have adjacent museums and are open for visitors to see the glass blowing hall, normally looking down from a platform. Food is available, as well as shopping for various glass products such as glasses, bowls, vases and unique glass ornaments. The Kingdom of Crystal is a popular and well-known tourist destination. The Regional Council of Kalmar County conducts a study every four years to survey the Swedish public regarding their knowledge and awareness of Kalmar County, its places to visit and tourist attractions. The survey was conducted by Kantar Sifo and included 1500 respondents aged 20–79. The results showed that the Kingdom of Crystal area was the most visited tourist attraction in Kalmar County, attracting a wide range of people. One in five respondents aged 40–79 had visited the area in the last five years. Visitors showed no significant differences in geographical origin, income or gender; the only notable difference was that visitors tended to be older rather than young. Among respondents aged 40 and over, only 3% claimed they had never heard of the Kingdom of Crystal.
The most notable are Orrefors Glasbruk, with the adjacent National School of Glass, and Kosta Boda. Each of the glassworks has its own distinctive design traditions, character and atmosphere.
Companies
Glassworks
Hitorp Glasbruk
Målerås
Kosta Boda
Orrefors
Sea
Nybro
Sandvik
Bergdala
Rosdala
Johansfors
Lindshammar
Strömbergshyttan
Pukeberg (founded 1871)
Åfors
Transjö hytta
Smaller glasswork-related companies
In the Kingdom of Crystal, there are a large number of small businesses in the glass industry, which are often spin-offs from some of the larger companies. Activities of these businesses include:
Studio glass
Glass engraving
Glass repair
Glass painting
Design
Training
Riksglasskolan in Orrefors
Glasskolan in Kosta (Secondary school, the Nordic line, Commissioned, Vocational Education)
References
External links
Official website in Swedish, English and German
Glassmaking companies of Sweden
Småland
Museums in Kalmar County
Glass museums and galleries
Art museums and galleries in Sweden
Museums in Kronoberg County | Kingdom of Crystal | Materials_science,Engineering | 686 |
13,737,983 | https://en.wikipedia.org/wiki/Water%20cremation | Alkaline hydrolysis (also called biocremation, resomation, flameless cremation, aquamation or water cremation) is a process for the disposal of human and pet remains using lye and heat; it is an alternative to burial, cremation, or sky burial.
Process
The process is based on alkaline hydrolysis: the body is placed in a pressure vessel which is then filled with a mixture of water and potassium hydroxide, and heated to a temperature of around at an elevated pressure which precludes boiling. The body is efficiently broken down into its chemical components (completely disintegrating its DNA), a process which takes approximately four to six hours. Lower temperatures and pressures may also be used, in which case the process takes a more leisurely 14 to 16 hours. At the start, the mixture is very alkaline, with a pH level of approximately 14; this drops to approximately 11 by the end, but the exact value depends on the total operation time and the amount of fat in the body.
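As an illustrative, simplified sketch of the underlying chemistry (a general assumption about base-catalysed hydrolysis, not detail taken from this article), the hot potassium hydroxide solution cleaves the peptide bonds of proteins and saponifies fats:

protein peptide bond (R–CO–NH–R′) + KOH → R–COO−K+ + R′–NH2
triglyceride + 3 KOH → glycerol + 3 R–COO−K+ (potassium soaps)

These potassium salts of amino acids and fatty acids, together with small peptides and sugars, are what constitute the liquid effluent described below.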
The result is a quantity of green-brown tinted liquid (containing amino acids, peptides, sugars and salts) and soft, porous white bone remains (calcium phosphate) easily crushed in the hand (although a cremulator is more commonly used) to form a white-colored dust. The "ash" can then be returned to the next of kin of the deceased. The liquid is disposed of either through the sanitary sewer system, or through some other method, including use in a garden or green space. To dispose of of biomass, approximately of water are used, resulting in of effluent, which carries a dried weight (inorganic and mineral content) of (approximately 2% of original weight).
This alkaline hydrolysis process has been championed by a number of ecological campaigning groups, for using 90 kWh of electricity, one-quarter the energy of flame-based cremation, and producing less carbon dioxide and pollutants. It is being presented as an alternative option at some British crematorium sites. , about 1,000 people had chosen this method for the disposal of their remains in the United States. The operating cost of materials, maintenance, and labor associated with the disposal of of remains was estimated at $116.40, excluding the capital investment cost of equipment.
Alkaline hydrolysis has also been adopted by the pet and animal industry. A handful of companies in North America offer the procedure as an alternative to pet cremation. Alkaline hydrolysis is also used in the agricultural industry to sterilize animal carcasses that may pose a health hazard, because the process inactivates viruses, bacteria, and prions that cause transmissible spongiform encephalopathy.
History
The process was patented by Amos Herbert Hobson in 1888 as a method to process animal carcasses into plant food. In 2005, Bio-Response Solutions designed, sold, and installed the first single-cadaver alkaline hydrolysis system at the Mayo Clinic, where it was still in use as of 2019. In 2007, a Scottish biochemist, Sandy Sullivan, started a company making the machines, calling both the process and the company Resomation.
Religious views
In Christian countries and cultures, cremation has historically been discouraged and viewed as a desecration of God's image, and as interference with the resurrection of the dead taught in scripture. It is now acceptable to some denominations. Desmond Tutu, former Anglican Archbishop of Cape Town, was aquamated, per his wish. The Eastern Orthodox Church does not allow cremation.
The Roman Catholic Church allows cremation of bodies as long as it is not done in denial of the beliefs in the sacredness of the human body or the resurrection of the dead. In 2008, Renée Mirkes published the first Catholic moral analysis of alkaline hydrolysis. She argued that it is morally neutral and may be an alternative to burial on similar grounds to cremation. However, the Catholic Church in the United States does not approve of alkaline hydrolysis as a method of final disposal of human remains. In 2011, Donald Cardinal Wuerl, Archbishop of Washington and then chairman of the Committee on Doctrine of the United States Conference of Catholic Bishops (USCCB), determined it "unnecessarily disrespectful of the human body." The Archdiocese of St. Louis explained that it was considered this way because the Church was concerned with the final disposal of the liquid solution, which is typically discharged into the sewer system. This was considered disrespectful of the sanctity of the human body. Additionally, when alkaline hydrolysis was proposed in New York state in 2012, the New York State Catholic Conference condemned the practice, stating that hydrolysis does not show sufficient respect for the teaching of the intrinsic dignity of the human body.
Judaism forbids cremation as it is not in line with the religion's teachings of respect and dignity for humans, who are believed by the religion to be created in God's image. Islam also forbids cremation of the deceased. Both religions are likely to reject alkaline hydrolysis as they believe that the body must be laid to rest through burial in order to prepare for the afterlife. The Baháʼí Faith, like other Abrahamic religions, discourages cremation of the deceased. The human body is seen as having to be treated with respect, and is simply wrapped in a shroud and buried no more than an hour's journey from the place of death.
Sikhism, Hinduism, and Buddhism each place theological emphasis on the complete immolation of the corpse.
Native Hawaiians consider aquamation a way to approximate their traditional burial ritual, which involves removing the bones (iwi) cleanly from the flesh using a beachside underground oven (imu), wrapping the bones, and hiding them. The use of an imu on human bodies is no longer allowed, but aquamation may offer an alternative as it produces similarly clean bones.
Legal status
Australia
Aquamation, a company based in New South Wales, is the only provider of alkaline hydrolysis in Australia; because of difficulty obtaining permits from Sydney Water, the liquid remains are used as fertilizer on plantation forests.
New Zealand
Debbie Richardson of Water Cremation Aotearoa has been an advocate for bringing the service to New Zealand (Aotearoa). Water cremation services will be offered in Christchurch by Bell, Lamb and Trotter.
Belgium
Flanders
The Flemish minister of Interior Administration Bart Somers asked in September 2021 the opinion of an advisory bioethics committee on resomation. The advice, received in November 2021, saw no objections.
Canada
Saskatchewan approved the process in 2012, becoming the first province to do so. Quebec and Ontario have also legalized the process.
A funeral home in Granby, Quebec, was the first in the province to receive an alkaline hydrolysis machine.
Ireland
In 2023, water cremation became available in Ireland, making it the first country in Europe to offer this form of burial.
When the process is complete, the remaining water undergoes further treatment to ensure that it is completely sterile. Analysis is then completed to ensure Water Authority standards are met. At this stage, the water can be recycled back to the Local Authority water treatment plant.
Mexico
Since 2019, Grupo Gayosso offers alkaline hydrolysis in Baja California.
The Netherlands
In May 2020, the Health Council of the Netherlands issued an advisory report on the admissibility of new techniques of disposing of the dead. The Council proposed a framework to assess alkaline hydrolysis. It concluded that alkaline hydrolysis is safe, dignified and sustainable. In addition to alkaline hydrolysis, the council also considered human composting as a technique for disposing of bodies, but concluded that too little is known about composting and hence it cannot be assessed whether this technique fulfills the conditions. Taking into account the council's recommendations, the Ministry of the Interior and Kingdom Relations prepared a law proposal to amend the Corpse Disposal Act. Once the proposed law has been submitted to Parliament, the democratic process to admit alkaline hydrolysis as a body disposal technique can begin.
South Africa
In November 2019, Avbob introduced aquamation in South Africa, following the mutual assurance society's recent introduction of the alkaline hydrolysis process at its Maitland agency in Cape Town. Aquamation has been legal in South Africa since then. Following his death in December 2021 the body of Archbishop Desmond Tutu was aquamated.
United Kingdom
A public crematorium operated by Sandwell Metropolitan Borough Council at Rowley Regis, central England, was the first to receive planning permission to offer the process but in March 2017, the local water utility, Severn Trent Water, refused the council's application for a "trade effluent permit" because there was no water industry standard regulating the disposal of liquefied human remains into sewers.
In July 2023, the BBC reported that “[w]ater cremation is set to be made available for the first time in the UK.”
United States
Alkaline hydrolysis as a method of final disposition of human remains is legal in 24 states . Legislation is pending in New Jersey, New York, Ohio, Pennsylvania, and Virginia. The process was legal in New Hampshire for several years but amid opposition by religious lobby groups it was banned in 2008 and a proposal to legalize it was rejected in 2013.
Alkaline hydrolysis has been used for cadavers donated for research at the University of Florida since the mid-1990s and at the Mayo Clinic since 2005. UCLA uses the process to dispose of donor bodies.
See also
Burial
Promession
Human composting
References
Further reading
New in mortuary science: Dissolving bodies with lye – ABC News
New body 'liquefaction' unit unveiled in Florida funeral home – BBC News
Death customs
Biodegradation
Funeral-related industry
Legal aspects of death
Waste management | Water cremation | Chemistry | 2,054 |
35,844,937 | https://en.wikipedia.org/wiki/HAMP%20domain | In molecular biology, the HAMP domain (present in Histidine kinases, Adenylate cyclases, Methyl accepting proteins and Phosphatases) is an approximately 50-amino acid alpha-helical region that forms a dimeric, four-helical coiled coil. It is found in bacterial sensor and chemotaxis proteins and in eukaryotic histidine kinases. The bacterial proteins are usually integral membrane proteins and part of a two-component signal transduction pathway. One or several copies of the HAMP domain can be found in association with other domains, such as the histidine kinase domain, the bacterial chemotaxis sensory transducer domain, the PAS repeat, the EAL domain, the GGDEF domain, the protein phosphatase 2C-like domain, the guanylate cyclase domain, or the response regulatory domain. In its most common setting, the HAMP domain transmits conformational changes in periplasmic ligand-binding domains to cytoplasmic signalling kinase and methyl-acceptor domains and thus regulates the phosphorylation or methylation activity of homodimeric receptors.
References
Protein families | HAMP domain | Biology | 250 |
38,138,287 | https://en.wikipedia.org/wiki/Beta%20function%20%28accelerator%20physics%29 | The beta function in accelerator physics is a function related to the transverse size of the particle beam at the location s along the nominal beam trajectory.
It is related to the transverse beam size as follows:
σ(s) = √(ε β(s))
where
s is the location along the nominal beam trajectory
the beam is assumed to have a Gaussian shape in the transverse direction
σ(s) is the width parameter of this Gaussian
ε is the RMS geometrical beam emittance, which is normally constant along the trajectory when there is no acceleration
Typically, separate beta functions are used for two perpendicular directions in the plane transverse to the beam direction (e.g. horizontal and vertical directions).
The beta function is one of the Courant–Snyder parameters (also called Twiss parameters).
Beta star
The value of the beta function at an interaction point is referred to as beta star.
The beta function is typically adjusted to have a local minimum at such points (in order to minimize the beam size and thus maximize the interaction rate). Assuming that this point is in a drift space, one can show that the evolution of the beta function around the minimum point is given by:
β(z) = β* + z²/β*
where z is the distance along the nominal beam direction from the minimum point. This implies that the smaller the beam size at the interaction point, the faster the rise of the beta function (and thus the beam size) when going away from the interaction point. In practice, the aperture of the beam line elements (e.g. focusing magnets) around the interaction point limits how small beta star can be made.
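A minimal numerical sketch of these two relations (the values of β* and the emittance below are illustrative assumptions, not taken from any particular machine):

```python
import math

def beta(z, beta_star):
    """Beta function in a drift space around a waist located at z = 0."""
    return beta_star + z**2 / beta_star

def sigma(z, beta_star, emittance):
    """RMS transverse beam size: sigma = sqrt(emittance * beta)."""
    return math.sqrt(emittance * beta(z, beta_star))

beta_star = 0.5      # assumed beta* in metres
emittance = 5e-10    # assumed RMS geometric emittance in metre*radians

for z in (0.0, 0.5, 1.0, 2.0):   # metres from the interaction point
    print(f"z = {z:3.1f} m   beta = {beta(z, beta_star):5.2f} m   "
          f"sigma = {sigma(z, beta_star, emittance) * 1e6:6.2f} um")
```

Halving β* in this sketch halves the beam size at z = 0 but makes β(z), and hence the beam size, grow twice as fast with distance from the interaction point, which is the trade-off described above.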
References
Accelerator physics | Beta function (accelerator physics) | Physics | 312 |
72,606,989 | https://en.wikipedia.org/wiki/Virivore | Virivore (equivalently virovore) comes from the English prefix viro- meaning virus, derived from the Latin word for poison, and the suffix -vore from the Latin word vorare, meaning to eat, or to devour; therefore, a virivore is an organism that consumes viruses. Virivory is a well-described process in which organisms, primarily heterotrophic protists but also some metazoans, consume viruses.
Viruses are considered a top predator in marine environments, as they can lyse microbes and release nutrients (i.e. the viral shunt). Viruses also play an important role in the structuring of microbial trophic relationships and regulation of carbon flow.
Discovery
The first described virovore was a small marine flagellate that was shown to ingest and digest virus particles. Subsequently, numerous studies directly and indirectly demonstrated the consumption of virions. In 2022, DeLong et al. showed that over the course of two days the ciliates Halteria and Paramecium reduced chlorovirus plaque-forming units by up to two orders of magnitude, supporting the idea that nutrients were transferred from the viruses to consumers.
Furthermore, the Halteria population grew with chlorovirus as the only source of nutrition, and grew minimally in the absence of chlorovirus. The Paramecium population, however, did not differ in growth when fed chloroviruses compared to the control group. Since the Paramecium population size remained constant in the presence of only chloroviruses, this indicated that Paramecium is capable of maintaining its population size, but not of growing, using chlorovirus as the sole carbon source. These data showed that some, but not all, grazers can grow on viruses. It was estimated that Halteria consumed between 10,000 and 1,000,000 viruses per day. Small protists such as Halteria and Paramecium are known to be consumed by zooplankton, indicating the movement of viral-derived energy and matter up through the aquatic food web. This contradicts the idea that the viral shunt limits the movement of energy up food webs by cutting off the grazer-microbe interaction. The amount of energy and matter passed up would depend on virion size and nutritional content, which vary between strains.
Biogeochemical impact
Viruses are the most abundant biological entities in the world's oceans. The life cycle of a lytic virus is an important process within the world's oceans for the cycling of dissolved organic matter and particulate organic matter, i.e. the viral shunt. Viral particles themselves also make up a large proportion of the nitrogen- and phosphorus-rich particles within the dissolved organic matter pool, as they are made up of lipids, amino acids, nucleic acids, and likely carbon incorporated from host cells. Viruses are considered able to complement a grazer's diet if they are ingested and the grazer is not infected.
General grazing on viruses is widespread throughout the marine environment, with grazing rates as high as 90.3 mL−1 day−1. When both bacteria and viruses are present, viruses can be ingested at rates comparable to bacteria.
Using Oikopleura dioica and the Emiliania huxleyi virus (EhV) as a model, scientists estimated the nutritional gain from viruses:
24.2 ng C individual−1 day−1
2.8 ng N individual−1 day−1
0.2 ng P individual−1 day−1
It is suggested that in smaller grazers, viruses could have a more significant impact on nutrition. For example, in nanoflagellates, the estimated contribution is 9% carbon, 14% nitrogen, and 28% phosphorus.
While smaller bacteria are the ideal food source for grazers due to their size and carbon content, viruses are small, non-motile, and extremely abundant, making them an alternative nutritional choice for grazers. To obtain the same amount of carbon from viruses that they get from bacteria, general grazers would need to consume roughly 1000 times more viruses, so viruses are not an ideal carbon source. However, there are other benefits to consuming viruses besides growth: studies show that digested viral particles release amino acids that the grazer can then use during its own polypeptide synthesis.
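A rough back-of-envelope check of the figures above (the per-particle carbon contents are assumed, order-of-magnitude values typical of marine microbes, not numbers from this article):

```python
# Assumed, order-of-magnitude carbon contents (femtograms of C per particle).
bacterium_carbon_fg = 20.0   # typical small marine bacterium (assumed)
virus_carbon_fg = 0.02       # typical small marine virion (assumed)

# How many virions are needed to match the carbon in one bacterial cell?
viruses_per_bacterium = bacterium_carbon_fg / virus_carbon_fg
print(f"Virions per bacterium-equivalent of carbon: {viruses_per_bacterium:.0f}")

# Carbon gained by a grazer ingesting a million virions per day
# (the upper end of the Halteria estimate quoted earlier).
ingested_per_day = 1e6
carbon_gain_ng = ingested_per_day * virus_carbon_fg * 1e-6   # fg -> ng
print(f"Carbon gain: {carbon_gain_ng:.2f} ng C per grazer per day")
```

With these assumed values the ratio comes out near 1000, consistent with the factor quoted above, and a million virions a day supplies only a few hundredths of a nanogram of carbon.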
The viral sweep
Trophic interactions between grazers, bacteria, and viruses are important in regulating nutrient and organic matter cycling. The viral sweep is a mechanism in which grazers cycle carbon back into the classical food web by ingesting viral particles. Infection of host cells leads to the release of viral progeny, which are subsequently consumed by grazers. Grazers are then consumed by higher trophic organisms, therefore cycling carbon from viruses back into the classical food web and to higher trophic levels.
The viral sweep could be affected by many factors such as the size and abundance of the viral particles. The size of the virus will affect the elemental content of the virus particles. For example, a virus with a larger capsid will contribute more carbon, and viruses with larger genomes will contribute more nitrogen and phosphorus as a result of the increased nucleic acids. Additionally, the impact of the viral sweep could be more significant if grazers preying on bacteria infected with viruses are also considered. Overall, by consuming bacteria and viruses, grazers play an important role in cycling carbon.
Viral grazing
The consumption of viruses is largely based on the feeding behaviour of the organism.
Filter feeding
Filter feeding is a type of suspension feeding. Filter feeders usually actively capture single food particles on cilia, hairs, mucus, or other structures. Researchers used Salpingeoca as a model filter feeder to observe changes in viral abundance. Salpingeoca produces a lorica to help it attach to the substrate. It also has a single flagellum that creates a water current which transports small particles towards it, where tiny pseudopodia engulf the prey particles. When viruses were co-incubated with Salpingeoca, viral abundances decreased steadily over 90 days, showing that filter feeding is an effective mechanism for feeding on viruses.
Grazing on sediment particles
Grazers move over surfaces to gather and ingest food as they go. Researchers used Thaumatomonas coloniensis as a model grazer to observe changes in viral abundances. T. coloniensis glides along the substrate and produce filopodia, which are used to engulf particles associated with the substrate. Over the 90 days, viral abundances steadily decreased when co-incubated with T. coloniensis, showing that grazing is an effective mechanism for feeding on viruses.
Raptorial feeding
Raptorial feeding is a form of active feeding, in which the organism seeks out its prey. Researchers used Goniomonas truncata as a model of raptorial feeding. G. truncata is a cryptomonad with two flagella that are used to swim close to the substrate searching for food, and it has vacuoles to aid in food uptake. In the presence of G. truncata, viral abundances did not significantly decrease over the course of 90 days. However, this does not exclude the possibility that viral particles are taken up and then released back into the environment. These data show that raptorial feeding may not be a method of viral grazing, but it may have other ecological implications in terms of viral transmission.
Selective grazing
Grazing on viruses differs between viruses, and therefore it is subject to selective feeding. Flagellates are capable of ingesting many viruses of different sizes, with the smallest viruses having the lowest ingestion rate. Marine viruses are hugely diverse in size, shape, morphology, and surface charge, all of which may influence selection and therefore ingestion rates. Additionally, digestion rates of different viruses by the same flagellate were variable, which implies selection when grazing on viruses. For example, significant differences in virus removal by Tetrahymena pyriformis were observed when the protist was co-incubated with 13 different types of viruses. Additionally, the removal rates for the specific viruses were maintained when the protist was co-incubated with multiple viruses at once. T. pyriformis was able to identify viruses as food, which drives its movement and consumption of certain viruses over others, supporting the idea that some protists are capable of selective grazing.
Impact of viral infection on grazing
Viruses have the capacity to influence the grazing of their host cells during infection, showing that viral infection plays a role in selective grazing.
Copepods are a key link in marine food webs as they connect primary and secondary production with higher trophic levels. When phytoplankton Emiliania huxleyi were infected with the coccolithovirus EhV-86, ingestion of the infected cells by the calanoid copepod Acartia tonsa was significantly reduced compared to non-infected cells, indicating selective grazing against infected cells. These results suggest that viral infections reduce grazing, and may potentially reduce food web efficiency by keeping the carbon within the viral shunt-microbial loop, and inhibiting the movement of carbon to higher trophic levels. This emphasizes the importance of the viral sweep for cycling carbon into higher trophic levels.
Conversely, Oxyrrhis marina had a grazing preference for virally infected Emiliania huxleyi. It is suggested that the preference for infected cells over non-infected cells is due to physiological changes or a change in size of the host cell. O. marina prefers to graze on larger cells as it can potentially get greater nutritional value from them compared to a smaller cell, which would require the same amount of energy to consume. Infected E. huxleyi exhibit increased cell size compared to non-infected cells, making them ideal prey for O. marina. Infected E. huxleyi may also be selected for their palatability as a result of physiological changes during infection. For example, infected cells will have higher nucleic acid content compared to non-infected cells, which could improve the nutritional gain to the grazers. Additionally, grazing activity of O. marina has been linked to prey with lower dimethylsulfoniopropionate lyase (DMSP lyase) activity, as such prey produce less of the potentially toxic compound acrylate. Virally infected E. huxleyi show reduced levels of DMSP lyase activity, which makes them appealing to O. marina by reducing its exposure to harmful compounds. Lastly, chemical cues such as the release of dimethyl sulfide and hydrogen peroxide during infection likely generate a gradient, making it easier for O. marina to locate the infected E. huxleyi. Preferential grazing on infected cells would make the carbon available to higher trophic levels by sequestering it in particulate form.
Overall, grazing on virus particles and virally infected cells are subject to selective grazing.
Ecological significance
Studies have shown that viruses may be ingested and digested, or ingested and released back into the environment by grazers. The observation that grazers could potentially release viruses back into the environment after ingestion could have significant ecological impacts.
Mode of transmission
The ingestion and release of viruses could mediate the transmission and dispersal of viruses in the marine environment. Using copepods as the model transmission vector, and EhV as the model virus, Frada et al. identified a potential mechanism of viral dispersal in marine environments.
EhV particles can be consumed by copepods either as individual virion particles or via host cell infection (in this case, infected Emiliania huxleyi). When infected E. huxleyi was co-incubated with copepods, the fecal pellets produced by the copepods contained an average of 4500 EhVs per pellet. These virion-containing pellets were then co-incubated with a fresh culture of E. huxleyi, and rapid viral-mediated lysis of the host cells was observed. When EhV particles alone were co-incubated with copepods, i.e. no E. huxleyi, the fecal particles collected did not contain any virion particles. However, when the researchers fed copepods EhV and Thalassiosira weissflogii, a diatom outside the host range of EhV, the fecal pellets collected contained 200 EhVs per pellet. These pellets, when co-incubated with a fresh E. huxleyi culture, were highly infectious and completely killed the culture. The absence of virion particles in the fecal pellets produced from sole EhV incubation supports the idea that grazers exhibit selective grazing for viruses. EhV can still be taken up by copepods through host cell infection and when in the presence of an ideal food source. Since viral abundance follows bacterial abundance, it is unlikely that there will be a marine environment where viruses will be the sole nutrient source for grazers.
The results of this experiment have significant ecological impacts. Copepods are capable of moving up and down the water column, and migrating short distances between feeding zones. Specifically, for copepods and EhV, the movement of copepods can transport viruses into new and non-infected populations of E. huxleyi, promoting bloom demise. Additionally, fecal pellets can sink from the mixed layer into deeper parts of the ocean, where they can be assimilated multiple times. These two scenarios represent potential mechanisms in which viruses can be introduced into new marine environments.
Non-host organisms
Grazers are not the only organisms capable of removing viruses from the water column. Non-host organisms such as anemones, polychaete larvae, sea squirts, crabs, cockles, oysters, and sponges are all capable of significantly reducing viral abundance. Sponges were found to have the greatest potential for removing viruses.
The method in which non-host organisms disrupt the viral-host contact is known as transmission interference. Non-host organisms can either have a direct impact by removing the host-organisms, or an indirect one by removing the viruses. These mechanisms cause a reduction in the virus-host contact rates which could significantly impact local microbial population dynamics.
Non-host organisms are capable of removing viruses at rates comparable to those for natural food particles, bacterial cells, and algal cells, which is higher than for grazers, which have a viral clearance rate of around 4%. In regions of high sponge density, such as coastal and tropical regions, it is likely that the virus removal rate has been underestimated. The effective removal of viruses likely has global ecological impacts that have gone unrecognized.
References
Further reading
Viruses
Microbiology
Ecology terminology | Virivore | Chemistry,Biology | 3,044 |
4,843,717 | https://en.wikipedia.org/wiki/Fullpower%20Technologies | Fullpower is a Santa Cruz, California-based privately held developer of cloud-based IoT and wearable product technology used for activity tracking and sleep monitoring. Fullpower specializes in wireless technology, microelectromechanical systems, and nanotechnology. The company holds over 125 patents for its intellectual property, which it licenses to manufacturers.
The company was founded in 2005 by entrepreneurs Philippe Kahn and Sonia Lee.
History
2005-2009
Fullpower was founded in Santa Cruz, California in 2005 by entrepreneurs Philippe Kahn and Sonia Lee, who had previously founded and sold technology companies Starfish Software and LightSurf. The inspiration behind some of the key Fullpower technology came from Kahn's passion for sailing; he created prototype sleep trackers using biosensors that optimized 26-minute power naps to maximize sleep benefits and sail time.
In 2008, the company launched its MotionX Platform tracking technology, which included licensing deals to include the technology on third party devices. Later in 2008, the company launched iOS gaming apps MotionX Poker and MotionX Dice, along with handheld GPS app MotionX-GPS, targeted to outdoor enthusiasts.
In September 2009, the company released MotionX-GPS Drive for the iPhone, a door-to-door pedestrian and driving navigation application. The company later released customized versions of its navigation application for the iPad.
2010-2016
In September 2010, Nike released the Nike+ Running App (now called Nike+ Running) that tracks human motion using the accelerometer and GPS sensors of the iPhone and Android phones. MotionX provides the underlying motion sensing technology for the Nike+ Running Application, which was later named one of 2010's best apps of the year by the Wall Street Journal.
At the 2011 Consumer Electronics Show, JVC and Pioneer Corporation announced car stereo systems that integrate with the MotionX-GPS Drive application so that driving directions are shown on the in-car screen and audio verbal directions are heard over the car speakers. This was said to be the first time a commercially available iPhone navigation application used an after-market in-car screen as a display. In November 2011, Jawbone launched the UP band with ID design by Yves Béhar and integrated with the MotionX technology.
In February 2012, the MotionX 24/7 application was announced for the Apple App Store, with functions for sleep analysis, heart rate monitoring, and activity monitoring.
In 2015, Fullpower partnered with Swiss watch corporation Union Horlogere Holdings to form the joint venture Manufacture Modules Technologies (MMT), and launched the MotionX Horological Smartwatch Open Platform for the Swiss watch industry. The initial partners were Frederique Constant, the Geneva-based luxury watch manufacturer of classical watches; Alpina, the Swiss Sports Watch manufacturer founded in 1883; and Mondaine, known for its SBB Swiss Railway watches.
In November 2015, watchmaker Movado announced the release of the Movado Motion collection of fine Swiss made watches, using MMT's MotionX technology platform.
2017-present
In February 2017, Fullpower partnered with bedding product company Simmons to launch the Beautyrest Sleeptracker monitor, designed to monitor two individuals' sleep patterns. In June, Tomorrow Sleep also started selling Sleeptracker monitors powered by Fullpower technology.
In August 2019, Fullpower announced a partnership with mattress manufacturer Tempur Sealy to make beds that analyze sleep and snoring patterns, and adjust to correct sleep problems.
In January 2020, Fullpower partnered with opioid risk management company OPOS to develop an application to monitor the effects of opioids on patient sleep cycles.
Products
MotionX
The MotionX Platform is a suite of coupled and integrated firmware, software and communication components for wearable wireless devices. The MotionX Platform is used by several wearable product brands, including Nike, Manufacture Modules Technologies (MMT), Alpina, Frederique Constant, Mondaine, and Jawbone.
MotionX-GPS is a handheld GPS Multi-Sport app for runners, hikers, sailors, stand-up paddleboarding (SUP), cyclists, geocachers, and other outdoor sport enthusiasts. It leverages the iPhone built-in GPS chip as well as other on-board sensors to provide location data. MotionX-GPS supports map data provided by OpenStreetMap, Google, USGS and others.
Sleeptracker
Sleeptracker is a cloud-based, stand-alone solution capable of monitoring two individuals simultaneously and providing personalized tips to improve sleep. The system utilizes Fullpower's Sleeptracker Artificial Intelligence (AI) Engine, which has sensors to detect snoring and silently adjusts a sleeper's head position. It can also raise the bed and elevate the upper body 15 degrees to minimize snoring, and produces a daily customized sleep report and sleep score. The technology is used by bedding product companies including Serta, Simmons, Tempur Sealy, and Tomorrow Sleep.
Opioid management
Fullpower develops an application in conjunction with opioid risk management company OPOS that combines artificial intelligence and sleep sensors to help monitor the effects of opioids on a patient's sleep cycles.
Manufacture Modules Technologies (MMT)
Manufacture Modules Technologies (MMT) is a joint venture between Fullpower and Swiss watch corporation Union Horlogere Holdings. Fullpower creates and manages the circuit design, firmware, smartphone applications, as well as the cloud Infrastructure. MMT manages the Swiss watch movement development and production, as well as licensing and support for the Swiss watch industry.
Patents
As of December 2016, the Fullpower wearable patent portfolio includes more than 125 patents issued or pending covering Sleeptracker, MotionX, bands, pods, smartwatches, eyewear, clothing, sensor-fusion, health, medical, wellness and machine learning.
Some of Fullpower's issued US patents include:
US patent number 11,793,455 covers a sleep sensor system that monitors a sleeper on an adjustable smart bed and gathers data on the sleeper's movement. The system also includes a receiver that connects to a server for data transmission, allowing uninterrupted analysis of the data.
US patent number 11,766,213 covers a compliance and effectiveness tracking (CETE) system combining "sensed sleep data, perceived sleep data obtained from the user, and predicted sleep data derived from user background and medical intervention history". The system provides medical providers and users with insights into the effectiveness of medical interventions and compliance with protocols.
US patent number 9,474,876 B1 covers a sleep monitoring system, including a “method or apparatus to improve sleep efficacy” by monitoring user's sleep patterns continuously and making adjustments to various types of sleep aids accordingly.
US patent number 9,192,326 covers a sleep monitoring system, including monitoring a user's movement to determine when the user is falling asleep, as well as distinguishing between power naps and longer sleeps. This enables the user to optimize their sleep patterns, including setting wake-up alarms, allowing them to wake at the optimal time in their sleep cycle to feel more refreshed.
US patent number 8,996,332 covers key practical aspects of monitoring human activity, specifically identifying motion states of the user. Automatic activity identification is important for smartwatches and advanced fitness trackers.
US patent number 8,568,310 is "a method of using a motion sensor and a location-based sensor together to perform sensor fusion, enabling activity identification," according to the patent description. Other related US Patents include numbers 7,647,195, 7,970,586, and 8,320,578.
US patent number 8,187,182 outlines a method and apparatus using sensor fusion for accurate activity identification. US patent number 7,705,723 outlines a method and apparatus to provide outbreak notifications based on historical location data.
References
External links
Medical technology companies of the United States
Software companies based in California
Companies based in Santa Cruz, California
American companies established in 2005
Computer companies established in 2005
Health care companies established in 2005
Software companies established in 2005
2003 establishments in California
Sleep researchers
Software companies of the United States | Fullpower Technologies | Biology | 1,668 |
55,617,041 | https://en.wikipedia.org/wiki/Marine%20biogenic%20calcification | Marine biogenic calcification is the production of calcium carbonate by organisms in the global ocean.
Marine biogenic calcification is the biologically mediated process by which marine organisms produce and deposit calcium carbonate minerals to form skeletal structures or hard tissues. This process is a fundamental aspect of the life cycle of some marine organisms, including corals, mollusks, foraminifera, certain types of plankton, and other calcifying marine invertebrates. The resulting structures, such as shells, skeletons, and coral reefs, function as protection, support, and shelter and create some of the most biodiverse habitats in the world. Marine biogenic calcifiers also play a key role in the biological carbon pump and the biogeochemical cycling of nutrients, alkalinity, and organic matter.
Processes of Marine Biogenic Calcification
Biochemical mechanisms
Cellular and molecular processes of biogenic calcification
Calcium carbonate plays a fundamental role in the skeletal formation of marine calcifiers. The skeletal structures of these organisms are predominantly composed of calcium carbonate minerals, specifically aragonite and calcite. These structures provide support, protection, and housing for marine calcifiers and are formed through the biochemical processes of biomineralization to precipitate the crystal structures that form the hard tissues of these organisms.
The biogenic formation of calcium carbonate structures is the result of a combination of biological and physical processes such as genetics, cellular activity, crystal competition, growth in confined spaces, and self-organization processes. The composition of these structures, and the mechanisms involved in building them, are highly diverse. For example, some corals can incorporate both calcite and aragonite polymorphs into their skeletons. Some species, like corals and bryozoans, can incorporate other minerals to form complex protein matrices that perform specific functions.
The key steps involved in marine biogenic calcification include the uptake of dissolved calcium ions (Ca2+) and carbonate ions (CO32-) from seawater, the precipitation of calcium carbonate crystals, and the controlled formation of skeletal structures through biomineralization processes. These organisms often regulate the calcification process through the secretion of organic molecules and proteins that influence the nucleation and growth of crystalline structures.
A range of biochemical calcification (biocalcification) mechanisms exist, indicated by the fact that marine calcifiers use different forms of calcium carbonate minerals. Within this range of mechanisms, there are two broad categories of biogenic calcification in marine organisms: extracellular mineralization and intracellular mineralization. In particular, mollusks and corals use the extracellular strategy in which ion exchange pumps actively pump ions out of a cell into the extracellular space, where environmental conditions, such as pH, can be tightly controlled. In contrast, during intracellular mineralization the calcium carbonate is formed within the organism and can either be kept within the organism as an internal structure or is later moved to the outside while retaining the cell membrane covering. Broadly, the intracellular mechanism pumps ions into a vesicle within the cell. This vesicle can then be secreted to the outside of the organism. Often, cells will fuse their membranes and combine these vesicles in order to build very large calcium carbonate structures that would not be possible within a single cell.
Forms of calcium carbonate
The three most common calcium carbonate minerals are aragonite, calcite, and vaterite. Although these minerals have the same chemical formula (CaCO3), they are considered polymorphs because the atoms that make up the molecule are stacked in different arrangements. For example, aragonite minerals have an orthorhombic crystal lattice structure, while calcite crystals have a trigonal structure. Some of the calcite polymorphs are further subdivided by relative magnesium content (Mg/Ca ratio), with calcite solubility increasing with increasing Mg.
The solubility of various forms of CaCO3 differs in seawater; specifically, aragonite exhibits greater solubility compared to pure calcite.
Chemical processes and saturation state
The surface ocean engages in air-sea interactions and absorbs carbon dioxide (CO2) from the atmosphere, making the ocean the Earth's largest sink for atmospheric CO2. Carbon dioxide dissolves in and reacts with seawater to form carbonic acid. Subsequent reactions then produce carbonate (CO32−), bicarbonate (HCO3−), and hydrogen (H+) ions. Carbonate and bicarbonate are also deposited into the global ocean by rivers through the weathering of rock formations. The three species of carbon in seawater, carbon dioxide, bicarbonate, and carbonate, make up the total concentration of dissolved inorganic carbon (DIC) in the ocean. Approximately 90% of DIC is bicarbonate ions, 10% is carbonate ions, and <1% is dissolved carbon dioxide, with some spatial variation. The equilibrium reactions between these species buffer the concentration of hydrogen ions in seawater.
The following chemical reactions exhibit the dissolution of carbon dioxide in seawater and its subsequent reaction with water:
CO2(g) + H2O(l) ⥨ H2CO3(aq)
H2CO3(aq) ⥨ HCO3−(aq) + H+(aq)
HCO3−(aq) ⥨ CO32−(aq) + H+(aq)
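The equilibria above determine how DIC is partitioned among its three species at a given pH. A minimal sketch of this speciation calculation, using assumed, approximate apparent dissociation constants for warm surface seawater (pK1 ≈ 5.86, pK2 ≈ 8.92; the exact values depend on temperature, salinity and pressure and are not given in this article):

```python
# Equilibrium speciation of dissolved inorganic carbon (DIC) at a given pH.
# pK1 and pK2 are assumed, approximate surface-seawater values.
pK1, pK2 = 5.86, 8.92
pH = 8.1

h = 10.0 ** (-pH)
k1, k2 = 10.0 ** (-pK1), 10.0 ** (-pK2)

# Relative proportions CO2(aq) : HCO3- : CO3(2-), from the two equilibria above
co2 = 1.0
hco3 = k1 / h
co3 = k1 * k2 / h ** 2
total = co2 + hco3 + co3

for name, x in (("CO2(aq)", co2), ("HCO3-", hco3), ("CO3(2-)", co3)):
    print(f"{name:8s} {100 * x / total:5.1f} % of DIC")
```

At a typical surface pH of about 8.1 this gives roughly the ~90% bicarbonate, ~10% carbonate, <1% dissolved CO2 split described above.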
This series of reactions governs the pH levels in the ocean and also dictates the saturation state of seawater, indicating how saturated or unsaturated the seawater is with carbonate ions. Consequently, the saturation state significantly influences the balance between the dissolution and calcification processes in marine biogenic calcifiers. When seawater is oversaturated with calcium carbonate, the concentration of calcium ions and carbonate ions exceed the saturation point for a particular mineral, such as aragonite or calcite, which make up the skeletons of many marine organisms. Such conditions are favorable to marine calcifiers for the formation of calcium carbonate skeletons or shells. When seawater is undersaturated, meaning the concentration of calcium and carbonate ions is below the saturation point, it becomes challenging for marine calcifiers to build and maintain their skeletal structures, as the equilibrium conditions favor dissolution of calcium carbonate. As a general rule, seawater that is undersaturated (Ω < 1) can dissolve the structures of calcifying organisms. However, many organisms see negative effects on growth at saturation states above Ω = 1. For example, a saturation state of Ω = 3 is optimal for coral growth, so a saturation state Ω < 3 can potentially have negative effects on coral growth and survival.
Calcium carbonate saturation can be determined using the following equation:
Ω = ([Ca2+][CO32−])/Ksp
where the numerator ([Ca2+][CO32−]) denotes the concentration of calcium and carbonate ions and the denominator (Ksp) refers to the mineral (solid) phase stoichiometric solubility product of calcium carbonate.
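As a minimal illustration of this definition, the sketch below evaluates Ω for aragonite and calcite; the concentrations and stoichiometric solubility products are assumed, rounded values typical of warm surface seawater, not figures from this article:

```python
# Illustrative calculation of the calcium carbonate saturation state (Omega).
ca = 1.03e-2    # [Ca2+] in mol/kg (assumed, typical surface seawater)
co3 = 2.0e-4    # [CO3 2-] in mol/kg (assumed)
ksp = {"aragonite": 6.7e-7, "calcite": 4.3e-7}   # (mol/kg)^2, approximate

for mineral, k in ksp.items():
    omega = ca * co3 / k
    state = "oversaturated" if omega > 1 else "undersaturated"
    print(f"{mineral:9s} Omega = {omega:.1f} ({state})")
```

Because aragonite is more soluble (it has the larger Ksp), its Ω is lower than that of calcite in the same water, which is why aragonite producers such as corals are among the first to be affected as carbonate ion concentrations fall.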
When saturation is high, organisms can extract calcium and carbonate ions from seawater, forming solid crystals of calcium carbonate:
Ca2+(aq) + 2HCO3−(aq) → CaCO3(s) + CO2 + H2O
For marine calcifiers to build and maintain calcium carbonate structures, CaCO3 production must be greater than CaCO3 loss through physical, chemical, and biological processes. This net production can be thought of as follows:
CaCO3 accretion = CaCO3 production – CaCO3 dissolution – physical loss of CaCO3
The decreasing saturation of seawater with respect to calcium carbonate, associated with ocean acidification, a result of increased carbon dioxide (CO2) absorption by the oceans, poses a significant threat to marine calcifiers. As CO2 concentrations in seawater rise, a decrease in pH and a reduction in carbonate ion concentrations in seawater follows. And, since calcification is a source of CO2 to the surrounding water, decreased rates of global calcification would inversely affect the rate of atmospheric CO2 absorption, perpetuating these effects. This can make it difficult for marine organisms to precipitate and maintain their calcium carbonate structures, affecting growth, development, and overall health.
The widespread use of calcification by marine organisms has relied on the ability of calcium carbonate to readily form in seawater, where the saturation states (Ω) of aragonite and calcite minerals have consistently surpassed Ω = 1 (indicating oversaturation) in surface waters for hundreds of millions of years. The impacts of reduced calcium carbonate saturation on marine calcifiers have broader ecological implications, as these organisms play vital roles in marine ecosystems. For example, coral reefs, which are built by coral polyps secreting calcium carbonate skeletons, are particularly vulnerable to changes in calcium carbonate saturation.
There is much debate in the scientific community over whether calcification rates correlate more with carbonate ion concentration and saturation state or with pH. Some researchers report a correlation between calcification and the carbonate saturation state Ω of seawater, while others argue that, from a physiological standpoint, calcification in many marine organisms is controlled more by the concentrations of seawater bicarbonate (HCO3−) and protons (H+) than by Ω. Further research is essential to gain a comprehensive understanding of the connections between Ω, ocean acidification, and their impacts on the calcification rates of marine biogenic calcifiers, and to elucidate the distinct roles played by each.
Marine Calcifying Organisms
Corals
Coral reefs, physical structures formed from calcium carbonate, are important on biological and ecological scales to the regions they are endemic to. Their robust calcification abilities have resulted in extensive calcium carbonate deposits, some housing significant hydrocarbon reserves. However, this group only accounts for about 10% of the global production of calcium carbonate.
Corals undergo extracellular calcification, first developing an organic matrix and skeleton on top of which they form their calcium carbonate structures. It is proposed that calcification via pH upregulation of the coral's extracellular calcifying fluid occurs at least in part via Ca2+-ATPase, an enzyme in the calicoblastic epithelium that pumps Ca2+ ions into the calcifying region and ejects protons (H+). This process circumvents the kinetic barriers to CaCO3 precipitation that exist naturally in seawater.
Molluscs
Mollusks are a diverse group including slugs, oysters, limpets, snails, scallops, mussels, clams, cephalopods and others. Mollusks employ a strategic approach to protect their soft tissues and deter predation by developing an external calcified shell. This process involves specialized cells following genetic instructions to synthesize minerals under non-equilibrium conditions. The resulting minerals exhibit complex shapes and sizes along with being formed within a confined space. These organisms also pump hydrogen out of the calcifying area so that it will not bond to the carbonate ions, which prevents crystallization of calcium carbonate.
Echinoderms
Echinoderms, of the phylum Echinodermata, include organisms such as sea stars, sea urchins, sand dollars, crinoids, sea cucumbers and brittle stars. These organisms form extensive endoskeletons consisting of magnesium-rich calcite. Magnesium-rich calcite retains the chemical composition of CaCO3 but features partial substitution of Mg for Ca; calcite and aragonite are mineral forms, or polymorphs, of CaCO3. Adult echinoderm skeletons consist of teeth, spines, tests, tube feet, and in some cases, spicules. Echinoderms serve as excellent models for biomineralization, and adult sea urchins are particularly popular subjects for studying the molecular and cellular processes that calcification and biomineralization of their skeletal structures require. Unlike many other marine calcifiers, echinoderm tests are not formed purely from calcite; instead, their structures also heavily incorporate organic matrices that increase the toughness and strength of their endoskeletons.
Crustaceans
Crustaceans have a hard outer shell formed from calcium carbonate. These organisms form a network of chitin-protein fibers and then precipitate calcium carbonate within this matrix. The chitin-protein fibers are first hardened by sclerotization, the crosslinking of proteins with polysaccharides, followed by the crosslinking of proteins with other proteins. The presence of a hard, calcified exoskeleton means that the crustacean has to molt and shed the exoskeleton as its body size increases. This links molting cycles to calcification processes, making access to a regular source of calcium and carbonate ions crucial for the growth and survival of crustaceans. Different body parts of the crustacean have different mineral contents and therefore different hardness, with the more heavily mineralized areas generally being stronger. The calcified shell provides protection for the crustacean, meaning that after each molt it must avoid predators while it waits for its new shell to form and harden.
Foraminifera
Foraminifera, or forams, are single-celled protists that form chambered shells (tests) from calcium carbonate. Planktonic forams first appeared approximately 170 million years ago, and forams now populate oceans globally. Forams are microscopic organisms, typically no larger than 1 mm in length. The calcification and dissolution of their shells cause changes both in surface seawater carbonate chemistry and in deep-water chemistry. These organisms are excellent paleo-proxies as they record ambient water chemistry during shell formation and are well preserved in the sedimentary fossil record. Planktonic foraminifera, found in large numbers in the ocean, contribute significantly to oceanic carbonate production; compared with their benthic counterparts, more planktonic species host algal symbionts.
Coccolithophores
Phytoplankton, especially haptophytes such as coccolithophores, are also well known for their calcium carbonate production. It is estimated that these phytoplankton may contribute up to 70% of the global calcium carbonate precipitation, and coccolithophores are the largest phytoplankton contributors, along with diatoms and dinoflagellates. Contributing between 1 and 10% of total ocean primary productivity, 200 species of coccolithophores live in the ocean, and under the right conditions they can form large blooms. These large bloom formations are a driving force for the export of calcium carbonate from the surface to the deep ocean in what is sometimes called “Coccolith rain”. As the coccolithophores sink to the seafloor they contribute to the vertical carbon dioxide gradient in the water column.
Great Calcite Belt of the Southern Ocean
Coccolithophores produce calcite plates termed coccoliths which together cover the entire cell surface, forming the coccosphere. The coccoliths are formed using the intracellular strategy, where the plates are formed in a coccolith vesicle, but the product forming within the vesicle varies between the haploid and diploid phases. A coccolithophore in the haploid phase will produce what is called a holococcolith, while one in the diploid phase will produce heterococcoliths. Holococcoliths are small calcite crystals held together in an organic matrix, while heterococcoliths are arrays of larger, more complex calcite crystals. These are often formed over a pre-existing template, giving each plate its particular structure and forming complex designs. Each coccolithophore is a cell surrounded by the exoskeleton coccosphere, but there exists a wide range of sizes, shapes and architectures between different cells. Advantages of these plates may include protection against infection by viruses and bacteria, as well as protection from grazing zooplankton. The calcium carbonate exoskeleton enhances the amount of light the coccolithophore can take up, increasing the level of photosynthesis. Finally, the coccoliths protect the phytoplankton from photodamage by UV light from the sun.
The coccolithophores are also important in the geological history of Earth. The oldest coccolithophore fossil records are more than 209 million years old, placing their earliest presence in the Late Triassic period. Their calcium carbonate formation may have been the first deposition of carbonate on the seafloor.
Corallinales (red algae)
Calcifying rhodophytes stock their filamentous cell walls with calcium carbonate and magnesium. Corallinales is an order of calcifying red algae whose distribution ranges across the world's oceans. Examples include Corallina, Neogoniolithon, and Harveylithon. The magnesium-rich calcium carbonate of the Corallinales cell wall provides shelter from predators and structural integrity in the intertidal zone. CaCO3 production by coralline algae also plays a role in habitat formation and provides resources for benthic invertebrates.
Calcifying bacteria
Evidence shows that some calcifying cyanobacteria strains have existed for millions of years and contributed to large land formations. About 70 strains of cyanobacteria can precipitate calcium carbonate, including some strains of Synechococcus; other calcifying bacteria include strains of Bacillus sphaericus, Bacillus subtilis, and Sporosarcina psychrophila.
Morphological variations in calcium carbonate skeletons
Structural adaptations in different marine organisms
Diverse algae exhibit distinct mechanisms of CaCO3 formation, with calcification occurring internally or externally. Calcification may play a role in producing CO2 or supporting processes that need H+, based on the observed partial reaction. Phytoplankton species relying on CO2 diffusion for photosynthesis may face limitations due to CO2 concentration and diffusion to the chloroplast's Rubisco site. Calcifying macroalgae like Halimeda and Corallina also produce CaCO3 in alkaline extracellular spaces.
Coccolithophorid phytoplankton form CaCO3 in crystalline structures known as coccoliths, with holococcoliths formed externally and heterococcoliths produced intracellularly. Various coccolithophores produce two coccolith types: Heterococcoliths, from diploid cells, are complex, while holococcoliths, from haploid stages, are less studied. Factors influencing life cycle phase transitions and the role of specific proteins like GPA in coccolith morphology are explored. Polysaccharides, particularly coccolith-associated polysaccharides (CAPs), emerge as key regulators of calcite growth and morphology. CAPs' diverse roles, including nucleation promotion and inhibition, vary between species. External polysaccharides also influence coccolith adhesion and organization. Recent findings link cellular transport processes, carbonate saturation conditions, and regulatory processes determining calcite precipitation rate and morphology. Unexpectedly, silicon's role in coccolith morphology regulation is species-dependent, highlighting physiological distinctions among coccolithophore groups. These revelations raise questions about ecological implications, evolutionary adaptations, and the impact of changing ocean silicate levels on coccolithogenesis.
Calcification rates in coccolithophores often correlate with photosynthesis, implying a potential metabolic role. Heterococcoliths develop inside intracellular vesicles, with coccolith formation showing a unity ratio with photosynthetic carbon fixation under high calcification rates. The variability in isotope fractionation and calcification mechanisms underscores these organisms' adaptability and complexity in responding to environmental factors.
For corals, DIC from the seawater is absorbed and transferred to the coral skeleton. An anion exchanger will then be used to secrete DIC at the site of calcification. This DIC pool is also used by algal symbionts (dinoflagellates) that live in the coral tissue. These algae photosynthesize and produce nutrients, some of which are passed to the coral. The coral in turn will emit ammonium waste products which the algae take up as nutrients. There has been an observed tenfold increase in calcium carbonate formation in corals containing algal symbionts compared with corals that do not have this symbiotic relationship. The coral algal symbionts, Symbiodinium, show decreased populations with increased temperatures, often leaving the coral without pigment and unable to photosynthesize (a state known as coral bleaching).
Evolution of biogenic calcification
The evolution of biogenic calcification and carbonate structures within the eukaryotic domain is complex, highlighted by the distribution of mineralized skeletons across major clades. Five out of the eight major clades feature species with mineralized skeletons, and all five clades involve organisms that precipitate calcite or aragonite. Skeletal evolution occurred independently in foraminiferans and echinoderms, suggesting two separate origins of CaCO3 skeletons. The common ancestry for echinoderm and ascidian skeletons is less clear, but a conservative estimate indicates that carbonate skeletons evolved at least twenty-eight times within Eukarya.
Phylogenetic insights highlight repeated innovations in carbonate skeleton evolution, raising questions about homology in underlying molecular processes. Skeleton formation involves controlled mineral precipitation in specific biological environments, requiring directed calcium and carbonate transport, molecular templates, and growth inhibitors. Biochemical similarities, including the synthesis of acidic proteins and glycoproteins guiding mineralization, suggest an ancient capacity for carbonate formation in eukaryotes. While skeletons may not share structural homology, underlying physiological pathways are common, reflecting multiple cooptations of molecular and physiological processes across eukaryotic organisms.
The Cambrian Period marks a significant watershed in skeletal evolution, with the appearance of mineralized skeletons in various groups. Skeletal diversity increased during this period, driven by predation pressure favoring protective armor evolution. The Cambrian radiation of mineralized skeletons was likely part of a broader animal diversity expansion.
The evolution of mineralized skeletons during the Cambrian did not occur instantly, with a gradual increase in abundance and diversity over 25 million years. Environmental changes and predation pressure played key roles in shaping skeletal evolution. The diversity of minerals and skeletal architectures during this period challenges explanations solely based on changing ocean chemistry. The interplay between genetic possibility and environmental opportunity, influenced by factors like increased oxygen tensions, likely contributed to Cambrian diversification. Later Cambrian oceans witnessed a decline in mineralized skeletons, potentially influenced by high temperatures and pCO2 associated with a super greenhouse. Skeletal physiological responses to environmental conditions remain an area of study. Large-scale variations in carbonate chemistry suggest a connection between ocean chemistry and the mineralogy of carbonate precipitation. Skeletal organisms that precipitate massive skeletons under limited physiological control show stratigraphic patterns corresponding to shifts in seawater chemistry. This interplay between physiology, evolution, and environment underscores the complexity of mineralized skeleton evolution across geological time.
Calcium carbonate cycling and the biological carbon pump
The calcium carbonate cycle in the global ocean is of great significance to the biological, chemical, and physical state of the ocean. Mineral calcium carbonate most commonly presents as calcite in the ocean, and the majority of calcite is produced biologically in the upper layer of the ocean. CaCO3 material is exported from the upper ocean to sediments on the ocean floor where it either dissolves or is buried. Alternatively, CaCO3 can dissolve or be remineralized within the water column prior to reaching the seafloor.
Upon reaching the seafloor, CaCO3 undergoes a diagenetic process that ends in either dissolution or burial. The distribution of sediments consisting of calcium carbonate is fairly even across the global oceans, but specific locations are determined by the solubility and saturation level of calcium carbonate.
The “biological carbon pump” is a colloquial term coined by scientists to summarize the global carbon cycle in the ocean and its relationship to the biological processes that occur throughout the ocean. The calcium carbonate cycle is inherently linked to the biological pump. The formation of biogenic calcium carbonate by marine calcifiers is one way to add ballast to sinking particles and enhance transport of carbon to the deep ocean and seafloor. The calcium carbonate counter pump refers to the biological process of precipitation of carbonate and the sinking of particulate inorganic carbon. This process releases CO2 into the surface ocean and atmosphere across timescales spanning 100 to 1,000 years. Its crucial role in regulating atmospheric pCO2 significantly influences global changes in atmospheric CO2 concentration.
Inorganic sources of calcium carbonate
Of all the metals important to biogeochemical cycles in the ocean, calcium is one of the most significant in both its mobility and the role it plays in regulating climate over millions of years through its presence in calcium carbonate. Calcium has the ability to migrate relatively easily between the hydrosphere, the biosphere, and the crust of the Earth.
Calcium and bicarbonate ions are largely deposited into the ocean from the weathering of rock formations and are transported via riverine input. This process occurs on very long timescales. Weathering accounts for approximately 60-90% of solute calcium within the global calcium cycle. Limestone rock, which consists mostly of calcite, is a prime example of a rich source of calcium to the ocean. The majority of the inorganic calcium present in the ocean therefore comes from riverine input, though volcanic activity interacting with seawater provides some calcium as well. This distribution of calcium sources holds for both the present-day oceanic calcium budget and the historical budget over the last 25 million years. The formation of biogenic calcium carbonate is the primary mechanism of removal of calcium in the ocean water column.
Impact of environmental factors on calcification
Rising temperature and light exposure
Marine biogenic calcifiers, such as corals, are facing challenges due to increasing ocean temperatures, leading to prolonged warming events. When sea surface temperatures exceed the local summer maximum monthly mean, coral bleaching and mortality occur as a result of the breakdown in symbiosis with Symbiodiniaceae. Predicted increases in summer-time temperatures, coupled with ocean warming, are expected to impact coral health and overall rates of calcification, particularly in tropical regions where many corals already live close to their upper thermal limits.
Corals are highly adapted to their local seasonal temperature and light conditions, influencing their physiology and calcification rates. While increased temperature or light levels typically stimulate calcification up to a certain optimum, beyond which rates decline, the effects of temperature and light on the calcifying fluid chemistry are less clear. Coral calcification is a biologically mediated process influenced by the regulation of internal calcifying fluid chemistry, including pH and dissolved inorganic carbon. The impacts of temperature and light on these factors remain a knowledge gap, with laboratory studies yielding contrasting results. Decoupling the effects of temperature and light on calcification processes is challenging due to their seasonal co-variation, highlighting the need for further research to address this gap and enhance our understanding of how marine biogenic calcifiers respond to future climate change.
Ocean acidification
Calcifying organisms are particularly at risk due to changes in the chemical composition of ocean water associated with ocean acidification. As pH decreases due to ocean acidification, the availability of carbonate ions (CO32-) in seawater also decreases. Therefore, calcifying organisms experience difficulty building and maintaining their skeletons or shells in an acidic environment. There has been considerable debate in the literature regarding whether organisms are responding to reduced pH or reduced mineral saturation state, as both variables decline with ocean acidification. Recent studies that have isolated the effects of saturation state independent of pH changes point toward saturation state as the most important factor affecting shell formation and development. Nevertheless, the carbonate chemistry still needs to be fully constrained in order to better interpret the ecological responses to ocean acidification.
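The underlying seawater carbonate equilibria, and the saturation state commonly used to quantify conditions for calcification, are shown here for reference (the symbols Ω for saturation state and Ksp for the stoichiometric solubility product are introduced here and do not appear elsewhere in the text):

$$\mathrm{CO_2 + H_2O \rightleftharpoons H_2CO_3 \rightleftharpoons H^+ + HCO_3^- \rightleftharpoons 2\,H^+ + CO_3^{2-}}$$

$$\Omega = \frac{[\mathrm{Ca^{2+}}]\,[\mathrm{CO_3^{2-}}]}{K_{sp}}$$

Additional H+ from dissolved CO2 shifts the speciation toward bicarbonate, lowering the carbonate ion concentration and hence Ω; precipitation of calcium carbonate is thermodynamically favored only when Ω is greater than 1.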
Responses of marine calcifiers to reduced carbonate ion availability are seen in different ways. For example, coral reefs experience inhibited growth at decreased pH, and existing live calcium carbonate structures are weakened. Other organisms are particularly vulnerable in the early stages of their life cycle. Bivalves, for instance, are particularly susceptible during the early larval stages of initial shell formation, since these early stages carry a high energetic cost for the individual's development. In contrast, adult bivalves are considerably more resilient to reduced pH.
Human interactions and applications
Economic importance
Shellfish industry
Ocean acidification (OA) presents a formidable threat to global shellfish production, particularly exerting its impact on calcification processes. Projections indicate that by the end of the century, mussel and oyster calcification could witness substantial reductions of 25% and 10%, respectively, as outlined in the IPCC IS92a scenario, which has an emissions trajectory that results in atmospheric CO2 reaching approximately 740 ppm in 2100. These species, integral to coastal ecosystems and representing a significant portion of global aquaculture, play crucial roles as ecosystem engineers. The anticipated decline in calcification due to OA not only jeopardizes coastal biodiversity and ecosystem functioning but also carries the potential for considerable economic losses. For example, global aquaculture production for shellfish contributed US$29.2 billion to the world economy.
Damaged shell surfaces, primarily resulting from reduced calcification rates, contribute to a significant decrease in sale prices, marking a critical economic concern. Economic assessments reveal that such damages, particularly impacting culture quasi-profits or applied cultural value, can lead to reductions ranging from 35% to 70%. Furthermore, when accounting for assumed pH-driven changes occurring concurrently, quasi-profits diminish even more substantially, reaching levels of 49% to 84% across diverse OA scenarios. Consequently, the economic fallout is substantial, with the UK facing potential direct losses of £3 to £6 billion in GDP by 2100, and globally, costs exceeding US$100 billion. These findings emphasize the urgent need for proactive measures to mitigate OA's impact on bivalve farming and underscore the importance of comprehensive climate policies to address these multifaceted challenges.
Coral reef tourism
For organisms relying on calcified structures (e.g., reef-associated organisms), OA can potentially disrupt entire ecosystems. As calcifiers play crucial roles in maintaining marine biodiversity, the repercussions of coral reef decline extend beyond economic considerations, emphasizing the urgency of comprehensive conservation efforts. Extensive degradation is occurring in the Caribbean and Western Atlantic region's coral reefs, stemming from issues like disease, overfishing, and a range of human activities. Adding to the challenges, rapid climate-induced ocean warming and acidification exacerbate the threats to these vital ecosystems. Tourism is integral to the Caribbean region, with the sector contributing over 15 percent of GDP and sustaining 13 percent of jobs in the region as a whole. In the face of these challenges, the worldwide combined economic value of coral reefs is estimated at an average of US$490 per hectare annually. Specific regions showcase the economic significance of coral reefs, with Hawai'i's reefs contributing US$360 million annually to its economy, and the Philippine economy receiving at least US$1.06 billion each year from coral reefs. In the St. Martin region, coral reefs contribute significantly, emphasizing the need for prioritized conservation and protection efforts. Proposed solutions encompass ecological measures such as water quality management, sustainable fishing practices, ecological engineering, and marine spatial planning. Additionally, socio-economic strategies involve establishing a regional reef secretariat, integrating reef health into blue economy plans, and initiating a reef labeling program to foster corporate partnerships.
See also
Ocean acidification - a threat to marine biogenic calcification
Protist shell
Seashell
Shell growth in estuaries
References
Biomineralization | Marine biogenic calcification | Chemistry | 6,689 |
40,543,716 | https://en.wikipedia.org/wiki/Bulbul%20coronavirus%20HKU11 | Bulbul coronavirus HKU11 (Bulbul-CoV HKU11) is a positive-sense single-stranded RNA Deltacoronavirus of avian origin found in Chinese bulbuls.
References
Deltacoronaviruses
Bird diseases
Animal viral diseases | Bulbul coronavirus HKU11 | Biology | 53 |
33,926,747 | https://en.wikipedia.org/wiki/Evolutionary%20aesthetics | Evolutionary aesthetics refers to evolutionary psychology theories in which the basic aesthetic preferences of Homo sapiens are argued to have evolved in order to enhance survival and reproductive success.
Based on this theory, things like color preference, preferred mate body ratios, shapes, emotional ties with objects, and many other aspects of the aesthetic experience can be explained with reference to human evolution.
Aesthetics and evolutionary psychology
Many animal and human traits have been argued to have evolved in order to enhance survival and reproductive success. Evolutionary psychology extends this to psychological traits, including aesthetic preferences. Such traits are generally seen as adaptations to the environment during the Pleistocene era and are not necessarily adaptive in our present environment. Examples include disgust of potentially harmful spoiled foods; pleasure from sex and from eating sweet and fatty foods; and fear of spiders, snakes, and the dark.
All known cultures have some form of art. This universality suggests that art is related to evolutionary adaptations. The strong emotions associated with art suggest the same.
Landscape and other visual arts preferences
An important choice for a mobile organism is selecting a good habitat to live in. Humans are argued to have strong aesthetical preferences for landscapes which were good habitats in the ancestral environment. When young human children from different nations are asked to select which landscape they prefer, from a selection of standardized landscape photographs, there is a strong preference for savannas with trees. The East African savanna is the ancestral environment in which much of human evolution is argued to have taken place. There is also a preference for landscapes with water, with both open and wooded areas, with trees with branches at a suitable height for climbing and taking foods, with features encouraging exploration such as a path or river curving out of view, with seen or implied game animals, and with some clouds. These are all features that are often featured in calendar art and in the design of public parks.
A survey of art preferences in many different nations found that realistic painting was preferred. Favorite features were water, trees as well as other plants, humans (in particular beautiful women, children, and well-known historical figures), and animals (in particular both wild and domestic large animals). Blue, followed by green, was the favorite color. Using the survey, the study authors constructed a painting showing the preferences of each nation. Despite the many different cultures, the paintings all showed a strong similarity to landscape calendar art. The authors argued that this similarity was in fact due to the influence of the Western calendar industry. Another explanation is that these features are those evolutionary psychology predicts should be popular for evolutionary reasons.
Physical attractiveness
Various evolutionary concerns have been argued to influence what is perceived to be physically attractive.
Such evolutionary based preferences are not necessarily static but may vary depending on environmental cues. Thus, the availability of food influences which female body size is attractive, a preference which may have evolutionary reasons. Societies with food scarcities prefer larger female body size than societies having plenty of food. In Western society, males who are hungry prefer a larger female body size than they do when not hungry.
Mate selection
An important adaptive function of courtship seems to be the selection of a mating partner with characteristics that would likely optimize reproductive success (selection for fitness). Such features include particular male or female characteristics that have aesthetic appeal to the opposite sex. Sexual selection tends to give rise to competition between individuals of the same gender. Darwin regarded such competition as having molded numerous aspects of animal behavior. Darwin particularly emphasized the striking evolution of aesthetic display in male birds. He also considered that a similar process had occurred in humans leading, for example, to the evolution of female beauty and sweeter voice and, in males, to the beard.
Evolutionary musicology
Evolutionary musicology is a subfield of biomusicology that grounds the psychological mechanisms of music perception and production in evolutionary theory. It covers vocal communication in non-human animal species, theories of the evolution of human music, and cross-cultural human universals in musical ability and processing. It also includes evolutionary explanations for what is considered aesthetically pleasing or not.
Darwinian literary studies
Darwinian Literary Studies (aka Literary Darwinism) is a branch of literary criticism that studies literature, including aesthetical aspects, in the context of evolution.
Evolution of emotion
Aesthetics are tied to emotions. There are several explanations regarding the evolution of emotion.
One example is the emotion of disgust which has been argued to have evolved in order to avoid several harmful actions such as infectious diseases due to contact with spoiled foods, feces, and decaying bodies.
Sexy son hypothesis, handicap principle, and arts
The sexy son hypothesis suggests that a female’s optimal choice among potential mates is a male whose genes will produce male offspring with the best chance of reproductive success by having trait(s) being attractive to other females. Sometimes the trait may have no reproductive benefit in itself, apart from attracting females, because of Fisherian runaway. The peacock's tail may be one example. It has also been seen as an example of the handicap principle.
It has been argued that the ability of the human brain by far exceeds what is needed for survival on the savanna. One explanation could be that the human brain and associated traits (such as artistic ability and creativity) are the equivalent of the peacock's tail for humans. According to this theory superior execution of art was important because it attracted mates.
References
Sociobiology
Human evolution
Evolutionary psychology
Movements in aesthetics
de:Evolutionäre Ästhetik | Evolutionary aesthetics | Biology | 1,083 |
1,007,168 | https://en.wikipedia.org/wiki/Stored%20Waste%20Examination%20Pilot%20Plant | The Stored Waste Examination Pilot Plant (SWEPP) is a facility at the Idaho National Laboratory for nondestructively examining containers of radioactive waste to determine if they meet criteria to be stored at the Waste Isolation Pilot Plant. SWEPP is part of the Radioactive Waste Management Complex, located southwest of EBR-I.
External links
Radioactive Waste Management Complex at the Idaho National Laboratory
Industrial buildings and structures in Idaho
Radioactive waste
Nuclear technology in the United States | Stored Waste Examination Pilot Plant | Physics,Chemistry,Technology | 92 |
57,184,777 | https://en.wikipedia.org/wiki/Mierzanowice%20culture | The Mierzanowice culture appeared in the area of the upper and middle basin of the Vistula, during the Early Bronze Age. It evolved from the so-called Proto-Mierzanowice cultural unit. The name of the culture comes from an eponymous site in Mierzanowice, where the cemetery was located. This entity was part of the pre-carpathian sphere epicorded cultures and it has been divided into three local groups: Samborzecka, Iwanowicka and Pleszowska. The initial phases of the culture are characterized by a small number of burials, seasonal settlements and single artifacts. The area of the Mierzanowice culture spread over from western Slovakia, through south - eastern Poland, reaching in the east the areas of the Volhynian Upland. It was followed by the Trzciniec culture.
Chronology
Based on relative dating, the Mierzanowice culture appeared in the Early Bronze Age. According to the archaeologist Jan Machnik, we can distinguish an older and a younger phase of this cultural unit. The discovery in Szarbia Zwierzyniecka allowed for a certain "rejuvenation" of the Mierzanowice culture as a result of the distinction of its late phase called szarbiańska. The younger phase of the Mierzanowice culture ended at the end of "Bronze A1" and the beginning of "Bronze A2" according to Paul Reinecke's chronology.
Burial
Cemeteries of the Mierzanowice culture population were established near the settlements. The largest cemeteries contained from 150 to 300 burials. Burials were mainly inhumations. Human remains were placed in oval or rectangular burial pits or in coffins made of wooden logs. There were two systems for arranging the bodies: in a contracted (flexed) position and in an extended position. The bodies of men were buried on the right side while those of women were buried on the left side. The results of investigations conducted at the cemeteries of the Mierzanowice culture showed a small preponderance of men's graves over women's graves, the latter having a slightly poorer grave inventory.
Settlement
Proto-Mierzanowice appears with the arrival of Bell Beakers in the western part of Lesser Poland, around 2400–2300 BC, possibly representing an infiltration of groups rather than a massive migration. These groups were very mobile, with traces found from Moravia to Volhynia. Settlements of the Mierzanowice culture in most cases are represented by small and seasonal camps. Settlements with a larger area were founded on hills with a naturally defensive character, near water reservoirs. A relatively large proportion of the archaeological sites of this culture is found on loess uplands. The best-known settlement of the Mierzanowice culture is the archaeological site called Iwanowice. In the classical phase of the Mierzanowice culture, the settlements were mostly accompanied by cemeteries.
Artefacts
Among the most common objects discovered at archaeological sites of the Mierzanowice culture are axes and sickles. Other artifact types include necklaces made of faience and bone. The Mierzanowice culture is well known for its earrings in the shape of a willow leaf, often produced in local workshops. Military objects discovered in the settlements are primarily leaf-shaped arrowheads. Faience beads are an extremely common element of the funeral inventory. The next category is pottery. In the pottery vessels, influences of the Corded Ware culture are visible. Pottery of the late phase of the Mierzanowice culture is characterized by a huge variety of forms and ornamentation. Musical panpipes made from bone have been found at Mierzanowice sites. Shell beads, bone pendants, ceramic vessels and other artifacts found in the graves indicate a rich and complex culture.
Genetics
Juras et al. (2020) examined the mtDNA of 39 individuals ascribed to the Mierzanowice culture. The individuals appeared to be closely related to peoples of the Corded Ware culture, Bell Beaker culture, Unetice culture, and the Trzciniec culture but were notably genetically different from peoples of the neighboring Strzyżów culture, which displayed closer genetic relations to cultures further east.
Bibliography
Górski J., Kadrow S., Kultura mierzanowicka i kultura trzciniecka w zachodniej Małopolsce – problem zmiany kulturowej, [w:] Sprawozdania archeologiczne, T. XLVIII, 1996, Kraków.
Kadrow S., Machnik J., Kultura mierzanowicka: chronologia, taksonomia i rozwój przestrzenny, wyd. oddziału PAN, 1997, Kraków.
Kmieciński J., Pradzieje Ziem Polskich, T.I, cz.2 Epoka Brązu i początki Epoki Żelaza, wyd. PWN, 1989, Warszawa-Łódź.
References
Works cited
Archaeological cultures in Poland
Chronology
Bronze Age | Mierzanowice culture | Physics | 1,042 |
1,753,261 | https://en.wikipedia.org/wiki/Biographical%20Dictionary%20of%20Civil%20Engineers | The Biographical Dictionary of Civil Engineers in Great Britain and Ireland discusses the lives of the people who were concerned with building harbours and lighthouses, undertook fen drainage and improved river navigations, built canals, roads, bridges and early railways, and provided water supply facilities. The first volume, published in 2002, covers the years from 1500 to 1830; the second one, published in 2008, covers 1830 to 1890. The third and final volume, published 2014, covers 1890 to 1920. The principal editor of the first volume was Professor A. W. Skempton, and the entries were written by a number of specialist historians.
An 18-page introduction in the first volume discusses the practice of civil engineering from 1500-1830. The work concludes with appendices discussing wages, costs and inflation, a chronology of major civil engineering works, and indices of places and names. Volume Two's introduction discusses the practice of civil engineering from 1830-1890.
See also
List of civil engineers
References
2002 non-fiction books
Civil Engineers, Biographical Dictionary of
British biographical dictionaries | Biographical Dictionary of Civil Engineers | Engineering | 215 |
73,394,958 | https://en.wikipedia.org/wiki/Tedral | Tedral, or theophylline/ephedrine/phenobarbital, is a medicine formerly used to treat respiratory diseases such as asthma, chronic obstructive lung disease (COPD), chronic bronchitis, and emphysema. It is a combination drug containing three active ingredients - theophylline, ephedrine, phenobarbital. This medication relaxes the smooth muscle of the airways, making breathing easier. The common side effects of Tedral include gastrointestinal disturbances, dizziness, headache and lightheadedness. However, at high dose, it may lead to cardiac arrhythmias, hypertension, seizures or other serious cardiovascular and/or central nervous system adverse effects. Tedral is contraindicated in individuals with hypersensitivity to theophylline, ephedrine and/or phenobarbital. It should be also used in caution in patients with cardiovascular complications, such as ischemic heart disease and heart failure and/or other disease conditions. It can cause a lot of drug–drug interactions. Therefore, before prescribing patient with Tedral, drug interactions profile should be carefully checked if the patient had other concurrent medication(s). Being used as a treatment option for respiratory diseases for decades, Tedral was withdrawn from the US market in 2006 due to safety concerns.
Medical uses
Tedral is an oral bronchodilator which contains three active ingredients: (1) theophylline, (2) ephedrine, and (3) phenobarbital. It was indicated for the symptomatic relief of asthmatic bronchitis, chronic bronchial asthma, COPD and other bronchospastic disorders. It was usually used as an add-on therapy in asthmatic patients with inadequate symptomatic control even with inhaled bronchodilators or inhaled corticosteroids. It could also be used as a prophylactic treatment for the prevention of asthmatic attacks.
Mechanism of action
There are three active ingredients in Tedral and they have different mechanisms of action.
Theophylline
Theophylline relaxes the bronchial smooth muscle and pulmonary artery smooth muscle. In addition, it also reduces the airway responsiveness to allergens, adenosine, methacholine, and histamine by two distinct mechanisms:
First, it acts as a competitive nonselective phosphodiesterase inhibitor, inhibiting type III and type IV phosphodiesterase. The inhibition of type III and type IV phosphodiesterase leads to an increase in the concentration of intracellular cAMP, which then activates protein kinase A and inhibits TNF-alpha and leukotriene synthesis, thereby suppressing inflammation and innate immunity.
Second, theophylline is also a nonselective adenosine receptor antagonist, which acts on A1, A2, and A3 receptors with almost the same affinity. This possibly explains theophylline's cardiac effects. Theophylline also enhances diaphragmatic muscle contractility through adenosine-mediated channels by promoting calcium uptake.
Other mechanisms of action of theophylline have also been proposed. These include the inhibition of nuclear factor-kappaB (NF-kappaB), which prevents the translocation of this pro-inflammatory transcription factor to the nucleus, thereby reducing the expression of known inflammatory genes in conditions such as COPD and asthma. Additionally, theophylline increases the secretion of interleukin-10, which has broad anti-inflammatory effects. It also decreases poly (ADP-ribose) polymerase-1 (PARP-1), promotes apoptosis of inflammatory cells, including T cells and neutrophils, and increases levels of histone deacetylase 2 by inhibiting phosphoinositide 3-kinase-delta.
Ephedrine
Ephedrine, a stereoisomer of pseudoephedrine, acts as both a direct and an indirect sympathomimetic amine. Its indirect mechanism distinguishes it from other sympathomimetic agents such as pseudoephedrine and phenylephrine.
It directly binds to both alpha and beta receptors. However, its primary mechanism of action is indirect, achieved by the inhibition of neuronal norepinephrine reuptake and the displacement of more norepinephrine from storage vesicles. These actions prolong the presence of norepinephrine in the synapse for binding to postsynaptic alpha and beta receptors, thereby leading to alpha- and beta-adrenergic stimulation.
The stimulation of alpha-1-adrenergic receptors in vascular smooth muscle cells leads to an increase in systemic vascular resistance and, thus, systolic and diastolic blood pressure. Direct stimulation of beta-1 receptors by ephedrine and norepinephrine also increases cardiac chronotropy and inotropy. Lastly, stimulation of beta-2-adrenergic receptors in the lungs results in bronchodilation, although this effect is less pronounced than the effects seen in the cardiovascular system.
Phenobarbital
Phenobarbital prolongs the time that chloride channels are open, thereby depressing the central nervous system. This is accomplished by acting on GABA-A receptor subunits. When phenobarbital binds to these receptors, the chloride ion gates open and remain open, allowing these ions to enter neuronal cells steadily. This action causes the cell membrane to hyperpolarize, leading to a rise in the action potential threshold.
Adverse effects
Theophylline
Due to the presence of theophylline in Tedral, the most common side effects of this drug include:
Gastrointestinal: nausea and vomiting, increased stomach acid secretion, and gastroesophageal reflux. These could be due to the inhibition of phosphodiesterase;
Central nervous system: headache, lightheadedness, dizziness, insomnia, restlessness, and irritability.
However, at high serum concentrations, some serious adverse effects may occur:
Cardiovascular: convulsions, cardiac arrhythmias. These could be due to adenosine A1-receptor antagonism
Central nervous system: seizures, non-convulsive status epilepticus
Other adverse side effects include:
Neuromuscular and skeletal: tremor
Genitourinary: difficulty in micturition in males with prostatism, transient diuresis
Endocrine and metabolic: hypercalcemia in patients with concomitant hyperthyroid disease
Ephedrine
Ephedrine has both alpha- and beta-agonist effects. Owing to its sympathomimetic effect, the common side effects of Tedral include:
Gastrointestinal: nausea, vomiting
Cardiovascular: tachycardia, hypertension, irregular pulse, palpitations, bradycardia
Central nervous system: dizziness, anxiety, restlessness, and insomnia
Besides, ephedrine can cause cardiac arrhythmias. When ephedrine is used long-term, the catecholamine excess can bring about contraction band necrosis of the myocardium, which predisposes the heart to ventricular arrhythmias.
Phenobarbital
Phenobarbital also results in the adverse effects of Tedral. The most common side effects caused by phenobarbital are dizziness, sedation, incoordination, and impaired balance. However, these adverse effects affect geriatric patients to a greater extent.
With long-term use of phenobarbital, loss of appetite, depression, irritability, aching in the bones, joints, or muscles, and liver damage may occur.
Other reported adverse reactions include:
Cardiovascular: hypotension, bradycardia, syncope
Central nervous system: confusion, agitation, somnolence, ataxia, hyperkinesia, hallucinations, anxiety, nightmares, thinking abnormality, psychiatric disturbance
Respiratory: hypoventilation, apnea
Gastrointestinal: nausea, vomiting, constipation
Dermatologic reactions: exfoliative dermatitis, toxic epidermic necrolysis, Stevens-Johnson syndrome (rare)
Contraindications
Theophylline
Because one of the active ingredients in Tedral is theophylline, Tedral is contraindicated if the patient has:
Hypersensitivity to xanthine derivatives
Coronary artery disease (cardiac stimulating effects of Theophylline may prove harmful)
Peptic ulcer
Concomitant use with ephedrine in children.
Ephedrine
Because Tedral also contains Ephedrine, Tedral is contraindicated for patients who have:
Acute hypertension
Tachycardia
Ephedrine raises both chronotropy and inotropy, increasing myocardial oxygen demand. Therefore, it has to be used with caution in patients with ischemic heart disease or heart failure. It should also be avoided in situations where tachycardia would be undesirable, for example aortic stenosis.
Ephedrine's alpha-adrenergic stimulation causes contraction of the smooth muscle at the base of the bladder, resulting in resistance to urine outflow. Therefore, Tedral has to be used cautiously in patients with urinary retention or prostatic hyperplasia.
In addition, due to excessive norepinephrine availability at the synapse, which could induce a hypertensive crisis via the indirect sympathomimetic effect of ephedrine, Tedral should be avoided or used with caution within 14 days of monoamine oxidase inhibitor (MAOI) therapy.
Phenobarbital
Tedral is also composed of phenobarbital, therefore, it is contraindicated for individuals with:
Hypersensitivity to phenobarbital, barbiturates or any component of the formulation.
A history/manifest or latent porphyria
Liver impairment
Nephritic syndrome (at high dose)
A history of sedative-hypnotic drug addiction
Drug interactions
Theophylline
Due to the presence of theophylline, Tedral interacts with:
Adenosine
Allopurinol
Alcohol
Anti-psychotic agents
Antithyroid agents
Barbiturates
Benzodiazepines
Bupropion
Beta-2 agonists
Beta blockers
CYP1A2 inhibitors
CYP1A2 inducers
Cambendazole
Clarithromycin
Erythromycin
Interleukin-6 (IL-6) inhibiting therapies
Iohexol
Levothyroxine
Methotrexate
Quinine
Verapamil
Zafirlukast
Theophylline causes Tedral to interact with the following diseases:
Peptic ulcer disease (PUD)
Renal dysfunction
Seizure disorders
Ephedrine
Because of the presence of ephedrine, Tedral interacts with:
Alkalinizing agents
Alpha-1 blockers
Beta blockers
Cannabinoid-containing products
Carbonic anhydrase inhibitors
Clonidine
Clozapine
Inhalation Anesthetics
Iobenguane radiopharmaceutical products
Monoamine oxidase inhibitors
Quinidine
Serotonin / norepinephrine reuptake inhibitors
Sympathomimetics
Tricyclic antidepressants
Urinary acidifying agents
Since ephedrine is one of the active ingredients in Tedral, Tedral interacts with the following diseases:
Cardiovascular disease
Benign prostatic hyperplasia (BPH)
Diabetes
Phenobarbital
Since Tedral contains phenobarbital, it interacts with:
Acetaminophen
Blood pressure lowering agents
Cannabinoid-containing products
CNS depressants
Doxycycline
Local anesthetics
Magnesium Sulfate
Procarbazine
Quinine
With phenobarbital being one of the active ingredients in Tedral, Tedral interacts with the following diseases:
Acute alcohol intoxication
Drug dependence
Liver disease
Porphyria
Rash
Respiratory depression
Cardiovascular
Prolonged hypotension
Renal dysfunction
History
The history of Tedral can be traced back to the early 20th century when theophylline was first isolated from tea leaves and later found to have bronchodilator properties. In the 1920s and 1930s, ephedrine was introduced as a treatment for asthma and other respiratory conditions due to its bronchodilating effect and ability to increase blood flow to the lungs.
The combination of theophylline and ephedrine was first used in the 1940s as a treatment for asthma, and the addition of a barbiturate such as pentobarbital or phenobarbital was later added to enhance the sedative effects of the medication and improve patient compliance.
Tedral was first marketed by the pharmaceutical company Eli Lilly and Company in the 1950s as a treatment for asthma and other respiratory conditions, and later sold to Novartis Pharmaceuticals Corporation. It was widely used throughout the 1960s and 1970s, but its popularity declined in the 1980s due to the development of newer, more effective medications for asthma and COPD, such as inhaled corticosteroids, long-acting beta-agonists, leukotriene modifiers and immunomodulators.
Tedral was withdrawn from the US market in 2006 due to safety concerns related to the use of ephedrine. The US Food and Drug Administration (FDA) had previously issued warnings about the use of ephedrine-containing products due to their potential for serious side effects, including heart attack, stroke, and death. In response, many pharmaceutical companies voluntarily removed their ephedrine-containing products from the market. In the case of Tedral, its manufacturer, Novartis Pharmaceuticals Corporation, voluntarily withdrew the medication from the market in 2006 after the FDA issued a warning letter to the company citing safety concerns related to the use of ephedrine.
See also
Theophylline/ephedrine
References
Bronchodilators
Combination asthma drugs
Combination COPD drugs
Withdrawn drugs | Tedral | Chemistry | 2,933 |
5,499,671 | https://en.wikipedia.org/wiki/Soil%20production%20function | Soil production function refers to the rate of bedrock weathering into soil as a function of soil thickness. A general model suggests that the rate of physical weathering of bedrock (de/dt) can be represented as an exponential decline with soil thickness:
where h is soil thickness [m], P0 [mm/year] is the potential (or maximum) weathering rate of bedrock and k [m−1] is an empirical constant.
The reduction of weathering rate with thickening of soil is related to the exponential decrease of temperature amplitude with increasing depth below the soil surface, and also the exponential decrease in average water penetration (for freely-drained soils). Parameters P0 and k are related to the climate and type of parent material. The value of P0 was found to range from 0.08 to 2.0 mm/yr for sites in northern California, and 0.05–0.14 mm/yr for sites in southeastern Australia. Meanwhile values of k do not vary significantly, ranging from 2 to 4 m−1.
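As a rough illustration of how these parameters combine, the following short Python sketch evaluates the exponential soil production function over a range of soil thicknesses; the parameter values are chosen from the ranges quoted above purely for illustration and do not correspond to any particular site.

```python
import numpy as np

# Illustrative evaluation of the soil production function de/dt = P0 * exp(-k * h).
# P0 and k are example values picked from the ranges quoted above
# (P0 ~ 0.05-2.0 mm/yr, k ~ 2-4 m^-1); they are not measurements.
P0 = 0.1   # potential (maximum) weathering rate of exposed bedrock [mm/yr]
k = 3.0    # empirical decay constant [m^-1]

for h in np.arange(0.0, 1.25, 0.25):   # soil thickness [m]
    rate = P0 * np.exp(-k * h)         # weathering (soil production) rate [mm/yr]
    print(f"h = {h:.2f} m  ->  de/dt = {rate:.4f} mm/yr")
```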
Several landscape evolution models have adopted the so-called humped model. This model dates back to G.K. Gilbert's Report on the Geology of the Henry Mountains (1877). Gilbert reasoned that the weathering of bedrock was fastest under an intermediate thickness of soil and slower under exposed bedrock or under thick mantled soil. This is because chemical weathering requires the presence of water. Under thin soil or exposed bedrock water tends to run off, reducing the chance of the decomposition of bedrock.
See also
Biorhexistasy
Hillslope evolution
Pedogenesis
Soil functions
Notes and references
Further reading
Mathematical modeling
Pedology | Soil production function | Mathematics | 339 |
416,830 | https://en.wikipedia.org/wiki/ONERA | The Office National d'Études et de Recherches Aérospatiales (English: National office for aerospace studies and research) or ONERA, dubbed The French Aerospace Lab in English, is the French national aerospace research center. Originally founded as the Office National d’Études et de Recherches Aéronautiques (National Office for Aeronautical Studies and Research) in 1946, it was relabeled in 1963.
It is France's leading research center in aerospace and defense. It covers all disciplines and technologies in the field. Numerous high-profile French and European aerospace programs have passed through the ONERA since its creation including the Ariane family of launch vehicles, the Concorde supersonic airliner, the Dassault Mirage family of fighter aircraft and the Rafale, the Dassault Falcon family of business jets, Aérospatiale and later Airbus projects, missiles, engines, radars and many more.
Under the supervision of the Ministry of the Armed Forces, it is a public industrial and commercial establishment employing around 2,000 people, the majority of whom are researchers, engineers and technicians, with half of its budget coming from government subsidies. The ONERA has extensive testing and computing resources, including the largest wind tunnel fleet in Europe. The ONERA's chairman is appointed by the Council of Ministers on the recommendation of the Minister of the Armed Forces.
History
ONERA's historic roots are in Meudon, a suburb south of Paris. As early as 1877, the Chalais-Meudon site hosted an aeronautical research center for military aerostats (balloons): Etablissement central de l’aérostation militaire.
ONERA was created in May 1946 to relaunch aeronautics research, an activity that had gone into hibernation during the Second World War and the German occupation. Its creation reflected the government's decision to recover the large wind tunnel in Ötztal, Austria, in the French administrative zone, and move it to France. Today, ONERA's extensive array of wind tunnels is one of its main assets. ONERA operates a world-class fleet of wind tunnels, the largest in Europe. The S1MA wind tunnel at Modane-Avrieux, developing 88 MW of total power, is Europe's largest transonic wind tunnel (tests at Mach 0.05 to Mach 1).
Organization
The Chairman and CEO of ONERA is appointed by the French Council of Ministers, acting on a proposal by the Minister of Defense. Since June 2014, the Chairman and CEO has been Bruno Sainjon.
Sites of ONERA facilities
ONERA is organized in eight geographic areas. It has about 2,000 employees, with 1,500 engineers and scientists (including 230 doctoral candidates), as well as support staff.
Three centers in the greater Paris area (Ile-de-France):
ONERA Palaiseau, current headquarters
ONERA Châtillon
ONERA Meudon
Two centers in the Midi-Pyrenees region of southwest France:
ONERA Toulouse, near the leading aeronautical engineering schools ISAE-Sup’Aéro and ENAC
ONERA Fauga–Mauzac (wind tunnels), south of Toulouse.
Three other centers:
ONERA Lille, northern France (formerly the Lille Fluid Mechanics Institute)
ONERA Salon de Provence, southern France, on the site of the Ecole de l’air flying school
ONERA Modane-Avrieux (wind tunnels), in the Savoy region of southeast France.
Scientific departments
ONERA is organized in four scientific branches: Fluid Mechanics and Energetics; Materials and Structures; Physics; and Information Processing and Systems. Wind tunnel testing is managed in the GMT (Grands Moyens Techniques) department. Aerospace prospective depends on a specific department.
The Direction Technique et des Programmes (DTP) comprises the following departments:
DAAA - Aérodynamique, aéroélasticité, acoustique.
DEMR - Électromagnétisme et radar.
DMAS - Matériaux et structures.
DMPE - Multi-physique pour l'énergétique.
DOTA - Optique et techniques associées.
DPHY - Physique, instrumentation, environnement, espace.
DTIS - Traitement de l'information et systèmes.
Missions
Unlike NASA in the United States, ONERA is not an agency for space science and exploration. However, it carries out a wide range of research for space agencies, both CNES in France and the European Space Agency (ESA), as well as for the French defense agency, DGA (Direction générale de l’armement). ONERA also independently conducts its own long-term research to anticipate future technology needs. It focuses on scientific research, for example in aerodynamics for concrete applications on aircraft, the design of launchers and new defense technologies, such as drones or unmanned aerial systems (UAS).
ONERA also uses its research and innovation capabilities to support both French and European industry. ONERA has contributed to a number of landmark aerospace and defense programs in recent decades, including Airbus, Ariane, Rafale, Falcon, Mirage and Concorde.
Rockets
Various rockets have been developed through ONERA, some of which are listed below:
Daniel
Antarès
Bérénice
Tibère
Tacite
Titus
LEX
Commercial partnerships
ONERA's customer-partners include Airbus, Safran (Snecma, Turbomeca, Sagem), Dassault Aviation, Thales and other major industry players. Innovative small businesses are also encouraged to call on the expertise of ONERA's scientists and engineers, and to take advantage of technology transfer opportunities. The company Tefal was created by two ONERA engineers, the inventors of the “non-stick pan”. These products were produced and sold by Tefal S.A., which was subsequently acquired by Groupe SEB.
Notes and references
See also
French space program
CNES
Aerospace Valley
External links
Aviation organizations based in France
Research institutes in France
Space program of France
Aerospace engineering organizations
Computer science institutes in France
Scientific agencies of the government of France
French companies established in 1946
Onera sounding rockets | ONERA | Engineering | 1,251 |
67,719,992 | https://en.wikipedia.org/wiki/T.%20V.%20Rajan%20Babu | T.V. (Babu) RajanBabu is an organic chemist who holds the position of Distinguished Professor of Chemistry in the College of Arts and Sciences at the Ohio State University. His laboratory traditionally focuses on developing transition metal-catalyzed reactions. RajanBabu is known for helping develop the Nugent-RajanBabu reagent (Bis(cyclopentadienyl)titanium(III) chloride), a chemical reagent used in synthetic organic chemistry as a single electron reductant.
Education and professional experience
RajanBabu received his B. Sc (Special) from Kerala University in 1969 and M. Sc. degree from The Indian Institute of Technology (IIT, Madras) in 1971. He obtained his Ph. D. from The Ohio State University in 1979 working with Professor Harold Shechter, and was a postdoctoral fellow at Harvard University with Professor R. B. Woodward from 1978 to 1979. Notable work during his postdoctoral career includes the total synthesis of erythromycin. RajanBabu was a Member of Research Staff and Research Fellow at DuPont Central Research from 1980 to 1994 until joining the Ohio State University faculty as a Professor of Chemistry in 1995.
Research
Research in the RajanBabu lab is focused on development of new methodology for stereoselective synthesis. Major research areas include:
Asymmetric Hydrovinylation
RajanBabu developed methodology surrounding C-C bond formation via metal-catalyzed hydrovinylation. His group reported several asymmetric examples through the use of chiral phosphine ligands bearing a hemilabile coordinating group. This method was applicable using vinylarenes, 1,3-dienes and strained olefins as substrates. Applications of this chemistry include a new synthesis of (S)-ibuprofen and a new approach to controlling the exocyclic side-chain stereochemistry in helioporin D and the pseudopterosins. Related to this methodology, RajanBabu also developed a tandem [2+2] cycloaddition/asymmetric hydrovinylation reaction to allow conversion of simple precursors (ethylene, enynes) to structurally complex cyclobutanes.
Asymmetric Hydrocyanation
The RajanBabu group developed methodology in the area of hydrocyanation, leveraging the reaction of vinylarenes with HCN in the presence of Ni(0) complexes. Based on the phosphorus ligands within the Ni complex, the reaction can be rendered asymmetric. The enantioselectivity could be further improved by tuning the electronics of the phosphine ligands to electronically differentiate the phosphorus chelates. Electronic tuning was accomplished, for example, using widely available sugars such as D-glucose and D-fructose.
Radical Epoxide Opening
For further information on the Nugent-RajanBabu reagent, please see Bis(cyclopentadienyl)titanium(III) chloride.
Multicomponent Cyclization
One area of interest to the RajanBabu group is catalytic multicomponent addition/cyclization reactions. This methodology allows for formation of carbocyclic and heterocyclic compounds from acyclic precursors including unactivated olefins and acetylenes. This method leverages the reactivity of bifunctional reagents (X-Y) where X-Y in above scheme can represent R3Si−SiR‘3, R3Si−SnR‘3, R3Si−BR‘2, R3Sn−BR‘2, and trialkylsilicon- and trialkyltin- hydrides. The reactions are palladium-catalyzed, and incorporation of the X and Y species allows for vast diversification of the end products. Application of this methodology afforded syntheses of highly alkylated indolizidines such as IND-223A.
Additional Methods
RajanBabu has evaluated asymmetric aziridine openings with high enantioselectivity using yttrium- and lanthanide- salen complexes. The RajanBabu group has also developed water-soluble Rhodium(I) complexes, allowing for reactions to be run in aqueous media.
Publications
RajanBabu has over 160 publications to date and has co-authored several reviews and patents. His H-index is 56.
Notable publications include:
Group-transfer polymerization. 1. A new concept for addition polymerization with organosilicon initiators
Selective Generation of Free Radicals from Epoxides Using a Transition-Metal Radical. A Powerful New Tool for Organic Synthesis
Transition-metal-centered radicals in organic synthesis. Titanium(III)-induced cyclization of epoxy olefins
Ligand Electronic Effects in Asymmetric Catalysis: Enhanced Enantioselectivity in the Asymmetric Hydrocyanation of Vinylarenes
Beyond Nature's Chiral Pool - Enantioselective Catalysis in Industry
Honors
Arthur C. Cope Scholar Awards (2020)
Chemical Research Society of India Medal (2013)
Fellow of the American Association for the Advancement of Science (2012)
Distinguished Alumnus, Indian Institute of Technology, Madras (2008)
References
External links
Year of birth missing (living people)
Living people
Organic chemists
Ohio State University faculty
University of Kerala alumni
IIT Madras alumni
Ohio State University alumni | T. V. Rajan Babu | Chemistry | 1,115 |
7,093,376 | https://en.wikipedia.org/wiki/PVR-resistant%20advertising | PVR (DVR)-resistant advertising is a form of advertising which is designed specifically to remain viewable despite a user skipping through the commercials when using a device such as a TiVo or other digital video recorder. For instance, a black bar with a product's tagline and logo or the title of a promoted television program or film and its release date may appear on the top of the screen and remain visible much longer being fast-forwarded than a usual commercial.
This was first used by cable network FX's British channel when advertising Brotherhood.
References
Advertising by medium
Digital video recorders | PVR-resistant advertising | Technology | 122 |
47,921,089 | https://en.wikipedia.org/wiki/Tmem110 |
Definition and cellular localization
TMEM110, also designated as STIMATE (for STIM-activating enhancer), is an ER-resident multi-transmembrane protein identified through a proteomic study on the ER-PM junctions. The ER-PM junctions are defined as specialized junctional sites, also known as membrane contact sites, that connect the endoplasmic reticulum (ER) and the plasma membrane (PM), and are closely implicated in controlling lipid and calcium homeostasis in mammalian cells.
Function
TMEM110 is a positive modulator of calcium flux mediated by the STIM-ORAI signaling in vertebrates. STIMATE can physically associate with STIM1 to promote conformational switch of STIM1 from inactive toward an activated state, thereby coupling to and gating the ORAI calcium channels on the plasma membrane.
Depletion of TMEM110 with RNAi knockdown or Cas9-mediated gene disruption substantially reduces the puncta formation of STIM1 at ER-PM junctions and remarkably inhibits the calcium/calcineurin/NFAT signaling axis. Further genetic and biochemical studies are needed to uncover additional, as yet uncharted functions of this ER-resident protein at ER-PM junctions.
References
Proteins | Tmem110 | Chemistry | 265 |
39,378,034 | https://en.wikipedia.org/wiki/Minimum%20rank%20of%20a%20graph | In mathematics, the minimum rank is a graph parameter for a graph G. It was motivated by the Colin de Verdière graph invariant.
Definition
The adjacency matrix of an undirected graph is a symmetric matrix whose rows and columns both correspond to the vertices of the graph. Its elements are all 0 or 1, and the element in row i and column j is nonzero whenever vertex i is adjacent to vertex j in the graph. More generally, a generalized adjacency matrix is any symmetric matrix of real numbers with the same pattern of nonzeros off the diagonal (the diagonal elements may be any real numbers). The minimum rank of the graph is defined as the smallest rank of any generalized adjacency matrix of the graph; it is denoted by mr(G).
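As an illustrative example (with a specific matrix chosen here for concreteness), consider the path P3 with edges {1,2} and {2,3}. One generalized adjacency matrix is

$$A = \begin{pmatrix} 1 & 1 & 0 \\ 1 & 2 & 1 \\ 0 & 1 & 1 \end{pmatrix},$$

whose middle row is the sum of the other two, so A has rank 2. No generalized adjacency matrix of P3 can have rank 1, because a symmetric rank-one matrix is of the form ±vvT, and nonzero entries in positions (1,2) and (2,3) would then force a nonzero entry in position (1,3) as well. Hence mr(P3) = 2, in agreement with the path-graph characterization below.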
Properties
Here are some elementary properties.
The minimum rank of a graph is always at most equal to n − 1, where n is the number of vertices in the graph.
For every induced subgraph H of a given graph G, the minimum rank of H is at most equal to the minimum rank of G.
If a graph is disconnected, then its minimum rank is the sum of the minimum ranks of its connected components.
The minimum rank is a graph invariant: isomorphic graphs necessarily have the same minimum rank.
Characterization of known graph families
Several families of graphs may be characterized in terms of their minimum ranks.
For n ≥ 2, the complete graph Kn on n vertices has minimum rank one. The only graphs that are connected and have minimum rank one are the complete graphs.
A path graph Pn on n vertices has minimum rank n − 1. The only n-vertex graphs with minimum rank n − 1 are the path graphs.
A cycle graph Cn on n vertices has minimum rank n − 2 (a numerical check of this value for C4 is sketched after this list).
Let G be a 2-connected graph. Then mr(G) = n − 2 if and only if G is a linear 2-tree.
A graph G has mr(G) ≤ 2 if and only if the complement of G is of the form (Ks1 ∪ Ks2 ∪ Kp1,q1 ∪ ⋯ ∪ Kpk,qk) ∨ Kr for appropriate nonnegative integers k, s1, s2, p1, q1, …, pk, qk, r with pi + qi > 0 for all i.
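The following short Python sketch gives an illustrative numerical check that the cycle C4 admits a generalized adjacency matrix of rank 2 = n − 2, matching the cycle-graph entry above. The example matrix is constructed here as a Gram matrix of rank-2 vectors chosen for this purpose, and numpy's numerical rank is used only as a sanity check, not as a proof.

```python
import numpy as np

# Vertices of C4 in cyclic order: 0-1-2-3-0.
# Choose rank-2 vectors whose Gram matrix has nonzero off-diagonal entries
# exactly on the edges of C4, giving a generalized adjacency matrix of rank 2.
vectors = np.array([[1.0, 0.0],    # vertex 0
                    [1.0, 1.0],    # vertex 1
                    [0.0, 1.0],    # vertex 2
                    [-1.0, 1.0]])  # vertex 3
M = vectors @ vectors.T            # symmetric, rank <= 2 by construction

edges = {(0, 1), (1, 2), (2, 3), (0, 3)}
for i in range(4):
    for j in range(i + 1, 4):
        # Off-diagonal entry is nonzero exactly when {i, j} is an edge of C4.
        assert (abs(M[i, j]) > 1e-12) == ((i, j) in edges), (i, j)

print(M)
print("rank =", np.linalg.matrix_rank(M))  # 2, consistent with mr(C4) = n - 2
```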
Notes
References
.
Algebraic graph theory
Graph invariants | Minimum rank of a graph | Mathematics | 408 |
10,367,565 | https://en.wikipedia.org/wiki/IEC%2061030 | D²B (Domestic Digital Bus, IEC 61030) is an IEC standard for a low-speed multi-master serial communication bus for home automation applications. It was originally developed by Philips in the 1980s. In 2006 it was withdrawn by the IEC because another standard was proposed, JTC1 SC 83/WG1. Many IEC 61030-compliant devices remain, such as some Philips-branded car-stereo head units and CD changers.
The SCART connector provides a D²B connection for inter-device communication.
See also
I²C
ISO/IEC JTC 1
References
External links
ANSI eStandards bookstore page for IEC 61030
Home automation | IEC 61030 | Technology | 139 |
23,833,597 | https://en.wikipedia.org/wiki/Gymnopilus%20echinulisporus | Gymnopilus echinulisporus is a species of agaric fungus in the family Hymenogastraceae. It was first formally described by American mycologist William Alphonso Murrill in 1912.
Description
The convex to flattened cap is up to in diameter.
Habitat and distribution
Gymnopilus echinulisporus has been found growing on wood in Oregon in November.
See also
List of Gymnopilus species
References
External links
Images at Mushroom Observer
echinulisporus
Fungi described in 1912
Fungi of North America
Taxa named by William Alphonso Murrill
Fungus species | Gymnopilus echinulisporus | Biology | 125 |
24,904,544 | https://en.wikipedia.org/wiki/List%20of%20bombings%20during%20the%20Troubles | This is a list of notable bombings related to the Northern Ireland "Troubles" and their aftermath. It includes bombings that took place in Northern Ireland, the Republic of Ireland, and Great Britain since 1968. There were at least 10,000 bomb attacks during the conflict (1968–1998).
1969
5 August - RTÉ Studio bombing: The Ulster Volunteer Force (UVF) detonated a bomb at Raidió Teilifís Éireann (RTÉ) headquarters in Donnybrook, Dublin, Republic of Ireland, causing significant damage.
1970
11 August – 1970 Crossmaglen bombing: Two Royal Ulster Constabulary (RUC) officers were killed when a booby-trap car bomb exploded in Crossmaglen, County Armagh. They were the first RUC victims of the IRA.
1971
2 November – Red Lion Pub bombing: Three Protestant civilians were killed and dozens injured by an IRA bomb attack on a Protestant bar on Ormeau Road, Belfast.
4 December – McGurk's Bar bombing: There were 15 civilians killed and 17 injured by a UVF bomb attack on a Catholic bar in Belfast.
11 December – 1971 Balmoral Furniture Company bombing: Three Protestant civilians—two of them children—and a Roman Catholic civilian were killed. 19 people were injured in the attack. No group claimed credit for the attack but it was believed to have been carried out by the IRA.
1972
22 February – Aldershot bombing: Seven people were killed by an Official IRA bomb at Aldershot Barracks in England, thought to be in retaliation for Bloody Sunday. Six of those killed were female ancillary workers and the seventh was a Roman Catholic military chaplain.
4 March – Abercorn Restaurant bombing: A bomb exploded without warning in the Abercorn restaurant on Castle Lane, Belfast. Two were killed and another 130 were injured.
23 March – Donegall Street bombing: The IRA detonated a massive car bomb in Lower Donegall Street in Belfast's city centre. Seven people were killed in the explosion, including two members of the RUC. 148 people were injured.
21 July – Bloody Friday: The IRA exploded 35 bombs across Northern Ireland, and three large car bombs exploded in Derry, causing no injuries. The Belfast–Dublin train line was also bombed. The IRA detonated 22 bombs in Belfast's city center; nine people were killed (including two British soldiers and one Ulster Defence Association (UDA) member) from two bombs while 130 were injured.
31 July – Claudy bombing: Nine civilians were killed by a car bomb in Claudy, County Londonderry. No group has claimed responsibility, though the IRA was suspected.
22 August – Newry customs bombing: A bomb planted by the IRA detonated prematurely at a customs office in Newry. The explosion killed three IRA members and six civilians.
14 September – Imperial Hotel bombing 1972: The UVF detonated a car bomb outside a hotel near Antrim Road, Belfast, which killed three people and injured 50 others. 91-year-old Martha Smilie, a Protestant civilian, was the oldest person killed during the Troubles.
31 October – Benny's Bar bombing: The UDA exploded a bomb outside a pub in Belfast, killing two Catholic children and injuring 12 people.
1 December – 1972 and 1973 Dublin bombings: Two civilians were killed and 127 were injured by two Ulster loyalist car bombs in Dublin, Republic of Ireland.
28 December – Belturbet bombing: Loyalist paramilitaries exploded a bomb in Belturbet, County Cavan, Ireland, which killed two teenagers and injured 8 other people, at the same time a bomb exploded in Clones, County Monaghan, injuring two other people.
1973
8 March – 1973 Old Bailey bombing: A civilian died from the Old Bailey courthouse bombing in London; over 200 were injured, and a simultaneous explosion happened at the Ministry of Agriculture in Westminster. On the same day as the London bombings, 11 bombs exploded in Northern Ireland: five bombs exploded in Belfast, which included a bomb at the Merville Inn pub; five other bombs exploded in Derry in less than an hour. The first bomb exploded at Ebrington Barracks, and another detonated beside the RUC Waterside station. Another bomb exploded in Lurgan, County Armagh. Only one person was injured in the attacks.
12 June – 1973 Coleraine bombings. Six Protestant civilians were killed by an IRA bomb in Coleraine, County Londonderry. The warning given prior to the explosion was inadequate.
10 September – King's Cross station and Euston station bombings: 13 people were injured when the IRA exploded two bombs at railway stations in Central London.
18 December – 1973 Westminster bombing: A car bomb exploded on Thorney Street near Millbank in the City of Westminster, London, injuring 60 people.
1974
4 February – M62 coach bombing: 12 people were killed by an IRA bomb planted on a coach on the M62 in the West Riding of Yorkshire carrying British soldiers and their families.
2 May - Rose & Crown Bar bombing - Six Catholic civilians were killed and 18 injured by a UVF bomb at a bar on Ormeau Road, Belfast.
17 May – Dublin and Monaghan bombings: the UVF detonated four bombs (three in Dublin, one in Monaghan) in the Republic of Ireland. They killed 33 civilians including a pregnant woman.
17 June – 1974 Houses of Parliament bombing: The IRA bombed the Houses of Parliament in London, injuring 11 people and causing extensive damage.
17 July – 1974 Tower of London bombing: The IRA detonated a bomb at the Tower of London, killing a civilian and injuring 41 people.
5 October – Guildford pub bombings: four soldiers and one civilian were killed and 65 people injured by IRA bombs at two pubs in Guildford, England.
22 October – Brooks's Club bombing: The IRA threw a bomb into a conservative club in London, injuring three staff members.
7 November – Woolwich pub bombing: A soldier and a civilian were killed and 35 people injured when the IRA threw a bomb into the Kings Arms public house in Woolwich
14 November - James Patrick McDade, Lieutenant in the Birmingham Battalion, of the Provisional Irish Republican Army (IRA) was killed in a premature explosion whilst planting a bomb at the Coventry telephone exchange in 1974. Along with the death of McDade another IRA Volunteer named Raymond McLaughlin was arrested near the scene of the bombing & was subsequently convicted for the Coventry bomb. There were three further IRA bomb explosions in England that day. The RAF Club in Northampton was badly gutted by a firebomb, the Conservative Club in Solihull was damaged by a bomb, and a timber yard in Ladywood in Birmingham was also subjected to a bombing.
21 November – Birmingham pub bombings: 21 civilians were killed and 182 injured by IRA bombs at pubs in Birmingham, England.
25 & 27 November – 1974 London pillar box bombings: The IRA exploded several bombs over a two-day period, injuring 40 people in total.
17 December – Telephone Exchange bombings: The IRA exploded three time bombs in west London at the telephone exchange, killing one civilian and injuring six others.
18 December – 1974 Bristol bombing: The IRA detonated two bombs in Bristol, injuring 20 people.
19 December – 1974 Oxford Street bombing: The IRA detonated a 100 lb. car bomb outside a Selfridges store on Oxford Street, injuring 9 people and causing over £1.5 million in damages.
22 December – The IRA announced a Christmas ceasefire after carrying out a bomb attack on the home of former prime minister Edward Heath. Heath was not in the building at the time and no one was injured.
1975
13 March – 1975 Conway's Bar attack: A UVF member blew himself up along with a Catholic civilian woman while attempting to plant a bomb in a Belfast pub.
5 April – Mountainview Tavern attack: A group calling itself the Republican Action Force bombed a pub in Belfast, killing four Protestant civilians and a UDA member, and injured 50 people.
12 April – Strand Bar bombing: The Red Hand Commando (a UVF-linked group) bombed a Belfast pub, killing six Catholic civilians and injuring 50 others.
27 August – Caterham Arms pub bombing: The IRA bombed a pub in Surrey, injuring 33 people.
9 October – 1975 Piccadilly bombing: The IRA bombed a tube station in London, killing a civilian and injuring 20 others.
29 October – Trattoria Fiore bombing: The IRA bombed a Mayfair restaurant, injuring 18 people.
12 November – Scott's Oyster Bar bombing: The IRA bombed a bar in London, killing one civilian and injuring 15 people.
18 November – Walton's Restaurant bombing: The IRA bombed a restaurant in Knightsbridge, killing two civilians and injured over 20.
29 November – Dublin Airport bombing: The UDA bombed Dublin Airport, killing a civilian staff member and injuring 10 people.
19 December – Donnelly's Bar and Kay's Tavern attacks: Bombings killed two civilians. The attack was linked to the Glenanne gang.
20 December – Biddy Mulligan's pub bombing: The UDA bombed a popular Irish pub in London, injuring five people.
31 December – Central Bar bombing: Members of the Irish National Liberation Army (INLA) using a cover name, Armagh People's Republican Army, bombed a pub in Portadown, killing three Protestant civilians and injuring 30 people.
1976
7 March – Castleblayney bombing: The UVF detonated a car bomb in County Monaghan, killing a civilian and injuring 17 others.
17 March – Hillcrest Bar bombing: The UVF detonated a car bomb outside a pub in Tyrone, killing four people and injuring 50.
27 March – 1976 Olympia bombing: An IRA bomb exploded in London, killing one civilian and injuring 85 others in the blast. Due to the outrage over this bombing, the IRA temporarily suspended attacks in England.
15 May – Charlemont pub attacks: Five Catholic civilians were killed and many injured by two UVF bomb attacks in Belfast and Charlemont, County Armagh.
21 July – Christopher Ewart-Biggs, the British Ambassador to Ireland, and his secretary Judith Cook, were killed in Dublin by a bomb planted in Biggs's car.
16 August – 1976 Step Inn pub bombing: The UVF detonated a bomb in Keady, South Armagh, killing two civilians and injuring 20.
16 October – Garryhinch ambush: The IRA detonated a bomb at a farmhouse in Garryhinch, killing a member of the Garda and badly wounding four others.
1978
17 February – La Mon restaurant bombing: 12 civilians were killed and 30 injured by an IRA incendiary bomb at the La Mon Restaurant near Belfast.
1979
30 March – Airey Neave, Conservative MP for Abingdon, was assassinated. A bomb exploded in his car as he left the Palace of Westminster in London. The INLA later claimed responsibility for the assassination.
27 August – Warrenpoint ambush: 18 British soldiers were killed by an IRA bomb in Warrenpoint. A gun battle ensued between the IRA and the British Army, in which one civilian was killed. On the same day, four people (including the Queen's cousin Lord Louis Mountbatten) were killed by an IRA bomb on board a boat near the coast of County Sligo.
28 August – 1979 Brussels bombing: British Army bandsmen were targeted at the Grand-Place. The bombing injured seven bandsmen and eleven civilians.
1980
17 January – Dunmurry train bombing: An IRA bomb prematurely detonated on a passenger train near Belfast, killing three civilians and injuring five others.
7 March – an INLA active service unit planted two 10 lb. bombs at Netheravon British Army camp in the Salisbury Plain Training Area. Only one bomb detonated and caused damage, started a fire, and injured two soldiers.
2 December – A device planted by the IRA exploded injuring five people at Kensington Regiment (Princess Louise's) Territorial Army Centre, Hammersmith Road, London.
1981
6 February – Attacks on shipping in Lough Foyle (1981–82): The IRA bombed and sank the British coal ship the Nellie M. An estimated £1 million was lost from the cargo.
19 May – Bessbrook landmine attack: The Provisional IRA South Armagh Brigade killed five British soldiers in a landmine attack at Bessbrook, Armagh.
10 October – Chelsea Barracks bombing: Two civilians were killed and over 20 British soldiers were injured in an IRA bombing outside the Chelsea Barracks.
17 October – Lieutenant-general Sir Steuart Pringle was injured in an explosion at his home in Dulwich, London, by a car bomb planted by the IRA. He lost a leg in the bombing.
26 October – The IRA bombed a Wimpy bar on Oxford Street, killing Kenneth Howorth, the Metropolitan Police explosives officer attempting to defuse it.
1982
23 February – Attacks on shipping in Lough Foyle (1981–82): The IRA sank the St. Bedan, a British coal ship at Lough Foyle.
20 July – Hyde Park and Regents Park bombings: 11 British soldiers and seven military horses died in IRA bomb attacks in Regent's Park and Hyde Park, London. Many spectators were badly injured.
16 September – 1982 Divis Flats bombing: the INLA detonated a remote-control bomb hidden in a drainpipe as a British patrol passed Cullingtree Walk, Divis Flats, Belfast. Three people were killed: a British soldier, Kevin Waller; and two Catholic children, Stephen Bennett and Kevin Valliday. Three others, including two more British soldiers and a Catholic civilian, were injured in the attack.
6 December – Droppin Well bombing: 11 British soldiers and six civilians were killed by an INLA bomb at the Droppin' Well Bar, County Londonderry.
1983
10 December – 1983 Royal Artillery Barracks bombing: A bomb exploded at the Royal Artillery Barracks in Woolwich, South East London. The explosion injured five people and caused minor damage to the building. The IRA claimed they carried out the attack.
17 December – Harrods bombing: an IRA car bomb killed three policemen and three civilians, and injured ninety outside a department store in London.
1984
12 October – Brighton hotel bombing: the IRA carried out a bomb attack on the Grand Brighton Hotel, which was being used as a base for the Conservative Party Conference. Five people, including MP Anthony Berry, were killed. Margaret and Denis Thatcher were at the scene but unharmed.
1985
28 February – 1985 Newry mortar attack: an IRA mortar attack on the Newry RUC station killed nine officers and injured thirty-seven.
7 December – Attack on Ballygawley barracks: the IRA launched an assault on the RUC barracks in Ballygawley, County Tyrone. Two RUC officers were killed and the barracks was completely destroyed.
1986
11 August – Attack on RUC Birches barracks: The East Tyrone Brigade destroyed the RUC barracks at The Birches with a 200 lb. bomb driven in a JCB digger, near Portadown.
1987
8 November – Remembrance Day bombing: 11 civilians were killed and sixty-three injured by an IRA bomb during a Remembrance Day service in Enniskillen, County Fermanagh. One of those killed was Marie Wilson; in a BBC interview, her father Gordon (who was injured in the attack) expressed forgiveness towards his daughter's killer, and asked Loyalists not to seek revenge. He became a leading peace campaigner and was later elected to the Irish Senate. He died in 1995.
1988
15 June – Lisburn van bombing: Six off-duty British soldiers were killed by an IRA bomb on their minibus in Lisburn.
23 July – Robert James Hanna, his wife Maureen Patricia Hanna (both 44), and their son David (aged 7) were killed and three people were injured in Killean, County Armagh, when a 1,000 lb bomb exploded as their Jeep Shogun passed by. The roadside bomb was thought to have been intended for High Court Judge Eoin Higgins. The Provisional IRA issued a statement after the attack claiming responsibility and describing the Hannas as "Unfortunate victims of mistaken identity", adding that "This bomb, which was to be detonated by remote control, exploded prematurely, tragically killing three civilians."
1 August – Inglis Barracks bombing: A British soldier was killed and another nine injured when the IRA detonated a time bomb outside Inglis Barracks in Mill Hill, London.
20 August – Ballygawley bus bombing: eight British soldiers were killed and 28 wounded by an IRA roadside bomb near Ballygawley.
1989
20 February – Clive Barracks bombing: The Clive Barracks were bombed by the IRA. Only 2 people were injured in the attack but a fair amount of structural damage was done.
22 September – Deal barracks bombing: Eleven Royal Marines bandsmen were killed by the IRA at Deal Barracks in Kent, England.
19 November – An IRA bomb planted on the car of a staff sergeant in the Royal Military Police, Andy Mudd, exploded in Colchester. The explosion injured his wife and Mudd, who lost both his legs and two fingers.
1990
25 June – Carlton Club bombing: A bomb exploded at the Carlton Club in London, injuring 20 people. Donald Kaberry died of his injuries on 13 March 1991.
20 July – The IRA bombed the London Stock Exchange.
30 July – Conservative MP Ian Gow was killed by a car bomb outside his house near Eastbourne.
6 September – RFA Fort Victoria bombing: The IRA planted two bombs aboard the Royal Fleet Auxiliary replenishment ship RFA Fort Victoria. One of them exploded, disabling the ship that had been constructed in Belfast and launched some weeks before. The second bomb failed to go off and was found and defused 15 days later.
24 October – The IRA delivered three proxy bombs to British Army checkpoints. Three men (who were working with the British Army) were tied into cars loaded with explosives and ordered to drive to each checkpoint. Each bomb was remotely detonated. The first exploded at a checkpoint in Coshquin, killing the driver and five soldiers; the second exploded at a checkpoint in Killean, with the driver narrowly escaping and a soldier killed; and the third failed to detonate.
1991
7 February – Downing Street mortar attack: The IRA launched a mortar attack on 10 Downing Street during a cabinet meeting with one mortar shell exploding in the garden, causing minor injuries to two people and two further shells landing nearby.
31 May – Glenanne barracks bombing: The IRA launched a large truck bomb attack on a UDR barracks in County Armagh. Three soldiers were killed, while ten soldiers and four civilians were wounded.
1992
17 January – Teebane bombing: A roadside bomb detonated by the IRA destroyed a van and killed eight construction workers (one of them a Territorial Army soldier) on their way back from Lisanelly British Army barracks in Omagh, County Tyrone, where they were making repairs. Another eight were wounded.
10 April – Baltic Exchange bombing: A van loaded with one ton of home-made explosives went off outside the building of the Baltic Exchange company, at 30 St Mary Axe, London, killing three people and injuring another 91. The bomb caused £800 million worth of damage. Three hours later, a similar sized bomb exploded at the junction of the M1 and the North Circular Road at Staples Corner in north London, causing substantial damage but no injuries. Both bombs were placed in vans and were home-made rather than Semtex; each weighed several hundred pounds.
1 May – Attack on Cloghoge checkpoint: The IRA used a modified van that ran on railway tracks to launch an unconventional bomb attack on a British Army checkpoint in South Armagh. The checkpoint was obliterated when the 1,000 kg bomb exploded, killing one soldier and injuring 23.
12 May – 1992 Coalisland riots: After a small IRA bomb attack on a British Army patrol in the village of Cappagh, in which a paratrooper lost both legs, British soldiers raided two public houses and caused considerable damage in the nearby town of Coalisland. Five days later, the conflict became a fist-fight between soldiers and local inhabitants. Shortly thereafter, another group of British paratroopers arrived and fired on a crowd of civilians and injured seven. Two soldiers were hospitalized, communication equipment was shattered and a rifle and a GPMG were stolen.
19 September – Forensic Science Laboratory bombing: The IRA detonated a 3,700 lb bomb at the Northern Ireland forensic science laboratory in south Belfast. The laboratory was obliterated, 700 houses were damaged, and 20 people were injured. 490 owners and occupiers claimed damages.
1993
20 March – Warrington bombings: after a vague telephoned warning, the IRA detonated two bombs in Cheshire, England. Two children were killed and 56 people were wounded. There were widespread protests in Britain and the Republic of Ireland following the deaths.
24 April – 1993 Bishopsgate bombing: After a telephoned warning, the IRA detonated a large bomb in Bishopsgate, London. It killed one civilian, wounded 30 others, and caused an estimated £350 million in damage.
2 October – 1993 Finchley Road bombings: Three IRA time bombs exploded on Finchley Road in north London.
23 October – Shankill Road bombing: eight civilians, one UDA member, and one IRA member were killed, and another IRA member was injured when an IRA bomb prematurely exploded at a fish shop on Shankill Road, Belfast.
1994
5 January – Two members of the Irish Army bomb disposal unit were injured when a parcel bomb sent by the UVF to the Sinn Féin offices in Dublin exploded during examination at the Cathal Brugha Barracks.
24 January – Incendiary devices that had been planted by the UFF were found at a school in Dundalk in County Louth and at a postal sorting office in Dublin.
9–13 March – Heathrow mortar attacks: On 9, 11, and 13 March, the IRA fired improvised mortar bombs on to the runway at Heathrow Airport. There were no deaths or injuries.
20 April – The Provisional IRA Derry Brigade fired a mortar bomb at a RUC Land Rover, killing one RUC officer and injuring two others.
14 May – the IRA detonated an explosive device next to a British Army sangar at a permanent vehicle checkpoint in Castleblaney Road, Keady, County Armagh. One British soldier was killed and another wounded.
29 July – More than 40 people were injured when the IRA fired three mortar bombs into the Newry RUC base. 30 civilians, seven RUC officers and three British soldiers were among those injured.
13 August – Two bombs were planted in bags placed on bicycles in Brighton and Bognor Regis. The Bognor Regis device detonated, damaging shops but causing no casualties; the Brighton device was defused.
12 September – 1994 Dublin-Belfast train bombing: The UVF planted a bomb on the Belfast-Dublin train. At Connolly station, the bomb only partially exploded, slightly injuring two women.
19 December - The Continuity IRA (CIRA) detonated a semtex bomb in a furniture store in Enniskillen. This was the first action carried out by the CIRA.
1996
9 February – 1996 Docklands bombing: The bomb killed two civilians.
18 February – Aldwych bus bombing: Edward O'Brien, an IRA volunteer, died when an improvised explosive device he was carrying detonated prematurely on a number 171 bus in Aldwych, central London. The 2 kg semtex bomb detonated as he stood near the door of the bus. A pathologist found O'Brien was killed "virtually instantaneously", while other passengers and the driver (left permanently deaf) were injured in the explosion.
15 June – 1996 Manchester bombing: the IRA detonated a bomb in Manchester, England. It destroyed a large part of the city centre and injured over 200 people. To date, it is the largest bomb to be planted on the British mainland since World War II. Several buildings were damaged beyond repair and had to be demolished.
7 October – Thiepval barracks bombing: The IRA detonated two car bombs at the British Army headquarters in Thiepval Barracks, Lisburn. One soldier was killed and 31 injured.
1998
24 June – Newtownhamilton bombing: The INLA detonated a 200 lb car bomb in Newtownhamilton, injuring six people and causing substantial damage estimated at £2 million.
1 August – 1998 Banbridge bombing: A dissident republican group calling itself the Real Irish Republican Army (RIRA) detonated a bomb in Banbridge, County Down, injuring 35 people and causing extensive damage.
15 July – A package addressed to a Dublin hotel, which was believed to have been sent by the LVF, exploded while it was being examined at the Garda Technical Bureau in Dublin. Two were injured in the blast.
15 August – Omagh bombing: the RIRA detonated a bomb in Omagh, County Tyrone. It killed 29 civilians.
1999
15 March – Solicitor Rosemary Nelson, who had represented the Catholic and nationalist residents in the Drumcree conflict, was assassinated by a booby trapped car bomb in Lurgan, County Armagh. A loyalist group, Red Hand Defenders, claimed responsibility.
2001
4 March – 2001 BBC bombing: an RIRA car bomb exploded outside BBC Television Centre in London, causing some damage to the building.
3 August – 2001 Ealing bombing: an RIRA car bomb injured seven civilians in Ealing, west London.
3 November - 2001 Birmingham bombing: a RIRA car bomb partially exploded outside a night club near New Street Station in Birmingham.
See also
Directory of the Northern Ireland Troubles
List of Irish police officers killed in the line of duty
Operation Banner
References
External links
Explosions in the United Kingdom
Explosions in Northern Ireland
Explosions in Ireland
Bombings
Lists of explosions | List of bombings during the Troubles | Chemistry | 5,192 |
2,001,760 | https://en.wikipedia.org/wiki/National%20Imagery%20Transmission%20Format | The National Imagery Transmission Format Standard (NITFS) is a U.S. Department of Defense (DoD) and Federal Intelligence Community (IC) suite of standards for the exchange, storage, and transmission of digital-imagery products and image-related products.
DoD policy is that other image formats can be used internally within a single system; however, NITFS is the default format for interchange between systems. NITFS provides a package containing information about the image, the image itself, and optional overlay graphics (i.e., a "package" containing one or more images, subimages, symbols, labels, and text, as well as other information related to the images). NITFS supports the dissemination of secondary digital imagery from overhead collection platforms.
Guidance on applying the suite of standards composing NITFS can be found in MIL-HDBK-1300A, National Imagery Transmission Format Standard (NITFS), 12 October 1994. The NITFS allows for Support Data Extensions (SDEs), which are a collection of data fields that provide space within the NITFS file structure for adding functionality. Documented and controlled separately from the NITFS suite of standards, SDEs extend NITF functionality with minimal impact on the underlying standard document. SDEs may be incorporated into an NITF file while maintaining backward compatibility because the identifier and byte count mechanisms allow applications developed prior to the addition of newly defined data to skip over extension fields they are not designed to interpret. These SDEs are described in the Compendium of Controlled Extensions (CE). This standard is mandated in the DoD for imagery product dissemination.
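The skip-over behaviour described above is essentially an identifier-plus-byte-count (tag–length) scheme. The following C++ fragment is only a rough illustration of that idea, not the normative NITF field layout: the 6-character tag width, the 5-character ASCII length width, and the function name read_extension are assumptions made for this example.

#include <cstddef>
#include <istream>
#include <string>

// Illustrative only: field widths are assumed, not taken from the standard.
constexpr std::size_t kTagWidth = 6;  // assumed width of the extension identifier
constexpr std::size_t kLenWidth = 5;  // assumed width of the ASCII byte count

// Reads one extension field; unknown identifiers are skipped using the byte count,
// which is what lets older applications ignore newly defined data.
bool read_extension(std::istream& in, const std::string& known_tag) {
    std::string tag(kTagWidth, '\0');
    std::string len(kLenWidth, '\0');
    if (!in.read(&tag[0], kTagWidth) || !in.read(&len[0], kLenWidth))
        return false;                              // truncated input
    const std::streamsize count = std::stol(len);  // byte count of the field body
    if (tag != known_tag) {
        in.ignore(count);                          // skip data we cannot interpret
        return false;
    }
    in.ignore(count);  // placeholder: a real reader would parse these bytes
    return true;
}

In this sketch the reader never needs to understand an extension's contents in order to move past it, which is the backward-compatibility property the compendium relies on.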
NITF has been implemented and fielded since the early 1990s. Its content evolved over the years to embrace new technology in support of emerging operational requirements. NITF has adopted the ISO/IEC 15444-1 standard for imagery compression, JPEG 2000. Commercial implementations of the standard are largely driven by marketability to the DoD and IC.
The current standard that defines NITF 2.1 is the Joint BIIF Profile (JBP), version 2024.1 (JBP-2024.1), dated 13 June 2023, which superseded MIL-STD-2500C CN2 in June 2024. The JBP is a profile of ISO/IEC 12087-5, Basic Image Interchange Format, in lieu of the previous military standards.
The following documents define the standard:
JBP-2024.1 — ISO/IEC Joint BIIF Profile (JBP) 2024.1, 13 June 2023, Superseding MIL-STD-2500C/CN2
STDI-0002 — STDI-0002 NITF Extensions 2.1 — The Compendium of Controlled Support Data Extensions (SDE) for the National Imagery Transmission Format (NITF) Version 2024.1
MIL-STD-188-199(1)— Vector Quantization Decompression for the National Imagery Transmission Format Standard, 27 June 1994 with Notice 1, 27 June 1996.
Do not confuse this with the British National Transfer Format.
References
External links
NITFS Technical Board (NTB) Public Page
Reference Library for NITFS Users
NITRO
United States Department of Defense information technology
Raster graphics file formats
Graphics standards | National Imagery Transmission Format | Technology | 678 |
69,609,424 | https://en.wikipedia.org/wiki/Cetacean%20microbiome | The cetacean microbiome is the group of communities of microorganisms that reside within whales.
Microbiomes play an important role in host health and ecology; in particular, the characterization of distinct microbiomes in the gut, on the skin, and in the airways has made it possible to assess both the condition of individual whales and the condition of the environment in which they live.
Gut
Access to microbial samples from the gut of marine mammals is limited because most species are rare, endangered, and deep divers. There are different techniques for sampling the cetacean gut microbiome. The most common is collecting fecal samples from the environment and taking a probe from the non-contaminated center of the sample. There are also studies based on rectal swabs and, more rarely, on samples taken directly from the intestine of stranded dead or living animals.
The intestinal microbiome of cetaceans is a complex ecosystem that plays an important role in the metabolism, health, and immunity of the host. The microbial communities of marine mammals are diverse and distinct from those of terrestrial mammals, and their composition depends on different factors such as diet, phylogeny, health, and age.
As the microbiome is involved in the decomposition of food, diet is a predominant factor shaping the microbial community. Several studies have shown that members of the Bacteroidetes and Firmicutes are the most abundant phyla of gut microorganisms in cephalopod and zooplankton predators such as short-finned pilot whales and baleen whales. The genus Bacteroides (phylum Bacteroidetes) in particular seems to play a major role in the decomposition of the chitin-rich diet of these species and has also been found in the gut microbiome of baleen whales.
In toothed cetacean species whose diet is mainly piscivorous, the most abundant phyla are Firmicutes, Fusobacteria, and Proteobacteria. Proteobacteria are a group of minor importance in marine mammals that consume cephalopods and zooplankton but are highly abundant in piscivorous predators like bottlenose dolphins, East Asian finless porpoises, and belugas. These findings could mirror the different dietary niches of these species.
Besides diet, age also seems to determine differences in the microbial community between cetaceans. Maron et al. have shown that the microbial community changes in right whale calves during their development. Interestingly, the genera Bilophila, Peptococcus, and Treponema are more abundant in older calves. The higher abundance of Bilophila might be a response to the greater milk intake of the older calves.
Skin
The skin is the first barrier that protects the individual from the outside world, and the epidermal microbiome on it is considered an indicator not only of the health of the animal but also of the state of the surrounding environment. Knowing the skin microbiome of marine mammals under normal conditions has made it possible to understand how these communities differ from the free-living microbial communities found in the sea, how they can change in response to abiotic and biotic variations, and how they vary between healthy and sick individuals.
Studies of migratory marine mammals, in particular humpback whales (Megaptera novaeangliae), killer whales (Orcinus orca), and beluga whales, show that animals exposed to different habitats host different bacterioplankton communities; in many cases diatoms also grow on the backs of migrating killer whales.
Although studies of the skin microbiome of these marine mammals are quite limited, amplification of SSU rRNA genes has revealed communities belonging to the phylum Bacteroidetes, in particular members of the family Flavobacteriaceae such as the species Tenacibaculum dicentrarchi. The role of these bacteria appears to be to regulate the microbiome present on the skin of marine mammals, acting as predators and limiting the exponential growth of other communities.
Another type of bacterium found on the skin of cetaceans is Psychrobacter, which is able to tolerate low temperatures and is therefore present during migratory routes to high latitudes; it has also been found that this bacterium is one of those controlled by T. dicentrarchi. Bacteria of the genus Moraxella have been found in skin lesions, but also on healthy skin such as the blowholes and mouths of dolphins.
It is not well known whether these communities of microorganisms are transient colonizers of the skin surface or have adapted to that environment. The skin microbiome is subject to variation from extrinsic and intrinsic factors: UV radiation and skin sloughing, which appear to be involved in changes to the microbial communities; changes in pressure and temperature, which drive regional and temporal variability; and the sex, age, and health status of the individual. In conjunction with these factors, climate change has been shown to further influence the growth and presence of certain bacterial communities as well as the health status of these cetaceans.
Respiratory system
Impact of cetacean respiratory system microbiome
Cetaceans are at risk because they are affected by multiple stress factors, especially anthropogenic ones, which make them more vulnerable to various diseases. These animals have been noted to show high susceptibility to airway infections, but very little is known about their respiratory microbiome. Sampling the exhaled breath or "blow" of cetaceans can therefore provide an assessment of their state of health. Blow is composed of a mixture of microorganisms and organic material, including lipids, proteins, and cellular debris derived from the linings of the airways, which, when released into the relatively cooler outdoor air, condenses to form a visible mass of vapor that can be collected. There are various methods for collecting exhaled breath samples; one of the most recent is the use of aerial drones. This method provides a safer, quieter, and less invasive alternative and is often a cost-effective option for monitoring fauna and flora. The use of aerial drones has been more successful with large cetaceans due to their slow swim speeds and larger blow sizes.
In all the studies carried out, in addition to exhaled breath samples, seawater and air samples were collected to more accurately identify the specific microorganisms for exhaled breath.
Through various studies carried out on different cetaceans, among them humpback whales (Megaptera novaeangliae), blue whales (Balaenoptera musculus), gray whales (Eschrichtius robustus), sperm whales (Physeter macrocephalus), killer whales (Orcinus orca), and bottlenose dolphins (Tursiops truncatus), the respiratory microbiome has begun to be defined: a microbial community formed by a complex diversity of microorganisms common to all the specimens examined. These are very recent studies, so knowledge is very limited; only some microorganisms are known, others have not yet been identified, and little is known about their functional role within these animals. Overall, the most common bacteria identified at the phylum level included Pseudomonadota, Bacillota, Actinomycetota, and Bacteroidota.
Types of bacteria found in the respiratory systems of cetaceans
Among the Pseudomonadota, bacteria belonging to the families Brucellaceae and Enterobacteriaceae and to the genera "Candidatus Pelagibacter", Acidovorax, Cardiobacterium, Pseudomonas, Burkholderia, and Psychrobacter have been recognized.
Among the Bacillota, bacteria belonging to the Clostridia and Erysipelotrichia classes and to the genera Anoxybacillus, Paenibacillus and Leptotrichia have been recognized.
Bacteria belonging to the Acidimicrobiia class, to the Microbacteriaceae family, and to the genera Corynebacterium, Mycobacterium and Propionibacterium (Cutibacterium), have been recognized among the Actinomycetota.
Among the Bacteroidota, bacteria belonging to the genus Tenacibaculum have been recognized.
To these are added bacteria belonging to the phylum Fusobacteriota and Mycoplasmatota.
Finally, potential respiratory pathogens were also detected, such as Balneatrix (proteobacteria) and a range of Gram-positive Clostridia and Bacilli, such as Staphylococcus and Streptococcus (both firmicutes).
Furthermore, one of the most common bacteria in the various cetacean species is Haemophilus. These are opportunistic Gram-negative coccobacilli, also found in the respiratory tract of humans and other animals, which tend to colonize without causing infection. During periods of immunosuppression, however, these organisms can cause damage, giving rise to meningitis and pneumonia.
In addition to bacteria, some viruses have also been identified in whale exhaled breath. Among the most abundant bacteriophages were the Siphoviridae and Myoviridae, while the viral families detected included small single-stranded DNA (ssDNA) viruses, in particular the Circoviridae, members of the Parvoviridae, and a family of RNA viruses, the Tombusviridae.
References
Microbiomes
Cetaceans | Cetacean microbiome | Environmental_science | 2,047 |
26,606,575 | https://en.wikipedia.org/wiki/Chlorobi-1%20RNA%20motif | The Chlorobi-1 RNA motif is a conserved RNA secondary structure identified by bioinformatics. It is predicted to be used only by Chlorobiota (formerly Chlorobi), a phylum of bacteria. The motif consists of two stem-loops that are followed by an apparent rho-independent transcription terminator. The motif is presumed to function as an independently transcribed non-coding RNA.
A number of other RNAs were identified in the same study, including:
Bacteroidales-1 RNA motif
CrcB RNA Motif
Gut-1 RNA motif
JUMPstart RNA motif
Lactis-plasmid RNA motif
Lacto-usp RNA motif
MraW RNA motif
Ocean-V RNA motif
PsaA RNA motif
Pseudomon-Rho RNA motif
Rne-II RNA motif
References
External links
Non-coding RNA | Chlorobi-1 RNA motif | Chemistry | 175 |
21,438,855 | https://en.wikipedia.org/wiki/Mooning | Mooning is the act of displaying one's bare buttocks by removing clothing, e.g., by lowering the backside of one's trousers and underpants, usually bending over, and also potentially exposing the genitals. Mooning is used in the English-speaking world to express protest, scorn, disrespect, or provocation, but it can also be done for shock value, for fun, as a joke, or as a form of exhibitionism. The Māori have a form of mooning known as whakapohane that is used as an insult.
Some jurisdictions regard mooning to be indecent exposure, sometimes depending on the context.
Word history
Moon has been a common shape metaphor for the buttocks in English since 1743, and the verb to moon has meant "to expose to (moon)light" since 1601. As documented by McLaren, "'mooning', or exposing one's butt to shame an enemy ... had a long pedigree in peasant culture" throughout the Middle Ages, and in many nations.
Although the practice of mooning was widespread by the 19th century, the Oxford English Dictionary dates the use of "moon" and "mooning" to describe the act to student slang of the 1960s, when the gesture became increasingly popular among students at universities in the United States.
In various countries and cultures
Australia
In Australian idiom, "chuck a browneye" is synonymous with the act of mooning.
Victoria
In January 2016, mooning in a public place in Victoria was made a criminal offence.
Northern Territory
A group of locals, called "Noonamah Moonies", mooned the Ghan at Livingstone Airstrip in 2004 and 2024. The next exhibition can be expected in 2034, with the mooning happening every 10 years.
Latvia
In Latvian legends, two maidens went naked from the sauna with carrying poles to the well. While collecting water, one of the women noted how beautiful the Moon is. The other was unimpressed, saying her own butt is prettier and proceeded to moon the Moon. As a punishment, either Dievs or Mēness (a lunar deity) put the woman along with a carrying pole on the Moon, with her butt now being visible to everyone.
New Zealand
Whakapohane is the Māori practice of baring one's buttocks with the intent to offend. It symbolises the birthing act and renders the recipient noa ("base").
United States
Maryland
In January 2006, a Maryland state circuit court determined that mooning is a form of artistic expression protected by the First Amendment as a form of speech.
The court ruled that indecent exposure relates only to exposure of the genitals, adding that even though mooning was a "disgusting" and "demeaning" act to engage in, and had taken place in the presence of a minor, "If exposure of half of the buttocks constituted indecent exposure, any woman wearing a thong at the beach at Ocean City would be guilty."
Defense attorneys had cited a case from 1983 of a woman who was arrested after protesting in front of the U.S. Supreme Court building wearing nothing but a cardboard sign that covered the front of her body. In that case, the District of Columbia Court of Appeals had ruled that indecent exposure is limited to a person's genitalia. No review of the case by a higher court took place since prosecutors dropped the case after the ruling.
California
In December 2000, in California, the California Court of Appeal found that mooning does not constitute indecent exposure (and therefore does not subject the defendant to sex offender registration laws), unless it can be proven beyond reasonable doubt that the conduct was sexually motivated.
United Kingdom
The idiom "to pull a moonie" is commonly used to describe the activity.
Notable incidents
In 80 AD, Flavius Josephus recorded the first known incident of mooning. Josephus recorded that in the procuratorship of Ventidius Cumanus (48-52 AD), at around the beginning of the First Roman–Jewish War, a soldier in the Roman army mooned Jewish pilgrims at the Jewish Temple in Jerusalem who had gathered for Passover, and "spake such words as you might expect upon such a posture" causing a riot in which youths threw stones at the soldiers, who then called in reinforcements—the pilgrims panicked, and the ensuing stampede resulted in the death of 30,000 (τρισμυρίους) Jews.
In the Siege of Constantinople in 1204, the Greeks exposed their bare buttocks to the Crusaders after they repulsed them from the walls.
At the Siege of Nice, in the summer of 1543, Catherine Ségurane, a common washerwoman, led the townspeople into battle. Legend has it that she took the lead in defending the city by standing before the invading forces and exposing her bare bottom.
At the Conference of Badajoz–Elvas of 1524, where Portugal and Spain discussed the location of the meridian that divided their respective hemispheres, a young boy one day asked the delegates if they were trying to divide the world. The adults answered they were. The boy then responded by baring his backside and suggesting that they draw their line through his intergluteal cleft.
A number of early explorers of the Atlantic coastline noted that the Etchemin tribe of Maine practiced this custom.
Since 1979, The Annual Mooning of Amtrak has been an annual tradition in Laguna Niguel, California on the second Saturday of July, where many people spend the day mooning passing Amtrak trains; some passengers ride the trains that day to witness the event. This has inspired a chain of "train moonings" throughout the country.
An example of whakapohane was performed by Dun Mihaka to Diana, Princess of Wales and Prince Charles during the royal tour of 1983 of New Zealand.
In 1986, a Maori man mooned the motorcade of Queen Elizabeth and Prince Philip in Napier, New Zealand, as a protest over the 1840 Treaty of Waitangi.
On November 22, 1987, an intruder interrupted the broadcast signal of Chicago PBS affiliate, WTTW with a strange video of himself dressed to resemble Max Headroom. He exposed his buttocks to the camera.
A tradition of Appalachian Trail thru-hikers, Mooning the Cog has developed on Mount Washington in New Hampshire.
In June 2000, a mass mooning event was organised outside of Buckingham Palace in the United Kingdom by the Movement Against the Monarchy (MAM). A large police presence prevented a large-scale mooning, but a few individuals did so. This event is known as the Moon Against the Monarchy.
On 7 June 2002, Macy Gray mooned the crowd during her performance at the Manchester Apollo concert in Ardwick Green, Manchester, England.
On January 9, 2005, Randy Moss of the Minnesota Vikings mimed pulling down his trousers and bent over toward Green Bay Packers fans following a touchdown he scored. He was fined $10,000 by the NFL for the incident.
At the 2005 UK Music Hall of Fame awards ceremony, musician Ozzy Osbourne mooned the crowd after a set he played.
In October 2006, English Premiership footballer Joey Barton was fined £2,000 for mooning Everton fans.
At the Patch Adams Full Moon Festival three-day event to raise money for his Gesundheit! Institute and Albuquerque, 200,000 people pay $100 each to moon as a group and lend a hand with local projects.
On 10 May 2007, Yvette Fielding mooned photographers from inside a Soho restaurant window on the final episode of the reality television series Deadline.
On 24 October 2011, economic inequality protester Liam Warriner of Sydney ran alongside the motorcade of Queen Elizabeth II and a waving Prince Philip for 50 metres with an Australian flag clenched between his exposed buttocks, before being arrested by police.
On 13 May 2017, during an interval act at the Eurovision Song Contest 2017, a man wrapped in an Australian flag snuck on stage and mooned the audience. It was later reported that the man was Ukrainian journalist and prankster Vitalii Sediuk.
In January 2023, 79-year-old Australian music personality Molly Meldrum climbed onstage at an Elton John concert and mooned the audience. The incident garnered significant media attention.
See also
References
External links
Queen 'mooned': Robbie Williams, Joey Barton, Gerard Butler, Anne Hathaway and more celeb backside exposure. Daily Mirror. October 24, 2011
Wickman, Forrest. Mooning: A history. When did people start baring their butts as an insult?. Slate. June 27, 2012
Cochrane, Kira. Mooning is back – and here's why. The Guardian. August 15, 2012
Wilkes, Sam. Mooning In Movies: A Supercut Everyone Can Get Behind (NSFW VIDEO). The Huffington Post. July 17, 2013
Buttocks
Civil disobedience
Gestures
Modesty
Moon in culture
Exhibitionism
Protest tactics
Recreation
1960s neologisms | Mooning | Biology | 1,845 |
68,807,529 | https://en.wikipedia.org/wiki/Br%C3%BCderschaft | Brüderschaft ("brotherhood" in German) is a drinking ritual, or a rite of passage, to consolidate friendship. Two people simultaneously drink a glass of the same alcoholic beverage each, with their arms intertwined at the elbows. A "brotherly kiss" is customary after emptying the glasses, which then seals the ritual. From then on they are considered good friends and address each other informally.
A symbolic act that establishes a closer bond between two (usually male) individuals, it has been associated with an end of formality between them and with addressing each other as du (the informal you in German) at least since Jus Potandi oder ZechRecht, a legal-parodic text published in 1616. In the 17th and 18th centuries, the expression was also common, allegedly based on the assumption that "drinking together would bind and oblige".
References
Notes
Citations
External links
Etiquette
Drinking culture
German traditions | Brüderschaft | Biology | 185 |
4,067,466 | https://en.wikipedia.org/wiki/Sort%20%28C%2B%2B%29 | sort is a generic function in the C++ Standard Library for doing comparison sorting. The function originated in the Standard Template Library (STL).
The specific sorting algorithm is not mandated by the language standard and may vary across implementations, but the worst-case asymptotic complexity of the function is specified: a call to sort must perform no more than O(N log N) comparisons when applied to a range of N elements.
Usage
The function is included from the <algorithm> header of the C++ Standard Library, and carries three arguments: RandomIt first, RandomIt last, Compare comp. Here, RandomIt is a templated type that must be a random access iterator, and first and last must define a sequence of values, i.e., last must be reachable from first by repeated application of the increment operator to first. The third argument, also of a templated type, denotes a comparison predicate. This comparison predicate must define a strict weak ordering on the elements of the sequence to be sorted. The third argument is optional; if not given, the "less-than" (<) operator is used, which may be overloaded in C++.
This code sample sorts a given array of integers (in ascending order) and prints it out.
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <iterator>

int main() {
    int array[] = { 23, 5, -10, 0, 0, 321, 1, 2, 99, 30 };
    // Sort the whole array in ascending order using the default operator<.
    std::sort(std::begin(array), std::end(array));
    for (std::size_t i = 0; i < std::size(array); ++i) {
        std::cout << array[i] << ' ';
    }
    std::cout << '\n';
}
The same functionality using a std::vector container, using its begin() and end() methods to obtain iterators:
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> vec = { 23, 5, -10, 0, 0, 321, 1, 2, 99, 30 };
    // Sort the vector in place through its own iterators.
    std::sort(vec.begin(), vec.end());
    for (std::size_t i = 0; i < vec.size(); ++i) {
        std::cout << vec[i] << ' ';
    }
    std::cout << '\n';
}
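As a further illustration (this example is added here and is not part of the original text), the optional third argument can be supplied as a lambda; passing a "greater-than" predicate sorts the same data in descending order:

#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> vec = { 23, 5, -10, 0, 0, 321, 1, 2, 99, 30 };
    // The comparison predicate must define a strict weak ordering;
    // here it puts larger elements first, i.e. sorts in descending order.
    std::sort(vec.begin(), vec.end(),
              [](int a, int b) { return a > b; });
    for (int value : vec) {
        std::cout << value << ' ';
    }
    std::cout << '\n';
}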
Genericity
sort is specified generically, so that it can work on any random-access container and any way of determining that an element x of such a container should be placed before another element y.
Although generically specified, is not easily applied to all sorting problems. A particular problem that has been the subject of some study is the following:
Let A and B be two arrays, where there exists some relation between the element A[i] and the element B[i] for all valid indices i.
Sort A while maintaining the relation with B, i.e., apply to B the same permutation that sorts A.
Do the previous without copying the elements of A and B into a new array of pairs, sorting, and moving the elements back into the original arrays (which would require temporary space).
A solution to this problem was suggested by A. Williams in 2002, who implemented a custom iterator type for pairs of arrays and analyzed some of the difficulties in correctly implementing such an iterator type. Williams's solution was studied and refined by K. Åhlander.
Complexity and implementations
The C++ standard requires that a call to sort performs O(N log N) comparisons when applied to a range of N elements.
In previous versions of C++, such as C++03, only average complexity was required to be O(N log N). This was to allow the use of algorithms like (median-of-3) quicksort, which are fast in the average case, indeed significantly faster than other algorithms like heap sort with optimal worst-case complexity, and where the worst-case quadratic complexity rarely occurs. The introduction of hybrid algorithms such as introsort allowed both fast average performance and optimal worst-case performance, and thus the complexity requirements were tightened in later standards.
Different implementations use different algorithms. The GNU Standard C++ library, for example, uses a 3-part hybrid sorting algorithm: introsort is performed first (introsort itself being a hybrid of quicksort and heap sort), to a maximum depth given by 2×log2 n, where n is the number of elements, followed by an insertion sort on the result.
Other types of sorting
sort is not stable: equivalent elements that are ordered one way before sorting may be ordered differently after sorting. stable_sort ensures stability of result at expense of worse performance (in some cases), requiring only quasilinear time with exponent 2 – O(n log2 n) – if additional memory is not available, but linearithmic time O(n log n) if additional memory is available. This allows the use of in-place merge sort for in-place stable sorting and regular merge sort for stable sorting with additional memory.
Partial sorting is implemented by partial_sort, which takes a range of elements and an integer m, and reorders the range so that the smallest m elements are in the first m positions in sorted order (leaving the remaining elements in the remaining positions, in some unspecified order). Depending on design this may be considerably faster than a complete sort. Historically, this was commonly implemented using a heap-based algorithm. A better algorithm called quickselsort is used in the Copenhagen STL implementation, bringing the complexity down further.
Selection of the nth element is implemented by nth_element, which actually implements an in-place partial sort: it correctly sorts the nth element, and also ensures that this element partitions the range so that elements before it are less than it, and elements after it are greater than it. There is the requirement that this takes linear time on average, but there is no worst-case requirement; these requirements are exactly met by quickselect, for any choice of pivot strategy.
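A brief sketch of the two partial-sorting facilities just described (added for illustration; the sample data matches the earlier examples):

#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v = { 23, 5, -10, 0, 0, 321, 1, 2, 99, 30 };
    // Move the 3 smallest elements, in sorted order, to the front;
    // the order of the remaining elements is unspecified.
    std::partial_sort(v.begin(), v.begin() + 3, v.end());
    std::cout << v[0] << ' ' << v[1] << ' ' << v[2] << '\n';   // -10 0 0

    std::vector<int> w = { 23, 5, -10, 0, 0, 321, 1, 2, 99, 30 };
    // Place the element of rank 4 (0-based) where it would be after a full sort;
    // smaller elements end up before it and larger ones after it.
    std::nth_element(w.begin(), w.begin() + 4, w.end());
    std::cout << "element of rank 4: " << w[4] << '\n';        // 2
}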
Some containers, among them list, provide a specialised version of sort as a member function. This is because linked lists don't have random access (and therefore can't use the regular sort function), and the specialised version also preserves the values that list iterators point to.
Comparison to qsort
Aside from sort, the C++ standard library also includes the qsort function from the C standard library. Compared to qsort, the templated sort is more type-safe since it does not require access to data items through unsafe pointers, as qsort does. Also, qsort accesses the comparison function using a function pointer, necessitating large numbers of repeated function calls, whereas in sort, comparison functions may be inlined into the custom object code generated for a template instantiation. In practice, C++ code using sort is often considerably faster at sorting simple data like integers than equivalent C code using qsort.
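To make the contrast concrete, here is a minimal side-by-side sketch (illustrative only, added to this text): qsort goes through a function pointer and untyped void* arguments, while std::sort receives a typed comparator that the compiler can inline.

#include <algorithm>
#include <cstdlib>
#include <iostream>
#include <iterator>

// C-style comparator: untyped pointers, invoked through a function pointer.
int compare_ints(const void* a, const void* b) {
    const int lhs = *static_cast<const int*>(a);
    const int rhs = *static_cast<const int*>(b);
    return (lhs > rhs) - (lhs < rhs);
}

int main() {
    int a[] = { 3, 1, 2 };
    int b[] = { 3, 1, 2 };

    std::qsort(a, std::size(a), sizeof(int), compare_ints);  // C library interface
    std::sort(std::begin(b), std::end(b));                   // type-safe, inlinable

    for (int x : a) std::cout << x << ' ';
    std::cout << "| ";
    for (int x : b) std::cout << x << ' ';
    std::cout << '\n';
}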
References
External links
C++ reference for std::sort
Another C++ reference for std::sort
C++ Standard Library
Sorting algorithms | Sort (C++) | Mathematics | 1,448 |
1,460,235 | https://en.wikipedia.org/wiki/Indeterminate%20%28variable%29 | In mathematics, an indeterminate or formal variable is a variable (a symbol, usually a letter) that is used purely formally in a mathematical expression, but does not stand for any value.
In analysis, a mathematical expression such as is usually taken to represent a quantity whose value is a function of its variable , and the variable itself is taken to represent an unknown or changing quantity. Two such functional expressions are considered equal whenever their value is equal for every possible value of within the domain of the functions. In algebra, however, expressions of this kind are typically taken to represent objects in themselves, elements of some algebraic structure – here a polynomial, element of a polynomial ring. A polynomial can be formally defined as the sequence of its coefficients, in this case , and the expression or more explicitly is just a convenient alternative notation, with powers of the indeterminate used to indicate the order of the coefficients. Two such formal polynomials are considered equal whenever their coefficients are the same. Sometimes these two concepts of equality disagree.
Some authors reserve the word variable to mean an unknown or changing quantity, and strictly distinguish the concepts of variable and indeterminate. Other authors indiscriminately use the name variable for both.
Indeterminates occur in polynomials, rational fractions (ratios of polynomials), formal power series, and, more generally, in expressions that are viewed as independent objects.
A fundamental property of an indeterminate is that it can be substituted with any mathematical expressions to which the same operations apply as the operations applied to the indeterminate.
Some authors of abstract algebra textbooks define an indeterminate over a ring as an element of a larger ring that is transcendental over . This uncommon definition implies that every transcendental number and every nonconstant polynomial must be considered as indeterminates.
Polynomials
A polynomial in an indeterminate is an expression of the form , where the are called the coefficients of the polynomial. Two such polynomials are equal only if the corresponding coefficients are equal. In contrast, two polynomial functions in a variable may be equal or not at a particular value of .
For example, the functions
are equal when and not equal otherwise. But the two polynomials
are unequal, since 2 does not equal 5, and 3 does not equal 2. In fact,
does not hold unless and . This is because is not, and does not designate, a number.
The distinction is subtle, since a polynomial in can be changed to a function in by substitution. But the distinction is important because information may be lost when this substitution is made. For example, when working modulo 2, we have that:
so the polynomial function is identically equal to 0 for having any value in the modulo-2 system. However, the polynomial is not the zero polynomial, since the coefficients, 0, 1 and −1, respectively, are not all zero.
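The displayed formula above appears to have been lost in extraction. A reconstruction consistent with the quoted coefficients 0, 1 and −1 (an assumption, not necessarily the article's original example) is:

```latex
% Assumed reconstruction of the modulo-2 example:
\[
  p(X) = 0 + 1\cdot X + (-1)\cdot X^{2} = X - X^{2}
  \quad\text{considered over } \mathbb{F}_2 .
\]
\[
  p(0) = 0, \qquad p(1) = 1 - 1 = 0 ,
\]
% so the induced polynomial function vanishes at every element of
% \mathbb{F}_2, even though p itself is not the zero polynomial.
```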
Formal power series
A formal power series in an indeterminate is an expression of the form , where no value is assigned to the symbol . This is similar to the definition of a polynomial, except that an infinite number of the coefficients may be nonzero. Unlike the power series encountered in calculus, questions of convergence are irrelevant (since there is no function at play). So power series that would diverge for values of , such as , are allowed.
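A standard example of such a series (an assumption; the article's elided example may have been different):

```latex
% A formally valid power series that diverges for every nonzero argument:
\[
  f(X) \;=\; \sum_{n \ge 0} n!\, X^{n} \;=\; 1 + X + 2X^{2} + 6X^{3} + \cdots
\]
% As a function of a real or complex number x this diverges for every
% x \neq 0, but as a formal power series in the indeterminate X it is a
% perfectly well-defined object.
```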
As generators
Indeterminates are useful in abstract algebra for generating mathematical structures. For example, given a field , the set of polynomials with coefficients in is the polynomial ring with polynomial addition and multiplication as operations. In particular, if two indeterminates and are used, then the polynomial ring also uses these operations, and convention holds that .
Indeterminates may also be used to generate a free algebra over a commutative ring . For instance, with two indeterminates and , the free algebra includes sums of strings in and , with coefficients in , and with the understanding that and are not necessarily identical (since free algebra is by definition non-commutative).
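A schematic contrast between the two constructions, with the field written as K and the indeterminates as X and Y (notation assumed, since the original symbols were lost in extraction):

```latex
% Polynomial ring versus free algebra over K:
\[
  K[X, Y]:\qquad XY = YX
  \qquad\text{(commuting indeterminates)}
\]
\[
  K\langle X, Y\rangle:\qquad XY \neq YX \text{ in general}
  \qquad\text{(non-commuting indeterminates)}
\]
% For instance, XYX and X^{2}Y are distinct basis elements of the free
% algebra, while both denote the same element X^{2}Y of the polynomial ring.
```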
See also
Indeterminate equation
Indeterminate form
Indeterminate system
Notes
References
Abstract algebra
Polynomials
Mathematical series | Indeterminate (variable) | Mathematics | 835 |
792,998 | https://en.wikipedia.org/wiki/Bertil%20Lindblad | Bertil Lindblad (Örebro, 26 November 1895 – Saltsjöbaden, outside Stockholm, 25 June 1965) was a Swedish astronomer.
After finishing his secondary education at Örebro högre allmänna läroverk, Lindblad matriculated at Uppsala University in 1914. He received his filosofie magister degree in 1917, his filosofie licentiat degree in 1918, and completed his doctorate and became a docent at the university in 1920. From 1927 he was professor and astronomer of the Royal Swedish Academy of Sciences and head of the Stockholm Observatory. In the latter capacity he was responsible for the observatory's move from the old building in the centre of Stockholm to a newly built facility in Saltsjöbaden, the Saltsjöbaden Observatory, which was opened in 1931.
Lindblad studied the theory of the rotation of galaxies. By making careful observations of the apparent motions of stars, he was able to study the rotation of the Milky Way. He deduced that the rate of rotation of the stars in the outer part of the galaxy, where the Sun is located, decreased with distance from the galactic core. This deduction was soon confirmed by Jan Oort in 1927. A certain class of resonances in rotating stellar or gaseous disks, the Lindblad resonances, is named after Bertil Lindblad.
His son, Per Olof Lindblad, also became an astronomer.
Honors
Awards
Janssen Medal from the French Academy of Sciences (1938)
Prix Jules Janssen, the highest award of the Société astronomique de France, the French astronomical society (1949)
Gold Medal of the Royal Astronomical Society (1948)
Bruce Medal (1954)
Named after him
Lindblad (crater) on the Moon
Asteroid 1448 Lindbladia
References
1895 births
1965 deaths
20th-century Swedish astronomers
Uppsala University alumni
Recipients of the Gold Medal of the Royal Astronomical Society
Foreign associates of the National Academy of Sciences
Presidents of the International Astronomical Union | Bertil Lindblad | Astronomy | 396 |
1,708,126 | https://en.wikipedia.org/wiki/Pitch%20detection%20algorithm | A pitch detection algorithm (PDA) is an algorithm designed to estimate the pitch or fundamental frequency of a quasiperiodic or oscillating signal, usually a digital recording of speech or a musical note or tone. This can be done in the time domain, the frequency domain, or both.
PDAs are used in various contexts (e.g. phonetics, music information retrieval, speech coding, musical performance systems) and so there may be different demands placed upon the algorithm. There is as yet no single ideal PDA, so a variety of algorithms exist, most falling broadly into the classes given below.
A PDA typically estimates the period of a quasiperiodic signal, then inverts that value to give the frequency.
General approaches
One simple approach would be to measure the distance between zero crossing points of the signal (i.e. the zero-crossing rate). However, this does not work well with complicated waveforms that are composed of multiple sine waves with differing periods, or with noisy data. Nevertheless, there are cases in which zero-crossing can be a useful measure, e.g. in some speech applications where a single source is assumed. The algorithm's simplicity makes it "cheap" to implement.
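A minimal sketch of the zero-crossing idea, assuming a mono signal stored as floats and a known sample rate; the function and variable names are invented for illustration:

```cpp
// Estimate pitch from the average spacing of upward zero crossings.
#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

double zero_crossing_pitch(const std::vector<float>& x, double sample_rate) {
    std::size_t first = 0, last = 0, crossings = 0;
    for (std::size_t i = 1; i < x.size(); ++i) {
        if (x[i - 1] < 0.0f && x[i] >= 0.0f) {   // negative-to-positive crossing
            if (crossings == 0) first = i;
            last = i;
            ++crossings;
        }
    }
    if (crossings < 2) return 0.0;               // not enough crossings to estimate
    double avg_period = double(last - first) / double(crossings - 1);
    return sample_rate / avg_period;             // estimated fundamental in Hz
}

int main() {
    const double pi = 3.141592653589793, fs = 44100.0, f0 = 220.0;
    std::vector<float> sine(4410);               // 100 ms of a 220 Hz sine
    for (std::size_t n = 0; n < sine.size(); ++n)
        sine[n] = static_cast<float>(std::sin(2.0 * pi * f0 * n / fs));
    std::cout << zero_crossing_pitch(sine, fs) << " Hz\n";  // approximately 220
}
```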
More sophisticated approaches compare segments of the signal with other segments offset by a trial period to find a match. AMDF (average magnitude difference function), ASMDF (average squared mean difference function), and other similar autocorrelation algorithms work this way. These algorithms can give quite accurate results for highly periodic signals. However, they have false detection problems (often "octave errors"), can sometimes cope badly with noisy signals (depending on the implementation), and, in their basic implementations, do not deal well with polyphonic sounds (which involve multiple musical notes of different pitches).
Current time-domain pitch detector algorithms tend to build upon the basic methods mentioned above, with additional refinements to bring the performance more in line with a human assessment of pitch. For example, the YIN algorithm and the MPM algorithm are both based upon autocorrelation.
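A hedged sketch of the basic autocorrelation approach (without the refinements of YIN or MPM); the pitch-range bounds are arbitrary defaults:

```cpp
// Pick the lag with the largest autocorrelation inside a plausible pitch range.
#include <cstddef>
#include <vector>

double autocorrelation_pitch(const std::vector<float>& x, double sample_rate,
                             double min_hz = 50.0, double max_hz = 1000.0) {
    const std::size_t min_lag = static_cast<std::size_t>(sample_rate / max_hz);
    const std::size_t max_lag = static_cast<std::size_t>(sample_rate / min_hz);
    double best_corr = 0.0;
    std::size_t best_lag = 0;
    for (std::size_t lag = min_lag; lag <= max_lag && lag < x.size(); ++lag) {
        double corr = 0.0;
        for (std::size_t n = 0; n + lag < x.size(); ++n)
            corr += double(x[n]) * double(x[n + lag]);
        if (corr > best_corr) { best_corr = corr; best_lag = lag; }
    }
    return best_lag ? sample_rate / double(best_lag) : 0.0;
}
// Basic versions like this are prone to the "octave errors" mentioned above,
// because lags equal to multiples of the true period also correlate strongly.
```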
Frequency-domain approaches
Frequency-domain, polyphonic detection is possible, usually utilizing the periodogram to convert the signal to an estimate of the frequency spectrum. This requires more processing power as the desired accuracy increases, although the well-known efficiency of the FFT, a key part of the periodogram algorithm, makes it suitably efficient for many purposes.
Popular frequency domain algorithms include: the harmonic product spectrum; cepstral analysis and maximum likelihood which attempts to match the frequency domain characteristics to pre-defined frequency maps (useful for detecting pitch of fixed tuning instruments); and the detection of peaks due to harmonic series.
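As an illustration of the harmonic product spectrum idea, a sketch that assumes a magnitude spectrum has already been computed (the FFT step is omitted); the function name and the number of harmonics are arbitrary:

```cpp
// Harmonic product spectrum: multiply the spectrum by downsampled copies of
// itself so that bins whose harmonics line up are reinforced.
#include <cstddef>
#include <vector>

std::size_t hps_peak_bin(const std::vector<double>& magnitude, int harmonics = 4) {
    std::size_t limit = magnitude.size() / static_cast<std::size_t>(harmonics);
    std::size_t best_bin = 0;
    double best_value = 0.0;
    for (std::size_t k = 1; k < limit; ++k) {
        double product = magnitude[k];
        for (int r = 2; r <= harmonics; ++r)
            product *= magnitude[k * static_cast<std::size_t>(r)];
        if (product > best_value) { best_value = product; best_bin = k; }
    }
    return best_bin;  // f0 is roughly best_bin * sample_rate / fft_size
}
```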
To improve on the pitch estimate derived from the discrete Fourier spectrum, techniques such as spectral reassignment (phase based) or Grandke interpolation (magnitude based) can be used to go beyond the precision provided by the FFT bins. Another phase-based approach is offered by Brown and Puckette.
Spectral/temporal approaches
Spectral/temporal pitch detection algorithms, e.g. the YAAPT pitch tracking algorithm, are based upon a combination of time domain processing using an autocorrelation function such as normalized cross correlation, and frequency domain processing utilizing spectral information to identify the pitch. Then, among the candidates estimated from the two domains, a final pitch track can be computed using dynamic programming. The advantage of these approaches is that the tracking error in one domain can be reduced by the process in the other domain.
Speech pitch detection
The fundamental frequency of speech can vary from 40 Hz for low-pitched voices to 600 Hz for high-pitched voices.
Autocorrelation methods need at least two pitch periods to detect pitch. This means that in order to detect a fundamental frequency of 40 Hz, at least 50 milliseconds (ms) of the speech signal must be analyzed. However, during 50 ms, speech with higher fundamental frequencies may not necessarily have the same fundamental frequency throughout the window.
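The arithmetic behind the 50 ms figure, written out:

```latex
% Two periods of a 40 Hz fundamental:
\[
  T_{0} = \frac{1}{40\ \text{Hz}} = 25\ \text{ms},
  \qquad
  2\,T_{0} = 50\ \text{ms},
\]
% so an autocorrelation-based detector needs an analysis window of at least
% 50 ms to observe two full periods of a 40 Hz fundamental.
```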
See also
Auto-Tune
Beat detection
Frequency estimation
Linear predictive coding
MUSIC (algorithm)
Sinusoidal model
References
External links
Alain de Cheveigne and Hideki Kawahara: YIN, a fundamental frequency estimator for speech and music
AudioContentAnalysis.org: Matlab code for various pitch detection algorithms
Audio engineering
Digital signal processing | Pitch detection algorithm | Engineering | 887 |
69,840,223 | https://en.wikipedia.org/wiki/CKLF%20like%20MARVEL%20transmembrane%20domain-containing%201 | CKLF like MARVEL transmembrane domain-containing 1 (i.e. CMTM1), formerly termed chemokine-like factor superfamily 1 (i.e. CKLFSF1), has 23 known isoforms, the CMTM1-v1 to CMTM1-v23 proteins. Protein isoforms are variant products that are made by alternative splicing of a single gene. The gene for these isoforms, CMTM1 (formerly termed CKLFSF1), is located in band 22 on the long (i.e. "q") arm of chromosome 16. The CMTM1 gene and its 23 isoforms belong to the CKLF-like MARVEL transmembrane domain-containing family of structurally and functionally related genes and proteins. CMTM1 (isoforms not specified) proteins are weakly expressed in a wide range of normal tissues but are far more highly expressed in normal testes as well as the malignant cells of certain types of cancer.
Studies have reported that the levels of CMTM1 (typically the CMTM1–v17 isoform) are more highly expressed in breast, kidney, lung, ovary, liver (i.e. hepatocellular carcinoma), and salivary gland adenoid cystic carcinoma malignant tissues than the nearby normal tissues of these respective organs. According to the Human Protein Atlas, higher levels of CMTM1 expression in hepatocellular carcinoma tissues are associated with shorter survival times. Another study found that the levels of CMTM1 mRNA (which directs the production of CMTM1 protein) were higher in stomach cancer compared to nearby normal stomach tissues. And, studies of glioblastoma found no significant difference between the levels of CMTM1 in this brain tumor's tissues versus nearby normal brain tissues but higher levels of tumor tissue CMTM1 were associated with poorer prognoses. In addition, the forced overexpression of CMTM1 in cultured glioblastoma cell lines increased their proliferation and invasiveness. These findings suggest that CMTM1 proteins may act to promote the cited cancers and support further studies to determine if these proteins contribute to the development and/or progression of the cited cancers, can be used as markers of disease severity and/or prognosis, or are targets for treating these cancers.
In contrast to the findings in the cancers just cited, cell culture studies indicated that the forced overexpression of the CMTM1-v5 isoform induced apoptosis (i.e. cell death due to the activation of cell death-inducing signaling pathways) in two types of lymphoma cell lines, Jurkat cells (a human T cell leukemia cell line) and Raji cells (a human non-Hodgkin's lymphoma cell line). Simple addition of CMTM1-v5 protein to cultures of Daudi or Ramos cells (both are Burkitt's lymphoma cell lines) or Jurkat cells likewise caused these cells to become apoptotic. Various other cultured hematological tumor cell lines had no such response to the CMTM1-v5 protein. Finally, the injection of CMTM1-v5 into mice containing Raji cell tumors in a xenotransplantation model of cancer inhibited the spread of these tumors and prolonged the survival of the mice. These findings suggest that CMTM1-v5 protein may act to suppress certain types of lymphoma in humans and support initial studies to define the CMTM1-v5 levels in the malignant cells of humans with these lymphomas. Further studies are also needed to determine the basis for the CMTM1 proteins' promoting actions in the cited cancers versus suppressing actions in the cited lymphomas.
References
Human proteins
Gene expression | CKLF like MARVEL transmembrane domain-containing 1 | Chemistry,Biology | 798 |
24,144,415 | https://en.wikipedia.org/wiki/C29H52O | {{DISPLAYTITLE:C29H52O}}
The molecular formula C29H52O (molar mass: 416.72 g/mol, exact mass: 416.4018 u) may refer to:
24-Ethyl coprostanol
Stigmastanol (sitostanol)
Poriferastanol
Molecular formulas | C29H52O | Physics,Chemistry | 75 |
39,722,957 | https://en.wikipedia.org/wiki/Mitsubishi%20MCA | Mitsubishi MCA stands for Mitsubishi Clean Air, a moniker used in Japan to identify vehicles built with emission control technology. The term was first introduced in Japan, with later introductions internationally. The technology first appeared in January 1973 on the Mitsubishi 4G32A gasoline-powered inline four cylinder engine installed in all Mitsubishi vehicles using the 4G32 engine, and the Saturn-6 6G34 six-cylinder gasoline-powered engine installed in the Mitsubishi Debonair. The technology was installed so that their vehicles would be in compliance with Japanese Government emission regulations passed in 1968.
Emission-reducing technology began with the installation of a positive crankcase ventilation (PCV) valve (MCA-I), followed by the addition of a thermo reactor air pump and catalytic converter, together with an exhaust gas recirculation (EGR) valve (MCA-II) and a solenoid-controlled automatic choke installed on the carburetor.
The MCA-Jet system has a small third valve separate from the intake and exhaust valves. Separate passages in the intake manifold feed each MCA-Jet valve. Since these passages are smaller than the main intake manifold passages, the air/fuel mixture must move faster. When the faster moving air/fuel mixture from the MCA-Jet valve hits the slower moving air/fuel mixture from the intake valve, a strong air swirling effect occurs that promotes more complete combustion. With MCA-Jet it was found that stable combustion could be obtained even with large amounts of exhaust gas recirculation (EGR), NOx could be reduced, and combustion improved. Honda's CVCC Stratified charge engine approach also used a small third valve, but sent a richer air/fuel mixture to a small pre-combustion chamber near the spark plug, to help ignite a leaner air/fuel mixture in the main combustion chamber. MCA-Jet was a simpler system that sent the same air/fuel mixture to all intake and MCA-Jet valves. Each MCA-Jet valve is quite small and may be prone to carbon build-up, causing the MCA-Jet valve(s) to stick open. If a Mitsubishi-designed engine has low compression, the MCA-Jet valve(s) could be the cause. Each MCA-Jet valve and valve seat are a self-contained cylinder-shaped unit that screws into the cylinder head for easy replacement. Aftermarket MCA-Jet valves are available. With the advent of 4-valve-per-cylinder engines, manufacturers typically design the camshaft(s) to open one intake valve slightly before the other to create a swirling effect. This has made the MCA-Jet system obsolete. The MCA-Jet system was used in certain Mitsubishi-designed engines installed in both Mitsubishi-branded and Chrysler/Dodge/Plymouth-branded vehicles during the late 1970s to late 1980s.
References
Engine technology
Engines
Automotive technology tradenames
Mitsubishi Motors | Mitsubishi MCA | Physics,Technology | 584 |
29,674,553 | https://en.wikipedia.org/wiki/Share%20icon | A share icon is a user interface icon intended to convey to the user a button for performing a share action. Content platforms such as YouTube often include a share icon so that users can forward the content onto social media platforms or embed videos into their websites, thus increasing its view count.
Share Icon
WordPress developer Alex King created the original Share Icon in 2006. ShareThis acquired the rights to this icon a year later, and eventually licensed it under four licenses: the share-alike GPL and LGPL, and the permissive BSD license and Creative Commons Attribution 2.5. ShareThis produces widgets for accessing social networking services from a single pop-up menu. The icon is trademarked, and it became a source of controversy when it was made the subject of legal take-down notices despite its license.
Open Share Icon
The Open Share Icon (or Shareaholic icon) is designed to help users easily identify shareable content. The icon aims to convey the act of sharing visually by representing one hand passing an object to another hand, while also representing an eye meaning "look at this." The icon was designed by the company Shareaholic, and made available under a Creative Commons share-alike license, with the requirement of "clear attribution and a hyperlink back to this page in a prominent location".
The Open Share Icon is supported by Ken Rossi (creator of the widely used OPML icons) and Bruce McKenzie (GeoTag icons), and is used by hundreds of websites and applications, including SmugMug, the Shareaholic Firefox addon, Wikia, NetworkWorld, Weather Underground, Princeton University. It is also proposed for use in the Mozilla Add-ons Directory.
Rightward arrow icon
Facebook uses a share icon showing an arrow pointing up and then right.
The "share" button on Facebook covers several ways of sending the content with optional privacy settings to others. The "shares" button can generate more non-fans, and can result in fewer fans on a public Facebook page as a “brake effect of viral reach". The algorithmic content ranking on Facebook might decrease the fan reach in response to the notable increase in nonfan reach. Therefore, the "share" button may have a higher impact on content ranking than the previous "page like" button.
YouTube uses a similar icon.
Upward arrow icon
Apple's products use an icon showing a box with an upward arrow. This icon is used on several Apple products:
In iOS's iWork, for sharing work on iWork.com.
In Mobile Safari, to show a menu of options: "Add Bookmark," "Add to Home Screen," "Mail Link to this Page," and "Print."
In QuickTime X, to show the iTunes, MobileMe Gallery, YouTube, and trim options
Twitter shows a similar icon next to each tweet. This opens a menu with three options: Send via Direct Message, Add Tweet to Bookmarks, or Copy link to Tweet.
See also
Feed icon
Like button
References
Graphical user interface elements
Computer icons
Social information processing | Share icon | Technology | 635 |
1,544,931 | https://en.wikipedia.org/wiki/Bonnefantenmuseum | The Bonnefanten Museum is a museum of historic, modern and contemporary art in Maastricht, Netherlands.
History
The museum was founded in 1884 as the historical and archaeological museum for the Dutch province of Limburg. The name Bonnefanten Museum is derived from the French 'bons enfants' ('good children'), the popular name of a former convent that housed the museum from 1951 until 1978.
In 1995, the museum moved to its present location, a former industrial site named 'Céramique'. The new building was designed by Italian architect Aldo Rossi. With its rocket-shaped cupola overlooking the river Maas, it is one of Maastricht's most prominent modern buildings.
Since 1999, the museum has become exclusively an art museum. The historical and archaeological collections were housed elsewhere, partially at the Limburg Museum in Venlo. The museum is largely funded by the province of Limburg.
In 2009, the museum celebrated its 125th anniversary with the exhibition Exile on Main Street, celebrating modern and contemporary American art. Stijn Huijts has been director since 2012.
Collection
The combination of historic art and contemporary art under one roof gives the Bonnefanten Museum a distinctive character. The department of old masters is located on the first floor and displays highlights of early Italian, Flemish and Dutch painting. Exhibited on the same floor is the museum's extensive collection of medieval sculpture. The contemporary art collection is usually exhibited on the second floor and focuses on American Minimalism, Italian Arte Povera and Concept Art. The second and third floors are also used for temporary exhibitions.
Historic Art
The collection of historic paintings and sculptures of the Bonnefanten Museum consists of four main sections:
Wooden sculptures dating from the 13th to the 16th century, notably by Jan van Steffeswert (e.g. The Virgin and Child with St. Anne) and the );
The Neutelings Collection of medieval art, consisting of artefacts made of wood, bronze, marble, alabaster and ivory from the Southern Netherlands, France, England and the German Lower Rhine region;
Italian paintings from the 14th and 15th centuries: Giovanni del Biondo, Domenico di Michelino, Jacopo del Casentino, Sano di Pietro, Pietro Nelli;
Flemish and Dutch paintings from the 16th and 17th centuries: Colijn de Coter, Jan Mandyn, Jan Provost, Roelandt Savery, Pieter Coecke van Aelst, Pieter Aertsen, Pieter Brueghel the Younger, David Teniers II, Peter Paul Rubens, Jacob Jordaens, Hendrik van Steenwijk II, Gérard de Lairesse, Wallerant Vaillant, Melchior d'Hondecoeter, Jan van Goyen and Cornelis de Bryer.
Contemporary art
Since Alexander van Grevenstein became director in 1986, the Bonnefanten Museum has focused mainly on contemporary art. The main focus of the permanent collection is on:
Conceptual art: Jan Dibbets, Marcel Broodthaers, Joseph Beuys, Bruce Nauman, Gilbert and George, Ai Weiwei;
Minimal Art: Sol LeWitt, Robert Ryman, Robert Mangold, Richard Serra;
Arte Povera: Luciano Fabro, Mario Merz, Jannis Kounellis;
Neo-expressionism: Neo Rauch, Peter Doig, Gary Hume, Grayson Perry, Luc Tuymans, Marlene Dumas.
The collection also features video art and room-size installations by younger artists: Atelier Van Lieshout, Francis Alÿs, David Claerbout, Patrick Van Caeckenbergh, Roman Signer, Franz West, Pawel Althamer.
In 2011, a deal was negotiated between the collectors Jo and Marlies Eyck and the province of Limburg. The result was that the Eyck collection of postwar art and the castle of Wijlre and its grounds, are now part of the museum.
Visitor numbers
All figures are from museum year reports.
Governance
The current director is Stijn Huijts. He replaced Alexander van Grevenstein, who became director in 1986. As of 2023, there are 53 permanent staff at the museum. The 2023 budget was around €9.9m, of which €6.5m was funding received from the province of Limburg.
Gallery
See also
Google Arts & Culture
Bibliography, references and notes
Szénássy, I.L. (ed.), Bonnefantenmuseum. Het gebouw. Het museum. De verzamelingen. Maastricht, 1984
Szénássy, I.L. (ed.), Kunst in het Bonnefantenmuseum. Maastricht, 1984
Szénássy, I.L. (ed.), Oudheden in het Bonnefantenmuseum. Maastricht, 1984
Poel, P. te, Bonnefantenmuseum. Collectie Middeleeuws Houtsnijwerk. Maastricht, 2007
Poel, P. te, Bonnefantenmuseum. Collectie Neutelings. Maastricht, 2007
Quik, T., Bonnefantenmuseum. De geschiedenis. Maastricht, 2007
Timmers, J.J.M., Catalogus van schilderijen en beeldhouwwerken. Maastricht, 1958
Wegen, R. van, and T. Quik (ed.), Bonnefantenmuseum Maastricht. Maastricht, 1995
External links
Bonnefantenmuseum within Google Arts & Culture
Modern art museums
Art museums and galleries in the Netherlands
Postmodern architecture
Museums in Maastricht
Art museums and galleries established in 1884
1884 establishments in the Netherlands
19th-century architecture in the Netherlands | Bonnefantenmuseum | Engineering | 1,212 |
41,999,000 | https://en.wikipedia.org/wiki/Fonderie%20Nationale%20des%20Bronzes | Fonderie Nationale des Bronzes (established as J. Petermann fondeur Bruxelles) was a 19th– and 20th–century artistic studio and foundry in Brussels, Belgium, that specialized in bronze sculptures. It became known for casting the works of Auguste Rodin, Rembrandt Bugatti, Paul Delvaux, and many others.
Works
Several works by various artists are located in noted museums around the world, including the Musée d'Orsay in Paris.
A statue of William the Silent (1920) on the campus of Rutgers University in New Brunswick, New Jersey, cast by Toon Dupuis from a mould by Lodewyk Royer.
References
Foundries
Studios in Belgium
Buildings and structures in Brussels
Cultural organisations based in Belgium
19th century in Brussels
20th century in Brussels
Organisations based in Brussels
Belgian art
European sculpture
History of sculpture
Metal companies of Belgium | Fonderie Nationale des Bronzes | Chemistry | 178 |
46,655,707 | https://en.wikipedia.org/wiki/Pinatuzumab%20vedotin | Pinatuzumab vedotin (INN; development codes DCDT2980S and FCU2703) is a monoclonal antibody designed for the treatment of B-cell malignancies.
This drug was developed by Genentech/Roche.
References
Monoclonal antibodies for tumors
Antibody-drug conjugates
Experimental cancer drugs | Pinatuzumab vedotin | Biology | 73 |
36,581,725 | https://en.wikipedia.org/wiki/Stephen%20Hawking%3A%20Master%20of%20the%20Universe | Stephen Hawking: Master of the Universe is a documentary television series produced by the television broadcaster Channel 4. The subject of the series is British theoretical physicist Stephen Hawking, known for his work on black holes, who is also the presenter of the series. The series includes interviews with astrophysicist Kim Weaver, Bernard Carr, a student of Hawking's, and three theoretical physicists: Michio Kaku, Edward Witten, known for his work on superstring theory, and Lisa Randall. The first episode premiered in 2008, twenty years after the publication of Hawking's bestselling popular science book A Brief History of Time. The title is derived from a Newsweek cover.
The series consists of two episodes. The first describes Hawking's personal life, his challenges in overcoming his motor neurone disease, and his career in physics. It covers his childhood, education, marriage, family life, and his work on the Big Bang and black holes. The second episode discusses string theory and supersymmetry. Both episodes are 48 minutes long, and premiered as half-hour-long programmes. The series was released on DVD by Channel 4 in the UK as a single-disc Region 2 DVD (B00140SGOM) on 18 March 2008.
Reception
The premiere of the first episode attracted 1.9 million viewers, and was considered a success. The second episode had 1.7 million viewers. James Walton of The Daily Telegraph wrote a positive review of the first episode, saying that it "hadn't done a bad job of trying to explain advanced physics to the science novice," even if it was "extremely difficult stuff." Philip Wakefield, a television critic for stuff.co.nz, listed the first episode in his "Top TV picks", calling it "the neatest illustration of Einstein's theory of relativity I've ever seen." Sam Wollaston of The Guardian was more critical of the series, but did praise it for showing "the little glimpses of Prof Hawking's private life, like sharing a takeaway curry with a group of adoring young disciples."
References
External links
2008 British television series debuts
2008 British television series endings
2000s British documentary television series
Channel 4 documentaries
Science education television series
British English-language television shows
Cultural depictions of Stephen Hawking | Stephen Hawking: Master of the Universe | Astronomy | 476 |
2,029,635 | https://en.wikipedia.org/wiki/Variational%20perturbation%20theory | In mathematics, variational perturbation theory (VPT) is a mathematical method to convert divergent power series in a small expansion parameter, say
,
into a convergent series in powers
,
where is a critical exponent (the so-called index of "approach to scaling" introduced by Franz Wegner). This is possible with the help of variational parameters, which are determined by optimization order by order in . The partial sums are converted to convergent partial sums by a method developed in 1992.
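A schematic of the order-by-order optimization step, in assumed notation (W_N for the partial sum through order N after a variational parameter Ω has been introduced); this is a sketch of the general idea, not a formula taken from the cited references:

```latex
% Schematic only: the optimal variational parameter at order N is fixed by
% requiring the truncated result to be least sensitive to it.
\[
  \frac{\partial W_{N}(g,\Omega)}{\partial \Omega}\bigg|_{\Omega=\Omega_{N}} = 0 .
\]
```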
Most perturbation expansions in quantum mechanics are divergent for any small coupling strength . They can be made convergent by VPT (for details see the first textbook cited below). The convergence is exponentially fast.
After its success in quantum mechanics, VPT has been developed further to become an important mathematical tool in quantum field theory with its anomalous dimensions. Applications focus on the theory of critical phenomena. It has led to the most accurate predictions of critical exponents.
More details can be read here.
References
External links
Kleinert H., Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets, 3. Auflage, World Scientific (Singapore, 2004) (readable online here) (see Chapter 5)
Kleinert H. and Verena Schulte-Frohlinde, Critical Properties of φ4-Theories, World Scientific (Singapur, 2001); Paperback (readable online here) (see Chapter 19)
Asymptotic analysis
Perturbation theory | Variational perturbation theory | Physics,Mathematics | 313 |
1,801,877 | https://en.wikipedia.org/wiki/Saccharic%20acid | Saccharic acid is a chemical compound with the formula C6H10O8. It is derived by oxidizing a sugar such as glucose with nitric acid.
The salts of saccharic acid are called saccharates or glucarates.
See also
Saccharide
Disaccharides
Monosaccharides
Mucic acid
Gluconic acid
Isosaccharinic acid
References
Sugar acids
Monosaccharides | Saccharic acid | Chemistry | 93 |