| source | text |
|---|---|
https://en.wikipedia.org/wiki/Cyclically%20adjusted%20price-to-earnings%20ratio | The cyclically adjusted price-to-earnings ratio, commonly known as CAPE, Shiller P/E, or P/E 10 ratio, is a valuation measure usually applied to the US S&P 500 equity market. It is defined as price divided by the average of ten years of earnings (moving average), adjusted for inflation. As such, it is principally used to assess likely future returns from equities over timescales of 10 to 20 years, with higher than average CAPE values implying lower than average long-term annual average returns.
The ratio was invented by American economist Robert J. Shiller.
The ratio is used to gauge whether a stock, or group of stocks, is undervalued or overvalued by comparing its current market price to its inflation-adjusted historical earnings record.
It is a variant of the more popular price-to-earnings ratio and is calculated by dividing the current price of a stock by its average inflation-adjusted earnings over the last 10 years.
Using average earnings over the last decade helps to smooth out the impact of business cycles and other events and gives a better picture of a company's sustainable earning power.
It is not intended as an indicator of impending market crashes, although high CAPE values have been associated with such events.
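A minimal sketch of the calculation just described, in Python; all input numbers below are invented placeholders rather than real S&P 500 data, and the function name is ours:

```python
# Minimal CAPE (Shiller P/E) sketch: price divided by the 10-year
# average of inflation-adjusted earnings. All figures are invented
# placeholders, not real S&P 500 data.

def cape(price, earnings, cpi):
    """earnings[i] and cpi[i] are annual values for the last 10 years;
    cpi[-1] is the current price level used to restate past earnings."""
    assert len(earnings) == len(cpi) == 10
    real = [e * cpi[-1] / c for e, c in zip(earnings, cpi)]  # inflation-adjust
    return price / (sum(real) / len(real))

price = 4500.0
earnings = [95, 100, 104, 90, 110, 120, 118, 125, 130, 140]
cpi = [230, 234, 238, 241, 245, 250, 256, 270, 281, 290]
print(round(cape(price, earnings, cpi), 1))  # ~34.9 with these inputs
```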
Background
Value investors Benjamin Graham and David Dodd argued for smoothing a firm's earnings over the past five to ten years in their classic text Security Analysis. Graham and Dodd noted one-year earnings were too volatile to offer a good idea of a firm's true earning power. In a 1988 paper, economists John Y. Campbell and Robert Shiller concluded that "a long moving average of real earnings helps to forecast future real dividends", which in turn are correlated with returns on stocks. The idea is to take a long-term average of earnings (typically 5 or 10 years) and adjust for inflation to forecast future returns. The long-term average smooths out short-term volatility of earnings and medium-term business cycles in the general economy a |
https://en.wikipedia.org/wiki/Osa%20Wildlife%20Sanctuary |
Osa Wildlife Sanctuary
The Osa Wildlife Sanctuary, or Caña Blanca Wildlife Sanctuary, is an animal rescue center located on the Osa Peninsula in southwestern Costa Rica. The Sanctuary is accessible only by boat and is completely surrounded by Piedras Blancas National Park. The center is dedicated to the rehabilitation of mistreated, injured, orphaned, or confiscated animals. The animals received by the sanctuary include a variety of monkeys, anteaters, exotic birds, sloths, and wildcats. Once the animals are fully rehabilitated, they are reintroduced into their natural habitats in protected areas within Costa Rica, including Corcovado National Park. The Osa Wildlife Sanctuary is a nonprofit organization that receives funds from volunteers, donations, and tours.
History
The Osa Wildlife Sanctuary was originally an eco-lodge owned by a woman named Carol Patrick, its purpose being to house guests. While Carol was running the eco-lodge, she was asked by locals to take care of injured and abandoned animals. After she accepted a few at first, the number of animals under her care continued to grow, until in 2003 she opened the wildlife sanctuary. The property has continued to develop towards animal care and release since then; the former eco-lodge now includes animal enclosures as well as housing and dining facilities for the staff.
See also
List of zoos by country: Costa Rica zoos |
https://en.wikipedia.org/wiki/Mechanism%20%28philosophy%29 | Mechanism is the belief that natural wholes (principally living things) are similar to complicated machines or artifacts, composed of parts lacking any intrinsic relationship to each other.
The doctrine of mechanism in philosophy comes in two different flavors. They are both doctrines of metaphysics, but they are different in scope and ambitions: the first is a global doctrine about nature; the second is a local doctrine about humans and their minds, which is hotly contested. For clarity, we might distinguish these two doctrines as universal mechanism and anthropic mechanism.
Mechanical philosophy
The mechanical philosophy is a form of natural philosophy which compares the universe to a large-scale mechanism (i.e. a machine). The mechanical philosophy is associated with the scientific revolution of early modern Europe. One of the first expositions of universal mechanism is found in the opening passages of Leviathan by Thomas Hobbes, published in 1651.
Some intellectual historians and critical theorists argue that early mechanical philosophy was tied to disenchantment and the rejection of the idea of nature as living or animated by spirits or angels. Other scholars, however, have noted that early mechanical philosophers nevertheless believed in magic, Christianity and spiritualism.
Mechanism and determinism
Some ancient philosophies held that the universe is reducible to completely mechanical principles—that is, the motion and collision of matter. This view was closely linked with materialism and reductionism, especially that of the atomists and to a large extent, stoic physics. Later mechanists believed the achievements of the scientific revolution of the 17th century had shown that all phenomena could eventually be explained in terms of "mechanical laws": natural laws governing the motion and collision of matter that imply a determinism. If all phenomena can be explained entirely through the motion of matter under physical laws, as the gears of a clock deter |
https://en.wikipedia.org/wiki/Staggered%20conformation | In organic chemistry, a staggered conformation is a chemical conformation of an ethane-like moiety abcX–Ydef in which the substituents a, b, and c are at the maximum distance from d, e, and f; this requires the torsion angles to be 60°. It is the opposite of an eclipsed conformation, in which those substituents are as close to each other as possible.
Such a conformation exists in any open chain single chemical bond connecting two sp3-hybridised atoms, and is normally a conformational energy minimum. For some molecules such as those of n-butane, there can be special versions of staggered conformations called gauche and anti; see first Newman projection diagram in Conformational isomerism.
Staggered/eclipsed configurations also distinguish different crystalline structures of e.g. cubic/hexagonal boron nitride, and diamond/lonsdaleite.
See also
Alkane stereochemistry
Eclipsed conformation |
https://en.wikipedia.org/wiki/Ortman%20key | An Ortman key is a coupling device used to secure two adjacent cylindrical segments of a pressure vessel common in tactical rocket motors. An Ortman key is made of elongated rectangular metal bar stock, such as steel, and is inserted into juxtaposed annular grooves around the circumference of the mating parts. The Ortman key assembly is used in high-pressure applications where packaging, strength and mass are important.
The Edmund key is a common variant of the Ortman key, which is similar except that it has a feature added to the end of the key to aid in its extraction from the assembly. |
https://en.wikipedia.org/wiki/Antibody-dependent%20enhancement | Antibody-dependent enhancement (ADE), sometimes less precisely called immune enhancement or disease enhancement, is a phenomenon in which binding of a virus to suboptimal antibodies enhances its entry into host cells, followed by its replication. The suboptimal antibodies can result from natural infection or from vaccination. ADE may cause enhanced respiratory disease, but is not limited to respiratory disease. It has been observed with HIV, respiratory syncytial virus (RSV), and dengue virus, and is monitored for in vaccine development.
Technical description
In ADE, antiviral antibodies promote viral infection of target immune cells by exploiting the phagocytic FcγR or complement pathway. After interaction with a virus, the antibodies bind Fc receptors (FcR) expressed on certain immune cells or complement proteins. FcγRs bind antibodies via their fragment crystallizable region (Fc).
The process of phagocytosis is accompanied by virus degradation, but if the virus is not neutralized (either due to low affinity binding or targeting to a non-neutralizing epitope), antibody binding may result in virus escape and, therefore, more severe infection. Thus, phagocytosis can cause viral replication and the subsequent death of immune cells. Essentially, the virus “deceives” the process of phagocytosis of immune cells and uses the host's antibodies as a Trojan horse.
ADE may occur because of the non-neutralizing characteristic of an antibody, which binds viral epitopes other than those involved in host-cell attachment and entry. It may also happen when antibodies are present at sub-neutralizing concentrations (yielding occupancies on viral epitopes below the threshold for neutralization), or when the strength of antibody-antigen interaction is below a certain threshold. This phenomenon can lead to increased viral infectivity and virulence.
ADE can occur during the development of a primary or secondary viral infection, as well as with a virus challenge after vaccination. It has been observed mainly wi |
https://en.wikipedia.org/wiki/VT100%20encoding | The VT100 code page is a character encoding used to represent text on the Classic Mac OS for compatibility with the VT100 terminal. It encodes 256 characters, the first 128 of which are identical to ASCII, with the remaining characters including mathematical symbols, diacritics, and additional punctuation marks. It is suitable for English and several other Western languages. It is similar to Mac OS Roman, but includes all characters in ISO 8859-1 except for the currency sign (which was superseded by the euro sign), the no-break space, and the soft hyphen. It also includes all characters in DEC Special Graphics (code page 1090), except for the new line and no-break space controls. The VT100 encoding is only used on the VT100 font on the Classic Mac OS, and is not an official Mac OS character encoding.
Codepage layout
The following table shows how characters are encoded in the VT100 character set. Each character is shown with its Unicode equivalent. |
https://en.wikipedia.org/wiki/Crab%20ice%20cream | Crab ice cream is a sweet flavour of ice cream made with crab. It is offered in some food establishments, such as Heston Blumenthal's restaurant The Fat Duck and the Venezuelan ice cream parlour Coromoto.
History and production
Crab ice cream is a Japanese creation. The island of Hokkaido, Japan, is known for manufacturing crab ice cream.
Preparation and description
Heston Blumenthal's recipe for crab ice cream involves freezing, for half an hour, a mixture of stock made mainly from crab (or prawns as an alternative), a little skimmed milk powder, a dozen egg yolks, and some sugar. Crab ice cream is described as sweet in flavour by the majority of people, although some perceive it otherwise, as it has been shown that taste is affected by the brain's expectations. Blumenthal has compared crab ice cream to "frozen crab bisque".
Notable uses
Crab ice cream is made by Heston Blumenthal, a renowned chef and owner of the Fat Duck in Berkshire, England, as a dessert item on his restaurant's menu, although he admits that it is difficult to convince customers to try it. He also made it once for testing by food scientists in June 2011, and for an ice cream event at the Royal Institution in June 2001. Heladería Coromoto, a Mérida-based ice cream shop with the largest variety of ice cream flavours sold in the world, offers "Cream of Crab" as one of its flavours. In its country of origin, Japan, crab ice cream is marketed in Japanese as Kani aisu. A Delaware-based ice cream parlor once attempted to make its own crab ice cream, but the end product was deemed a failure.
See also
List of ice cream flavors
List of crab dishes
List of seafood dishes |
https://en.wikipedia.org/wiki/Angustific%20acid | Angustific acid A and angustific acid B are antiviral compounds isolated from Kadsura angustifolia. They are triterpenoids.
See also
Angustifodilactone
Neokadsuranin |
https://en.wikipedia.org/wiki/Endless%20Forms%20Most%20Beautiful%20%28book%29 | Endless Forms Most Beautiful: The New Science of Evo Devo and the Making of the Animal Kingdom is a 2005 book by the molecular biologist Sean B. Carroll. It presents a summary of the emerging field of evolutionary developmental biology and the role of toolkit genes. It has won numerous awards for science communication.
The book's somewhat controversial argument is that evolution in animals (though no doubt similar processes occur in other organisms) proceeds mostly by modifying the way that regulatory genes, which do not code for structural proteins (such as enzymes), control embryonic development. In turn, these regulatory genes turn out to be based on a very old set of highly conserved genes which Carroll nicknames the toolkit. Almost identical sequences can be found across the animal kingdom, meaning that toolkit genes such as Hox must have evolved before the Cambrian radiation which created most of the animal body plans that exist today. These genes are used and reused, occasionally by duplication but far more often by being applied unchanged to new functions. Thus the same signal may be given at a different time in development, in a different part of the embryo, creating a different effect on the adult body. In Carroll's view, this explains how so many body forms are created with so few structural genes.
The book has been praised by critics, and called the most important popular science book since Richard Dawkins's The Blind Watchmaker.
Author
Sean B. Carroll is a professor of molecular biology and genetics at the University of Wisconsin–Madison. He studies the evolution of cis-regulatory elements (pieces of non-coding DNA) which help to regulate gene expression in developing embryos, using the fruit fly Drosophila as the model organism. He has won the Shaw Scientist Award and the Stephen Jay Gould Prize for his work.
Book
Context
The book's title quotes from the last sentence of Charles Darwin's 1859 The Origin of Species, in which he described the evol |
https://en.wikipedia.org/wiki/Sebright%20chicken | The Sebright (IPA: ) is a British breed of bantam chicken. It is a true bantam – a miniature bird with no corresponding large version – and is one of the oldest recorded British bantam breeds. It is named after Sir John Saunders Sebright, who created it as an ornamental breed by selective breeding in the early nineteenth century.
The first poultry breed to have its own specialist club for enthusiasts, Sebrights were admitted to poultry exhibition standards not long after their establishment. Today, they are among the most popular of bantam breeds. Despite their popularity, Sebrights are often difficult to breed, and the inheritance of certain unique characteristics the breed carries has been studied scientifically. As a largely ornamental chicken, they lay tiny, white eggs and are not kept for meat production.
History
Background
Sir John Saunders Sebright (1767–1846) was the 7th Sebright Baronet, and a Member of Parliament for Hertfordshire. In addition to breeding chickens, cattle and other animals, Sir John wrote several influential pamphlets on animal keeping and breeding: The Art of Improving the Breeds of Domestic Animals (1809), Observations upon Hawking (1826), and Observations upon the Instinct of Animals (1836).
Charles Darwin read Sir John's 1809 pamphlet, and was impressed with a passage that elaborated on how "the weak and the unhealthy do not live to propagate their infirmities". These writings, along with Darwin's correspondence via their mutual friend William Yarrell, aided Darwin in the inception of Darwin's theory of natural selection. Darwin's seminal work On the Origin of Species, first published in 1859, cited Sir John's experiments in pigeon breeding, and recalled "That most skilful breeder, Sir John Sebright, used to say, with respect to pigeons, that 'he would produce any given feather in three years, but it would take him six years to obtain head and beak.'" Darwin also cited Sir John extensively regarding the Sebright bantam, as well as |
https://en.wikipedia.org/wiki/Computron%20tube | The Computron was an electron tube designed to perform the parallel addition and multiplication of digital numbers. It was conceived by Richard L. Snyder, Jr., Jan A. Rajchman, Paul Rudnick and the digital computer group at the laboratories of the Radio Corporation of America under the direction of Vladimir Zworykin. Development began in 1941 under contract OEM-sr-591 to Division 7 of the National Defense Research Committee of the United States Office of Scientific Research and Development.
The numerical function of the Computron was to solve the equation $S = A \times B + C + D$, where A, B, C, and D are 14-bit inputs and S is a 28-bit output. This function was key to the RCA attempt to produce a non-analog computer-based fire-control system for use in artillery aiming during WWII.
A simple way to describe the physically complex Computron is to begin with a cathode ray tube structure in the form of a right-circular cylinder with a central vertical cathode structure. The cylinder is composed of 14 discrete planes, each plane having 14 individual radial outward projecting beams. Each of the 196 individual beams is steered by multiple deflection plates toward its two targets. Some deflection plates are connected to circuitry external to the Computron and are the data inputs. The balance of the plates are connected to internal targets and are the partial sums and products from other stages within the tube. Some of the targets are connected to circuitry outside the tube and represent the result.
The electronic function of the Computron design incorporated steered, rather than gated, multiple electron beams. Additionally, the Computron was based on the ability of a secondary electron emission target, under electron bombardment, to assume the potential of the nearest collector electrode. The Additron Tube design by Josef Kates gated electron beams of a fixed trajectory with several control grids which either passed or blocked a current. The Computron was a complex cathode ray tube while t |
https://en.wikipedia.org/wiki/Paprika%20oleoresin | Paprika oleoresin (also known as paprika extract and oleoresin paprika) is an oil-soluble extract from the fruits of Capsicum annuum or Capsicum frutescens, and is primarily used as a colouring and/or flavouring in food products. It is composed of vegetable oil (often in the range of 97% to 98%), capsaicin, the main flavouring compound giving pungency in higher concentrations, and capsanthin and capsorubin, the main colouring compounds (among other carotenoids). It is much milder than capsicum oleoresin, often containing no capsaicin at all.
Extraction is performed by percolation with a variety of solvents, primarily hexane, which are removed prior to use. Vegetable oil is then added to ensure a uniform color saturation.
Uses
Foods colored with paprika oleoresin include cheese, orange juice, spice mixtures, sauces, sweets, ketchup, soups, fish fingers, chips, pastries, fries, dressings, seasonings, jellies, bacon, ham, ribs, and, among other foods, even cod fillets. In poultry feed, it is used to deepen the colour of egg yolks.
In the United States, paprika oleoresin is listed as a color additive “exempt from certification”. In Europe, paprika oleoresin (extract), and the compounds capsanthin and capsorubin are designated by E160c.
Names and CAS nos |
https://en.wikipedia.org/wiki/De%20Donder%E2%80%93Weyl%20theory | In mathematical physics, the De Donder–Weyl theory is a generalization of the Hamiltonian formalism in the calculus of variations and classical field theory over spacetime which treats the space and time coordinates on equal footing. In this framework, the Hamiltonian formalism in mechanics is generalized to field theory in the way that a field is represented as a system that varies both in space and in time. This generalization is different from the canonical Hamiltonian formalism in field theory which treats space and time variables differently and describes classical fields as infinite-dimensional systems evolving in time.
De Donder–Weyl formulation of field theory
The De Donder–Weyl theory is based on a change of variables known as the Legendre transformation. Let $x^i$ be spacetime coordinates, for $i = 1, \dots, n$ (with $n = 4$ representing 3 + 1 dimensions of space and time), $y^a$ field variables, for $a = 1, \dots, m$, and $L = L(y^a, \partial_i y^a, x^i)$ the Lagrangian density.
With the polymomenta $p^i_a$ defined as
$$p^i_a = \frac{\partial L}{\partial(\partial_i y^a)}$$
and the De Donder–Weyl Hamiltonian function $H$ defined as
$$H = p^i_a\,\partial_i y^a - L,$$
the De Donder–Weyl equations are:
$$\partial_i y^a = \frac{\partial H}{\partial p^i_a}, \qquad \partial_i p^i_a = -\frac{\partial H}{\partial y^a}.$$
This De Donder–Weyl Hamiltonian form of the field equations is covariant, and it is equivalent to the Euler–Lagrange equations when the Legendre transformation to the variables $p^i_a$ and $H$ is not singular. The theory is a formulation of a covariant Hamiltonian field theory which is different from the canonical Hamiltonian formalism, and for $n = 1$ it reduces to Hamiltonian mechanics (see also action principle in the calculus of variations).
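As a quick worked check of the $n = 1$ reduction claimed above (an illustrative line, assuming the standard identification $x^1 = t$):

```latex
% For n = 1 there is a single coordinate x^1 = t, so the polymomenta
% and the De Donder--Weyl Hamiltonian reduce to those of mechanics:
\[
p_a = \frac{\partial L}{\partial \dot{y}^a}, \qquad
H = p_a \dot{y}^a - L, \qquad
\dot{y}^a = \frac{\partial H}{\partial p_a}, \qquad
\dot{p}_a = -\frac{\partial H}{\partial y^a},
\]
% i.e. Hamilton's canonical equations of classical mechanics.
```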
In 1935, Hermann Weyl developed the Hamilton–Jacobi theory for the De Donder–Weyl theory.
Similarly to the Hamiltonian formalism in mechanics, which is formulated using the symplectic geometry of phase space, the De Donder–Weyl theory can be formulated using multisymplectic geometry or polysymplectic geometry and the geometry of jet bundles. A generalization of the Poisson brackets to the De Donder–Weyl theory and the representation of De Donder–Weyl equations in terms |
https://en.wikipedia.org/wiki/Mesophase | In chemistry and chemical physics, a mesophase is a state of matter intermediate between solid and liquid. Gelatin is a common example of a partially ordered structure in a mesophase. Further, biological structures such as the lipid bilayers of cell membranes are examples of mesophases.
Georges Friedel (1922) called attention to the "mesomorphic states of matter" in his scientific assessment of observations of the so-called liquid crystals. Conventionally a crystal is solid, and crystallization converts liquid to solid. The oxymoron of the liquid crystal is resolved through the notion of mesophases. The observations noted an optic axis persisting in materials that had been melted and had begun to flow. The term liquid crystal persists as a colloquialism, but its use was criticized in 1993 in The Physics of Liquid Crystals, where the mesophases are introduced from the beginning:
...certain organic materials do not show a single transition from solid to liquid, but rather a cascade of transitions involving new phases. The mechanical properties and the symmetry properties of these phases are intermediate between those of a liquid and those of a crystal. For this reason they have often been called liquid crystals. A more proper name is ‘mesomorphic phases’ (mesomorphic: intermediate form)
Further, "The classification of mesophases (first clearly set out by G. Friedel in 1922) is essentially based on symmetry."
Molecules that demonstrate mesophases are called mesogens.
In technology, molecules in which the optic axis is subject to manipulation during a mesophase have become commercial products as they can be used to manufacture display devices, known as liquid-crystal displays (LCDs). The susceptibility of the optical axis, called a director, to an electric or magnetic field produces the potential for an optical switch that obscures light or lets it pass. Methods used include the Freedericksz transition, the twisted nematic field effect and the in-plane switching |
https://en.wikipedia.org/wiki/Science%20for%20Life%20Laboratory | SciLifeLab (Science for Life Laboratory) is a Swedish national center for large-scale research and one of the largest molecular biology research laboratories in Europe, working at the forefront of life sciences research, computational biology, bioinformatics, training, and services in molecular biosciences, with a focus on health and environmental research. The center combines frontline technical expertise with advanced knowledge of translational medicine and molecular bioscience.
SciLifeLab is a joint effort between four of the best ranked institutions in Sweden and Scandinavia (Karolinska Institutet—the institution that awards the Nobel Prize in Physiology or Medicine, KTH Royal Institute of Technology, Stockholm University and Uppsala University). The National Genomics Infrastructure (NGI) hosted at SciLifeLab offers large-scale DNA sequence data generation and analysis.
SciLifeLab was established in 2010 and was appointed a national center in 2013 by the Swedish government. More than 200 research groups, comprising 1,500 researchers, are associated with and work at SciLifeLab's two campuses in Stockholm and Uppsala. The Stockholm campus is surrounded by one of the largest hospitals in Europe (both the old and the new Karolinska University Hospital buildings), the Karolinska Institutet, and the European Centre for Disease Prevention and Control.
SciLifeLab is provided with SEK 150 million per year in state funds, separate from other national and European grants and infrastructure support in the fields of drug discovery, drug development, and fundamental research. Together with the American journal Science, SciLifeLab awards a prize to a young researcher. Since 2018, SciLifeLab has nominated a Sjöstrand Lecturer in Structural Biology, who spends time with students and postdocs during a visit to Sweden.
External links
SciLifeLab web site
KTH web site
Karolinska Institutet web site
Stockholm University web site
Uppsala University |
https://en.wikipedia.org/wiki/Restriction%20fragment | A restriction fragment is a DNA fragment resulting from the cutting of a DNA strand by a restriction enzyme (restriction endonucleases), a process called restriction. Each restriction enzyme is highly specific, recognising a particular short DNA sequence, or restriction site, and cutting both DNA strands at specific points within this site. Most restriction sites are palindromic, (the sequence of nucleotides is the same on both strands when read in the 5' to 3' direction of each strand), and are four to eight nucleotides long. Many cuts are made by one restriction enzyme because of the chance repetition of these sequences in a long DNA molecule, yielding a set of restriction fragments. A particular DNA molecule will always yield the same set of restriction fragments when exposed to the same restriction enzyme. Restriction fragments can be analyzed using techniques such as gel electrophoresis or used in recombinant DNA technology.
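As an illustration of how one enzyme reproducibly yields the same set of fragments, here is a minimal Python sketch; the sequence is invented, and EcoRI (recognition site GAATTC, cutting after the G) is used as the example enzyme:

```python
# Minimal sketch: cutting a DNA sequence at every occurrence of a
# restriction site. EcoRI recognises GAATTC and cuts the top strand
# between G and A (G^AATTC), leaving "sticky ends". Sequence is made up.

def restriction_fragments(dna: str, site: str = "GAATTC", cut_offset: int = 1):
    """Return the fragments produced by cutting `dna` at each `site`;
    `cut_offset` is where the top strand is cut within the site."""
    fragments, start = [], 0
    pos = dna.find(site)
    while pos != -1:
        fragments.append(dna[start:pos + cut_offset])
        start = pos + cut_offset
        pos = dna.find(site, pos + 1)
    fragments.append(dna[start:])  # trailing piece after the last cut
    return fragments

dna = "AATGAATTCGGCTAGAATTCCTTA"
print(restriction_fragments(dna))
# ['AATG', 'AATTCGGCTAG', 'AATTCCTTA'] - same input, same fragments
```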
Applications
In recombinant DNA technology, specific restriction endonucleases are used that will isolate a particular gene and cleave the sugar phosphate backbones at different points (retaining symmetry), so that the double-stranded restriction fragments have single-stranded ends. These short extensions, called sticky ends, can form hydrogen bonded base pairs with complementary sticky ends on any other DNA cut with the same enzyme (such as a bacterial plasmid).
In agarose gel electrophoresis, the restriction fragments yield a band pattern characteristic of the original DNA molecule and the restriction enzyme used; for example, the relatively small DNA molecules of viruses and plasmids can be identified simply by their restriction fragment patterns. If the nucleotide differences of two different alleles occur within the restriction site of a particular restriction enzyme, digestion of segments of DNA from individuals with different alleles for that particular gene with that enzyme would produce different fragments that will each yield di |
https://en.wikipedia.org/wiki/Earth%20auger | An earth auger, earth drill, or post-hole auger is a drilling tool used for making holes in the ground. It typically consists of a rotating vertical metal rod or pipe with one or more blades attached at the lower end, that cut or scrape the soil.
History
Metal augers have been in use since the Middle Ages to drill holes in wood. In the 19th century, the hand-operated earth auger became a common farm and construction tool in the US, and several inventors submitted patents for them. An example is the design of a certain M. Hubby of Maysfield, Texas, consisting of an open hollow cylinder with two blades at the bottom edge.
The first known power earth auger was built in 1943 by John Habluetzel, a farmer in Wamego, Kansas, from parts scavenged from other equipment, including a 7-inch helical blade from a screw separator. It was attached to a tractor and could be operated by the driver from his seat. It dug one 2.5 foot deep hole every minute. His invention was featured in the Kansas State Board of Agriculture's 35th Biennial Report. He went on to dig holes for other farmers at 10 cents per hole, a side business that he operated well into the 1950s. He donated his invention to the Kansas Museum of History in 1999.
Types
Blade arrangement
The most common design of earth auger has a helical screw blade (the flighting) winding around lower part of the shaft. The lower edge of the screw blade scrapes dirt at the bottom of the hole, and the rest of the blade acts like a screw conveyor to lift the loose soil out of the way. When the hole reaches the desired depth and the tool is pulled out, the screw blade scoops out the remaining loose dirt.
The rod may end in a sharp point protruding below the screw blade. Its purpose is to push the dirt that lies just below the rod to the sides, where the blade can pick it up. It also helps keep the hole straight by preventing the auger from wandering off to the side. The lower edge of the screw blade may have teeth.
Anothe |
https://en.wikipedia.org/wiki/Fungal%20Diversity%20Survey | Fungal Diversity Survey, or FunDiS, is a nonprofit citizen science organization formerly known as North American Mycoflora Project, Inc. FunDiS aims to document the diversity and distribution of fungi across North America “in order to increase awareness of their critical role in the health of ecosystems and allow us to better protect them in a world of rapid climate change and habitat loss.” The project encourages amateurs, working with professionals, to contribute observations to online databases vetted by experts, and to collect and document fungi for DNA barcoding. Fungal Diversity Survey, Inc. is a Charitable 501(c)(3) organization registered in Indiana, USA.
History
FunDiS grew out of an academic initiative to create a modern, comprehensive funga for North America. In 2012, some 70 academic mycologists and scientifically oriented mushroom collectors met at Yale University and called the initiative the North American Mycoflora Project. In 2017, at a meeting in Athens, Georgia, the project was reframed as a citizen science initiative, and subsequently a nonprofit organization, North American Mycoflora Project, Inc. (NAMP), was launched in December 2017. In August 2020 the organization changed its name to Fungal Diversity Survey to reflect the fact that “fungi are their own kingdom - as long as they get lumped in with plants they will not get the recognition, attention and protection they deserve.”
Programs
Fungal Diversity Database – Build a database of fungal observations on iNaturalist of sufficient scale and quality to be useful to scientists studying biodiversity and the effects of climate change and other impacts on distribution. Observations posted to a special iNaturalist project are reviewed first by triagers who give feedback to contributors of low quality observations and then by a team of expert identifiers.
Rare Fungi Challenges – A Conservation Working Group of experts identifies species of concern and their habitats, develops regional watchlists, and |
https://en.wikipedia.org/wiki/Incunabula%20Short%20Title%20Catalogue | The Incunabula Short Title Catalogue (ISTC) is an electronic bibliographic database maintained by the British Library which seeks to catalogue all known incunabula. The database lists books by individual editions, recording standard bibliographic details for each edition as well as giving a brief census of known copies, organised by location. It currently holds records of over 30,000 editions.
History
Previous efforts to comprehensively catalog 15th century printing include Georg Wolfgang Panzer's Annales Typographici ab Artis Inventae Origine ad Annum MD (1793–97) and Ludwig Hain's Repertorium Bibliographicum (1822). Hain's work was later supplemented by Copinger's Supplement and Reichling's Appendices, which would pave the way for the Gesamtkatalog der Wiegendrucke (1925). The Gesamtkatalog der Wiegendrucke (GW) was the most comprehensive catalog of incunables to date (and still offers more in-depth information than ISTC), but in recent decades work on the catalog has slowed to such a degree that the goal of cataloging all extant incunables under the GW's system is indefinitely far-off.
The ISTC was created to establish a system of incunable cataloging that was simple enough to be expanded quickly, bringing the goal of a complete incunable catalog back into focus. Furthermore, the ISTC would use standardized entries that could be entered into a machine-searchable database.
Work on the ISTC began in 1980 under the leadership of the British Library's Lotte Hellinga. Frederick R. Goff's Incunabula in American Libraries (1973) was the first pre-existing catalog to be keyed into ISTC's database. Besides providing the catalog's first 12,900 entries, Goff's system for classifying information about incunables formed the basis for the structure of ISTC's records. Entries for all of the incunables in British Library and the Italian union catalog (IGI) were added next, followed by other national incunable catalogs.
Records
ISTC records retain many characteristics |
https://en.wikipedia.org/wiki/Cape%20foot | A Cape foot is a unit of length defined as 1.0330 English feet (and equal to 12.396 English inches, or 0.31485557516 meters), found in deeds and diagrams relating to landed property. It was identically equal to the Rijnland voet and was introduced into South Africa by Dutch settlers in the seventeenth and eighteenth centuries.
Its relationship to the English foot was clarified in 1859 by an Act of the government of the Cape Colony, South Africa. It was used for land surveying and title deeds in rural areas of South Africa apart from Natal, and was also used for urban surveying and title deeds in the Transvaal. There were 144 square Cape feet in one Cape rood and 600 Cape roods (86,400 square Cape feet) in one morgen.
Its use ceased when South Africa adopted the metric system in 1977, though it has not yet been entirely replaced in pre-existing title deeds. |
https://en.wikipedia.org/wiki/Dense%20breast%20tissue | Dense breast tissue, also known as dense breasts, is a condition of the breasts where a higher proportion of the breasts are made up of glandular tissue and fibrous tissue than fatty tissue. Around 40–50% of women have dense breast tissue and one of the main medical components of the condition is that mammograms are unable to differentiate tumorous tissue from the surrounding dense tissue. This increases the risk of late diagnosis of breast cancer in women with dense breast tissue. Additionally, women with such tissue have a higher likelihood of developing breast cancer in general, though the reasons for this are poorly understood.
Definition
Dense breast tissue is defined based on the amount of glandular and fibrous tissue as compared to the percentage of fatty tissue. The current mammography classifications split up the density of breasts into four categories. Approximately 10% of women have almost entirely fatty breasts, 40% with small pockets of dense tissue, 40% with even distribution of dense tissue throughout, and 10% with extremely dense tissue. The latter two groups are those included under the definition of dense breasts. These categories were officially determined as a part of the American College of Radiology's Breast Imaging Reporting and Data System (BI-RADS).
When undergoing a mammogram, tissue density is differentiated with bright and dark spots, with the radiolucent dark areas representing fatty tissue and the radiopaque bright spots representing combined fibroglandular tissue. Assessing the new growth of a tumor as a bright spot is the primary method radiologists use to identify early-stage cancer. However, women with dense breasts have an overall white coloration referred to as the "masking effect" that prevents the identification of new bright spots in the tissue.
History
The problem of dense breasts and mammography screenings was first identified by John Wolfe in 1976 where Wolfe laid out a new classification system based on the density of f |
https://en.wikipedia.org/wiki/Room-temperature%20densification%20method | The room-temperature densification method was developed for Li2MoO4 ceramics and is based on the water-solubility of Li2MoO4. It can be used for the fabrication of Li2MoO4 ceramics instead of conventional thermal sintering. The method utilizes a small amount of aqueous phase formed by moistening the Li2MoO4 powder. The densification occurs during sample pressing as the solution fills the pores between the powder particles and recrystallizes. The contact points of the particles provide a high-pressure zone, where solubility is increased, whereas the pores act as a suitable place for the precipitation of the solution. Any residual water is removed by post-processing, typically at 120 °C. The method is also suitable for Li2MoO4 composite ceramics with up to 30 volume-% of filler material, enabling the optimization of the dielectric properties. |
https://en.wikipedia.org/wiki/Deal%E2%80%93Grove%20model | The Deal–Grove model mathematically describes the growth of an oxide layer on the surface of a material. In particular, it is used to predict and interpret thermal oxidation of silicon in semiconductor device fabrication. The model was first published in 1965 by Bruce Deal and Andrew Grove of Fairchild Semiconductor, building on Mohamed M. Atalla's work on silicon surface passivation by thermal oxidation at Bell Labs in the late 1950s. This served as a step in the development of CMOS devices and the fabrication of integrated circuits.
Physical assumptions
The model assumes that the oxidation reaction occurs at the interface between the oxide layer and the substrate material, rather than between the oxide and the ambient gas. Thus, it considers three phenomena that the oxidizing species undergoes, in this order:
It diffuses from the bulk of the ambient gas to the surface.
It diffuses through the existing oxide layer to the oxide-substrate interface.
It reacts with the substrate.
The model assumes that each of these stages proceeds at a rate proportional to the oxidant's concentration. In the first step, this means Henry's law; in the second, Fick's law of diffusion; in the third, a first-order reaction with respect to the oxidant. It also assumes steady state conditions, i.e. that transient effects do not appear.
Results
Given these assumptions, the flux of oxidant through each of the three phases can be expressed in terms of concentrations, material properties, and temperature:
$$F_1 = h (C^* - C_s), \qquad F_2 = \frac{D (C_s - C_i)}{x}, \qquad F_3 = k C_i,$$
where $x$ is the oxide thickness, $C^*$, $C_s$, and $C_i$ are the oxidant concentrations in the bulk gas, at the oxide surface, and at the oxide–substrate interface respectively, $h$ is the gas-phase transport coefficient, $D$ is the diffusivity of the oxidant in the oxide, and $k$ is the rate constant of the interface reaction.
By setting the three fluxes equal to each other ($F_1 = F_2 = F_3 = F$) the following relations can be derived:
$$C_i = \frac{C^*}{1 + k/h + kx/D}, \qquad C_s = \frac{(1 + kx/D)\,C^*}{1 + k/h + kx/D}.$$
Assuming diffusion-controlled growth, i.e. where $F_2$ determines the growth rate, and substituting $C_i$ and $C_s$ in terms of $C^*$ from the above two relations into the $F_3$ and $F_2$ equations respectively, one obtains:
$$F = \frac{k C^*}{1 + k/h + kx/D}.$$
If $N$ is the concentration of the oxidant inside a unit volume of the oxide, then the oxide growth rate can be written in the form of a differential equation:
$$\frac{dx}{dt} = \frac{F}{N} = \frac{k C^*/N}{1 + k/h + kx/D}.$$
The solution to this equation gives the o |
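The differential equation above integrates to the familiar closed form $x^2 + Ax = B(t + \tau)$, with $A = 2D(1/k + 1/h)$ and $B = 2DC^*/N$; the rest of the derivation is truncated in this excerpt, so the following Python sketch only evaluates that closed form, with placeholder constants rather than measured values:

```python
import math

# Sketch of the closed-form Deal-Grove solution
#   x^2 + A*x = B*(t + tau),   A = 2*D*(1/k + 1/h),   B = 2*D*C/N,
# solved for the oxide thickness x(t). Numbers are placeholders.

def oxide_thickness(t, A, B, tau=0.0):
    """Oxide thickness at time t from x^2 + A*x = B*(t + tau)."""
    return (A / 2.0) * (math.sqrt(1.0 + 4.0 * B * (t + tau) / A**2) - 1.0)

A, B = 0.165, 0.0117   # um and um^2/h, placeholder magnitudes
for t in (0.5, 1.0, 4.0, 16.0):
    print(f"t = {t:5.1f} h  ->  x = {oxide_thickness(t, A, B):.3f} um")
# Short times: x ~ (B/A)*t (reaction-limited, linear regime).
# Long times:  x ~ sqrt(B*t) (diffusion-limited, parabolic regime).
```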
https://en.wikipedia.org/wiki/IBM%20PCradio | The PCradio was a notebook computer released by International Business Machines (IBM) in late 1991. Designed primarily for mobile workers such as service technicians, salespersons and public safety workers, the PCradio featured a ruggedized build with no internal hard disk drive and was optioned with either a cellular or ARDIS RF modem, in addition to a standard landline modem.
Components
The internals of the PCradio were encased in a slate-gray, hardened plastic case, which IBM said was resistant to heat, moisture, impact and certain chemicals. Its port doors, connectors, and keyboard were designed to be water-resistant through the use of gaskets, seals, and O-rings. It featured a monochrome LCD capable of rendering graphics in CGA mode and text at 80 columns by 25 lines. The laptop was powered by either a nickel–cadmium battery or a wall or car power adapter.
To keep the PCradio ruggedized, IBM offered SRAM modules of various capacities up to 2 MB for file storage, in lieu of a mechanical hard disk drive. Special versions of Siega System's One-Button Mail, an e-mail client; Traveling Software's Battery Watch, a battery management application; and LapLink, a file transfer program, were developed with drivers to support the PCradio's special hardware. The latter, renamed Notebook Manager, came bundled with the PCradio as a ROM module. Owing to its ruggedized nature, the PCradio could operate between 32 and 132 degrees Fahrenheit. A thermal printer which accepted paper rolls 3-1/8 inches in diameter was optional.
The cellular model was capable of sending and receiving faxes at a rate of 9.6 kbit/s, twice its cellular data speed of 4.8 kbit/s. Meanwhile, the landline model was capable of sending but not receiving faxes, and the ARDIS model could not handle faxes at all. The cellular model could also be used for voice communications with the optional handset.
Development
The PCradio project was helmed by Robert A. Lundy, a director an |
https://en.wikipedia.org/wiki/Capacitance%20multiplier | A capacitance multiplier is designed to make a capacitor function like a much larger capacitor. This can be achieved in at least two ways.
An active circuit, using a device such as a transistor or operational amplifier
A passive circuit, using autotransformers. These are typically used for calibration standards. The General Radio / IET labs 1417 is one such example.
Capacitor multipliers make low-frequency filters and long-duration timing circuits possible that would be impractical with actual capacitors. Another application is in DC power supplies where very low ripple voltage (under load) is of paramount importance, such as in class-A amplifiers.
Transistor-based
Here the capacitance of capacitor C1 is multiplied by approximately the transistor's current gain (β).
Without Q, R2 would be the load on the capacitor. With Q in place, the loading imposed upon C1 is simply the load current reduced by a factor of (β + 1). Consequently, C1 appears multiplied by a factor of (β + 1) when viewed by the load.
Another way to look at this circuit is as an emitter follower in which capacitor C1 holds the base voltage constant; the load seen at the base is the input impedance of Q1, i.e. R2 multiplied by (1 + β), so the output is stabilized much more strongly against power-line voltage noise.
Operational amplifier based
Here, the capacitance of capacitor C1 is multiplied by the ratio of resistances: C = C1 * R1 / R2 at the Vi node.
The synthesized capacitance also brings a series resistance approximately equal to R2, and a leakage current appears across the capacitance because of the input offsets of OP. These problems can be avoided by a circuit with two op amps. In this circuit the input to OP1 can be a.c.-coupled if necessary, and the capacitance can be made variable by making the ratio of R1 to R2 variable. C = C1 * (1 + (R2 / R1)).
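A quick numeric sketch of the scaling formulas quoted above; the component values and function names are arbitrary illustrations:

```python
# Numeric sketch of the capacitance-multiplier scaling formulas.
# Component values below are arbitrary illustrations.

def transistor_multiplier(c1, beta):
    """C1 appears multiplied by (beta + 1) when viewed by the load."""
    return c1 * (beta + 1)

def opamp_multiplier(c1, r1, r2):
    """Single op-amp version: C = C1 * R1 / R2 at the Vi node."""
    return c1 * r1 / r2

def two_opamp_multiplier(c1, r1, r2):
    """Two op-amp version: C = C1 * (1 + R2 / R1)."""
    return c1 * (1 + r2 / r1)

c1 = 10e-6  # a 10 uF physical capacitor
print(transistor_multiplier(c1, beta=200))        # ~2.0e-3 F (2010 uF)
print(opamp_multiplier(c1, r1=100e3, r2=1e3))     # 1.0e-3 F (1000 uF)
print(two_opamp_multiplier(c1, r1=1e3, r2=99e3))  # 1.0e-3 F (1000 uF)
```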
In the circuits described above the capacitance is grounded, but floating capacitance multipliers are possible.
A negative capacitance multiplier can be create |
https://en.wikipedia.org/wiki/D-amino%20acid%20dehydrogenase | D-amino-acid dehydrogenase (EC 1.4.99.1) is a bacterial enzyme that catalyses the oxidation of D-amino acids into their corresponding oxoacids. It contains both flavin and nonheme iron as cofactors. The enzyme has a very broad specificity and can act on most D-amino acids.
D-amino acid + H2O + acceptor <=> a 2-oxo acid + NH3 + reduced acceptor
This reaction is distinct from the oxidation reaction catalysed by D-amino acid oxidase that uses oxygen as a second substrate, as the dehydrogenase can use many different compounds as electron acceptors, with the physiological substrate being coenzyme Q.
D-amino acid dehydrogenase can be coupled with glucose dehydrogenase, which regenerates NADPH from NADP+ and D-glucose, to produce D-amino acids. Some of these, such as D-leucine, D-isoleucine, and D-valine, correspond to essential amino acids that humans cannot synthesize and must obtain from their diet. Moreover, the enzyme catalyzes the interconversion of 2-oxo acids and D-amino acids in the presence of DCIP, which is an electron acceptor. D-amino acids are used as components of pharmaceutical products, such as antibiotics, anticoagulants, and pesticides, because they have been shown to be not only more potent than their L enantiomers, but also more resistant to enzymatic degradation. D-amino acid dehydrogenase enzymes have been synthesized via mutagenesis with an ability to produce straight-chain, branched, cyclic aliphatic, and aromatic D-amino acids. Solubilized D-amino acid dehydrogenase tends to increase its affinity for D-alanine, D-asparagine, and D-α-amino-n-butyrate.
In E. coli K12 D-amino acid dehydrogenase is most active with D-alanine as its substrate, as this amino acid is the sole source of carbon, nitrogen, and energy. The enzyme works optimally at pH 8.9 and has a Michaelis constant for D-alanine equal to 30 mM. DAD discovered in gram-negative E. coli B membrane can convert L-amino acids into D-amino acids as well.
Addit |
https://en.wikipedia.org/wiki/Shockley%E2%80%93Ramo%20theorem | The Shockley–Ramo theorem is a method for calculating the electric current induced by a charge moving in the vicinity of an electrode. Previously known simply as the "Ramo theorem", it was given its modified name by D.S. McGregor et al. in 1998 to recognize the contributions of both Shockley and Ramo to understanding the influence of mobile charges in a radiation detector. The theorem appeared in William Shockley's 1938 paper titled "Currents to Conductors Induced by a Moving Point Charge" and in Simon Ramo's 1939 paper titled "Currents Induced by Electron Motion".
It is based on the concept that the current induced in the electrode is due to the instantaneous change of electrostatic flux lines that end on the electrode, rather than the amount of charge received by the electrode per second (net charge flow rate).
The Shockley–Ramo theorem states that the instantaneous current $i$ induced on a given electrode due to the motion of a charge is given by:
$$i = q\,v\,E_v,$$
where
$q$ is the charge of the particle;
$v$ is its instantaneous velocity; and
$E_v$ is the component of the electric field in the direction of $v$ at the charge's instantaneous position, under the following conditions: charge removed, given electrode raised to unit potential, and all other conductors grounded.
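As a worked illustration (an invented example, not from the original text): in a parallel-plate detector with electrode gap $d$, the weighting field obtained under those conditions is uniform with magnitude $1/d$, so a charge drifting at speed $v$ induces $i = qv/d$:

```python
# Shockley-Ramo sketch for a parallel-plate detector: with the sensing
# electrode at 1 V, the other grounded, and the charge removed, the
# weighting field is uniform, E_v = 1/d, hence i = q * v / d.
# All numbers below are invented for illustration.

Q_E = 1.602e-19  # elementary charge, C

def induced_current(charge_c, drift_speed_m_s, gap_m):
    """Instantaneous current induced on the sensing electrode."""
    weighting_field = 1.0 / gap_m           # 1/m, parallel-plate case
    return charge_c * drift_speed_m_s * weighting_field

i = induced_current(charge_c=1000 * Q_E,    # cloud of 1000 electrons
                    drift_speed_m_s=1.0e5,  # assumed drift speed
                    gap_m=300e-6)           # 300 um gap
print(f"induced current = {i:.3e} A")       # ~5.3e-8 A
```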
The theorem has been applied to a wide variety of applications and fields, including semiconductor radiation detection, calculations of charge movement in proteins., or the detection of moving ions in vacuum for mass spectrometry or ion implantation. |
https://en.wikipedia.org/wiki/Hypertrichosis%201%20%28universalis%2C%20congenital%29 | Hypertrichosis 1 (universalis, congenital) is a protein that in humans is encoded by the HTC1 gene. |
https://en.wikipedia.org/wiki/LCD%20projector | An LCD projector is a type of video projector for displaying video, images or computer data on a screen or other flat surface. It is a modern equivalent of the slide projector or overhead projector. To display images, LCD (liquid-crystal display) projectors typically send light from a metal-halide lamp through a prism or series of dichroic filters that separates the light to three polysilicon panels, one each for the red, green and blue components of the video signal. As polarized light passes through the panels (a combination of polarizer, LCD panel and analyzer), individual pixels can be opened to allow light to pass or closed to block the light. The combination of open and closed pixels can produce a wide range of colors and shades in the projected image.
Metal-halide lamps are used because they output an ideal color temperature and a broad spectrum of color. These lamps also have the ability to produce an extremely large amount of light within a small area; current projectors average about 2,000 to 15,000 American National Standards Institute (ANSI) lumens.
Other technologies, such as Digital Light Processing (DLP) and liquid crystal on silicon (LCOS) are also becoming more popular in modestly priced video projection.
Projection surfaces
Because of their small lamps and their ability to project an image on any flat surface, LCD projectors tend to be smaller and more portable than some other types of projection systems. Even so, the best image quality is achieved with a blank white, grey, or black (which blocks reflected ambient light) surface, so dedicated projection screens are often used.
Perceived color in a projected image is a factor of both projection surface and projector quality. Since white is more of a neutral color, white surfaces are best suited for natural color tones; as such, white projection surfaces are more common in most business and school presentation environments.
However, darkest black in a projected image is dependent on how dark the screen is |
https://en.wikipedia.org/wiki/Brauer%E2%80%93Siegel%20theorem | In mathematics, the Brauer–Siegel theorem, named after Richard Brauer and Carl Ludwig Siegel, is an asymptotic result on the behaviour of algebraic number fields. It attempts to generalise the results known on the class numbers of imaginary quadratic fields to a more general sequence of number fields $K_1, K_2, \ldots$
In all cases other than the rational field Q and imaginary quadratic fields, the regulator $R_i$ of $K_i$ must be taken into account, because $K_i$ then has units of infinite order by Dirichlet's unit theorem. The quantitative hypothesis of the standard Brauer–Siegel theorem is that if $D_i$ is the discriminant of $K_i$, then
$$\frac{[K_i : \mathbf{Q}]}{\log |D_i|} \to 0 \quad \text{as } i \to \infty.$$
Assuming that, and the algebraic hypothesis that $K_i$ is a Galois extension of Q, the conclusion is that
$$\frac{\log (h_i R_i)}{\log \sqrt{|D_i|}} \to 1 \quad \text{as } i \to \infty,$$
where $h_i$ is the class number of $K_i$. If one assumes that all the degrees $[K_i : \mathbf{Q}]$ are bounded above by a uniform constant N, then one may drop the assumption of normality; this is what is actually proved in Brauer's paper.
This result is ineffective, as indeed was the result on quadratic fields on which it built. Effective results in the same direction were initiated in work of Harold Stark from the early 1970s. |
https://en.wikipedia.org/wiki/Conjugate%20%28square%20roots%29 | In mathematics, the conjugate of an expression of the form $a + b\sqrt{d}$ is $a - b\sqrt{d}$, provided that $\sqrt{d}$ does not appear in $a$ and $b$. One says also that the two expressions are conjugate.
In particular, the two solutions of a quadratic equation are conjugate, as per the $\pm$ in the quadratic formula $x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$.
Complex conjugation is the special case where the square root is $i = \sqrt{-1}$, the imaginary unit.
Properties
As
$$(a + b\sqrt{d}) + (a - b\sqrt{d}) = 2a$$
and
$$(a + b\sqrt{d})(a - b\sqrt{d}) = a^2 - b^2 d,$$
the sum and the product of conjugate expressions do not involve the square root anymore.
This property is used for removing a square root from a denominator, by multiplying the numerator and the denominator of a fraction by the conjugate of the denominator (see Rationalisation). An example of this usage is:
$$\frac{1}{a + b\sqrt{d}} = \frac{a - b\sqrt{d}}{(a + b\sqrt{d})(a - b\sqrt{d})} = \frac{a - b\sqrt{d}}{a^2 - b^2 d}.$$
Hence:
$$\frac{1}{\sqrt{2} + 1} = \frac{\sqrt{2} - 1}{(\sqrt{2} + 1)(\sqrt{2} - 1)} = \sqrt{2} - 1.$$
A corollary property is that the subtraction:
$$(a + b\sqrt{d}) - (a - b\sqrt{d}) = 2b\sqrt{d}$$
leaves only a term containing the root.
See also
Conjugate element (field theory), the generalization to the roots of a polynomial of any degree
Elementary algebra |
https://en.wikipedia.org/wiki/Master/Session | In cryptography, Master/Session is a key management scheme in which a pre-shared Key Encrypting Key (called the "Master" key) is used to encrypt a randomly generated and insecurely communicated Working Key (called the "Session" key). The Working Key is then used for encrypting the data to be exchanged. Its advantage is simplicity, but it suffers the disadvantage of having to distribute the pre-shared Key Encrypting Key, which can be difficult to update in the event of compromise.
The Master/Session technique was created in the days before asymmetric techniques, such as Diffie-Hellman, were invented. This technique still finds widespread use in the financial industry, and is routinely used between corporate parties such as issuers, acquirers, switches. Its use in device communications (such as PIN pads), however, is in decline given the advantages of techniques such as DUKPT. |
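A minimal Python sketch of the scheme, under simplifying assumptions: it uses the third-party `cryptography` package and raw AES-ECB key wrapping, whereas real financial deployments rely on hardware security modules and authenticated key-block formats not modeled here:

```python
# Master/Session sketch: a random working ("session") key is encrypted
# under a pre-shared key-encrypting ("master") key and sent to the
# peer; traffic is then encrypted under the session key.
# Illustrative only - not a production key-management design.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

master_key = os.urandom(16)   # pre-shared out of band (assumed)

# --- sender ----------------------------------------------------------
session_key = os.urandom(16)  # randomly generated working key
enc = Cipher(algorithms.AES(master_key), modes.ECB()).encryptor()
wrapped_key = enc.update(session_key) + enc.finalize()  # sent over the wire

# --- receiver --------------------------------------------------------
dec = Cipher(algorithms.AES(master_key), modes.ECB()).decryptor()
recovered = dec.update(wrapped_key) + dec.finalize()
assert recovered == session_key

# Both parties now encrypt the actual data under the session key.
nonce = os.urandom(16)
data_enc = Cipher(algorithms.AES(recovered), modes.CTR(nonce)).encryptor()
ciphertext = data_enc.update(b"payload to exchange") + data_enc.finalize()
print(ciphertext.hex())
```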
https://en.wikipedia.org/wiki/Grand%20mean | The grand mean or pooled mean is the average of the means of several subsamples, as long as the subsamples have the same number of data points. For example, consider several lots, each containing several items. The items from each lot are sampled for a measure of some variable and the means of the measurements from each lot are computed. The mean of the measures from each lot constitutes the subsample mean. The mean of these subsample means is then the grand mean.
Example
Suppose there are three groups of numbers: group A has 2, 6, 7, 11, 4; group B has 4, 6, 8, 14, 8; group C has 8, 7, 4, 1, 5.
The mean of group A = (2+6+7+11+4)/5 = 6,
The mean of group B = (4+6+8+14+8)/5 = 8,
The mean of group C = (8+7+4+1+5)/5 = 5,
Therefore, the grand mean of all numbers = (6+8+5)/3 = 6.333.
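The same computation as a short Python sketch, with the values copied from the example above:

```python
# Grand mean of the three equal-sized groups from the example above.
groups = {
    "A": [2, 6, 7, 11, 4],
    "B": [4, 6, 8, 14, 8],
    "C": [8, 7, 4, 1, 5],
}

group_means = {name: sum(xs) / len(xs) for name, xs in groups.items()}
grand_mean = sum(group_means.values()) / len(group_means)

print(group_means)           # {'A': 6.0, 'B': 8.0, 'C': 5.0}
print(round(grand_mean, 3))  # 6.333

# With equal group sizes the grand mean equals the overall mean:
all_values = [x for xs in groups.values() for x in xs]
assert abs(grand_mean - sum(all_values) / len(all_values)) < 1e-12
```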
Application
Suppose one wishes to determine which states in America have the tallest men. To do so, one measures the height of a suitably sized sample of men in each state. Next, one calculates the means of height for each state, and then the grand mean (the mean of the state means) as well as the corresponding standard deviation of the state means. Now, one has the necessary information for a preliminary determination of which states have abnormally tall or short men by comparing the means of each state to the grand mean ± some multiple of the standard deviation.
In ANOVA, there is a similar usage of grand mean to calculate sum of squares (SSQ), a measurement of variation. The total variation is defined as the sum of squared differences between each score and the grand mean (designated as GM), given by the equation
$$\mathrm{SSQ}_{\text{total}} = \sum_i (x_i - \mathrm{GM})^2.$$
Discussion
The term grand mean is used for two different concepts that should not be confused, namely, the overall mean and the mean of means. The overall mean (in a grouped data set) is equal to the sample mean, namely, $\bar{x} = \frac{1}{N} \sum_{g=1}^{G} \sum_{i=1}^{n_g} x_{gi}$. The mean of means is literally the mean of the $G$ ($g = 1, \ldots, G$) group means $\bar{x}_g$, namely, $\bar{x}_{\mathrm{MM}} = \frac{1}{G} \sum_{g=1}^{G} \bar{x}_g$. If the sample sizes across the G groups are equal, then the t |
https://en.wikipedia.org/wiki/DESCHALL%20Project | DESCHALL, short for DES Challenge, was the first group to publicly break a message which used the Data Encryption Standard (DES), becoming the $10,000 winner of the first of the set of DES Challenges proposed by RSA Security in 1997. It was established by a group of computer scientists led by Rocke Verser assisted by Justin Dolske and Matt Curtin and involved thousands of volunteers who ran software in the background on their own machines, connected by the Internet. They announced their success on June 18, only 96 days after the challenge was announced on January 28.
Background
Searching the 72 quadrillion possible keys of a 56-bit DES key using conventional computers was considered impractical even in the 1990s. Rocke Verser already had an efficient algorithm that ran on a standard PC, and had the idea of harnessing the spare time of hundreds of other such machines connected to the internet. The group set up a server on a 486-based PS/2 PC with 56 MB of memory and announced the project via Usenet towards the end of March. Client software was rapidly written for a large variety of home machines and eventually for some more powerful 64-bit systems.
There were two other main contenders: SolNET (a Swedish group), and a group at Silicon Graphics, a manufacturer of high-performance computers, which was in the lead until late in the day. Other groups using supercomputers withdrew after SYN flood attacks on their networks.
The Project
With the software that was used, a single 200 MHz Pentium system was able to test approximately 1 million keys/second if it was doing nothing else. At this rate it would take around 2,285 years to search the entire key-space. The number of computers being used rose rapidly and in the end, a total of 78,000 different IP addresses had been recorded, with a maximum of 14,000 unique hosts in a 24-hour period. By the time the key was found, they had searched about a quarter of the key-space and were searching about 7 billion keys per second, |
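The quoted figures can be checked with back-of-envelope arithmetic; the sketch below is ours, using only numbers taken from the text:

```python
# Back-of-envelope check of the DESCHALL numbers quoted above.
KEYSPACE = 2 ** 56                     # ~72 quadrillion DES keys

single_pc = 1_000_000                  # keys/s on one 200 MHz Pentium
seconds_per_year = 365.25 * 24 * 3600
print(KEYSPACE / single_pc / seconds_per_year)  # ~2283 years, matching
                                                # the ~2,285-year figure

project_rate = 7_000_000_000           # keys/s near the end of the run
searched = KEYSPACE / 4                # "about a quarter of the key-space"
print(searched / project_rate / 86400) # ~30 days' work at the final rate
```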
https://en.wikipedia.org/wiki/Supertaster | A supertaster is a person whose sense of taste is of far greater intensity than the average person, having an elevated taste response.
History
The term originated with experimental psychologist Linda Bartoshuk, who has spent much of her career studying genetic variation in taste perception. In the early 1980s, Bartoshuk and her colleagues found that some individuals tested in the laboratory seemed to have an elevated taste response and called them supertasters.
This increased taste response is not the result of response bias or a scaling artifact but appears to have an anatomical or biological basis.
Phenylthiocarbamide
In 1931, Arthur L. Fox, a DuPont chemist, discovered that some people found phenylthiocarbamide (PTC) to be bitter while others found it tasteless. At the 1931 American Association for the Advancement of Science meeting, Fox collaborated with Albert F. Blakeslee, a geneticist, to have attendees taste PTC: 65% found it bitter, 28% found it tasteless, and 6% described other taste qualities. Subsequent work revealed that the ability to taste PTC is genetic.
Propylthiouracil
In the 1960s, Roland Fischer was the first to link the ability to taste PTC, and the related compound propylthiouracil (PROP), to food preference, diets, and calorie intake. Today, PROP has replaced PTC for research because of a faint sulfurous odor and safety concerns with PTC. Bartoshuk and colleagues discovered that the taster group could be further divided into medium tasters and supertasters. Research suggests 25% of the population are non-tasters, 50% are medium tasters, and 25% are supertasters.
Cause
The exact cause of heightened response to taste in humans has yet to be elucidated. A review found associations between supertasters and the presence of the TAS2R38 gene, the ability to taste PROP and PTC, and an increased number of fungiform papillae.
In addition, environmental causes may play a role in sensitive taste. The exact mechanisms by which these causes may ma |
https://en.wikipedia.org/wiki/Chesterford%20Park%20Research%20Station | Chesterford Park Research Station was a former crop protection research centre in Essex, and is now a science park with biotechnology companies.
History
The 808 acres of the Chesterford Park Estate, owned by Dr Werner Göthe since around 1930, were put up for sale in June 1950.
Pest Control Ltd of Bourn in south-west Cambridgeshire, bought the Little Chesterford Park site in 1952. The Bourn site made crop spraying equipment. Fisons bought Pest Control Ltd in early 1954.
Elwyn Parry-Jones was the site's first technical director, who died in July 1965.
In September 1964 the site started research work with Boots.
Genetic resistance by insects to insecticides was increasing in the late 1960s. By the late 1960s, the site had around 220 staff.
From the 1970s, the director of the site was Charles Edwards.
By the late 1980s, there were around 500 staff.
The site is accessed via the B184 from junction 9 of the M11 motorway.
Ownership
Boots and Fisons joined divisions in 1980 to form FBC Limited. In 1982 Fisons sold its fertiliser division to a Norwegian company for £50m. On Monday 18 July 1983, Boots and Fisons sold FBC Ltd to Schering AG of West Germany for £120m with the sale completed on Wednesday 14 September 1983.
In late 1993, Schering's chemical division looked at merging with another German chemical company Hoechst, which formed AgrEvo on 3 March 1994.
In July 1999 AgrEvo UK considered closing the site as Hoechst merged with Rhône-Poulenc to form Aventis. The site briefly became part of Aventis CropScience UK. On 12 October 2001 Aventis CropScience was bought by Bayer for 7.25 billion euros.
Construction
New buildings were opened on Tuesday 2 October 1956 by Sir William Slater. The new buildings included a Medical Laboratory for tests on laboratory animals. The buildings were built by Prime Ltd of Cambridge. The site was around 240 acres - there was 90 acres of woodland and a 139-acre farm.
In 1967 a new animal health unit opened, with a £30,000 pig unit, and £30,000 building |
https://en.wikipedia.org/wiki/Adjoint%20filter | In signal processing, the adjoint filter mask $h^*$ of a filter mask $h$ is $h$ reversed in time with its elements complex-conjugated.
Its name is derived from the fact that the convolution with the adjoint filter is the adjoint operator of the original filter, with respect to the Hilbert space of square-summable sequences in which the inner product is the Euclidean inner product.
The autocorrelation of a signal $x$ can be written as $x * x^*$, i.e. the convolution of the signal with its own adjoint.
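A small numpy sketch of these two statements (conventions assumed: the adjoint mask is the time-reversed complex conjugate, and convolution is full discrete convolution):

```python
import numpy as np

def adjoint(h):
    """Adjoint filter mask: reverse in time, conjugate the elements."""
    return np.conj(h[::-1])

x = np.array([1.0, 2.0, 1.0 + 1.0j])
autocorr = np.convolve(x, adjoint(x))   # the autocorrelation of x
print(autocorr)                         # central lag equals sum(|x|^2) = 7
```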
Properties |
https://en.wikipedia.org/wiki/Rprop | Rprop, short for resilient backpropagation, is a learning heuristic for supervised learning in feedforward artificial neural networks. This is a first-order optimization algorithm. This algorithm was created by Martin Riedmiller and Heinrich Braun in 1992.
Similarly to the Manhattan update rule, Rprop takes into account only the sign of the partial derivative over all patterns (not the magnitude), and acts independently on each "weight". For each weight, if there was a sign change of the partial derivative of the total error function compared to the last iteration, the update value for that weight is multiplied by a factor η−, where η− < 1. If the last iteration produced the same sign, the update value is multiplied by a factor of η+, where η+ > 1. The update values are calculated for each weight in the above manner, and finally each weight is changed by its own update value, in the opposite direction of that weight's partial derivative, so as to minimise the total error function. η+ is empirically set to 1.2 and η− to 0.5.
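A compact sketch of the update rule just described (roughly the RPROP− variant listed below, without backtracking; the step-size bounds delta_min/delta_max are conventional additions, not values from the text):

```python
import numpy as np

def rprop_step(w, grad, prev_grad, delta,
               eta_pos=1.2, eta_neg=0.5, delta_min=1e-6, delta_max=50.0):
    """One Rprop iteration over a weight vector; only gradient signs matter."""
    sign_change = grad * prev_grad
    # Grow the per-weight update value on a stable sign, shrink on a flip.
    delta = np.where(sign_change > 0, np.minimum(delta * eta_pos, delta_max),
            np.where(sign_change < 0, np.maximum(delta * eta_neg, delta_min),
                     delta))
    w = w - np.sign(grad) * delta      # step against the gradient's sign
    return w, delta
```

Each weight keeps its own update value, which grows while the sign of its partial derivative is stable and shrinks when it flips.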
Rprop can result in very large weight increments or decrements if the gradients are large, which is a problem when using mini-batches as opposed to full batches. RMSprop addresses this problem by keeping the moving average of the squared gradients for each weight and dividing the gradient by the square root of the mean square.
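For contrast, a minimal RMSprop step as described (the decay and eps values are common defaults, not taken from the text):

```python
import numpy as np

def rmsprop_step(w, grad, mean_sq, lr=0.001, decay=0.9, eps=1e-8):
    """Scale the gradient by the root of a running mean of its square."""
    mean_sq = decay * mean_sq + (1 - decay) * grad ** 2
    return w - lr * grad / (np.sqrt(mean_sq) + eps), mean_sq
```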
RPROP is a batch update algorithm. Next to the cascade correlation algorithm and the Levenberg–Marquardt algorithm, Rprop is one of the fastest weight update mechanisms.
Variations
Martin Riedmiller developed three algorithms, all named RPROP. Igel and Hüsken assigned names to them and added a new variant:
RPROP+ is defined at A Direct Adaptive Method for Faster Backpropagation Learning: The RPROP Algorithm.
RPROP− is defined at Advanced Supervised Learning in Multi-layer Perceptrons – From Backpropagation to Adaptive Learning Algorithms. Backtracking is removed from RPROP+.
iRPROP− is defined in Rprop – Descript |
https://en.wikipedia.org/wiki/Ecoregion%20conservation%20status | Conservation status is a measure used in conservation biology to assess an ecoregion's degree of habitat alteration and habitat conservation. It is used to set priorities for conservation.
Conservation status and biological distinctiveness were the two measures used by the World Wildlife Fund (WWF) to develop the Global 200, a list of high-priority ecoregions for conservation, and for the WWF's conservation assessments at continent (or biogeographic realm) scale.
Ecoregions are classified into one of three broad categories: "critical/endangered" (CE), "vulnerable" (V), or "relatively stable/relatively intact" (RS). The WWF's conservation status index is determined by analyzing four factors:
Habitat loss is the percentage of an ecoregion's habitat that has been converted to agriculture or urban areas;
Habitat blocks measures the size of remaining habitat blocks;
Habitat fragmentation is the degree to which remaining habitat is fragmented, measured as the ratio of the total perimeter of remaining habitat blocks to their total area;
Habitat protection measures the area of remaining habitat in protected areas, and the degree of protection provided (IUCN protected area categories).
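An illustrative computation of the four factors for toy habitat-block data (the WWF's actual weighting of these factors into the CE/V/RS classes is not specified here):

```python
# Each block: (area in km^2, perimeter in km, inside a protected area?)
blocks = [(120.0, 80.0, True), (40.0, 45.0, False), (15.0, 22.0, False)]
original_area = 400.0

remaining = sum(a for a, _, _ in blocks)
habitat_loss = 1 - remaining / original_area              # fraction converted
largest_block = max(a for a, _, _ in blocks)              # block-size measure
fragmentation = sum(p for _, p, _ in blocks) / remaining  # perimeter : area
protected_share = sum(a for a, _, prot in blocks if prot) / remaining
print(habitat_loss, largest_block, fragmentation, protected_share)
```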
Additional factors considered for the Global 200 include degree of habitat degradation, degree of protection needed, degree of urgency for conservation needs, and types of conservation practiced or required.
See also
List of global 200 ecoregions |
https://en.wikipedia.org/wiki/Tennessee%20statistical%20areas | The U.S. state of Tennessee currently has 34 statistical areas that have been delineated by the Office of Management and Budget (OMB). On March 6, 2020, the OMB delineated 7 combined statistical areas, 10 metropolitan statistical areas, and 17 micropolitan statistical areas in Tennessee.
Statistical areas
The Office of Management and Budget (OMB) has designated more than 1,000 statistical areas for the United States and Puerto Rico. These statistical areas are important geographic delineations of population clusters used by the OMB, the United States Census Bureau, planning organizations, and federal, state, and local government entities.
The OMB defines a core-based statistical area (commonly referred to as a CBSA) as "a statistical geographic entity consisting of the county or counties (or county-equivalents) associated with at least one core of at least 10,000 population, plus adjacent counties having a high degree of social and economic integration with the core as measured through commuting ties with the counties containing the core." The OMB further divides core-based statistical areas into metropolitan statistical areas (MSAs) that have "a population of at least 50,000" and micropolitan statistical areas (μSAs) that have "a population of at least 10,000, but less than 50,000."
The OMB defines a combined statistical area (CSA) as "a geographic entity consisting of two or more adjacent core-based statistical areas with employment interchange measures of at least 15%." The primary statistical areas (PSAs) include all combined statistical areas and any core-based statistical area that is not a constituent of a combined statistical area.
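The thresholds in these definitions are easy to express directly; a hedged sketch (function names are illustrative, not OMB terminology):

```python
def classify_cbsa(population: int) -> str:
    """Classify a core-based statistical area by the quoted thresholds."""
    if population >= 50_000:
        return "metropolitan statistical area (MSA)"
    if population >= 10_000:
        return "micropolitan statistical area (μSA)"
    return "below CBSA thresholds"

def forms_csa(employment_interchange_pct: float) -> bool:
    """Adjacent CBSAs combine into a CSA at >= 15% employment interchange."""
    return employment_interchange_pct >= 15.0

print(classify_cbsa(62_000))   # MSA
print(forms_csa(17.5))         # True
```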
Table
The table below describes the 34 United States statistical areas and 95 counties of the State of Tennessee with the following information:
The combined statistical area (CSA) as designated by the OMB.
The CSA population according to 2019 US Census Bureau population estimates.
The core based statistical area (CBSA) as designated by the OM |
https://en.wikipedia.org/wiki/Electrofusion%20welding | Electrofusion welding is a form of resistive implant welding used to join pipes. A fitting with implanted metal coils is placed around two ends of pipes to be joined, and current is passed through the coils. Resistive heating of the coils melts small amounts of the pipe and fitting, and upon solidification, a joint is formed. It is most commonly used to join polyethylene (PE) and polypropylene (PP) pipes. Electrofusion welding is the most common welding technique for joining PE pipes. Because of the consistency of the electrofusion welding process in creating strong joints, it is commonly employed for the construction and repair of gas-carrying pipelines. The development of the joint strength is affected by several process parameters, and a consistent joining procedure is necessary for the creation of strong joints.
Advantages and disadvantages
Advantages of electrofusion welding:
Simple process capable of producing consistent joints
Process is entirely contained, reducing the risk of joint contamination
Process allows repair without the need to remove pipes
Disadvantages of electrofusion welding:
A special sleeve is required, so it is more expensive than other pipe joining methods such as hot plate joining
Implanted coils make recycling of parts more difficult
Equipment
Electrofusion welds are performed by attaching a controlled power supply to the electrofusion fitting. There are typically two modes of operation.
Constant voltage
Constant current
Constant voltage is typically used for high-pressure pipelines such as mains gas and water. Fittings carry a barcode specified to an ISO standard.
Typically, fittings are welded at 39.5 V, but manufacturers can choose whole-number voltages from 8 to 48 V. The welding time is specified on the label in seconds or minutes
Accessories
Electrofusion welding employs fittings that are placed around the joint to be welded. Metal coils are implanted into the fittings, and electric current is ru |
https://en.wikipedia.org/wiki/IP%20fragmentation%20attack | IP fragmentation attacks are a kind of computer security attack based on how the Internet Protocol (IP) requires data to be transmitted and processed. Specifically, it invokes IP fragmentation, a process used to partition messages (the service data unit (SDU); typically a packet) from one layer of a network into multiple smaller payloads that can fit within the lower layer's protocol data unit (PDU). Every network link has a maximum size of messages that may be transmitted, called the maximum transmission unit (MTU). If the SDU plus metadata added at the link layer exceeds the MTU, the SDU must be fragmented. IP fragmentation attacks exploit this process as an attack vector.
Part of the TCP/IP suite is the Internet Protocol (IP) which resides at the Internet Layer of this model. IP is responsible for the transmission of packets between network end points. IP includes some features which provide basic measures of fault-tolerance (time to live, checksum), traffic prioritization (type of service) and support for the fragmentation of larger packets into multiple smaller packets (ID field, fragment offset). The support for fragmentation of larger packets provides a protocol allowing routers to fragment a packet into smaller packets when the original packet is too large for the supporting datalink frames. IP fragmentation exploits (attacks) use the fragmentation protocol within IP as an attack vector.
According to [Kurose 2013], in one type of IP fragmentation attack "the attacker sends a stream of small fragments to the target host, none of which has an offset of zero. The target can collapse as it attempts to rebuild datagrams out of the degenerate packets." Another attack involves sending overlapping fragments with non-aligned offsets, which can leave vulnerable operating systems unsure how to reassemble the datagram, causing some to crash.
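Both malformed patterns are straightforward to detect before reassembly; a defensive sketch in plain Python (a hypothetical helper, not from the source):

```python
def check_fragments(frags):
    """frags: (byte_offset, length) pairs for one datagram ID."""
    problems = []
    if all(off != 0 for off, _ in frags):
        problems.append("no zero-offset fragment: datagram cannot complete")
    spans = sorted((off, off + ln) for off, ln in frags)
    for (s1, e1), (s2, e2) in zip(spans, spans[1:]):
        if s2 < e1:                    # next fragment starts inside previous
            problems.append(f"overlapping fragments at bytes {s2}-{min(e1, e2)}")
    return problems

print(check_fragments([(8, 16), (24, 16)]))   # flags the missing first fragment
print(check_fragments([(0, 24), (16, 24)]))   # flags the overlap
```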
Process
IP packets are encapsulated in datalink frames, and, therefore, the link MTU affects larger IP packets and forces them to be s |
https://en.wikipedia.org/wiki/Durrell%20Institute%20of%20Conservation%20and%20Ecology | The Durrell Institute of Conservation and Ecology (DICE) is a subdivision and research centre of the School of Anthropology and Conservation at the University of Kent, started in 1989 and named in honour of the famous British naturalist Gerald Durrell. It was the first institute in the United Kingdom to award undergraduate and postgraduate degrees and diplomas in the fields of conservation biology, ecotourism, and biodiversity management. It has 22 academic staff, comprising six Professors, seven Readers, and nine Lecturers and Senior Lecturers, as well as an advisory board of 14 conservationists from government, business and the NGO sector.
History
DICE's graduate degree programme began in 1991 with a class of seven international students. Since then it has trained over 1,200 people from 101 countries, including 322 people from Lower- and Middle-Income countries in Africa, Asia, Oceania and South America. The founder of DICE is Professor Ian Swingland, who retired from the University of Kent in 1999, and the first Director was Dr. Mike Walkey, who retired in 2002.
Awards
In 2019 DICE was awarded a Queen's Anniversary Prize for "pioneering education, capacity building and research in global nature conservation to protect species and ecosystems and benefit people".
Alumni
Notable alumni include:
Bahar Dutt, Indian television journalist and environmental editor
Sanjay Gubbi, Indian conservation biologist
Rachel Ikemeh, Nigerian conservationist
Winnie Kiiru, Kenyan biologist and elephant conservationist
Patricia Medici, Brazilian conservation biologist
Jeanneney Rabearivony, Malagasy ecologist and herpetologist
Rajeev Raghavan, Indian conservation biologist
Alexandra Zimmermann, wildlife conservationist |
https://en.wikipedia.org/wiki/Doppler%20velocity%20sensor | A Doppler velocity sensor (DVS) is a specialized Doppler radar that uses the Doppler effect to measure the three orthogonal velocity components referenced to the aircraft. When aircraft true heading, pitch and roll are provided by other aircraft systems, it can function as a navigation sensor to perform stand-alone dead reckoning navigation calculations as a Doppler Navigation Set (DNS).
Doppler navigation systems are independent of surrounding conditions, perform with high accuracy over land and sea anywhere in the world, and are independent of ground-based aids and space-based satellite navigation systems.
Operational principles
To measure an aircraft's three-dimensional velocity, a Doppler radar antenna is caused to radiate a minimum of three non-coplanar microwave electromagnetic beams toward the earth's surface. Some of the energy is backscattered to the radar by the earth's surface. With knowledge of the beam angles, three or more beam-Doppler frequencies are combined to generate the components of aircraft velocity.
DVS transmission is performed at a center frequency of 13.325 GHz in the internationally authorized Ku band of 13.25 to 13.4 GHz.
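Under the usual Doppler relation f_d = (2·f0/c)·(b·v) for a beam unit vector b, three non-coplanar beams give a solvable linear system; a sketch with an illustrative beam geometry (not a real antenna layout):

```python
import numpy as np

f0 = 13.325e9       # Hz, the DVS centre frequency quoted above
c = 3.0e8           # m/s, speed of light

B = np.array([      # unit vectors along three non-coplanar beams (illustrative)
    [0.50,  0.000, -0.866],
    [-0.25,  0.433, -0.866],
    [-0.25, -0.433, -0.866],
])
v_true = np.array([60.0, 5.0, -2.0])     # m/s in the aircraft frame
f_d = (2 * f0 / c) * B @ v_true          # the three measured Doppler shifts
v_est = np.linalg.solve(B, f_d) * c / (2 * f0)
print(v_est)                             # recovers [60.  5. -2.]
```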
Uses
DVS are used on helicopters for navigation, hovering, sonar dropping, target handover for weapon delivery, and search and rescue. Because the Doppler radar measures velocity relative to the surface, sea currents and tidal effects create biases. However, for sonobuoy dropping and over-water search and rescue, the velocity of the aircraft relative to the water movement is what is required.
These radars were formally approved under the FAA TSO-65a until 2013, and are designed in accordance with the Radio Technical Commission for Aeronautics (RTCA) DO-158 standard titled Minimum Performance Standards − Airborne Doppler Radar Navigation Equipment.
Limitations
The functional operation and accuracy of Doppler velocity sensors is affected by many factors, including aircraft velocity, attitude and altitude above terrain. It is also affected b |
https://en.wikipedia.org/wiki/Deductive%20language | A deductive language is a computer programming language in which the program is a collection of predicates ('facts') and rules that connect them. Such a language is used to create knowledge based systems or expert systems which can deduce answers to problem sets by applying the rules to the facts they have been given.
An example of a deductive language is Prolog, or its database-query cousin, Datalog.
History
As the name implies, deductive languages are rooted in the principles of deductive reasoning: making inferences based upon current knowledge. The first recommendation to use a clausal form of logic for representing computer programs was made by Cordell Green (1969) at Stanford Research Institute (now SRI International). This idea can also be traced back to the debate between procedural and declarative representations of information in early artificial intelligence systems. Deductive languages and their use in logic programming can also be dated to the same year, when Foster and Elcock introduced Absys, the first deductive/logical programming language. Shortly after, the first Prolog system was introduced in 1972 by Colmerauer in collaboration with Robert Kowalski.
Components
The components of a deductive language are a system of formal logic and a knowledge base upon which the logic is applied.
Formal Logic
Formal logic is the study of inference with regard to formal content. The distinguishing feature between formal and informal logic is that in the former, the logical rule applied to the content is not specific to a situation; the laws hold regardless of a change in context. Although first-order logic is described in the example below to demonstrate the uses of a deductive language, no formal system is mandated, and the use of a specific system is defined within the language rules or grammar.
As input, a predicate takes any object(s) in the domain of interest and outputs one of two Boolean values: true or false. For example, consider the se |
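A toy forward-chaining sketch of the facts-plus-rules idea in Python (the predicates are hypothetical examples; a deductive language such as Prolog states the rule itself directly):

```python
facts = {("parent", "tom", "bob"), ("parent", "bob", "ann")}

def apply_grandparent_rule(facts):
    """Rule: parent(X, Y) and parent(Y, Z)  =>  grandparent(X, Z)."""
    derived = set(facts)
    for (p1, x, y) in facts:
        for (p2, y2, z) in facts:
            if p1 == p2 == "parent" and y == y2:
                derived.add(("grandparent", x, z))
    return derived

print(("grandparent", "tom", "ann") in apply_grandparent_rule(facts))  # True
```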
https://en.wikipedia.org/wiki/Pepper%20Pad | The Pepper Pad was a family of Linux-based mobile computers with Internet capability that doubled as handheld game consoles. They also served as portable multimedia devices. The devices used Bluetooth and Wi-Fi technologies for Internet connection. Pepper Pads are now obsolete and unsupported, and the parent company has ceased operations.
The original prototype Pepper Pad was built in 2003 with an ARM-based PXA255 processor running at 400 MHz, an 8-inch touchscreen in portrait mode, a split QWERTY keyboard, and Wi-Fi. Only 6 were made, and it was never offered for sale. The Pepper Pad was a 2004 Consumer Electronics Show (CES) Innovations Awards Honoree in the Computer Hardware category.
The Pepper Pad 2 was introduced in 2004 with a faster 624 MHz PXA270 processor, and the screen was rotated to a landscape format. The Pepper Pad 2 was the first Pepper Pad offered for commercial sale. The Pepper Pad and Pepper Pad 2 both ran Pepper's proprietary Pepper Keeper application on top of a heavily customized version of the Montavista Linux operating system.
The Pepper Pad 3 was announced in 2006 with an upgrade to a faster AMD Geode processor. The Pepper Pad 3 also used a smaller 7-inch screen for cost savings. Like previous versions, the Pepper Pad 3 had a split QWERTY button keyboard, built-in microphone, video camera, composite video output, stereo speakers, infrared receiver and transmitter, an 800×480 7-inch LCD touchscreen (with stylus), an SD/MMC Flash memory slot, a 20 or 30 GB hard disk, 256 MB RAM, 256 KB ROM, and both Wi-Fi (b/g) and Bluetooth 2.0. The Pepper Pad 3 used a heavily customized version of the Fedora Linux operating system called Pepper Linux. Unlike the Pepper Pad 2, which was built and sold directly by Pepper, the Pepper Pad 3 was built and sold under license by Hanbit Electronics.
Support
Pepper Computer, Inc. has ceased operations and is no longer providing support or sales for Pepper Pad web computers or Pepper Linux.
Software
Pepper Pads ran Pepp |
https://en.wikipedia.org/wiki/Laguerre%20formula | The Laguerre formula (named after Edmond Laguerre) provides the acute angle $\theta$ between two proper real lines, as follows:
$\theta = \frac{1}{2} \left| \operatorname{Log} \operatorname{cr}(P_1, P_2, I_1, I_2) \right|$
where:
$\operatorname{Log}$ is the principal value of the complex logarithm
$\operatorname{cr}(P_1, P_2, I_1, I_2)$ is the cross-ratio of four collinear points
$P_1$ and $P_2$ are the points at infinity of the lines
$I_1$ and $I_2$ are the intersections of the absolute conic, having equations $x^2 + y^2 + z^2 = 0,\ t = 0$, with the line joining $P_1$ and $P_2$.
The expression between vertical bars is a real number.
Laguerre formula can be useful in computer vision, since the absolute conic has an image on the retinal plane which is invariant under camera displacements, and the cross ratio of four collinear points is the same for their images on the retinal plane.
Derivation
It may be assumed that the lines go through the origin. Any isometry leaves the absolute conic invariant, so one may take the first line to be the x-axis and the second line to lie in the plane z = 0. The homogeneous coordinates of the above four points are $I_1 = (1, i, 0, 0)$, $I_2 = (1, -i, 0, 0)$, $P_1 = (1, 0, 0, 0)$, and $P_2 = (\cos\theta, \sin\theta, 0, 0)$,
respectively. Their nonhomogeneous coordinates on the infinity line of the plane z = 0 are $i$, $-i$, $0$, and $\tan\theta$. (Exchanging $I_1$ and $I_2$ changes the cross ratio into its inverse, so the formula for $\theta$ gives the same result.) Now from the formula of the cross ratio we have |
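With the cross-ratio convention $\operatorname{cr}(z_1, z_2; z_3, z_4) = \frac{(z_1 - z_3)(z_2 - z_4)}{(z_2 - z_3)(z_1 - z_4)}$, the truncated computation works out as follows (a reconstruction consistent with the formula stated above, not the source's own text):
\[
\operatorname{cr}(0, \tan\theta;\, i, -i)
  = \frac{(0 - i)(\tan\theta + i)}{(\tan\theta - i)(0 + i)}
  = -\,\frac{\tan\theta + i}{\tan\theta - i}
  = e^{-2i\theta},
\qquad
\tfrac{1}{2}\bigl|\operatorname{Log} e^{-2i\theta}\bigr|
  = \tfrac{1}{2}\,\lvert -2i\theta \rvert = \theta .
\]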
https://en.wikipedia.org/wiki/Pyramidal%20eminence | The pyramidal eminence is a hollow conical projection upon the posterior wall of the tympanic cavity of the middle ear. The stapedius muscle arises in the hollow of the eminence and its tendon exits through its apex.
The pyramidal eminence is situated inferior to the aditus to mastoid antrum, immediately inferior to the oval window (fenestra vestibuli), and anterior to the vertical portion of the facial canal. The apex of the eminence is directed anteriorly toward the oval window.
The cavity in the pyramidal eminence is prolonged inferoposteriorly anterior to the facial canal, with which it communicates by a minute aperture which transmits the nerve to the stapedius from the facial nerve (CN VII). |
https://en.wikipedia.org/wiki/Intel%20Quick%20Sync%20Video | Intel Quick Sync Video is Intel's brand for its dedicated video encoding and decoding hardware core. Quick Sync was introduced with the Sandy Bridge CPU microarchitecture on 9 January 2011 and has been found on the die of Intel CPUs ever since.
The name "Quick Sync" refers to the use case of quickly transcoding ("converting") a video from, for example, a DVD or Blu-ray Disc to a format appropriate to, for example, a smartphone. This becomes critically-important in the professional video workplace, in which source-material may have been shot in any number of video-formats, all of which must be brought into a common format (commonly H.264) for inter-cutting.
Unlike video encoding on a CPU or a general-purpose GPU, Quick Sync is a dedicated hardware core on the processor die. This allows for much more power-efficient video processing.
Availability
Quick Sync Video is available on Core i3, Core i5, Core i7, and Core i9 processors starting with Sandy Bridge, and Celeron & Pentium processors starting with Haswell.
Performance and quality
Like most desktop hardware-accelerated encoders, Quick Sync has been praised for its speed. The eighth annual MPEG-4 AVC/H.264 video codecs comparison showed that Quick Sync was comparable to x264 superfast preset in terms of speed, compression ratio and quality (SSIM); tests were performed on an Intel Core i7 3770 (Ivy Bridge) processor. However, Quick Sync could not be configured to spend more time to achieve higher quality, whereas x264 improved significantly when allowed to use more time using the recommended settings.
A 2012 evaluation by AnandTech showed that QuickSync on Intel's Ivy Bridge produced similar image quality compared to the NVENC encoder on Nvidia's GTX 680 while performing much better at resolutions lower than 1080p.
Development
Quick Sync was first unveiled at Intel Developer Forum 2010 (September 13) but, according to Tom's Hardware, Quick Sync had been conceptualized five years before that. The older Clarkdal |
https://en.wikipedia.org/wiki/Reverberation | Reverberation (commonly shortened to reverb), in acoustics, is a persistence of sound after it is produced. Reverberation is created when a sound or signal is reflected. This causes numerous reflections to build up and then decay as the sound is absorbed by the surfaces of objects in the space – which could include furniture, people, and air. This is most noticeable when the sound source stops but the reflections continue, their amplitude decreasing, until zero is reached.
Reverberation is frequency dependent: the length of the decay, or reverberation time, receives special consideration in the architectural design of spaces which need to have specific reverberation times to achieve optimum performance for their intended activity. In comparison to a distinct echo, which is detectable at a minimum of 50 to 100 ms after the previous sound, reverberation is the occurrence of reflections that arrive in a sequence of less than approximately 50 ms. As time passes, the amplitude of the reflections gradually reduces to non-noticeable levels. Reverberation is not limited to indoor spaces, as it exists in forests and other outdoor environments where reflection exists.
Reverberation occurs naturally when a person sings, talks, or plays an instrument acoustically in a hall or performance space with sound-reflective surfaces. Reverberation is applied artificially by using reverb effects, which simulate reverb through means including echo chambers, vibrations sent through metal, and digital processing.
Although reverberation can add naturalness to recorded sound by adding a sense of space, it can also reduce speech intelligibility, especially when noise is also present. People with hearing loss, including users of hearing aids, frequently report difficulty in understanding speech in reverberant, noisy situations. Reverberation is also a significant source of mistakes in automatic speech recognition.
Dereverberation is the process of reducing the level of reverberation in a soun |
https://en.wikipedia.org/wiki/Salience%20%28neuroscience%29 | Salience (also called saliency) is that property by which some thing stands out. Salient events are an attentional mechanism by which organisms learn and survive; those organisms can focus their limited perceptual and cognitive resources on the pertinent (that is, salient) subset of the sensory data available to them.
Saliency typically arises from contrasts between items and their neighborhood. They might be represented, for example, by a red dot surrounded by white dots, or by a flickering message indicator of an answering machine, or a loud noise in an otherwise quiet environment. Saliency detection is often studied in the context of the visual system, but similar mechanisms operate in other sensory systems. Just what is salient can be influenced by training: for example, for human subjects particular letters can become salient by training. There can also be a sequence of necessary events, each of which must be noticed as salient in turn for training on the sequence to succeed; otherwise the training fails. In an illustrated sequence for tying a bowline, for example, even the first illustration carries a salient detail: the rope must cross over, and not under, the bitter end of the rope (which can remain fixed, and not free to move). Failing to notice that the first salient detail has not been satisfied means the knot will fail to hold, even when the remaining salient events have been satisfied.
When attention deployment is driven by salient stimuli, it is considered to be bottom-up, memory-free, and reactive. Conversely, attention can also be guided by top-down, memory-dependent, or anticipatory mechanisms, such as when looking ahead of moving objects or sideways before crossing streets. Humans and other animals have difficulty paying attention to more than one item simultaneously, so they are faced with the challenge of continuously integrating and prioritizing different bottom-up and top-down influences.
Neuroanatomy
The brain component named the |
https://en.wikipedia.org/wiki/Peter%20Jagers | Peter Jagers (born 1941) is a Professor Emeritus of Mathematical Statistics at University of Gothenburg and Chalmers University of Technology who made lasting contributions in probability and general branching processes. Jagers was first vice president (2007–2010) of the Royal Swedish Academy of Sciences and Chair of the Royal Society of Arts and Sciences in Gothenburg (2012). He is an elected member of the International Statistical Institute, a Fellow of the Institute of Mathematical Statistics and past President of the Bernoulli Society (2005–2007). He also served as a member of the Scientific Advisory Board of Statistics Sweden.
Jagers has been coordinating editor of the Journal of Applied Probability and Advances in Applied Probability. He is on the Editorial Committee for Springer's books on probability theory and was editor of Stochastic Processes and their Applications 1989–1993.
Awards
Jagers received an honorary doctorate from the Bulgarian Academy of Sciences and in 2012 was awarded the Carver Medal by the Institute of Mathematical Statistics. In 2003 he received the Chalmers Medal and in 2005 was awarded King Carl XVI Gustaf's gold medal for "services to mathematics in Sweden and internationally".
Publications
He has published a large number of scientific works, notably in branching process theory and its applications, and is the author of two books, Branching Processes with Biological Applications (J. Wiley 1975) and Branching Processes: Variation, Growth, and Extinction of Populations (Cambridge U. Press 2005). The Crump-Mode-Jagers (or generalized) branching process is named for Jagers. |
https://en.wikipedia.org/wiki/Rete%20ovarii | The rete ovarii is a structure formed from the primary sex cords in females.
It is the homologue of the rete testis in males. It lies at the narrow hilus, at which nerves and vessels enter the ovary. In the medulla of the mammalian ovary near the hilus are small masses of blind tubules or solid cords—the rete ovarii—which are homologous with the rete testis in the male. The microscopic right ovary of birds usually consists only of medullary tissue.
The rete ovarii is important in the control of meiosis in the maturing ovary. Cells of the rete ovarii also differentiate to form granulosa cells. The rete is also attributed with secretory capacity, a hypothesis supported by the observation of secretory material in the lumen of rete tubules in several species. |
https://en.wikipedia.org/wiki/V%20particle | In particle physics, V was a generic name for heavy, unstable subatomic particles that decay into a pair of particles, thereby producing a characteristic letter V in a bubble chamber or other particle detector. Such particles were first detected in cosmic ray interactions in the atmosphere in the late 1940s and were first produced using the Cosmotron particle accelerator at Brookhaven National Laboratory in the 1950s. Since all such particles have now been identified and given specific names, for instance Kaons or Sigma baryons, this term has fallen into disuse.
V0 is still used on occasion to refer generally to neutral particles that may confuse the B-tagging algorithms in a modern particle detector, as in Section 7 of an ATLAS conference note. |
https://en.wikipedia.org/wiki/Renal%20medulla | The renal medulla (Latin: medulla renis 'marrow of the kidney') is the innermost part of the kidney. The renal medulla is split up into a number of sections, known as the renal pyramids. Blood enters into the kidney via the renal artery, which then splits up to form the segmental arteries which then branch to form interlobar arteries. The interlobar arteries each in turn branch into arcuate arteries, which in turn branch to form interlobular arteries, and these finally reach the glomeruli. At the glomerulus the blood reaches a highly disfavourable pressure gradient and a large exchange surface area, which forces the serum portion of the blood out of the vessel and into the renal tubules. Flow continues through the renal tubules, including the proximal tubule, the loop of Henle, through the distal tubule and finally leaves the kidney by means of the collecting duct, leading to the renal pelvis, the dilated portion of the ureter.
The renal medulla contains the structures of the nephrons responsible for maintaining the salt and water balance of the blood. These structures include the vasa rectae (both spuria and vera), the venulae rectae, the medullary capillary plexus, the loop of Henle, and the collecting tubule. The renal medulla is hypertonic to the filtrate in the nephron and aids in the reabsorption of water.
Blood is filtered in the glomerulus by solute size. Ions such as sodium, chloride, potassium, and calcium are easily filtered, as is glucose. Proteins are not passed through the glomerular filter because of their large size, and do not appear in the filtrate or urine unless a disease process has affected the glomerular capsule or the proximal and distal convoluted tubules of the nephron.
Though the renal medulla only receives a small percentage of the renal blood flow, the oxygen extraction is very high, causing a low oxygen tension and more importantly, a critical sensitivity to hypotension, hypoxia, and blood flow. The renal medulla extracts oxygen at |
https://en.wikipedia.org/wiki/Leibniz%20Institute%20for%20Astrophysics%20Potsdam | Leibniz Institute for Astrophysics Potsdam (AIP) is a German research institute. It is the successor of the Berlin Observatory founded in 1700 and of the Astrophysical Observatory Potsdam (AOP) founded in 1874. The latter was the world's first observatory to emphasize explicitly the research area of astrophysics. The AIP was founded in 1992, in a re-structuring following the German reunification.
The AIP is privately funded and a member of the Leibniz Association. It is located in Babelsberg in the state of Brandenburg, just west of Berlin, though the Einstein Tower solar observatory and the great refractor telescope on Telegrafenberg in Potsdam belong to the AIP.
The key topics of the AIP are cosmic magnetic fields (magnetohydrodynamics) on various scales and extragalactic astrophysics. Astronomical and astrophysical fields studied at the AIP range from solar and stellar physics to stellar and galactic evolution to cosmology.
The institute also develops research technology in the fields of spectroscopy and robotic telescopes. It is a partner of the Large Binocular Telescope in Arizona, has erected robotic telescopes in Tenerife and the Antarctic, develops astronomical instrumentation for large telescopes such as the VLT of the ESO. Furthermore, work on several e-Science projects are carried out at the AIP.
History
Origin
The history of astronomy in Potsdam really began in Berlin in 1700: initiated by Gottfried W. Leibniz, the "Brandenburgische Societät" (later called the Prussian Academy of Sciences) was founded by the elector Friedrich III in Berlin on July 11, 1700. Two months earlier, the national calendar monopoly had provided the funding for an observatory, and by May 18 the first director, Gottfried Kirch, had been appointed. This happened in a hurry because the profits from the national basic calendar, calculated and sold by the observatory, were to be the financial source for the academy. This kind of financing existed until the beginning of the 19th |
https://en.wikipedia.org/wiki/Bethesda%20system | The Bethesda system (TBS), officially called The Bethesda System for Reporting Cervical Cytology, is a system for reporting cervical or vaginal cytologic diagnoses, used for reporting Pap smear results. It was introduced in 1988 and revised in 1991, 2001, and 2014. The name comes from the location (Bethesda, Maryland) of the conference, sponsored by the National Institutes of Health, that established the system.
Since 2010, there is also a Bethesda system used for cytopathology of thyroid nodules, which is called The Bethesda System for Reporting Thyroid Cytopathology (TBSRTC or BSRTC). Like TBS, it was the result of a conference sponsored by the NIH and is published in book editions (currently by Springer). Mentions of "the Bethesda system" without further specification usually refer to the cervical system, unless the thyroid context of a discussion is implicit.
Cervix
Abnormal results include:
Atypical squamous cells
Atypical squamous cells of undetermined significance (ASC-US)
Atypical squamous cells – cannot exclude HSIL (ASC-H)
Low-grade squamous intraepithelial lesion (LGSIL or LSIL)
High grade squamous intraepithelial lesion (HGSIL or HSIL)
Squamous cell carcinoma
Atypical Glandular Cells not otherwise specified (AGC-NOS)
Atypical Glandular Cells, suspicious for AIS or cancer (AGC-neoplastic)
Adenocarcinoma in situ (AIS)
The results are calculated differently following a Pap smear of the cervix.
Squamous cell abnormalities
LSIL: low-grade squamous intraepithelial lesion
A low-grade squamous intraepithelial lesion (LSIL or LGSIL) indicates possible cervical dysplasia. LSIL usually indicates mild dysplasia (CIN 1), more than likely caused by a human papillomavirus infection. It is usually diagnosed following a Pap smear.
CIN 1 is the most common and most benign form of cervical intraepithelial neoplasia and usually resolves spontaneously within two years. Because of this, LSIL results can be managed with a simple "watch and wait" philosophy. H |
https://en.wikipedia.org/wiki/UBIGEO | Ubigeo is the coding system for geographical locations (Spanish: Código Ubicación Geográfica) in Peru used by the National Statistics and Computing Institute (Spanish: Instituto Nacional de Estadística e Informática, INEI) to code the first-level administrative subdivision: regions (Spanish: regiones, singular: región), the second-level administrative subdivision: provinces (Spanish: provincias, singular: provincia) and the third-level administrative subdivision: districts (Spanish: distritos, singular: distrito). There are 1874 different ubigeos in Peru.
Syntax
The coding system uses two-digit numbers for each level of subdivision. The first level starts numbering at 01 for the Amazonas Region and continues in alphabetical order up to 25 for the Ucayali Region. Additional regions will be added to the end of the list, starting with the first available number.
The second level starts with 0101 for the first province in the Amazonas region: Chachapoyas Province and continues up to 2504 for the last province Purús in the Ucayali Region. The provinces are numbered per region, with the first province always being the one in which the region's capital is located. The remaining provinces are coded in alphabetical order. Additional provinces will be added per region to the end of the list, starting with the first available province number.
The third level starts with 010101 for the first district in the first province in the Amazonas region: Chachapoyas District and continues up to 250401 for the last district in the last province of the Ucayali region: Purús District. The districts are numbered per province, with the first district always being the one in which the province's capital is located. The remaining districts are coded in alphabetical order. Additional districts will be added per province to the end of the list, starting with the first available district number.
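The two-digits-per-level scheme is mechanical; a small Python sketch (helper names are illustrative; the example codes are those given in the text):

```python
def ubigeo(region: int, province: int | None = None,
           district: int | None = None) -> str:
    """Concatenate two-digit codes: region [+ province [+ district]]."""
    parts = [p for p in (region, province, district) if p is not None]
    return "".join(f"{p:02d}" for p in parts)

def parse_ubigeo(code: str) -> tuple:
    """Split a ubigeo string back into its two-digit levels."""
    return tuple(int(code[i:i + 2]) for i in range(0, len(code), 2))

print(ubigeo(1, 1, 1))         # '010101' — Chachapoyas District
print(parse_ubigeo("250401"))  # (25, 4, 1) — Purús District, Ucayali
```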
Examples
Regions
01 Amazonas Region
02 Ancash Region
03 Apurímac Region
Provinces
0101 Chachapoyas |
https://en.wikipedia.org/wiki/Microcystinase | Microcystinase is a protease that selectively degrades Microcystin, an extremely potent cyanotoxin that results in marine pollution and human and animal food chain poisoning. The enzyme is naturally produced by a number of bacteria isolated in Japan and New Zealand. As of 2012, the chemical structure of this enzyme has not been scientifically determined. The enzyme degrades the cyclic peptide toxin microcystin into a linear peptide, which is 160 times less toxic. Other bacteria then further degrade the linear peptide.
https://en.wikipedia.org/wiki/Bhaskara%27s%20lemma | Bhaskara's Lemma is an identity used as a lemma during the chakravala method. It states that:
$Nx^2 + k = y^2 \;\Longrightarrow\; N\left(\frac{mx + y}{k}\right)^2 + \frac{m^2 - N}{k} = \left(\frac{my + Nx}{k}\right)^2$
for integers $m$ and non-zero integer $k$.
Proof
The proof follows from simple algebraic manipulations as follows: multiply both sides of the equation $Nx^2 + k = y^2$ by $m^2 - N$, add $N^2x^2 + 2Nmxy + Ny^2$ to both sides, factor, and divide by $k^2$.
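Spelled out (a reconstruction following exactly the steps just listed):
\[
(m^2 - N)(Nx^2 + k) + N^2x^2 + 2Nmxy + Ny^2
  = (m^2 - N)y^2 + N^2x^2 + 2Nmxy + Ny^2
\]
\[
\Longleftrightarrow\quad
N(mx + y)^2 + k(m^2 - N) = (my + Nx)^2
\quad\Longleftrightarrow\quad
N\left(\frac{mx + y}{k}\right)^2 + \frac{m^2 - N}{k}
  = \left(\frac{my + Nx}{k}\right)^2 .
\]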
So long as neither $k$ nor $m^2 - N$ is zero, the implication goes in both directions. (The lemma holds for real or complex numbers as well as integers.) |
https://en.wikipedia.org/wiki/Omptin | Omptins (protease VII, protease A, gene ompT proteins, ompT protease, protein a, Pla, OmpT) are a family of bacterial proteases. They are aspartate proteases, which cleave peptides with the use of a water molecule. They are found in the outer membrane of Gram-negative enterobacteria such as Shigella flexneri, Yersinia pestis, Escherichia coli, and Salmonella enterica. Omptins consist of a widely conserved beta barrel spanning the membrane, with five extracellular loops. These loops are responsible for the various substrate specificities. These proteases rely upon binding of lipopolysaccharide for activity.
Omptins have been linked to bacterial pathogenesis. |
https://en.wikipedia.org/wiki/Grease%20gun | A grease gun is a common workshop and garage tool used for lubrication. The purpose of the grease gun is to apply lubricant through an aperture to a specific point, usually from a grease cartridge to a grease fitting or 'nipple'. The channels behind the grease nipple lead to where the lubrication is needed. The aperture may be of a type that fits closely with a receiving aperture on any number of mechanical devices. The close fitting of the apertures ensures that lubricant is applied only where needed. There are four types of grease gun:
Hand-powered, where the grease is forced from the aperture by back-pressure built up by hand-cranking the trigger mechanism of the gun, which applies pressure to a spring mechanism behind the lubricant, thus forcing grease through the aperture.
Hand-powered, where there is no trigger mechanism, and the grease is forced through the aperture by the back-pressure built up by pushing on the butt of the grease gun, which slides a piston through the body of the tool, pumping grease out of the aperture.
Air-powered (pneumatic), where compressed air is directed to the gun by hoses, the air pressure serving to force the grease through the aperture. Russell Gray, inventor of the air-powered grease gun, founded Graco based on this invention.
Electric, where an electric motor drives a high pressure grease pump. These are often battery-powered for portability.
The grease gun is charged or loaded with any of the various types of lubricants, but usually a thicker heavier type of grease is used.
It was a close resemblance to contemporary hand-powered grease guns that gave the nickname to the World War II-era M3 submachine gun.
See also
Grease gun injury
Drum pump |
https://en.wikipedia.org/wiki/Diphenylchlorarsine | Diphenylchloroarsine (DA) is the organoarsenic compound with the formula (C6H5)2AsCl. It is highly toxic and was once used in chemical warfare. It is also an intermediate in the preparation of other organoarsenic compounds. The molecule consists of a pyramidal As(III) center attached to two phenyl rings and one chloride. It was also known to the Allies as "sneezing oil" during World War I.
Preparation and structure
It was first produced in 1878 by the German chemists August Michaelis (1847–1916) and Wilhelm La Coste (1854–1885). It is prepared by the reduction of diphenylarsinic acid with sulfur dioxide. An idealized equation is shown:
Ph2AsO2H + SO2 + HCl → Ph2AsCl + H2SO4
The process adopted by Edgewood Arsenal, the "sodium process", for the production of DA for chemical warfare purposes employed a reaction between chlorobenzene and arsenic trichloride in the presence of sodium.
The German process, used in the First World War and applied at Höchst am Main, used the Sandmeyer reaction between phenyldiazonium chloride and sodium arsenite. The acidified product was reduced and then neutralized. The salt was condensed again by the Sandmeyer reaction and reduced again; the final product was then acidified, resulting in DA.
The structure consists of a pyramidal As centre. The As–Cl distance is 2.26 Å, and the Cl–As–C and C–As–C angles are 96° and 105°, respectively.
Uses
It is a useful reagent for the preparation of other diphenylarsenic compounds, e.g. by reactions with Grignard reagents:
RMgBr + (C6H5)2AsCl → (C6H5)2AsR + MgBrCl
(R = alkyl, aryl)
Chemical warfare
Diphenylchlorarsine was used as a chemical weapon on the Western front during the trench warfare of World War I. It belongs to the class of chemicals classified as vomiting agents. Other such agents are diphenylcyanoarsine (DC) and diphenylaminechlorarsine (DM, Adamsite). Diphenylchlorarsine was sometimes believed to penetrate the gas masks of the time and to cause violent sneezing, forcing removal of the protecting dev |
https://en.wikipedia.org/wiki/Wide%20area%20information%20server | Wide Area Information Server (WAIS) is a client–server text searching system that uses the ANSI Standard Z39.50, "Information Retrieval Service Definition and Protocol Specifications for Library Applications" (Z39.50:1988), to search index databases on remote computers. It was developed in 1990 as a project of Thinking Machines, Apple Computer, Dow Jones, and KPMG Peat Marwick.
WAIS adhered to neither the standard nor its OSI framework (adopting TCP/IP instead), but created a unique protocol inspired by Z39.50:1988.
History
The WAIS protocol and servers were promoted by Thinking Machines Corporation (TMC) of Cambridge, Massachusetts. TMC-produced WAIS servers ran on their massively parallel CM-2 (Connection Machine) and SPARC-based CM-5 MP supercomputers. WAIS clients were developed for various operating systems and windowing systems including Microsoft Windows, Macintosh, NeXT, X, GNU Emacs, and character terminals. TMC released a free open source software version of WAIS for Unix in 1991.
Inspired by the WAIS project on full-text databases and emerging SGML projects, Z39.50 version 2 (Z39.50:1992) was released. Unlike its 1988 predecessor, it was a compatible superset of the international ISO 10162/10163 standard.
With the advent of Z39.50:1992, the termination of support for free WAIS by Thinking Machines and the establishment of WAIS Inc as a commercial venture, the U.S. National Science Foundation funded the Clearinghouse for Networked Information Discovery and Retrieval (CNIDR) to promote Internet search and discovery systems, open source and standards. CNIDR created a new, free open-source WAIS. This was the first freeWAIS based on the wais-8-b5 codebase of TMC, with a wholly new software suite Isite based upon Z39.50:1992 using Isearch as its full-text search engine.
Ulrich Pfeifer and Norbert Gövert of the computer science department of the University of Dortmund extended the CNIDR freeWAIS code to become freeWAIS-sf with structured fields as i |
https://en.wikipedia.org/wiki/Australian%20meat%20substitution%20scandal | The Australian meat substitution scandal of 1981 involved the widespread substitution of horse meat and kangaroo meat for beef in Australia. While the substitution primarily affected meat exported overseas, particularly to the United States, further investigations revealed that donkey meat and pet food had been packaged for human consumption and non-halal meat sold as halal meat domestically in Australia as well.
Background
Claims that meat processing companies in Victoria were substituting other types of meat for beef were brought before the Parliament of Australia by Cyril Primmer in 1977. Ian Sinclair, then Minister of Primary Industry, later announced that a police investigation of Primmer's claims found no evidence of meat substitution. Allegations of meat substitution were made in both the Australian parliament and the Parliament of Victoria over the following years. However, these claims were not seriously investigated by police.
Investigation
The scandal unraveled on 27 July 1981 after a food inspector in San Diego, California, became suspicious of three blocks of frozen Australian boneless beef that were "darker and stringier" than beef should be. Tests revealed that they were horse meat, not beef, and further testing elsewhere in the United States revealed that some Australian "beef" contained kangaroo meat. Some of the meat had found its way into burgers at the Jack in the Box chain, leading to jokes about "Skippy burgers".
In response, the Food and Drug Administration imposed stringent inspections on Australian beef imports. In 1982, the Australian government launched a Royal Commission into the Meat Industry to investigate. Exports of meat from processing plants in Victoria were also suspended for 30 days. The commission investigated 35 companies and released a report on its findings in September 1982. The commission found that "malpractices in the nature of commercial cheating have been widespread in the export industry." The report stated that com |
https://en.wikipedia.org/wiki/IDoc | IDoc, short for Intermediate Document, is a SAP document format for business transaction data transfers.
Non-SAP systems can use IDocs as a standard interface for data transfer.
IDoc is similar to XML in purpose, but differs in syntax. Both serve the purpose of data exchange and automation in computer systems, but IDoc technology takes a different approach.
While XML allows some metadata about the document itself, an IDoc is required to carry information in its header, such as its creator, creation time, etc. While XML has a tag-like tree structure containing data and metadata, IDocs use a table holding the data and metadata. IDocs also record all the processes the document has passed or will pass through, allowing one to debug and trace the document's status.
Different IDoc types are available to handle different types of messages. For example, the IDoc format ORDERS01 may be used for both purchase orders and order confirmations.
IDoc technology offers many tools for automation, monitoring and error handling. For example, if the IDocs are customised accordingly on a particular server, then when a user of an SAP R/3 system creates a purchase order, it is automatically sent via an IDoc and a sales order is immediately created on the vendor's system.
When this order cannot be created because of an application error (for example, the price per piece is lower than allowed for this material), the administrator on the vendor's system sees this IDoc among the erroneous ones and can resolve the situation. If the error is in the master data on the vendor's system, he can correct it and order the IDoc to be processed again.
Because of the flexibility and transparency of IDoc technology, some non-SAP technologies use them as well.
Structure of the IDoc
An IDoc consists of
Control record (it contains the type of IDoc, port of the partner, release of SAP R/3 which produced the IDoc etc.)
Data records of different types. The number and t |
https://en.wikipedia.org/wiki/Techreturns | Techreturns Nederland BV is a Dutch company founded with the aim of reducing electronic waste (e-waste). It does this by buying used electronics, especially mobile phones, from consumers and companies to repair them and give them a second life. Some choose to donate mobile phones through organisations such as Masterpeace and Artis Zoo. The company's revenue comes from selling the second-hand devices, mainly in Asia and Africa, where there is a market for affordable, quality electronics. Sales also take place in Europe, especially in the Netherlands through its sister company BeatsNew.
Techreturns is connected to Closing the Loop, a foundation collecting e-waste in developing countries for recycling. Also involved in this project are the organisations Fairphone and Text to Change.
Recently, Techreturns has appeared multiple times in the Dutch media, for instance in a discussion about sustainability initiatives by the Dutch government, an annual summary of Dutch sustainable businesses 2013 by MVO Netherlands and in the consumer television programme Kassa on 14-01-14 in a segment about reuse versus recycling in the electronic industry.
Techreturns was founded in 2009, is based in Amsterdam, Netherlands and is a part of Social Enterprise Nederland. Additionally Techreturns has a department in Genova, Italy. |
https://en.wikipedia.org/wiki/List%20of%20Chilean%20flags | This is a list of flags used in Chile. For more information about the national flag, visit the article Flag of Chile.
National flags
Presidential standard
Ambassador flag
Military flags
Chilean Army
Chilean Navy
Chilean Air Force
Police flags
Vexillology Association flags
Regions
Unofficial regional flags
Communes
Political flags
Indigenous groups flags
Mapuche territories
Historical flags
House flags of Chilean freight companies
Burgees of Chile
Antarctic base flags
Sources
The Flags of Chile. Flags of the World
National symbols of Chile. Chilean Government Official Website
Orígenes, mitos y hechos interesantes sobre los símbolos patrios chilenos
Decree 1534 of 1967 about National Symbols of Chile
Reino de Araucanía y Patagonia - Portal Mapuche
Chile
Flags of Chile
Flags
Chilean culture |
https://en.wikipedia.org/wiki/Netgear | Netgear, Inc. is an American computer networking company based in San Jose, California, with offices in about 22 other countries. It produces networking hardware for consumers, businesses, and service providers. The company operates in three business segments: retail, commercial, and as a service provider.
Netgear's products cover a variety of widely used technologies such as wireless (Wi-Fi, LTE and 5G), Ethernet and powerline, with a focus on reliability and ease-of-use. The products include wired and wireless devices for broadband access and network connectivity, and are available in multiple configurations to address the needs of the end-users in each geographic region and sector in which the company's products are sold.
As of 2020, Netgear products are sold in approximately 24,000 retail locations around the globe, and through approximately 19,000 value-added resellers, as well as multiple major cable, mobile and wireline service providers around the world.
History
Netgear was founded by Patrick Lo in 1996. Lo graduated from Brown University with a B.S. degree in electrical engineering. Prior to founding Netgear, Lo was a manager at Hewlett-Packard. Netgear received initial funding from Bay Networks.
The company was listed on the NASDAQ stock exchange in 2003.
Product range
Netgear's focus is primarily on the networking market, with products for home and business use, as well as pro-gaming, including wired and wireless technology.
Netgear also has a wide range of Wi-Fi range extenders.
ProSAFE switches
Netgear markets network products for the business sector, most notably the ProSAFE switch range. Netgear provides limited lifetime warranties for ProSAFE products for as long as the original buyer owns the product. The range currently focuses on the multimedia segment and business products.
Network appliances
Netgear also markets network appliances for the business sector, including managed switches and wired and wireless VPN firewalls. In 2016, Netgear released |
https://en.wikipedia.org/wiki/Moria%20%281983%20video%20game%29 | The Dungeons of Moria, usually referred to as simply Moria, is a computer game inspired by J. R. R. Tolkien's novel The Lord of the Rings. The objective of the game is to dive deep into the Mines of Moria and kill the Balrog. Moria, along with Hack (1984) and Larn (1986), is considered to be among the first roguelike games; Moria was also the first to include a town level.
Moria was the basis of the better known Angband roguelike game, and influenced the preliminary design of Blizzard Entertainment's Diablo.
Gameplay
The player's goal is to descend to the depths of Moria to defeat the Balrog, akin to a boss battle. As with Rogue, levels are not persistent: when the player leaves the level and then tries to return, a new level is procedurally generated. Among other improvements to Rogue, there is a persistent town at the highest level where players can buy and sell equipment.
Moria begins with creation of a character. The player first chooses a "race" from the following: Human, Half-Elf, Elf, Halfling, Gnome, Dwarf, Half-Orc, or Half-Troll. Racial selection determines base statistics and class availability. One then selects the character's "class" from the following: Warrior, Mage, Priest, Rogue, Ranger, or Paladin. Class further determines statistics, as well as the abilities acquired during gameplay. Mages, Rangers, and Rogues can learn magic; Priests and Paladins can learn prayers. Warriors possess no additional abilities.
The player begins the game with a limited number of items on a town level consisting of six shops: (1) a General Store, (2) an Armory, (3) a Weaponsmith, (4) a Temple, (5) an Alchemy shop, and (6) a Magic-Users store. A staircase on this level descends into a series of randomly generated underground mazes. Deeper levels contain more powerful monsters and better treasures. Each time the player ascends or descends a staircase, a new level is created and the old one discarded; only the town persists throughout the game.
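A tiny sketch of the level mechanic just described: dungeon levels are regenerated on every staircase use, while the town (depth 0) is built once and kept. This is an illustrative toy, not Moria's actual algorithm:

```python
import random

def generate_level(depth: int, seed=None):
    """Build a fresh random maze; deeper levels get denser walls (toy rule)."""
    rng = random.Random(seed)
    density = min(0.2 + 0.02 * depth, 0.6)
    return ["".join("#" if rng.random() < density else "." for _ in range(40))
            for _ in range(12)]

town = generate_level(0, seed=42)  # the persistent town level, built once
depth, current = 0, town

def take_stairs(down: bool):
    """Leaving a dungeon level discards it; returning to depth 0 restores the town."""
    global depth, current
    depth += 1 if down else -1
    current = town if depth == 0 else generate_level(depth)  # old level is lost
```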
As in most roguelikes, it is impossible |
https://en.wikipedia.org/wiki/OpenSSH | OpenSSH (also known as OpenBSD Secure Shell) is a suite of secure networking utilities based on the Secure Shell (SSH) protocol, which provides a secure channel over an unsecured network in a client–server architecture.
OpenSSH started as a fork of the free SSH program developed by Tatu Ylönen; later versions of Ylönen's SSH were proprietary software offered by SSH Communications Security. OpenSSH was first released in 1999 and is currently developed as part of the OpenBSD operating system.
OpenSSH is not a single computer program, but rather a suite of programs that serve as alternatives to unencrypted protocols like Telnet and FTP. OpenSSH is integrated into several operating systems, namely Microsoft Windows, macOS and most Linux distributions, while the portable version is available as a package in other systems.
History
OpenBSD Secure Shell was created by OpenBSD developers as an alternative to the original SSH software by Tatu Ylönen, which is now proprietary software. Although source code is available for the original SSH, various restrictions are imposed on its use and distribution. OpenSSH was created as a fork of Björn Grönvall's OSSH that itself was a fork of Tatu Ylönen's original free SSH 1.2.12 release, which was the last one having a license suitable for forking. The OpenSSH developers claim that their application is more secure than the original, due to their policy of producing clean and audited code and because it is released under the BSD license, the open-source license to which the word open in the name refers.
OpenSSH first appeared in OpenBSD 2.6. The first portable release was made in October 1999. Developments since then have included the addition of ciphers (e.g., ChaCha20-Poly1305 in 6.5 of January 2014), cutting the dependency on OpenSSL (6.7, October 2014) and an extension to facilitate public-key discovery and rotation for trusted hosts (for transition from DSA to Ed25519 public host keys, version 6.8 of March 2015).
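The DSA-to-Ed25519 host-key transition mentioned above boils down to generating new keys with ssh-keygen. A minimal sketch that shells out from Python; the -t/-f/-N/-C flags are standard ssh-keygen options, and the output path is only an example:

```python
import subprocess
from pathlib import Path

def make_ed25519_key(path: Path, comment: str = "example host key") -> None:
    """Generate an Ed25519 keypair with ssh-keygen; -N "" means no passphrase."""
    subprocess.run(
        ["ssh-keygen", "-t", "ed25519", "-f", str(path), "-N", "", "-C", comment],
        check=True,  # raise if ssh-keygen reports an error
    )

make_ed25519_key(Path("./example_ed25519"))  # writes example_ed25519 and .pub
```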
On 19 O |
https://en.wikipedia.org/wiki/Rhythm%20of%20the%20Rain | "Rhythm of the Rain" is a song performed by The Cascades, released in November 1962. It was written by Cascades band member John Claude Gummoe. On March 9, 1963, it rose to number 3 on the Billboard Hot 100, and spent two weeks at number 1 on Billboard's Easy Listening chart. Billboard ranked the record as the number 4 song of 1963.
In March 1963, the song was a top 5 hit in the United Kingdom and, in May that same year, was a number 1 single in Ireland. In Australia it rose to number 2. In Canada, the song was on the CHUM Chart for a total of 12 weeks and reached number 1 in March 1963. In 1999 BMI listed the song as the 9th most performed song on radio/TV in the 20th century.
The Cascades' recording was used in the soundtrack of the 1979 film Quadrophenia, and included in its soundtrack album.
The song's arrangement features distinctive use of a celesta, played by arranger Perry Botkin Jr.
The sounds of rain and thunder are heard at the beginning and at the end of the song.
Theme
The lyrics are sung by a man whose lover has left him; the rain falling reminds him 'what a fool' he has been. He rhetorically asks the rain for answers, but ultimately he wishes it would 'go away' and let him cry alone.
Chart performance
Weekly charts
Year-end charts
Sylvie Vartan version (in French)
The song was adapted into French (under the title "En écoutant la pluie", meaning "Listening to the Rain") by Richard Anthony. It was recorded by Sylvie Vartan, who released it as a single in 1963.
According to the charts published by the U.S. magazine Billboard (in its "Hits of the World" section), the song "En écoutant la pluie" reached number one in France.
Track listings
7-inch single RCA Victor 45.277 (1963, France)
A. "En écoutant la pluie" (Rhythm of the Rain)
B. "Jamais" (Late Date Baby)
7-inch EP Sylvie à l'Olympia RCA 86.007 (1963, France)
A1. "En écoutant la pluie"
A2. "Jamais"
B1. "Avec moi"
B2. "Mon ami"
Charts
Other charting versions
Dutch teen idol Rob de Ni |
https://en.wikipedia.org/wiki/Abortive%20flower | Abortion in flowers and developing fruits is a common occurrence in plants.
An abortive flower is a flower that has stamens but an underdeveloped pistil, or none at all. It falls without producing fruit or seeds because it is unable to fructify. Flowers require both male and female organs to reproduce: the pistil and ovary serve as the female organs, while the stamens are the male organs. Illustrative examples include Urginea nagarjunae and Trichilogaster acaciaelongifoliae.
Studies have shown that hermaphroditic or bisexual flowers have higher rates of fruit abortion than unisexual flowers.
Causes of fruit and flower abortion
Pollinated flowers and fruits can abort selectively. This may be driven by the order of pollination, the number of developing seeds, the pollen source, or some combination of these. Flowers and fruits can also abort because of external factors such as insufficient light, an unsuitable photoperiod, high temperature, nutrient deficiency, ethylene, or drought stress.
The abortion of fruits and flowers can also increase the fitness of a plant.
There is research to suggest that random selective abortions based on the timing of fertilization could increase the genetic quality of seeds. The age of the flower also affects the rate of abortion: the older the flower, the more likely it is to abort its fruit or pollen.
Plants have also been found to selectively abort seeds and fruit as a means of defense against herbivorous insects.
Flower
Stamen |
https://en.wikipedia.org/wiki/Voronoi%20diagram | In mathematics, a Voronoi diagram is a partition of a plane into regions close to each of a given set of objects. It can be classified also as a tessellation. In the simplest case, these objects are just finitely many points in the plane (called seeds, sites, or generators). For each seed there is a corresponding region, called a Voronoi cell, consisting of all points of the plane closer to that seed than to any other. The Voronoi diagram of a set of points is dual to that set's Delaunay triangulation.
The Voronoi diagram is named after mathematician Georgy Voronoy, and is also called a Voronoi tessellation, a Voronoi decomposition, a Voronoi partition, or a Dirichlet tessellation (after Peter Gustav Lejeune Dirichlet). Voronoi cells are also known as Thiessen polygons. Voronoi diagrams have practical and theoretical applications in many fields, mainly in science and technology, but also in visual art.
The simplest case
In the simplest case, shown in the first picture, we are given a finite set of points {p1, ..., pn} in the Euclidean plane. In this case each site pk is one of these given points, and its corresponding Voronoi cell Rk consists of every point x in the Euclidean plane for which pk is the nearest site: the distance from x to pk is less than or equal to the minimum distance from x to any other site pj. For one other site pj, the points that are closer to pk than to pj, or equally distant, form a closed half-space, whose boundary is the perpendicular bisector of the line segment pk pj. Cell Rk is the intersection of all of these half-spaces, and hence it is a convex polygon. When two cells in the Voronoi diagram share a boundary, it is a line segment, ray, or line, consisting of all the points in the plane that are equidistant to their two nearest sites. The vertices of the diagram, where three or more of these boundaries meet, are the points that have three or more equally distant nearest sites.
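A brute-force sketch of this nearest-site rule: assign every point of a sampling grid to its closest seed. This is an illustrative approximation; for exact cell polygons one would typically use a library routine such as scipy.spatial.Voronoi:

```python
import numpy as np

# Three seeds (sites) in the unit square
seeds = np.array([[0.2, 0.3], [0.7, 0.8], [0.5, 0.1]])

# Sample the plane on a 200x200 grid
xs, ys = np.meshgrid(np.linspace(0, 1, 200), np.linspace(0, 1, 200))
pts = np.stack([xs.ravel(), ys.ravel()], axis=1)        # shape (N, 2)

# Squared Euclidean distance from every grid point to every seed: shape (N, 3)
d2 = ((pts[:, None, :] - seeds[None, :, :]) ** 2).sum(axis=2)

# Voronoi cell membership: index of the nearest seed for each grid point
cell = d2.argmin(axis=1).reshape(xs.shape)
```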
Formal definition
Let X be a metric space with distance function d. Let K be a set of indices and let (Pk), k ∈ K, be a tuple |
https://en.wikipedia.org/wiki/Latent%20heat | Latent heat (also known as latent energy or heat of transformation) is energy released or absorbed, by a body or a thermodynamic system, during a constant-temperature process—usually a first-order phase transition.
Latent heat can be understood as hidden energy which is supplied or extracted to change the state of a substance (for example, to melt or vaporize it) without changing its temperature or pressure. This includes the latent heat of fusion (solid to liquid), the latent heat of vaporization (liquid to gas) and the latent heat of sublimation (solid to gas).
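A worked example of this constant-temperature bookkeeping (the numerical value of L_f is a standard reference figure, not taken from this excerpt): melting 2 kg of ice at 0 °C requires

```latex
Q = m L_f = (2\,\mathrm{kg}) \times (334\,\mathrm{kJ/kg}) \approx 668\,\mathrm{kJ},
```

where L_f is the latent heat of fusion of water; the ice–water mixture stays at 0 °C while this energy is absorbed.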
The term was introduced around 1762 by Scottish chemist Joseph Black. Black used the term in the context of calorimetry where a heat transfer caused a volume change in a body while its temperature was constant.
In contrast to latent heat, sensible heat is energy transferred as heat, with a resultant temperature change in a body.
Usage
The terms sensible heat and latent heat refer to energy transferred between a body and its surroundings, defined by the occurrence or non-occurrence of temperature change; they depend on the properties of the body. Sensible heat is sensed or felt in a process as a change in the body's temperature. Latent heat is energy transferred in a process without change of the body's temperature, for example, in a phase change (solid/liquid/gas).
Both sensible and latent heats are observed in many processes of transfer of energy in nature. Latent heat is associated with the change of phase of atmospheric or ocean water, vaporization, condensation, freezing or melting, whereas sensible heat is energy transferred that is evident in change of the temperature of the atmosphere or ocean, or ice, without those phase changes, though it is associated with changes of pressure and volume.
The original usage of the term, as introduced by Black, was applied to systems that were intentionally held at constant temperature. Such usage referred to latent heat of expansion and several other related |
https://en.wikipedia.org/wiki/Tidal%20disruption%20event | A tidal disruption event (TDE) is an astronomical phenomenon that occurs when a star approaches sufficiently close to a supermassive black hole (SMBH) to be pulled apart by the black hole's tidal force, experiencing spaghettification. A portion of the star's mass can be captured into an accretion disk around the black hole (if the star is on a parabolic orbit), resulting in a temporary flare of electromagnetic radiation as matter in the disk is consumed by the black hole. According to early papers, tidal disruption events should be an inevitable consequence of massive black holes' activity hidden in galaxy nuclei, whereas later theorists concluded that the resulting explosion or flare of radiation from the accretion of the stellar debris could be a unique signpost for the presence of a dormant black hole in the center of a normal galaxy. Sometimes a star can survive the encounter with an SMBH, and a remnant is formed. These events are termed partial TDEs.
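The critical distance for disruption is usually estimated with the tidal radius. The following order-of-magnitude formula is standard background, supplied here as an assumption rather than quoted from the excerpt:

```latex
r_t \approx R_* \left( \frac{M_{\mathrm{BH}}}{M_*} \right)^{1/3},
```

where R_* and M_* are the star's radius and mass and M_BH is the black hole's mass; a star whose pericentre lies inside r_t is torn apart.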
History
Physicist John A. Wheeler suggested that the breakup of a star in the ergosphere of a rotating black hole could induce acceleration of the released gas to relativistic speeds by the so-called "tube of toothpaste effect". Wheeler succeeded in applying the relativistic generalization of the classical Newtonian tidal disruption problem to the neighbourhood of a Schwarzschild or Kerr black hole. However, these early works restricted their attention to incompressible star models or to stars penetrating slightly into the Roche radius, conditions in which the tides would have small amplitude.
In 1976, astronomers Juhan Frank and Martin J. Rees of the Cambridge Institute of Astronomy explored the possibility of black holes at the centers of galaxies and globular clusters, defining a critical radius under which stars are disturbed and swallowed by the black hole, suggesting that it is possible to observe these events in certain galaxies. But at the time, the English researchers did not propose any precise model o |
https://en.wikipedia.org/wiki/Robert%20Ridgway%20Award | The ABA Robert Ridgway Award for Publications in Field Ornithology is an award given by the American Birding Association to an individual who has made significant contributions to the field ornithology literature in the areas of North American bird distribution and field identification. The award may honor a writer or an artist.
One of five awards presented by the ABA for contributions to ornithology, the award is named in honor of Robert Ridgway, initiator of a monumental work of bird systematics, as well as one of the first color nomenclature systems for bird identification.
The award was first bestowed on Harold Mayfield.
List of recipients
See also
List of ornithology awards |
https://en.wikipedia.org/wiki/Aleksandr%20Kurosh | Aleksandr Gennadyevich Kurosh (January 19, 1908 – May 18, 1971) was a Soviet mathematician, known for his work in abstract algebra. He is credited with writing The Theory of Groups, the first modern and high-level text on group theory, published in 1944.
He was born in Yartsevo, in the Dukhovshchinsky Uyezd of the Smolensk Governorate of the Russian Empire and died in Moscow. He received his doctorate from the Moscow State University in 1936 under the direction of Pavel Alexandrov. In 1937 he became a professor there, and from 1949 until his death he held the Chair of Higher Algebra at Moscow State University. In 1938, he was the PhD thesis adviser to his fellow group theory scholar Sergei Chernikov, with whom he would develop important relationships between finite and infinite groups, discover the Kurosh-Chernikov class of groups, and publish several influential papers over the next decades. In all, he had 27 PhD students, including also Vladimir Andrunakievich, Mark Graev, and Anatoly Shirshov.
Selected publications
Teoriya Grupp (Теория групп), 2 vols., Nauk, 1944, 2nd edition 1953.
German translation: Gruppentheorie. 2 vols., 1953, 1956, Akademie Verlag, Berlin, 2nd edition 1970, 1972.
English translation: The Theory of Groups, 2 vols., Chelsea Publishing Company, the Bronx, tr. by K. A. Hirsch, 1950, 2nd edition 1955.
Vorlesungen über Allgemeine Algebra. Verlag Harri Deutsch, Zürich 1964.
Zur Theorie der Kategorien. Deutscher Verlag der Wissenschaften, Berlin 1963.
Kurosch: Zur Zerlegung unendlicher Gruppen. Mathematische Annalen vol. 106, 1932.
Kurosch: Über freie Produkte von Gruppen. Mathematische Annalen vol. 108, 1933.
Kurosch: Die Untergruppen der freien Produkte von beliebigen Gruppen. Mathematische Annalen, vol. 109, 1934.
A. G. Kurosh, S. N. Chernikov, “Solvable and nilpotent groups”, Uspekhi Mat. Nauk, 2:3(19) (1947), 18–59.
A. G. Kurosh, "Curso de Álgebra Superior", Editorial Mir, Moscú 1997, traducción de Emiliano Aparicio Bernardo (in |
https://en.wikipedia.org/wiki/Edward%20Routh | Edward John Routh (20 January 1831 – 7 June 1907) was an English mathematician, noted as the outstanding coach of students preparing for the Mathematical Tripos examination of the University of Cambridge in its heyday in the middle of the nineteenth century. He also did much to systematise the mathematical theory of mechanics and created several ideas critical to the development of modern control systems theory.
Biography
Early life
Routh was born of an English father and a French-Canadian mother in Quebec, at that time the British colony of Lower Canada. His father's family could trace its history back to the Norman conquest when it acquired land at Routh near Beverley, Yorkshire. His mother's family, the Taschereau family, was well-established in Quebec, tracing their ancestry back to the early days of the French colony. His parents were Sir Randolph Isham Routh (1782–1858) and his second wife, Marie Louise Taschereau (1810–1891). Sir Randolph was Commissary General of the British Army from 1826, Chairman of the Irish Famine Relief Commission (1845–48) and Deputy Commissary General, the senior Commissariat officer at the Battle of Waterloo, and Marie Louise was the daughter of Judge Jean-Thomas Taschereau and the sister of Judge Jean-Thomas and Cardinal Elzéar-Alexandre Taschereau.
Routh came to England aged eleven and attended University College School, then entered University College, London in 1847, having won a scholarship. There he studied under Augustus De Morgan, whose influence led Routh to decide on a career in mathematics.
Routh obtained his BA (1849) and MA (1853) in London. He attended Peterhouse, Cambridge, where he was taught by Isaac Todhunter and coached by "senior wrangler maker" William Hopkins. While at Peterhouse, Routh rowed for Peterhouse Boat Club. In 1854, Routh graduated just above James Clerk Maxwell, as Senior Wrangler, sharing the Smith's prize with him. Routh was elected fellow of Peterhouse in 1856.
Mathematics tutor
On graduat |
https://en.wikipedia.org/wiki/Lateral%20body | Lateral bodies are structures that sit on the concave sides of the viral core of a poxvirus and are surrounded by a membrane. They serve as immunomodulatory delivery packets and as membrane cloaking that helps poxviruses spread. They were first visualized using electron microscopy in 1956, and shortly afterwards it was shown that they detach from the viral core upon membrane fusion.
Lateral body proteins
Lateral bodies are made up of at least three proteins: phosphoprotein F17, the dual-specificity phosphatase H1, and the viral oxidoreductase G4. F17 is the main structural protein and may play a role in modulating the cellular immune response through MAPK signaling pathways. H1 dephosphorylates STAT1 to prevent nuclear transcription and block IFN-γ-induced immune signaling. Finally, G4 is essential for viral morphogenesis. Additionally, the proteins packed in lateral bodies are redox proteins, which modulate the host oxidative response, affecting early gene expression and virion production.
https://en.wikipedia.org/wiki/Yamaha%20YM2151 | The Yamaha YM2151, also known as OPM (FM Operator Type-M), is an eight-channel, four-operator sound chip. It was Yamaha's first single-chip FM synthesis implementation, created originally for some of the Yamaha DX series of keyboards (DX21, DX27, and DX100). Yamaha also used it in some of their budget-priced electric pianos, such as the YPR-7, -8, and -9.
Uses
The YM2151 was used in many arcade game system boards, starting with Atari's Marble Madness in 1984, then Sega arcade system boards from 1985, and then arcade games from Konami, Capcom, Data East, Irem, and Namco, as well as Williams pinball machines, with its heaviest use in the mid-to-late 1980s. It was also used in Sharp's X1 and X68000 home computers, as well as the modern hobbyist Commander X16 8-bit computer.
The chip was used in the Yamaha SFG-01 and SFG-05 FM Sound Synthesizer units. These are expansion units for Yamaha MSX computers and were already built into some machines such as the Yamaha CX5M. Later SFG-05 modules contain the YM2164 (OPP), an almost identical chip with only minor changes to control registers. The SFGs were followed by the Yamaha FB-01, a standalone version powered exclusively by the YM2164.
Technical details
The YM2151 was paired with either a YM3012 stereo DAC or a YM3014 monophonic DAC so that the output of its FM tone generator could be supplied to speakers as analog audio.
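The FM tone generation at the heart of the chip can be illustrated with a two-operator sketch: one sine wave (the modulator) modulates the phase of another (the carrier). The YM2151 chains four such operators per channel in various configurations; this toy Python version shows only the core idea:

```python
import numpy as np

def fm_tone(f_carrier=440.0, f_mod=220.0, index=2.0, sr=44100, dur=1.0):
    """Two-operator FM: the modulator sine phase-modulates the carrier sine."""
    t = np.arange(int(sr * dur)) / sr
    return np.sin(2 * np.pi * f_carrier * t
                  + index * np.sin(2 * np.pi * f_mod * t))

samples = fm_tone()  # float samples in [-1, 1], ready for a DAC or audio library
```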
See also
Yamaha YM2164
Yamaha YM2612 |
https://en.wikipedia.org/wiki/Eden%27s%20conjecture | In the mathematics of dynamical systems, Eden's conjecture states that the supremum of the local Lyapunov dimensions on the global attractor is achieved on a stationary point or an unstable periodic orbit embedded into the attractor.
The validity of the conjecture was proved for a number of well-known systems having global attractor (e.g. for the global attractors in the Lorenz system, complex Ginzburg–Landau equation). It is named after Alp Eden, who proposed it in 1987.
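For background (a standard definition assumed here, not stated in the excerpt), the local Lyapunov dimension is commonly computed with the Kaplan–Yorke formula from the ordered Lyapunov exponents λ1 ≥ λ2 ≥ ...:

```latex
d_{KY} = j + \frac{\lambda_1 + \cdots + \lambda_j}{|\lambda_{j+1}|},
```

where j is the largest integer for which λ1 + ... + λj ≥ 0.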
Kuznetsov–Eden's conjecture
For local attractors, a conjecture on the Lyapunov dimension of self-excited attractors, refined by N. Kuznetsov, states that for a typical system the Lyapunov dimension of a self-excited attractor does not exceed the Lyapunov dimension of one of the unstable equilibria whose unstable manifold intersects with the basin of attraction and visualizes the attractor. The conjecture is valid, e.g., for the classical self-excited Lorenz attractor and for the self-excited attractors in the Hénon map (even in the case of multistability and coexistence of local attractors with different Lyapunov dimensions). For a hidden attractor, the conjecture is that the maximum of the local Lyapunov dimensions is achieved on an unstable periodic orbit embedded into the attractor.
https://en.wikipedia.org/wiki/Cyclic%20compound | A cyclic compound (or ring compound) is a compound in chemistry in which one or more series of atoms is connected to form a ring. Rings may vary in size from three to many atoms, and include examples where all the atoms are carbon (i.e., carbocycles), none of the atoms are carbon (inorganic cyclic compounds), or where both carbon and non-carbon atoms are present (heterocyclic compounds). Depending on the ring size, the bond order of the individual links between ring atoms, and their arrangements within the rings, carbocyclic and heterocyclic compounds may be aromatic or non-aromatic; in the latter case, they may vary from being fully saturated to having varying numbers of multiple bonds between the ring atoms. Because of the tremendous diversity allowed, in combination, by the valences of common atoms and their ability to form rings, the number of possible cyclic structures, even of small size (e.g., fewer than 17 total atoms), numbers in the many billions.
Adding to their complexity and number, closing of atoms into rings may lock particular atoms with distinct substitution (by functional groups) such that stereochemistry and chirality of the compound results, including some manifestations that are unique to rings (e.g., configurational isomers). In addition, depending on ring size, the three-dimensional shapes of particular cyclic structures – typically rings of five atoms and larger – can vary and interconvert such that conformational isomerism is displayed. Indeed, the development of this important chemical concept arose historically in reference to cyclic compounds. Finally, cyclic compounds, because of the unique shapes, reactivities, properties, and bioactivities that they engender, are the majority of all molecules involved in the biochemistry, structure, and function of living organisms, and in man-made molecules such as drugs, pesticides, etc.
Structure and classification
A cy |
https://en.wikipedia.org/wiki/Syringa%20%C3%97%20laciniata | Syringa × laciniata, the cut-leaf lilac or cutleaf lilac, is a hybrid lilac of unknown, though old, origin. It is thought to be a hybrid between Syringa vulgaris from southeastern Europe and Syringa protolaciniata from western China. Although often cited as being from China, it more likely arose somewhere in southwestern Asia, where it has been cultivated since ancient times, possibly in Iran, Afghanistan, or Pakistan; it was first scientifically described from cultivated plants in the 17th century.
It is a deciduous shrub growing to 2 m tall. The leaves are 2–4 cm long, variably entire or cut deeply into three to nine lobes or leaflets. The flowers are pale lilac, produced in loose panicles up to 7 cm long in mid spring. It is hardy to USDA plant hardiness zone 5.
See also
Syringa × persica |
https://en.wikipedia.org/wiki/Heat%20flux%20measurements%20of%20thermal%20insulation | Heat flux measurements of thermal insulation are applied in laboratory and industrial environments to obtain reference or in-situ measurements of the thermal properties of an insulation material.
Thermal insulation is tested using nondestructive testing techniques relying on heat flux sensors. Procedures and requirements for in-situ measurements are standardized in the ASTM C1041 standard, "Standard Practice for In-Situ Measurements of Heat Flux in Industrial Thermal Insulation Using Heat Flux Transducers".
Laboratory methods
On-site methods
On-site heat flux measurements are often focused on testing the thermal transport properties of, for example, pipes, tanks, ovens, and boilers, by calculating the heat flux q or the apparent thermal conductivity λ. The real-time energy gain or loss is measured under pseudo-steady-state conditions with minimal disturbance by a heat flux transducer (HFT). This on-site method is for flat surfaces (non-pipes) only.
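A minimal sketch of the calculation, assuming the simple one-dimensional Fourier's-law relation λ = q·d/ΔT (the function and variable names are illustrative):

```python
def apparent_conductivity(q: float, thickness: float,
                          t_hot: float, t_cold: float) -> float:
    """Apparent thermal conductivity via Fourier's law: lambda = q * d / dT.
    q in W/m^2, thickness in m, temperatures on the same scale (K or deg C)."""
    return q * thickness / (t_hot - t_cold)

# e.g. 120 W/m^2 measured through 50 mm of insulation with a 60 K temperature drop
lam = apparent_conductivity(q=120.0, thickness=0.05, t_hot=80.0, t_cold=20.0)
print(f"apparent thermal conductivity = {lam:.3f} W/(m*K)")  # 0.100
```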
Measurement procedure
Placement of the HFT:
The sensor should be placed on an area of insulation that represents the overall system. For example, it should not be placed close to an inlet or outlet of a boiler or near a heating element.
Shield the sensor from other sources of heat flux that are not relevant to the measurement, e.g. solar radiation.
Make sure that the HFT is connected to the insulation surface via thermal paste or another conductive material. The emittance of the HFT should match the emittance of the surface as closely as possible. Air or other material between the sensor and the surface of measurement could lead to measurement errors.
Pre-measurements:
Measure the thickness of the insulation material to the nearest millimetre.
Log ambient weather conditions if necessary. Humidity, air movement, and precipitation may be of interest for the interpretation of the results.
Measure the temperature of the insulation surface near the sensor and the temperature at the inside of the insulation material, i.e. the proce |
https://en.wikipedia.org/wiki/AMX192 | AMX192 (often referred to simply as AMX) is an analog lighting communications protocol used to control stage lighting. It was developed by Strand Century in the late 1970s. Originally, AMX192 was only capable of controlling 192 discrete channels of lighting. Later, multiple AMX192 streams were supported by some lighting desks. AMX192 has mostly been replaced in favour of DMX, and is typically only found in legacy hardware.
History
The name AMX192 is derived from analog multiplexing and the maximum number of controllable lighting channels (192). AMX was developed to address a significant problem in controlling dimmers. For many years, the only method available for sending a control signal from a lighting control unit to the dimmer units was to provide a dedicated wire from the control unit to each dimmer (analogue control), where the voltage present on the wire was varied by the control unit to set the output level of the dimmer. In the late 1970s, the AMX192 serial analogue multiplexing standard was developed in the US, permitting one cable to control several dimmers.
At about the same time, D54 was developed in the United Kingdom, and differed from AMX192 in that it used an embedded clocking scheme. AMX192 used a separate differential clock with a driver circuit similar to RS-485, but current limited on each leg with 100Ω resistors.
See also
Dimmer
Lighting control console
Lighting control system
DMX512
RDM
External links
Strand Lighting Corporate
University of Exeter - Strand Archive
Stage lighting
Network protocols |
https://en.wikipedia.org/wiki/Cryptoeconomics | Cryptoeconomics is an evolving economic paradigm for a cross-disciplinary approach to the study of digital economies and decentralized finance (DeFi) applications. Cryptoeconomics integrates concepts and principles from traditional economics, cryptography, computer science, and game theory. Just as traditional economics provides a theoretical foundation for traditional financial (a.k.a. centralized finance, or CeFi) services, cryptoeconomics provides a theoretical foundation for DeFi services bought and sold via cryptocurrencies and executed by smart contracts.
Definitions and goals
The term cryptoeconomics was coined by the Ethereum community during its formative years (2014-2015), but was initially inspired by the application of economic incentives in the original Bitcoin protocol in 2008. Although the phrase is typically attributed to Vitalik Buterin, the earliest public documented usage is a 2015 talk by Vlad Zamfir entitled “What is Cryptoeconomics?” Zamfir's view of cryptoeconomics is relatively broad and academic: “… a formal discipline that studies protocols that govern the production, distribution, and consumption of goods and services in a decentralized digital economy. Cryptoeconomics is a practical science that focuses on the design and characterization of these protocols”. Alternatively, in a 2017 talk, Buterin's view is more narrow and pragmatic: “… a methodology for building systems that try to guarantee certain kinds of information security properties”.
According to Binance, the primary goals of cryptoeconomics are to understand how to fund, design, develop, and facilitate the operations of DeFi systems, and to apply economic incentives and penalties to regulate the distribution of goods and services in emerging digital economies.
Cryptoeconomics may be considered an evolution of digital economics, which in turn evolved from traditional economics (commonly divided into microeconomics and macroeconomics). Consequently, tradition |
https://en.wikipedia.org/wiki/History%20of%20supercomputing | The term supercomputing arose in the late 1920s in the United States in response to the IBM tabulators at Columbia University. The CDC 6600, released in 1964, is sometimes considered the first supercomputer. However, some earlier computers were considered supercomputers for their day such as the 1954 IBM NORC in the 1950s, and in the early 1960s, the UNIVAC LARC (1960), the IBM 7030 Stretch (1962), and the Manchester Atlas (1962), all of which were of comparable power.
While the supercomputers of the 1980s used only a few processors, in the 1990s, machines with thousands of processors began to appear both in the United States and in Japan, setting new computational performance records.
By the end of the 20th century, massively parallel supercomputers with thousands of "off-the-shelf" processors similar to those found in personal computers were constructed and broke through the teraflop computational barrier.
Progress in the first decade of the 21st century was dramatic and supercomputers with over 60,000 processors appeared, reaching petaflop performance levels.
Beginnings: 1950s and 1960s
The term "Super Computing" was first used in the New York World in 1929 to refer to large custom-built tabulators that IBM had made for Columbia University.
In 1957, a group of engineers left Sperry Corporation to form Control Data Corporation (CDC) in Minneapolis, Minnesota. Seymour Cray left Sperry a year later to join his colleagues at CDC. In 1960, Cray completed the CDC 1604, one of the first generation of commercially successful transistorized computers and, at the time of its release, the fastest computer in the world. However, the sole fully transistorized Harwell CADET was operational in 1955, and IBM delivered its commercially successful transistorized IBM 7090 in 1959.
Around 1960, Cray decided to design a computer that would be the fastest in the world by a large margin. After four years of experimentation along with Jim Thornton, and Dean Roush and about 30 other |
https://en.wikipedia.org/wiki/Marine%20botany | Marine botany is the study of flowering vascular plant species and marine algae that live in shallow seawater of the open ocean and the littoral zone, along shorelines of the intertidal zone and coastal wetlands, even in low-salinity brackish water of estuaries.
It is a branch of marine biology and botany.
Marine Plant Classifications
Present-day classifications group organisms into five kingdoms: Monera, Protista, Plantae, Fungi, and Animalia.
The Monera
Fewer than 2,000 of the roughly 100,000 known species of bacteria occur in the marine environment. Although this group of species is small, it plays a tremendous role in energy transfer, mineral cycles, and organic turnover. The Monera differ from the four other kingdoms in that "members of the Monera have a prokaryotic cytology in which the cells lack membrane-bound organelles such as chloroplasts, mitochondria, nuclei, and complex flagella."
The bacteria can be divided into two major subkingdoms: Eubacteria and Archaebacteria.
Eubacteria
Eubacteria include the only bacteria that contain chlorophyll a, and are placed in the divisions Cyanobacteria and Prochlorophyta.
Characteristics of Eubacteria:
They do not have any membrane-bound organelles.
Most are enclosed by a cell wall.
Archaebacteria
Archaebacteria are a type of single-cell organism and have a number of characteristics not seen in more "modern" cell types. These characteristics include:
Unique cell membrane chemistry
Unique gene transcription
Capable of methanogenesis
Differences in ribosomal RNA
Types of Archaebacteria:
Thermoproteota: Extremely heat-tolerant
"Euryarchaeota": Able to survive in very salty habitats
"Korarchaeota": The oldest lineage of archaebacteria
Archaebacteria vs. Eubacteria
While both are prokaryotic, these organisms exist in different biological domains because of how genetically different they are. Some believe archaebacteria are some of the oldest forms of lif |
https://en.wikipedia.org/wiki/Microlithography | Microlithography is a general name for any manufacturing process that can create a minutely patterned thin film of protective materials over a substrate, such as a silicon wafer, in order to protect selected areas of it during subsequent etching, deposition, or implantation operations.
The term is normally used for processes that can reliably produce features of microscopic size, such as 10 micrometres or less. The term nanolithography may be used to designate processes that can produce nanoscale features, such as less than 100 nanometres.
Microlithography is a microfabrication process that is extensively used in the semiconductor industry and in the manufacture of microelectromechanical systems.
Processes
Specific microlithography processes include:
Photolithography, using light projected onto a photosensitive material film (photoresist).
Electron beam lithography, using a steerable electron beam.
Nanoimprinting
Interference lithography
Magnetolithography
Scanning probe lithography
Surface-charge lithography
Diffraction lithography
These processes differ in speed and cost, as well as in the materials they can be applied to and the range of feature sizes they can produce. For instance, while the size of features achievable with photolithography is limited by the wavelength of the light used, the technique is considerably faster and simpler than electron beam lithography, which can achieve much smaller features.
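The wavelength limit mentioned above is usually quantified with the Rayleigh criterion, a standard estimate added here for context rather than taken from the excerpt:

```latex
CD = k_1 \, \frac{\lambda}{\mathrm{NA}},
```

where CD is the minimum printable feature size (critical dimension), λ the exposure wavelength, NA the numerical aperture of the projection optics, and k1 a process-dependent factor, in practice roughly 0.25–0.8.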
Applications
The main application for microlithography is fabrication of integrated circuits ("electronic chips"), such as solid-state memories and microprocessors. They can also be used to create diffraction gratings, microscope calibration grids, and other flat structures with microscopic details.
See also
Printed circuit board |
https://en.wikipedia.org/wiki/Metal%20vapor%20synthesis | In chemistry, metal vapor synthesis (MVS) is a method for preparing metal complexes by combining freshly produced metal atoms or small particles with ligands. In contrast to the high reactivity of such freshly produced metal atoms, bulk metals typically are unreactive toward neutral ligands. The method has been used to prepare compounds that cannot be prepared by traditional synthetic methods, e.g. Ti(η6-toluene)2. The technique relies on a reactor that evaporates the metal, allowing the vapor to impinge on a cold reactor wall that is coated with the organic ligand. The metal evaporates upon being heated resistively or irradiated with an electron beam. The apparatus operates under high vacuum. In a common implementation, the metal vapor and the organic ligand are co-condensed at liquid nitrogen temperatures.
In several cases where compounds are prepared by MVS, related preparations employ conventional routes. Thus, tris(butadiene)molybdenum was first prepared by co-condensation of butadiene and Mo vapor, but yields are higher for the reduction of molybdenum(V) chloride in the presence of the diene.
https://en.wikipedia.org/wiki/Daphnia%20pulex | Daphnia pulex is the most common species of water flea. It has a cosmopolitan distribution: the species is found throughout the Americas, Europe, and Australia. It is a model species, and was the first crustacean to have its genome sequenced.
Description
D. pulex is an arthropod whose body segments are difficult to distinguish. It can only be recognised by its appendages (only ever one pair per segment), and by studying its internal anatomy. The head is distinct and is made up of six segments, which are fused together even as an embryo. It bears the mouthparts, and two pairs of antennae, the second pair of which is enlarged into powerful organs used for swimming. No clear division is seen between the thorax and abdomen, which collectively bear five pairs of appendages. The shell surrounding the animal extends posteriorly into a spine. Like most other Daphnia species, D. pulex reproduces by cyclical parthenogenesis, alternating between sexual and asexual reproduction.
Ecology
D. pulex occurs in a wide range of aquatic habitats, although it is most closely associated with small, shaded pools. In oligotrophic lakes, D. pulex has little pigmentation, while it may become bright red in hypereutrophic waters, due to the production of haemoglobin.
Predation
Daphnia species are prey for a variety of both vertebrate and invertebrate predators. The role of predation on D. pulex population ecology is extensively studied, and has been shown to be a major axis of variation in shaping population dynamics and landscape-level distribution. In addition to the direct population ecological effects of predation, the process contributes to phenotypic evolution in contrasting ways; larger D. pulex individuals are more visible to vertebrate predators, but invertebrate predators are unable to handle larger ones. As a result, larger water fleas tend to be found with invertebrate predators, while smaller size is associated with vertebrate predators.
Similar to some other Daphnia species, |
https://en.wikipedia.org/wiki/MICrONS | The MICrONS program (Machine Intelligence from Cortical Networks) is a five-year project run by the United States government through the Intelligence Advanced Research Projects Activity (IARPA) with the goal of reverse engineering one cubic millimeter of a rodent's brain tissue (spanning many petabytes of volumetric data) and using insights from its study to improve machine learning and artificial intelligence by constructing a connectome. The program is part of the White House BRAIN Initiative.
Teams
The program has set up three independent teams, each of which will take a different approach towards the goal. The teams are led by David Cox of Harvard University; Tai Sing Lee of Carnegie Mellon University; and jointly by Andreas Tolias and Xaq Pitkow of the Baylor College of Medicine, Clay Reid of the Allen Institute for Brain Science, and Sebastian Seung of Princeton University.
The Cox team has aimed to build a three-dimensional map of the neural connections within the source tissue block using reconstructions from electron micrographs.
Technology and infrastructure for storing petabyte-scale volumetric data, including a cloud-based database, bossDB, were developed by the Johns Hopkins Applied Physics Lab.
Approach
The part of the brain chosen for the project is part of the visual cortex, chosen as a representative of a task – visual perception – that is easy for animals and human beings to perform, but has turned out to be extremely difficult to emulate with computers.
Cox's team is attempting to build a three-dimensional mapping of the actual neural connections, based on fine electron micrographs. Lee's team is taking a DNA barcoding approach, in an attempt to map the brain circuits by barcode-labelling each neuron and its cross-synapse barcode connections. Tolias's team is taking a data-driven approach, assuming the brain creates statistical expectations about the world it sees.
https://en.wikipedia.org/wiki/Oral%20and%20maxillofacial%20radiology | Oral and maxillofacial radiology, also known as dental and maxillofacial radiology, is the specialty of dentistry concerned with performance and interpretation of diagnostic imaging used for examining the craniofacial, dental and adjacent structures.
Oral and maxillofacial imaging includes cone beam computerized tomography, multislice computerized tomography, magnetic resonance imaging, positron emission tomography, ultrasound, panoramic radiography, cephalometric imaging, intra-oral imaging (e.g. bitewing, peri-apical and occlusal radiographs) in addition to special tests like sialographs. Other modalities, including optical coherence tomography are also under development for dental imaging.
Training
United States
Oral or dental maxillofacial radiology is one of nine dental specialties recognized by the American Dental Association.
To become an oral and maxillofacial radiologist one must first complete a dental degree and then apply for and complete a postgraduate course of training (usually 2–4 years in length). Training includes all aspects of radiation physics, radiation biology, radiation safety, radiologic technique, the pathophysiology of disease, and the interpretation of diagnostic images.
The Commission on Dental Accreditation accredited programs are a minimum of two years in length. Several accredited programs in oral maxillofacial radiology require the resident to complete a master's degree, whereas others allow the option of pursuing a concurrent PhD or master's degree. Following successful completion of this training, the oral and maxillofacial radiologist becomes board-eligible to challenge the American Board of Oral and Maxillofacial Radiology examination. Successful completion of board certification results in Diplomate status in the American Board of Oral and Maxillofacial Radiology.
Australia
Australian programs are accredited by the Australian Dental Council and are 3 years in length, culminating in either a master's degree (MDS or MPhil) |
https://en.wikipedia.org/wiki/Giovanni%20Battista%20Rizza | Giovanni Battista Rizza (7 February 1924 – 15 October 2018), officially known as Giambattista Rizza, was an Italian mathematician, working in the fields of complex analysis of several variables and in differential geometry: he is known for his contribution to hypercomplex analysis, notably for extending Cauchy's integral theorem and Cauchy's integral formula to complex functions of a hypercomplex variable, the theory of pluriharmonic functions and for the introduction of the now called Rizza manifolds.
Biography
Life and academic career
Born in Piazza Armerina, the son of Giovanni and Angioletta Bocciarelli, he graduated from the Università degli Studi di Genova, earning his laurea degree in 1949 under the direction of Enzo Martinelli. In 1956 he was in Rome at the INdAM, having been awarded a scholarship for his early research activities. A year later, in 1957, he was elected "discepolo ricercatore" in the same institute. During the same year, he gave some lectures on topics belonging to the field of several complex variables, later included in lecture notes. In Rome he also met Lucilla Bassotti, who eventually became his wife. In 1961, he won the competitive examination for the chair of "Geometria analitica con elementi di Geometria Proiettiva e Geometria Descrittiva con Disegno" of the University of Parma, scoring first out of the three finalists: a year later, in 1962, he became extraordinary professor, and then, in 1965, ordinary professor in the same chair. In 1979 he became ordinary professor of "Geometria superiore", holding that chair uninterruptedly until 1994: from 1994 up to his retirement in 1997, he was "professore fuori ruolo" in the same department of mathematics where he had worked for more than 35 years.
Apart from his research and teaching work, he was actively involved as a member of the editorial board of the "Rivista di Matematica della Università di Parma", and served also as the journal director from 1992 to 1997.
Rizza died in Parma on |
https://en.wikipedia.org/wiki/Hodge%E2%80%93Tate%20module | In mathematics, a Hodge–Tate module is an analogue of a Hodge structure over p-adic fields. Hodge–Tate structures were introduced and named using Tate's results on p-divisible groups.
Definition
Suppose that G is the absolute Galois group of a p-adic field K. Then G has a canonical cyclotomic character χ given by its action on the pth power roots of unity. Let C be the completion of the algebraic closure of K. Then a finite-dimensional vector space over C with a semi-linear action of the Galois group G is said to be of Hodge–Tate type if it is generated by the eigenvectors of integral powers of χ.
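Restating the character's defining action in symbols (a standard formulation added for clarity, not quoted from the excerpt): for every g in G and every p^n-th root of unity ζ,

```latex
g(\zeta) = \zeta^{\chi(g) \bmod p^n}, \qquad \chi \colon G \to \mathbf{Z}_p^{\times}.
```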
See also
p-adic Hodge theory
Mumford–Tate group |
https://en.wikipedia.org/wiki/Viola%20epipsila | Viola epipsila, the dwarf marsh violet, is a species of perennial forb in the genus Viola.
It is found in Alaska, Finland, Russia, Poland, and other countries in Europe. Since the 1980s, it has spread to the eastern United States. |
https://en.wikipedia.org/wiki/Genosome | A genosome (also known as a lipoplex) is a lipid and DNA complex that is used to deliver genes. It can be a form of non-viral gene therapy, as the complex does not require any components of a virus in order to transport genetic material. In the presence of calf thymus DNA (CT-DNA), genosomes can form through surface electrostatic interactions.
See also |
https://en.wikipedia.org/wiki/Sink-toilet | A sink-toilet combination unit is sometimes used by prisons and militaries. Such units typically have no exposed pipes by which someone could hang himself. They are sometimes made of stainless steel for added durability. Sink-toilets are used at the Guantanamo Bay detention camp. Sink-toilets are also used in some homes as an environmentally friendly, water-saving option that, at the user's option, reuses waste water from the sink in the discharge of the cistern. Some sink toilets employ a filtration system to remove particles, debris, bacteria, and odors before the water is stored for flushing. |