source | text |
|---|---|
https://en.wikipedia.org/wiki/Napoleon%27s%20problem | Napoleon's problem is a compass construction problem. In it, a circle and its center are given. The challenge is to divide the circle into four equal arcs using only a compass. Napoleon was known to be an amateur mathematician, but it is not known whether he created or solved the problem. Napoleon's friend, the Italian mathematician Lorenzo Mascheroni, introduced the limitation of using only a compass (no straight edge) into geometric constructions. The challenge above is actually easier than the real Napoleon's problem, which consists of finding the center of a given circle with compass alone. The following sections describe solutions to these problems and proofs that they work.
Georg Mohr's 1672 book "Euclides Danicus" anticipated Mascheroni's idea, though the book was only rediscovered in 1928.
Dividing a given circle into four equal arcs given its centre
Centred on any point X on circle C, draw an arc through O (the centre of C) which intersects C at points V and Y. Do the same centred on Y through O, intersecting C at X and Z. Note that the line segments OV, OX, OY, OZ, VX, XY, YZ have the same length, all distances being equal to the radius of the circle C.
Now draw an arc centred on V which goes through Y and an arc centred on Z which goes through X; call the point where these two arcs intersect T. Note that the distances VY and XZ are √3 times the radius of the circle C.
Put the compass radius equal to the distance OT (√2 times the radius of the circle C) and draw an arc centred on Z which intersects the circle C at U and W. UVWZ is a square and the arcs of C UV, VW, WZ, and ZU are each equal to a quarter of the circumference of C.
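As a quick numeric check of this construction, assume C is the unit circle centred at the origin O and place the arbitrary starting point X at angle 0; the √3 and √2 distances and the square UVWZ then follow. A minimal Python sketch under these assumptions:

```python
# Numeric sanity check of the construction above. Assumes a unit circle C
# centred at the origin O, with the arbitrary starting point X at angle 0;
# all other points follow from the construction.
import math

def pt(angle_deg):
    a = math.radians(angle_deg)
    return (math.cos(a), math.sin(a))

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

O = (0.0, 0.0)
X, V, Y, Z = pt(0), pt(-60), pt(60), pt(120)   # each neighbour one radius apart

print(round(dist(V, Y), 6), round(dist(X, Z), 6))   # both sqrt(3) ~ 1.732051

# T = intersection of the arcs centred on V (through Y) and on Z (through X);
# by symmetry it lies on the line through O perpendicular to VZ.
T = (math.sqrt(6) / 2, math.sqrt(2) / 2)
print(round(dist(V, T), 6), round(dist(Z, T), 6))   # both sqrt(3): T lies on both arcs
print(round(dist(O, T), 6))                         # sqrt(2) ~ 1.414214

# Compass set to OT = sqrt(2), centred on Z, meets C at U and W:
U, W = pt(30), pt(210)
print(round(dist(Z, U), 6), round(dist(Z, W), 6))   # both sqrt(2)
sides = [dist(U, V), dist(V, W), dist(W, Z), dist(Z, U)]
print([round(s, 6) for s in sides])                 # four equal sides: UVWZ is a square
```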
Finding the centre of a given circle
Let (C) be the circle, whose centre is to be found.
Let A be a point on (C).
A circle (C1) centered at A meets (C) at B and B'.
Two circles (C2) centered at B and B', with radius AB, cross again at point C.
A circle (C3) centered at C with radius AC meets (C1) at D and D'.
Two cir |
https://en.wikipedia.org/wiki/Restriction%20fragment | A restriction fragment is a DNA fragment resulting from the cutting of a DNA strand by a restriction enzyme (restriction endonucleases), a process called restriction. Each restriction enzyme is highly specific, recognising a particular short DNA sequence, or restriction site, and cutting both DNA strands at specific points within this site. Most restriction sites are palindromic, (the sequence of nucleotides is the same on both strands when read in the 5' to 3' direction of each strand), and are four to eight nucleotides long. Many cuts are made by one restriction enzyme because of the chance repetition of these sequences in a long DNA molecule, yielding a set of restriction fragments. A particular DNA molecule will always yield the same set of restriction fragments when exposed to the same restriction enzyme. Restriction fragments can be analyzed using techniques such as gel electrophoresis or used in recombinant DNA technology.
Applications
In recombinant DNA technology, specific restriction endonucleases are used that will isolate a particular gene and cleave the sugar phosphate backbones at different points (retaining symmetry), so that the double-stranded restriction fragments have single-stranded ends. These short extensions, called sticky ends, can form hydrogen bonded base pairs with complementary sticky ends on any other DNA cut with the same enzyme (such as a bacterial plasmid).
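As a toy illustration of restriction fragments and sticky ends, the Python sketch below cuts a made-up DNA string at the EcoRI recognition site GAATTC (EcoRI cuts between G and A, leaving AATT overhangs); the sequence and the single-strand treatment are simplifying assumptions:

```python
# Toy illustration: cutting a made-up DNA sequence at EcoRI sites. EcoRI
# recognises the palindrome GAATTC and cuts between G and A on each strand,
# leaving 5' AATT overhangs ("sticky ends").
SITE = "GAATTC"
CUT_OFFSET = 1          # EcoRI cuts after the first base of the site: G^AATTC

def digest(seq: str):
    """Return the restriction fragments of one strand, cut at every site."""
    fragments, start = [], 0
    pos = seq.find(SITE)
    while pos != -1:
        fragments.append(seq[start:pos + CUT_OFFSET])
        start = pos + CUT_OFFSET
        pos = seq.find(SITE, pos + 1)
    fragments.append(seq[start:])
    return fragments

dna = "ATCGGAATTCTTAGGCCGAATTCAATG"   # hypothetical sequence with two EcoRI sites
print(digest(dna))
# ['ATCGG', 'AATTCTTAGGCCG', 'AATTCAATG'] -- the same molecule always yields
# the same set of fragments, which is what a gel band pattern reflects.
```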
In agarose gel electrophoresis, the restriction fragments yield a band pattern characteristic of the original DNA molecule and restriction enzyme used; for example, the relatively small DNA molecules of viruses and plasmids can be identified simply by their restriction fragment patterns. If the nucleotide differences between two different alleles occur within the restriction site of a particular restriction enzyme, digesting DNA segments from individuals carrying the different alleles with that enzyme would produce different fragments that will each yield di |
https://en.wikipedia.org/wiki/Hex%20sign | Hex signs are a form of Pennsylvania Dutch folk art, related to fraktur, found in the Fancy Dutch tradition in Pennsylvania Dutch Country. Barn paintings, usually in the form of "stars in circles", began to appear on the landscape in the early 19th century and became widespread decades later when commercial ready-mixed paint became readily available. By the 1950s commercialized hex signs, aimed at the tourist market, became popular and these often include stars, compass roses, stylized birds known as distelfinks, hearts, tulips, or a tree of life. Two schools of thought exist on the meaning of hex signs. One school ascribes a talismanic nature to the signs; the other sees them as purely decorative. Both schools recognize that there are sometimes superstitions associated with certain hex sign themes and neither ascribes strong magical power to them. The Amish do not use hex signs.
Form and use
Painted barn stars in circular borders are a common sight on Pennsylvania Dutch barns in central and southeastern Pennsylvania, especially in Berks County, Lancaster County, and Lehigh County. However, the modern decoration of barns is a late development in Pennsylvania Dutch folk art. Prior to the 1830s, the cost of paint meant that most barns were unpainted. As paint became affordable, the Pennsylvania Dutch began to decorate their barns much like they decorated items in their homes. Barn decorating reached its peak in the early 20th century, at which time there were many artists who specialized in barn decorating. Drawing from a large repertoire of designs, barn painters combined many elements in their decorations. The geometric patterns of quilts can be seen in the patterns of many hex signs. Hearts and tulips seen on barns are commonly found on elaborately lettered and decorated birth, baptism, and marriage certificates known as fraktur.
Throughout the 20th century, hex signs were often produced as commodities for the tourist industry in Pennsylvania. These signs could b |
https://en.wikipedia.org/wiki/List%20of%20World%20Series%20broadcasters | The following is a list of national American television and radio networks and announcers that have broadcast World Series games over the years, as well as local flagship radio stations that have aired them since 1982.
Television
Television coverage of the World Series began in 1947. Since that time, nine different men have called eight or more different World Series telecasts as either a play-by-play announcer or color commentator. They are (through 2023) Joe Buck (24), Tim McCarver (24), Curt Gowdy (12), Mel Allen (11), Vin Scully (11), Joe Garagiola (10), Tony Kubek (8), Al Michaels (8), and John Smoltz (8).
2020s
Per the current broadcast agreement, the World Series will be televised by Fox through 2028.
2010s
Notes
2010 – For the second consecutive year, World Series games had earlier start times in hopes of attracting younger viewers. First pitch was just before 8 p.m. EDT for Games 1–2, and 5, while Game 3 started at 7 p.m. EDT. Game 4, however, started at 8:22 p.m. EDT to accommodate Fox's football coverage of the game between the Tampa Bay Buccaneers and Arizona Cardinals. Many viewers in the New York City and Philadelphia markets were unable to watch Games 1 and 2 because News Corporation, Fox's parent company, pulled WNYW and WTXF from cable provider Cablevision on October 16 because of a carriage dispute. The agreement was reached just before Game 3.
MLB International syndicated its own telecast of the series, with announcers Gary Thorne and Rick Sutcliffe, to various networks outside the U.S. ESPN America broadcast the series live in the UK and in Europe. Additionally, the American Forces Network and Canadian Forces Radio and Television carried the games to U.S. and Canadian service personnel stationed around the globe. Fox Deportes carried the Series in Spanish on American cable and satellite TV.
The overall national Nielsen rating for the five games was 8.4, tied with the 2008 World Series for the event's lowest-ever TV rating. Game 4 was beaten |
https://en.wikipedia.org/wiki/Inositol%20phosphate | Inositol phosphates are a group of mono- to hexaphosphorylated inositols. Each form of inositol phosphate is distinguished by the number and position of the phosphate group on the inositol ring.
inositol monophosphate (IP)
inositol bisphosphate (IP2)
inositol trisphosphate (IP3)
inositol tetraphosphate (IP4)
inositol pentakisphosphate (IP5)
inositol hexaphosphate (IP6) also known as phytic acid, or phytate (as a salt).
A series of phosphorylation and dephosphorylation reactions are carried out by at least 19 phosphoinositide kinases and 28 phosphoinositide phosphatase enzymes allowing for the inter-conversion between the inositol phosphate compounds based on cellular demand.
Inositol phosphates can either be soluble or insoluble depending on their chemical structure. Insoluble inositol phosphates are often referred to as phosphoinositides or PtdInsP and can be found associated with sub-cellular membranes or in nuclear subdomains. PIP2 (also known as PtdIns(4,5)P2) is hydrophobic in nature and thus remains attached to the plasma membrane in the cell. Soluble inositol phosphates such as IP3 (Ins(1,4,5)P3) are able to dissolve in the cytoplasm of cells, contributing to IP3's crucial role as a second messenger in signal transduction pathways.
Inositol phosphates play a crucial role in various signal transduction pathways responsible for cell growth and differentiation, apoptosis, DNA repair, RNA export, regeneration of ATP and more.
Functions
Inositol trisphosphate
The inositol-phospholipid signaling pathway is responsible for the generation of IP3 through the cleavage of phosphatidylinositol 4,5-bisphosphate (PIP2), found in the lipid bilayer of the plasma membrane, via G proteins (Gq) and phospholipase C-β. Soluble IP3 is able to rapidly diffuse into the cytosol and bind to inositol trisphosphate receptors (InsP3Rs) located in the endoplasmic reticulum. High concentrations of calcium will be released from the endoplasmic reticulum as a result |
https://en.wikipedia.org/wiki/Methionine%20synthase | Methionine synthase, also known as MS, MeSe, or MTR, is responsible for the regeneration of methionine from homocysteine. In humans it is encoded by the MTR gene (5-methyltetrahydrofolate-homocysteine methyltransferase). Methionine synthase forms part of the S-adenosylmethionine (SAMe) biosynthesis and regeneration cycle, and is the enzyme responsible for linking the cycle to one-carbon metabolism via the folate cycle. There are two primary forms of this enzyme, the Vitamin B12 (cobalamin)-dependent (MetH) and independent (MetE) forms, although minimal core methionine synthases that do not fit cleanly into either category have also been described in some anaerobic bacteria. The two dominant forms of the enzyme appear to be evolutionarily independent and rely on considerably different chemical mechanisms. Mammals and other higher eukaryotes express only the cobalamin-dependent form. In contrast, the distribution of the two forms in Archaeplastida (plants and algae) is more complex. Plants exclusively possess the cobalamin-independent form, while algae have either one of the two, depending on species. Many different microorganisms express both the cobalamin-dependent and cobalamin-independent forms.
Mechanism
Methionine synthase catalyzes the final step in the regeneration of methionine (Met) from homocysteine (Hcy). Both the cobalamin-dependent and cobalamin-independent forms of the enzyme carry out the same overall chemical reaction, the transfer of a methyl group from 5-methyltetrahydrofolate (N5-MeTHF) to homocysteine, yielding tetrahydrofolate (THF) and methionine. Methionine synthase is the only mammalian enzyme that metabolizes N5-MeTHF to regenerate the active cofactor THF. In the cobalamin-dependent (MetH) form of the enzyme, the reaction proceeds by two steps in a preferred ordered sequential mechanism. The physiological resting state of the enzyme is thought to contain the enzyme-bound(Cob) cofactor in the methylcobalamin form, with the cobalt atom in the form |
https://en.wikipedia.org/wiki/Carus%20Mathematical%20Monographs | The Carus Mathematical Monographs is a monograph series published by the Mathematical Association of America. Books in this series are intended to appeal to a wide range of readers in mathematics and science.
Scope and audience
While the books are intended to cover nontrivial material, the emphasis is on exposition and clear communication rather than novel results and a systematic Bourbaki-style presentation. The webpage for the series states:
The exposition of mathematical subjects that the monographs contain are set forth in a manner comprehensible not only to teachers and students specializing in mathematics, but also to scientific workers in other fields. More generally, the monographs are intended for the wide circle of thoughtful people familiar with basic graduate or advanced undergraduate mathematics encountered in the study of mathematics itself or in the context of related disciplines who wish to extend their knowledge without prolonged and critical study of the mathematical journals and treatises.
Many of the books in the series have become classics in the genre of general mathematical exposition.
Series listing
Calculus of Variations, by G. A. Bliss (out of print)
Analytic Functions of a Complex Variable, by D. R. Curtiss (out of print)
Mathematical Statistics, by H. L. Rietz (out of print)
Projective Geometry, by J. W. Young (out of print)
A History of Mathematics in America before 1900, by D. E. Smith and Jekuthiel Ginsburg (out of print)
Fourier Series and Orthogonal Polynomials, by Dunham Jackson (out of print)
Vectors and Matrices, by C. C. MacDuffee (out of print)
Rings and Ideals, by N. H. McCoy (out of print)
The Theory of Algebraic Numbers, second edition, by Harry Pollard and Harold G. Diamond
The Arithmetic Theory of Quadratic Forms, by B. W. Jones (out of print)
Irrational Numbers, by Ivan Niven
Statistical Independence in Probability, Analysis and Number Theory, by Mark Kac
A Primer of Real Functions, third edition, by Ralph P. Boas, Jr. |
https://en.wikipedia.org/wiki/Maladaptation | In evolution, a maladaptation () is a trait that is (or has become) more harmful than helpful, in contrast with an adaptation, which is more helpful than harmful. All organisms, from bacteria to humans, display maladaptive and adaptive traits. In animals (including humans), adaptive behaviors contrast with maladaptive ones. Like adaptation, maladaptation may be viewed as occurring over geological time, or within the lifetime of one individual or a group.
It can also signify an adaptation that, whilst reasonable at the time, has become less and less suitable and more of a problem or hindrance in its own right, as time goes on. This is because it is possible for an adaptation to be poorly selected or become more of a dysfunction than a positive adaptation, over time.
It can be noted that the concept of maladaptation, as initially discussed in a late 19th-century context, is based on a flawed view of evolutionary theory. It was believed that an inherent tendency for an organism's adaptations to degenerate would translate into maladaptations and soon become crippling if not "weeded out" (see also eugenics). In reality, the advantages conferred by any one adaptation are rarely decisive for survival on their own, but rather are balanced against other synergistic and antagonistic adaptations, which consequently cannot change without affecting others.
In other words, it is usually impossible to gain an advantageous adaptation without incurring "maladaptations". Consider a seemingly trivial example: it is apparently extremely hard for an animal to evolve the ability to breathe well in air and in water. Better adapting to one means being less able to do the other.
Examples
The term neuroplasticity is defined as "the brain's ability to reorganize itself by forming new neural connections throughout life". Neuroplasticity is seen as an adaptation that helps humans to adapt to new stimuli, especially through motor functions in musically inclined people, as well as sev |
https://en.wikipedia.org/wiki/Gloss%20%28optics%29 | Gloss is an optical property which indicates how well a surface reflects light in a specular (mirror-like) direction. It is one of the important parameters that are used to describe the visual appearance of an object. Other categories of visual appearance related to the perception of regular or diffuse reflection and transmission of light have been organized under the concept of cesia in an order system with three variables, including gloss among the involved aspects. The factors that affect gloss are the refractive index of the material, the angle of incident light and the surface topography.
Apparent gloss depends on the amount of specular reflection – light reflected from the surface at an angle equal and symmetric to that of the incoming light – in comparison with diffuse reflection – the amount of light scattered into other directions.
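For a smooth, non-absorbing surface, the specularly reflected fraction can be estimated from the refractive index and the angle of incidence using the Fresnel equations; the Python sketch below assumes unpolarised light arriving from air and an illustrative refractive index of about 1.5:

```python
# Rough illustration (assumptions: smooth, non-absorbing dielectric surface,
# unpolarised light, incidence from air with n1 = 1.0). The Fresnel equations
# give the specularly reflected fraction from refractive index and angle.
import math

def specular_reflectance(n2: float, theta_deg: float, n1: float = 1.0) -> float:
    ti = math.radians(theta_deg)
    tt = math.asin(n1 * math.sin(ti) / n2)      # Snell's law for the refracted angle
    rs = ((n1 * math.cos(ti) - n2 * math.cos(tt)) /
          (n1 * math.cos(ti) + n2 * math.cos(tt))) ** 2
    rp = ((n1 * math.cos(tt) - n2 * math.cos(ti)) /
          (n1 * math.cos(tt) + n2 * math.cos(ti))) ** 2
    return (rs + rp) / 2                        # average for unpolarised light

for angle in (0, 20, 45, 60, 85):
    # n ~ 1.5 is typical of many clear coatings and plastics (assumed value)
    print(angle, round(specular_reflectance(1.5, angle), 3))
# Reflectance rises steeply towards grazing incidence, which is why gloss is
# commonly measured at fixed geometries such as 20, 60 or 85 degrees.
```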
Theory
When light illuminates an object, it interacts with it in a number of ways:
Absorbed within it (largely responsible for colour)
Transmitted through it (dependent on the surface transparency and opacity)
Scattered from or within it (diffuse reflection, haze and transmission)
Specularly reflected from it (gloss)
Variations in surface texture directly influence the level of specular reflection. Objects with a smooth surface, i.e. highly polished or containing coatings with finely dispersed pigments, appear shiny to the eye due to a large amount of light being reflected in a specular direction, whilst rough surfaces reflect no specular light because the light is scattered in other directions, and therefore appear dull. The image-forming qualities of these surfaces are much lower, making any reflections appear blurred and distorted.
Substrate material type also influences the gloss of a surface. Non-metallic materials, i.e. plastics etc. produce a higher level of reflected light when illuminated at a greater illumination angle due to light being absorbed into the material or being diffusely scattered depending o |
https://en.wikipedia.org/wiki/Critical%20incident%20technique | The critical incident technique (or CIT) is a set of procedures used for collecting direct observations of human behavior that have critical significance and meet methodically defined criteria. These observations are then kept track of as incidents, which are then used to solve practical problems and develop broad psychological principles. A critical incident can be described as one that makes a contribution—either positively or negatively—to an activity or phenomenon. Critical incidents can be gathered in various ways, but typically respondents are asked to tell a story about an experience they have had.
CIT is a flexible method that usually relies on five major areas. The first is determining and reviewing the incident, then fact-finding, which involves collecting the details of the incident from the participants. When all of the facts are collected, the next step is to identify the issues. Afterwards a decision can be made on how to resolve the issues based on various possible solutions. The final and most important aspect is the evaluation, which will determine if the solution that was selected will solve the root cause of the situation and will cause no further problems.
History
The studies of Sir Francis Galton are said to have laid the foundation for the critical incident technique, but it is the work of Colonel John C. Flanagan that resulted in the present form of CIT.
Flanagan defined the critical incident technique as:
Flanagan's work was carried out as part of the Aviation Psychology Program of the United States Army Air Forces during World War II, where Flanagan conducted a series of studies focused on differentiating effective and ineffective work behaviors. Flanagan went on to found American Institutes for Research continuing to use the critical incident technique in a variety of research. Since then CIT has spread as a method to identify job requirements, develop recommendations for effective practices, and determine competencies for a vast |
https://en.wikipedia.org/wiki/Email%20hub | The term Mail Hub is used to denote an MTA (message transfer agent) or system of MTAs used to route email but not act as a mail server (having no end-user email store) since there is no MUA (mail user agent) access. Examples could include dedicated anti-SPAM appliances, anti-virus engines running on dedicated hardware, email gateways and so forth.
DNS Based Mail Hub
A first example of a Mail Hub consisting of a network of MTAs would be that of a typical small-to-medium size Internet service provider (ISP), or a FOSS corporate mail system. This solution is well suited to developing-nation ISPs and NGOs, as well as to any other low-budget but high-availability mail system. This is mostly because it does not use expensive network-level switches and hardware.
Simple DNS MX record based Mail Hub cluster with parallelism and front-end failover and load balancing is illustrated in the following diagram:
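The diagram itself is not reproduced in this excerpt. As a rough stand-in, the Python sketch below (with hypothetical hostnames) shows how equal-preference MX records provide parallelism across the front-end MTAs, with failover to a higher-preference backup:

```python
# Minimal sketch (hypothetical hostnames; not the missing diagram itself) of
# how MX-record preferences drive load balancing and failover: senders pick
# among the lowest-preference records at random, and fall back to higher
# preference values only when those hosts are unreachable.
import random

# (preference, hostname) pairs as they might appear in the DNS zone
mx_records = [
    (10, "mx1.example.org"),         # front-end MTAs: Postfix + SpamAssassin + ClamAV
    (10, "mx2.example.org"),
    (10, "mx3.example.org"),
    (20, "backup-mx.example.org"),   # back-end / backup relay
]

def pick_mx(records, reachable):
    """Choose a delivery host the way a sending MTA would."""
    for pref in sorted({p for p, _ in records}):
        candidates = [h for p, h in records if p == pref and h in reachable]
        if candidates:
            return random.choice(candidates)   # parallelism among equal preferences
    return None

up = {"mx1.example.org", "mx2.example.org", "mx3.example.org", "backup-mx.example.org"}
print(pick_mx(mx_records, up))                         # one of the three front ends
print(pick_mx(mx_records, {"backup-mx.example.org"}))  # failover to the backup
```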
The servers would all be Linux x86 servers with low-cost SATA or PATA hard disk storage. The front-end servers would most likely run Postfix with SpamAssassin and ClamAV. This RAIS server cluster would then overcome the problem of Perl-based SpamAssassin being too CPU- and memory-hungry for low-cost servers. The solution presented here is based entirely on GPL free and open-source software, but of course there are alternative configurations using other free or non-free software. |
https://en.wikipedia.org/wiki/Antibiotic%20sensitivity%20testing | Antibiotic sensitivity testing or antibiotic susceptibility testing is the measurement of the susceptibility of bacteria to antibiotics. It is used because bacteria may have resistance to some antibiotics. Sensitivity testing results can allow a clinician to change the choice of antibiotics from empiric therapy, which is when an antibiotic is selected based on clinical suspicion about the site of an infection and common causative bacteria, to directed therapy, in which the choice of antibiotic is based on knowledge of the organism and its sensitivities.
Sensitivity testing usually occurs in a medical laboratory, and uses culture methods that expose bacteria to antibiotics, or genetic methods that test to see if bacteria have genes that confer resistance. Culture methods often involve measuring the diameter of areas without bacterial growth, called zones of inhibition, around paper discs containing antibiotics on agar culture dishes that have been evenly inoculated with bacteria. The minimum inhibitory concentration, which is the lowest concentration of the antibiotic that stops the growth of bacteria, can be estimated from the size of the zone of inhibition.
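As an illustrative sketch only, zone diameters from a disc-diffusion test can be compared against published breakpoints to classify an isolate; the breakpoint values below are made-up placeholders, not clinical figures (real ones are issued per drug and organism by bodies such as CLSI or EUCAST):

```python
# Sketch of how a disc-diffusion result might be interpreted. Illustrative
# only: the zone-diameter breakpoints below are made-up placeholders, not
# clinical values.
BREAKPOINTS_MM = {"susceptible": 20, "intermediate": 15}   # hypothetical

def interpret(zone_diameter_mm: float) -> str:
    if zone_diameter_mm >= BREAKPOINTS_MM["susceptible"]:
        return "susceptible"        # large zone: growth stopped far from the disc
    if zone_diameter_mm >= BREAKPOINTS_MM["intermediate"]:
        return "intermediate"
    return "resistant"              # small or no zone: bacteria grow near the disc

for zone in (25, 17, 9):
    print(zone, "mm ->", interpret(zone))
```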
Antibiotic susceptibility testing has been needed since the discovery of the beta-lactam antibiotic penicillin. Initial methods were phenotypic, and involved culture or dilution. The Etest, an antibiotic impregnated strip, has been available since the 1980s, and genetic methods such as polymerase chain reaction (PCR) testing have been available since the early 2000s. Research is ongoing into improving current methods by making them faster or more accurate, as well as developing new methods for testing, such as microfluidics.
Uses
In clinical medicine, antibiotics are most frequently prescribed on the basis of a person's symptoms and medical guidelines. This method of antibiotic selection is called empiric therapy, and it is based on knowledge about what bacteria cause an infection, and to what antibiotics ba |
https://en.wikipedia.org/wiki/4104 | 4104 (four thousand one hundred [and] four) is the natural number following 4103 and preceding 4105. It is the second positive integer which can be expressed as the sum of two positive cubes in two different ways. The first such number, 1729, is called the "Ramanujan–Hardy number".
4104 is the sum of 4096 + 8 (that is, 16³ + 2³), and also the sum of 3375 + 729 (that is, 15³ + 9³).
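A brute-force search makes this easy to verify; the small Python sketch below confirms that 1729 and 4104 are the first two positive integers expressible as a sum of two positive cubes in two different ways:

```python
# Brute-force check that 1729 and 4104 are the first two positive integers
# expressible as the sum of two positive cubes in two different ways.
from collections import defaultdict

LIMIT = 5000
ways = defaultdict(list)
n = 1
while n ** 3 < LIMIT:
    m = n
    while n ** 3 + m ** 3 < LIMIT:
        ways[n ** 3 + m ** 3].append((n, m))
        m += 1
    n += 1

taxicab_like = sorted(total for total, pairs in ways.items() if len(pairs) >= 2)
print(taxicab_like[:2])        # [1729, 4104]
print(ways[4104])              # [(2, 16), (9, 15)]
```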
See also
Taxicab number
1729
External links
MathWorld: Hardy–Ramanujan Number
Integers |
https://en.wikipedia.org/wiki/Timeline%20of%20entomology%20%E2%80%93%20prior%20to%201800 | Entomology, the scientific study of insects and closely related terrestrial arthropods, has been impelled by the necessity of societies to protect themselves from insect-borne diseases, crop losses to pest insects, and insect-related discomfort, as well as by people's natural curiosity. Though many significant developments in the field happened only recently, in the 19th–20th centuries, the history of entomology stretches back to prehistory.
Prehistory
13,000 BC – The earliest evidence of man's interest in insects is from rock paintings. The insects depicted are bees. A carving of a cave cricket from the Cave of the Trois-Frères is similarly dated.
1800–1700 BC – Bees were significant in other early civilisations, for instance at Malia, Crete, where jewellery depicts two golden bees holding a drop of honey.
Egypt, Greek and Roman empires
1000 BC – A scarab beetle, held to be sacred by the Ancient Egyptians, is painted on the wall of Rameses IX's tomb. Bee-keeping was particularly well developed in Egypt and was discussed by the Roman writers Virgil, Gaius Julius Hyginus, Varro and Columella.
620–560 BC – Aesop's Fables relate stories of grasshoppers, ants and other insects.
– Aristotle writes History of Animals. In this work Aristotle includes insects in a class "Entoma", which also includes the arachnids and the myriapods but not the Crustacea, which formed another class, "Malacostraca", of the "Anaema" or "bloodless animals". (Insecta is the Latin translation of Aristotle's Greek entoma.) Parts of Animals, on zoological anatomy, followed. For nearly 2000 years the few writers who dealt with zoological subjects followed Aristotle's lead.
77–79 AD – Pliny the Elder publishes Naturalis Historia (Natural History).
847 AD – Rabanus Maurus authors the encyclopaedia De rerum naturis ('On the Nature of Things').
10th–15th centuries
Medieval period – the great chain of being, a hierarchical structure of all matter and life thought to be decreed by God, is developed throughout medieval Christianity. It has its origins in Plato |
https://en.wikipedia.org/wiki/Timeline%20of%20entomology%20%E2%80%93%201800%E2%80%931850 |
19th century
1800 – an arbitrary date but it was around this time that systematists began to specialise. There remained entomological polyhistors – those who continued to work on the insect fauna as a whole.
From the beginning of the century, however, the specialist began to predominate, harbingered by Johann Wilhelm Meigen's Nouvelle classification des mouches à deux ailes (New classification of two-winged flies, i.e. the Diptera), commenced in the first year of the century. Lepidopterists were amongst the first to follow Meigen's lead.
The specialists fell into three categories. First there were species describers, then specialists in species recognition and then specialists in gross taxonomy. There were however considerable degrees of overlap. Also then, as now, few could entirely resist the lure of groups other than their own, and this was especially true of those in small countries where they were the sole 'expert', and many famous specialists in one order also worked on others. Hence, for instance, many works which began as butterfly faunas were completed as general regional works, often collaboratively.
"Man is born not to solve the problems of the universe, but to find out where the problem begins, and then to restrain himself within the limits of the comprehensible"
Johann Wolfgang von Goethe, Conversations with Eckermann: Feb. 13, 1829
1800
Jean-Baptiste Pierre Antoine de Monet, Chevalier de Lamarck first expressed his views on evolution in lectures.
The total number of insect species described is estimated at not more than 20 000.
1801
Publication of Jean Baptiste Pierre Antoine de Monet de Lamarck, Système des animaux sans vertèbres ou tableau général des classes, des ordres et des genres de ces animaux. Paris: Deterville. In English: 'System of invertebrate animals, or general table of the classes, orders and genera of these animals'.
Johan Christian Fabricius Systema eleutheratorum commenced. In a series of successive works to 1806 Johan Christian Fabricius de |
https://en.wikipedia.org/wiki/American%20Entomological%20Society | The American Entomological Society was founded on March 1, 1859. It is the oldest continuously operating entomology society in the Western Hemisphere, and one of the oldest scientific societies in the United States. It is headquartered in Philadelphia, Pennsylvania. The society publishes Entomological News, Transactions of the American Entomological Society, and Memoirs of the American Entomological Society. It is not affiliated in any way with the similarly named Entomological Society of America.
See also
List of entomology journals |
https://en.wikipedia.org/wiki/Heckscher%E2%80%93Ohlin%20theorem | The Heckscher–Ohlin theorem is one of the four critical theorems of the Heckscher–Ohlin model, developed by Swedish economist Eli Heckscher and Bertil Ohlin (his student). In the two-factor case, it states: "A capital-abundant country will export the capital-intensive good, while the labor-abundant country will export the labor-intensive good."
The critical assumption of the Heckscher–Ohlin model is that the two countries are identical, except for the difference in resource endowments. This also implies that the aggregate preferences are the same. The relative abundance in capital will cause the capital-abundant country to produce the capital-intensive good cheaper than the labor-abundant country and vice versa.
Initially, when the countries are not trading:
the price of the capital-intensive good in the capital-abundant country will be bid down relative to the price of the good in the other country,
the price of the labor-intensive good in the labor-abundant country will be bid down relative to the price of the good in the other country.
Once trade is allowed, profit-seeking firms will move their products to the markets that have (temporarily) higher prices. As a result (see the numeric sketch after this list):
the capital-abundant country will export the capital-intensive good,
the labor-abundant country will export the labor-intensive good.
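A small, entirely hypothetical numeric example (all input requirements and factor prices invented for illustration) shows the mechanism: cheap capital in the capital-abundant country makes the capital-intensive good relatively cheaper there before trade, so that is the good it exports once trade opens.

```python
# Made-up numeric illustration (all numbers hypothetical) of the pre-trade
# situation: the capital-abundant country has relatively cheap capital, so
# the capital-intensive good is relatively cheaper there before trade.
# unit input requirements: (capital, labor) per unit of output
goods = {"cloth (labor-intensive)": (1, 4), "machinery (capital-intensive)": (4, 1)}
# autarky factor prices (rental rate of capital, wage), reflecting endowments
countries = {"capital-abundant": (1.0, 2.0), "labor-abundant": (2.0, 1.0)}

for country, (r, w) in countries.items():
    costs = {g: k * r + l * w for g, (k, l) in goods.items()}
    rel = costs["machinery (capital-intensive)"] / costs["cloth (labor-intensive)"]
    print(country, {g: round(c, 1) for g, c in costs.items()},
          "machinery/cloth price ratio:", round(rel, 2))
# The ratio is lower in the capital-abundant country (0.67 vs 1.5 here), so it
# exports machinery while the labor-abundant country exports cloth.
```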
The Leontief paradox, presented by Wassily Leontief in 1951, found that the U.S. (the most capital-abundant country in the world by any criterion) exported labor-intensive commodities and imported capital-intensive commodities, in apparent contradiction with the Heckscher–Ohlin theorem. However, if labor is separated into two distinct factors, skilled labor and unskilled labor, the Heckscher–Ohlin theorem is more accurate. The U.S. tends to export skilled-labor-intensive goods, and tends to import unskilled-labor-intensive goods.
Related theorems
Factor price equalization – The relative prices for two identical factors of production will eventually be equal |
https://en.wikipedia.org/wiki/Internet%20Authentication%20Service | Internet Authentication Service (IAS) is a component of Windows Server operating systems that provides centralized user authentication, authorization and accounting.
Overview
While Routing and Remote Access Service (RRAS) security is sufficient for small networks, larger companies often need a dedicated infrastructure for authentication. RADIUS is a standard for dedicated authentication servers.
Windows 2000 Server and Windows Server 2003 include the Internet Authentication Service (IAS), an implementation of a RADIUS server. IAS supports authentication for Windows-based clients, as well as for third-party clients that adhere to the RADIUS standard. IAS stores its authentication information in Active Directory, and can be managed with Remote Access Policies. IAS first appeared for Windows NT 4.0 in the Windows NT 4.0 Option Pack and in Microsoft Commercial Internet System (MCIS) 2.0 and 2.5.
While IAS requires the use of an additional server component, it provides a number of advantages over the standard methods of RRAS authentication. These advantages include centralized authentication for users, auditing and accounting features, scalability, and seamless integration with the existing features of RRAS.
In Windows Server 2008, Network Policy Server (NPS) replaces the Internet Authentication Service (IAS). NPS performs all of the functions of IAS in Windows Server 2003 for VPN and 802.1X-based wireless and wired connections and performs health evaluation and the granting of either unlimited or limited access for Network Access Protection clients.
Logging
By default, IAS logs to local files (%systemroot%\LogFiles\IAS\*), though it can be configured to log to SQL as well (or instead).
When logging to SQL, IAS appears to wrap the data in XML and then call the stored procedure report_event, passing the XML data as text; the stored procedure can then unwrap the XML and save the data as desired by the user.
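As an illustration only (the element names below are hypothetical, and the unwrapping is shown in Python rather than in the stored procedure's own T-SQL), an XML-wrapped record might be unpacked roughly like this:

```python
# Illustration only: the text above says IAS passes each SQL-logged record to
# the report_event stored procedure as XML. The element and attribute names
# below are hypothetical; this sketches the "unwrap the XML" step in Python.
import xml.etree.ElementTree as ET

record = """<Event>
  <Timestamp data_type="4">2004-05-01 12:00:00</Timestamp>
  <Computer-Name data_type="1">NAS01</Computer-Name>
  <Packet-Type data_type="0">1</Packet-Type>
</Event>"""

root = ET.fromstring(record)
row = {child.tag: child.text for child in root}
print(row)   # {'Timestamp': '2004-05-01 12:00:00', 'Computer-Name': 'NAS01', 'Packet-Type': '1'}
```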
History
The initial version of Internet Authentication Se |
https://en.wikipedia.org/wiki/Steak%20sauce | Steak sauce is a tangy sauce commonly served as a condiment for beef in the United States. Two of its major producers are British companies, and the sauce is similar to the "brown sauce" of British cuisine.
Overview
Steak sauce is normally brown in color, and often made from tomatoes, spices, vinegar, and raisins, and sometimes anchovies. The taste is either tart or sweet, with a peppery taste similar to Worcestershire sauce. Three major brands in the U.S. are the British Lea & Perrins, the United States Heinz 57, and the British Henderson's A1 Sauce once sold in the United States as "A1 Steak Sauce" before being renamed "A.1. Sauce". There are also numerous regional brands that feature a variety of flavor profiles. Several smaller companies and specialty producers manufacture steak sauce, as well, and most major grocery store chains offer private-label brands. These sauces typically mimic the slightly sweet flavor of A1 or Lea & Perrins.
Heinz 57 steak sauce, produced by H. J. Heinz Company, is unlike other steak sauces in that it has a distinctive dark orange-yellow color and tastes more like ketchup spiced with mustard seed. Heinz once advertised the product as tasting "like ketchup with a kick".
See also
Béarnaise sauce
Café de Paris sauce
Compound butter
Demi-glace
Henderson's Relish
List of sauces
Montreal steak seasoning
Peppercorn sauce |
https://en.wikipedia.org/wiki/Ecofascism | Ecofascism is a term used to describe individuals and groups which combine environmentalism with fascism.
Philosopher André Gorz characterized forms of totalitarianism based on an ecological orientation of politics. These individuals and groups synthesise radical far-right politics with environmentalism, and will typically argue that overpopulation is the primary threat to the environment and that the only solution is a complete halt to immigration or, at their most extreme, genocide against minority groups and ethnicities. Many far-right political parties have added green politics to their platforms.
Definition
In 2005, environmental historian Michael E. Zimmerman defined "ecofascism" as "a totalitarian government that requires individuals to sacrifice their interests to the well-being of the 'land', understood as the splendid web of life, or the organic whole of nature, including peoples and their states". Zimmerman argued that while no ecofascist government has existed so far, "important aspects of it can be found in German National Socialism, one of whose central slogans was "Blood and Soil". Other political agendas instead of environmental protection and prevention of climate change are nationalist approaches to climate such as national economic environmentalism, securitization of climate change, and ecobordering.
Ecofascists often believe there is a symbiotic relationship between a nation-group and its homeland. They often blame the global south for ecological problems, with their proposed solutions often entailing extreme population control measures based on racial categorisations, and advocating for the accelerated collapse of current society to be replaced by fascist societies. This latter belief is often accompanied with vocal support for terrorist actions.
Vice has defined ecofascism as an ideology "which blames the demise of the environment on overpopulation, immigration, and over-industrialization, problems that followers think could be partly reme |
https://en.wikipedia.org/wiki/Age%20adjustment | In epidemiology and demography, age adjustment, also called age standardization, is a technique used to allow statistical populations to be compared when the age profiles of the populations are quite different.
Example
For example, in 2004/5, two Australian health surveys investigated rates of long-term circulatory system health problems (e.g. heart disease) in the general Australian population, and specifically in the Indigenous Australian population. In each age category over age 24, Indigenous Australians had markedly higher rates of circulatory disease than the general population: 5% vs 2% in age group 25–34, 12% vs 4% in age group 35–44, 22% vs 14% in age group 45–54, and 42% vs 33% in age group 55+.
However, overall, these surveys estimated that 12% of all Indigenous Australians had long-term circulatory problems compared to 18% of the overall Australian population.
To understand this "apparent contradiction", note that the age-specific comparison above covers only age groups over 24, while the overall figures include all ages. The Indigenous population is dominated by the younger age groups, which have lower rates of circulatory disease; the lower overall Indigenous figure therefore masks the fact that, at each age, Indigenous Australians' risk is higher than that of non-Indigenous peers of the same age.
Weighting
To get a more informative comparison between the two populations, a weighting approach is used. Older groups in the Indigenous population are weighted more heavily (to match their prevalence in the "reference population", i.e. the overall Australian population) and younger groups less heavily. This gives an "age-adjusted" morbidity rate approximately 30% higher than that for the general population, indicating that Indigenous Australians have a higher risk of circulatory disease. (Note that some residual distortion remains due to the wide age bands being used.) This is directly analogous to the standardized mortality ratio for mortality statistics.
To adjust for age under this direct method of sta |
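A minimal Python sketch of this direct standardization, using the age-specific rates quoted above and placeholder reference-population weights (the real weights come from the overall Australian age distribution, which this excerpt does not give):

```python
# Direct age standardization with the age-specific rates quoted above.
# The reference-population weights are placeholders: they must sum to 1 and
# should come from the overall Australian age distribution.
age_groups = ["25-34", "35-44", "45-54", "55+"]
indigenous_rate = {"25-34": 0.05, "35-44": 0.12, "45-54": 0.22, "55+": 0.42}
general_rate    = {"25-34": 0.02, "35-44": 0.04, "45-54": 0.14, "55+": 0.33}
reference_weight = {"25-34": 0.25, "35-44": 0.25, "45-54": 0.25, "55+": 0.25}  # assumed

def adjusted(rates, weights):
    """Weighted average of age-specific rates, using reference weights."""
    return sum(rates[g] * weights[g] for g in age_groups)

ind = adjusted(indigenous_rate, reference_weight)
gen = adjusted(general_rate, reference_weight)
print(round(ind, 3), round(gen, 3), round(ind / gen, 2))
# With these placeholder weights the adjusted Indigenous rate is ~1.5x the
# general rate; the ~30% excess quoted above comes from the real weights.
```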
https://en.wikipedia.org/wiki/Lefschetz%20zeta%20function | In mathematics, the Lefschetz zeta-function is a tool used in topological periodic and fixed point theory, and dynamical systems. Given a continuous map $f\colon X\to X$, the zeta-function is defined as the formal series
$$\zeta_f(t) = \exp\left(\sum_{n=1}^{\infty} L(f^n)\,\frac{t^n}{n}\right),$$
where $L(f^n)$ is the Lefschetz number of the $n$-th iterate of $f$. This zeta-function is of note in topological periodic point theory because it is a single invariant containing information about all iterates of $f$.
Examples
The identity map on $X$ has Lefschetz zeta function
$$\zeta_{\mathrm{id}_X}(t) = \frac{1}{(1-t)^{\chi(X)}},$$
where $\chi(X)$ is the Euler characteristic of $X$, i.e., the Lefschetz number of the identity map.
For a less trivial example, let $X$ be the unit circle, and let $f$ be reflection in the x-axis, that is, $f(x, y) = (x, -y)$. Then $f$ has Lefschetz number 2, while $f^2$ is the identity map, which has Lefschetz number 0. Likewise, all odd iterates have Lefschetz number 2, while all even iterates have Lefschetz number 0. Therefore, the zeta function of $f$ is
$$\zeta_f(t) = \frac{1+t}{1-t}.$$
Formula
If f is a continuous map on a compact manifold X of dimension n (or more generally any compact polyhedron), the zeta function is given by the formula
$$\zeta_f(t) = \prod_{i=0}^{n} \det\big(1 - t\,f_*\big|_{H_i(X,\mathbf{Q})}\big)^{(-1)^{i+1}}.$$
Thus it is a rational function. The polynomials occurring in the numerator and denominator are essentially the characteristic polynomials of the map induced by f on the various homology spaces.
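For the reflection example above, the induced maps on homology are multiplication by +1 on $H_0$ and by −1 on $H_1$, and the product formula can be checked symbolically; a small sketch using SymPy:

```python
# Symbolic check of the product formula for the circle-reflection example:
# the reflection induces multiplication by +1 on H_0 and by -1 on H_1, and
# the formula should reproduce zeta(t) = (1 + t)/(1 - t).
import sympy as sp

t = sp.symbols('t')
induced = {0: sp.Matrix([[1]]),    # f_* on H_0
           1: sp.Matrix([[-1]])}   # f_* on H_1

zeta = sp.Integer(1)
for i, M in induced.items():
    det = (sp.eye(M.shape[0]) - t * M).det()
    zeta *= det ** ((-1) ** (i + 1))

print(sp.simplify(zeta))   # equals (1 + t)/(1 - t)
```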
Connections
This generating function is essentially an algebraic form of the Artin–Mazur zeta function, which gives geometric information about the fixed and periodic points of f.
See also
Lefschetz fixed-point theorem
Artin–Mazur zeta function
Ruelle zeta function |
https://en.wikipedia.org/wiki/Amateur%20rocketry | Amateur rocketry, sometimes known as experimental rocketry or amateur experimental rocketry, is a hobby in which participants experiment with fuels and make their own rocket motors, launching a wide variety of types and sizes of rockets. Amateur rocketeers have been responsible for significant research into hybrid rocket motors, and have built and flown a variety of solid, liquid, and hybrid propellant motors.
History
Amateur rocketry was an especially popular hobby in the late 1950s and early 1960s following the launch of Sputnik, as described in Homer Hickam's 1998 memoir Rocket Boys.
One of the first organizations set up in the US to engage in amateur rocketry was the Pacific Rocket Society established in California in the early 1950s. The group did their research on rockets from a launch site deep in the Mojave Desert.
In the summer of 1956, 17-year-old Jimmy Blackmon of Charlotte, North Carolina, built a 6-foot rocket in his basement. The rocket was designed to be powered by combined liquid nitrogen, gasoline, and liquid oxygen. On learning that Blackmon wanted to launch his rocket from a nearby farm, the Civil Aeronautics Administration notified the U.S. Army. Blackmon's rocket was examined at Redstone Arsenal and eventually grounded on the basis that some of the material he had used was too weak to control the flow and mixing of the fuel.
Interest in the rocketry hobby was spurred to a great extent by the publication of a Scientific American article in June 1957 that described the design, propellant formulations, and launching techniques utilized by typical amateur rocketry groups of the time (including the Reaction Research Society of California). The subsequent publication, in 1960, of a book entitled Rocket Manual for Amateurs by Bertrand R. Brinley provided even more detailed information regarding the hobby, and further contributed to its burgeoning popularity.
At this time, amateur rockets nearly always employed either black powder, zinc-sulfur (a |
https://en.wikipedia.org/wiki/Scalar%20boson | A scalar boson is a boson whose spin equals zero. A boson is a particle whose wave function is symmetric under particle exchange and therefore follows Bose–Einstein statistics. The spin–statistics theorem implies that all bosons have an integer-valued spin. Scalar bosons are the subset of bosons with zero-valued spin.
The name scalar boson arises from quantum field theory, which demands that fields of spin-zero particles transform like a scalar under Lorentz transformation (i.e. are Lorentz invariant).
A pseudoscalar boson is a scalar boson that has odd parity, whereas "regular" scalar bosons have even parity.
Examples
Scalar
The only fundamental scalar boson in the Standard Model of particle physics is the Higgs boson, the existence of which was confirmed on 14 March 2013 at the Large Hadron Collider by CMS and ATLAS. As a result of this confirmation, the 2013 Nobel Prize in physics was awarded to Peter Higgs and François Englert.
Various known composite particles are scalar bosons, e.g. the alpha particle and scalar mesons.
The φ4-theory or quartic interaction is a popular "toy model" quantum field theory that uses scalar bosonic fields, used in many introductory quantum textbooks to introduce basic concepts in field theory.
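For reference, the Lagrangian density usually taken as the starting point of φ4-theory for a single real scalar field is the following (sign and normalisation conventions vary between textbooks):

```latex
% Standard Lagrangian density for a single real scalar field with a quartic
% self-interaction (conventions vary between textbooks):
\mathcal{L} = \tfrac{1}{2}\,\partial_\mu \phi\,\partial^\mu \phi
            - \tfrac{1}{2} m^2 \phi^2
            - \frac{\lambda}{4!}\,\phi^4
```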
Pseudoscalar
There are no fundamental pseudoscalars in the Standard Model, but there are pseudoscalar mesons, like the pion.
See also
Scalar field theory
Klein–Gordon equation
Vector boson
Higgs boson |
https://en.wikipedia.org/wiki/Hemicontinuity | In mathematics, the notion of the continuity of functions is not immediately extensible to set-valued functions between two sets A and B.
The dual concepts of upper hemicontinuity and lower hemicontinuity facilitate such an extension.
A set-valued function that has both properties is said to be continuous in an analogy to the property of the same name for single-valued functions.
Roughly speaking, a function is upper hemicontinuous if when (1) a convergent sequence of points in the domain maps to a sequence of sets in the range which (2) contain another convergent sequence, then the image of the limiting point in the domain must contain the limit of the sequence in the range.
Lower hemicontinuity essentially reverses this, saying if a sequence in the domain converges, given a point in the range of the limit, then you can find a sub-sequence whose image contains a convergent sequence to the given point.
Upper hemicontinuity
A set-valued function $\Gamma : A \to B$ is said to be upper hemicontinuous at the point $a$ if, for any open $V \subset B$ with $\Gamma(a) \subset V$, there exists a neighbourhood $U$ of $a$ such that for all $x \in U$, $\Gamma(x)$ is a subset of $V$.
Sequential characterization
For a set-valued function $\Gamma : A \to B$ with closed values, if $\Gamma$ is upper hemicontinuous at $a \in A$ then for all sequences $(a_n)$ in $A$ and all sequences $(b_n)$ such that $b_n \in \Gamma(a_n)$:
if $a_n \to a$ and $b_n \to b$, then $b \in \Gamma(a)$.
If B is compact, the converse is also true.
Closed graph theorem
The graph of a set-valued function $\Gamma : A \to B$ is the set defined by
$$\operatorname{Gr}(\Gamma) = \{(a, b) \in A \times B : b \in \Gamma(a)\}.$$
If $\Gamma$ is an upper hemicontinuous set-valued function with closed domain (that is, the set of points $a \in A$ where $\Gamma(a)$ is not the empty set is closed) and closed values (i.e. $\Gamma(a)$ is closed for all $a \in A$), then $\operatorname{Gr}(\Gamma)$ is closed.
If $B$ is compact, then the converse is also true.
Lower hemicontinuity
A set-valued function $\Gamma : A \to B$ is said to be lower hemicontinuous at the point $a$
if for any open set $V$ intersecting $\Gamma(a)$ there exists a neighbourhood $U$ of $a$ such that $\Gamma(x)$ intersects $V$ for all $x \in U$. (Here $V$ intersects $S$ means nonempty intersection $V \cap S \neq \varnothing$.)
Sequential characterization
$\Gamma$ is lower hemicontinuous at $a$ if and only if for every sequence |
https://en.wikipedia.org/wiki/Biomedical%20text%20mining | Biomedical text mining (including biomedical natural language processing or BioNLP) refers to the methods and study of how text mining may be applied to texts and literature of the biomedical domain. As a field of research, biomedical text mining incorporates ideas from natural language processing, bioinformatics, medical informatics and computational linguistics. The strategies in this field have been applied to the biomedical literature available through services such as PubMed.
In recent years, the scientific literature has shifted to electronic publishing but the volume of information available can be overwhelming. This revolution of publishing has caused a high demand for text mining techniques. Text mining offers information retrieval (IR) and entity recognition (ER). IR allows the retrieval of relevant papers according to the topic of interest, e.g. through PubMed. ER is practiced when certain biological terms are recognized (e.g. proteins or genes) for further processing.
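As a toy sketch of the ER step only (the mini-lexicon and sentence are invented; real biomedical NER systems rely on trained models and curated vocabularies), dictionary matching might look like this:

```python
# Toy sketch of the entity-recognition (ER) step: match a tiny hand-made
# dictionary of gene names against a sentence. Real biomedical NER systems
# use trained models and large curated vocabularies instead.
import re

gene_dictionary = {"TP53", "BRCA1", "EGFR"}   # made-up mini-lexicon

def tag_genes(sentence: str):
    """Return tokens that match the gene dictionary (case-insensitive)."""
    tokens = re.findall(r"[A-Za-z0-9]+", sentence)
    return [tok for tok in tokens if tok.upper() in gene_dictionary]

text = "Mutations in TP53 and BRCA1 are frequently reported in the literature."
print(tag_genes(text))   # ['TP53', 'BRCA1']
```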
Considerations
Applying text mining approaches to biomedical text requires specific considerations common to the domain.
Availability of annotated text data
Large annotated corpora used in the development and training of general purpose text mining methods (e.g., sets of movie dialogue, product reviews, or Wikipedia article text) are not specific for biomedical language. While they may provide evidence of general text properties such as parts of speech, they rarely contain concepts of interest to biologists or clinicians. Development of new methods to identify features specific to biomedical documents therefore requires assembly of specialized corpora. Resources designed to aid in building new biomedical text mining methods have been developed through the Informatics for Integrating Biology and the Bedside (i2b2) challenges and biomedical informatics researchers. Text mining researchers frequently combine these corpora with the controlled vocabularies and ontologies available through |
https://en.wikipedia.org/wiki/BioCreative | BioCreAtIvE (A critical assessment of text mining methods in molecular biology) is a community-wide effort for evaluating information extraction and text mining developments in the biological domain.
It was preceded by the Knowledge Discovery and Data Mining (KDD) Challenge Cup for detection of gene mentions.
Community Challenges
First edition (2004-2005)
Three main tasks were posed at the first BioCreAtIvE challenge: the entity extraction task, the gene name normalization task, and the functional annotation of gene products task. The data sets produced by this contest serve as a Gold Standard training and test set to evaluate and train Bio-NER tools and annotation extraction tools.
Second edition (2006-2007)
The second BioCreAtIvE challenge (2006-2007) also had 3 tasks: detection of gene mentions, extraction of unique identifiers for genes, and extraction of information related to physical protein-protein interactions. Forty-four teams from 13 countries participated.
Third edition (2011-2012)
The third edition of BioCreative included for the first time the InterActive Task (IAT), designed to evaluate the practical usability of text mining tools in real-world biocuration tasks.
Fifth edition (2016)
BioCreative V had 5 different tracks, including an interactive task (IAT) for usability of text mining systems and a track using the BioC format for curating information for BioGRID.
See also
Biocuration |
https://en.wikipedia.org/wiki/Spanish%20National%20Bioinformatics%20Institute | The Spanish National Bioinformatics Institute (INB-ISCIII; Spanish: Instituto Nacional de Bioinformática) is an academic service institution tasked with the coordination, integration and development of bioinformatics resources in Spain. Created in 2003, the INB is—since 2015—the main node through which the Carlos III Health Institute is connected to ELIXIR, a European-wide infrastructure of life science data, coordinating the other Spanish institutions partaking in the initiative such as the Spanish National Cancer Research Centre (CNIO), the Centre for Genomic Regulation (CRG), the Universitat Pompeu Fabra, the Institute for Research in Biomedicine (IRB) and the Barcelona's National Supercomputing Center.
It consists of 10 distributed nodes, coordinated by a central node, encompassing the scopes of genomics, proteomics, functional genomics, structural biology, population genomics and genome diversity, health informatics, algorithm development and high-performance computing.
It is the Spanish participant in the common data platform promoted by the European Union to ensure a rapid and coordinated response to the health crisis caused by COVID-19. Their MareNostrum supercomputer has been used for testing the potential efficacy of compounds against SARS-CoV-2.
Alfonso Valencia, former president of the International Society for Computational Biology, is the director. |
https://en.wikipedia.org/wiki/David%20Sloan%20Wilson | David Sloan Wilson (born 1949) is an American evolutionary biologist and a Distinguished Professor Emeritus of Biological Sciences and Anthropology at Binghamton University. He is a son of author Sloan Wilson, and co-founder of the Evolution Institute, and co-founder of the spinoff nonprofit Prosocial World.
Early life and academic career
David Sloan Wilson is the son of the writer Sloan Wilson. He graduated with a B.A. with high honors in 1971 from the University of Rochester. He completed his Ph.D. in 1975 at Michigan State University. Wilson then worked as a Research Fellow in the Biological Laboratories at Harvard University from 1974 to 1975. He held a dual position as Research Associate in Zoology at the University of the Witwatersrand and the University of Washington from 1975 to 1976. After this he was a Senior Research Officer at the South African National Research Institute for the Mathematical Sciences from 1976 to 1977.
Wilson moved back to the United States and held an Assistant Professorship in the Division of Environmental Studies at the University of California, Davis, from 1977 to 1980. He served as an Assistant and then Associate Professor at the Kellogg Biological Station and Department of Zoology of Michigan State University from 1980 to 1988. Wilson was promoted to full Professor of Biological Sciences at the State University of New York, Binghamton, in 1988. He was given a joint appointment as Professor of Anthropology in 2001 and retired in 2019.
Wilson started the Evolutionary Studies (EvoS) program at Binghamton University to unify diverse disciplines under the theory of evolution. Students in the program take evolution-themed courses in a variety of disciplines including biology, anthropology, psychology, bioengineering, philosophy, religion and the psychology of religion. There is also a required "Current Topics in Evolutionary Studies" weekly seminar and discussion. Several other universities, including SUNY New Paltz have started a s |
https://en.wikipedia.org/wiki/Antisymmetry | In linguistics, antisymmetry is a syntactic theory presented in Richard S. Kayne's 1994 monograph The Antisymmetry of Syntax. It asserts that grammatical hierarchies in natural language follow a universal order, namely specifier-head-complement branching order. The theory builds on the foundation of the X-bar theory. Kayne hypothesizes that all phrases whose surface order is not specifier-head-complement have undergone syntactic movements that disrupt this underlying order. Others have posited specifier-complement-head as the basic word order.
Antisymmetry as a principle of word order is reliant on X-bar notions such as specifier and complement, and the existence of order-altering mechanisms such as movement. It is disputed by constituency structure theories (as opposed to dependency structure theories).
Asymmetric c-command
C-command is a relation between tree nodes, as defined by Tanya Reinhart. Kayne uses a simple definition of c-command based on the "first node up". However, the definition is complicated by his use of a "segment/category" distinction. Two directly connected nodes that have the same label are "segments" of a single "category". A category "excludes" all categories not "dominated" by all its segments. A "c-commands" B if every category that dominates A also dominates B, and A excludes B. The following tree illustrates these concepts:
AP1 and AP2 are both segments of a single category. AP does not c-command BP because it does not exclude BP. CP does not c-command BP because both segments of AP do not dominate BP (so it is not the case that every category that dominates CP dominates BP). BP c-commands CP and A. A c-commands C. The definitions above may perhaps be thought to allow BP to c-command AP, but a c-command relation is not usually assumed to hold between two such categories, and for the purposes of antisymmetry, the question of whether BP c-commands AP is in fact moot.
(The above is not an exhaustive list of c-command relations in the tre |
https://en.wikipedia.org/wiki/Rocket%20candy | Rocket Candy, or R-Candy, is a type of rocket propellant for model rockets made with a form of sugar as a fuel, and containing an oxidizer. The propellant can be divided into three groups of components: the fuel, the oxidizer, and the additive(s). In the past, sucrose was most commonly used as fuel. Modern formulations most commonly use sorbitol for its ease of production. The most common oxidizer is potassium nitrate (KNO3). Potassium nitrate is most commonly found in tree stump remover. Additives can be many different substances, and either act as catalysts or enhance the aesthetics of the liftoff or flight. A traditional sugar propellant formulation is typically prepared in a 65:35 (13:7) oxidizer to fuel ratio.
There are many different methods for preparation of a sugar-based rocket propellant. Dry compression does not require heating; it only requires grinding the components and then packing them into the motor. However, this method is not recommended for serious experimentation. Dry heating does not actually melt the KNO3, but it melts the sugar, and the KNO3 grains then become suspended in the sugar. Alternatively, the dissolving-and-heating method involves dissolving both components in water and then combining them by boiling the water off, creating a better mixture.
The specific impulse, total impulse, and thrust are generally lower for the same amount of fuel than other composite model rocket fuels, but rocket candy is significantly cheaper.
In the United States, rocket candy motors are legal to make, but illegal to transport without a low explosives users permit.
Since they count as amateur motors, they are typically launched at sanctioned Tripoli Rocketry Association research launches which require users to hold a Tripoli Rocketry Association high power level 2 certification. Users may also launch using these motors by applying for an FAA flight waiver.
Components
Rocket candy can be broken down into three major groups of components: fuels, oxidizers, and |
https://en.wikipedia.org/wiki/Precordium | In anatomy, the precordium or praecordium is the portion of the body over the heart and lower chest.
Defined anatomically, it is the area of the anterior chest wall over the heart. It is therefore usually on the left side, except in conditions like dextrocardia, where the individual's heart is on the right side. In such a case, the precordium is on the right side as well.
The precordium is naturally a cardiac area of dullness. During examination of the chest, the percussion note will therefore be dull. In fact, this area only gives a resonant percussion note in hyperinflation, emphysema or tension pneumothorax.
Precordial chest pain can be an indication of a variety of illnesses, including costochondritis and viral pericarditis.
See also
Precordial thump
Precordial examination
Commotio cordis
Hyperdynamic precordium
Precordial catch syndrome |
https://en.wikipedia.org/wiki/Physical%20computing | Physical computing involves interactive systems that can sense and respond to the world around them. While this definition is broad enough to encompass systems such as smart automotive traffic control systems or factory automation processes, it is not commonly used to describe them. In a broader sense, physical computing is a creative framework for understanding human beings' relationship to the digital world. In practical use, the term most often describes handmade art, design or DIY hobby projects that use sensors and microcontrollers to translate analog input to a software system, and/or control electro-mechanical devices such as motors, servos, lighting or other hardware.
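A minimal sketch of the sense-translate-actuate loop described above; the read_adc and set_servo_angle functions are hypothetical stand-ins for whatever microcontroller or GPIO library a given project uses.

```python
import time

def read_adc(channel):
    """Hypothetical analog read returning a raw value in 0..1023."""
    return 512

def set_servo_angle(degrees):
    """Hypothetical actuator call driving a servo to the given angle."""
    print(f"servo -> {degrees:.1f} degrees")

def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Map an analog reading onto an actuator range."""
    return out_lo + (value - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

# Basic physical-computing loop: sense an analog input, translate it, drive hardware.
for _ in range(3):
    raw = read_adc(0)
    set_servo_angle(scale(raw, 0, 1023, 0.0, 180.0))
    time.sleep(0.1)
```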
Physical computing intersects the range of activities often referred to in academia and industry as electrical engineering, mechatronics, robotics, computer science, and especially embedded development.
Examples
Physical computing is used in a wide variety of domains and applications.
Education
The advantages of physicality and playfulness in education have been reflected in diverse informal learning environments. The Exploratorium, a pioneer in inquiry-based learning, developed some of the earliest interactive exhibits involving computers, and continues to include more and more examples of physical computing and tangible interfaces as the associated technologies progress.
Art
In the art world, projects that implement physical computing include the work of Scott Snibbe, Daniel Rozin, Rafael Lozano-Hemmer, Jonah Brucker-Cohen, and Camille Utterback.
Product design
Physical computing practices also exist in the product and interaction design sphere, where hand-built embedded systems are sometimes used to rapidly prototype new digital product concepts in a cost-efficient way. Firms such as IDEO and Teague are known to approach product design in this way.
Commercial applications
Commercial implementations range from consumer devices such as the Sony Eyetoy or games such as Dance Dance Revolution |
https://en.wikipedia.org/wiki/Johnson%20graph | Johnson graphs are a special class of undirected graphs defined from systems of sets. The vertices of the Johnson graph J(n, k) are the k-element subsets of an n-element set; two vertices are adjacent when the intersection of the two vertices (subsets) contains exactly k − 1 elements. Both Johnson graphs and the closely related Johnson scheme are named after Selmer M. Johnson.
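A small sketch (not from the article) that builds J(n, k) directly from this definition using only the Python standard library; it can be used to check the counts in the special cases below.

```python
from itertools import combinations

def johnson_graph(n, k):
    """Return (vertices, edges) of the Johnson graph J(n, k): vertices are the
    k-element subsets of {0, ..., n-1}; two subsets are adjacent when their
    intersection has exactly k - 1 elements."""
    vertices = [frozenset(c) for c in combinations(range(n), k)]
    edges = [(u, v) for u, v in combinations(vertices, 2) if len(u & v) == k - 1]
    return vertices, edges

vertices, edges = johnson_graph(5, 2)
print(len(vertices), len(edges))  # 10 vertices, 30 edges: the complement of the Petersen graph
```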
Special cases
J(n, 1) is the complete graph K_n.
J(4, 2) is the octahedral graph.
J(5, 2) is the complement of the Petersen graph, hence the line graph of K_5. More generally, for all n, the Johnson graph J(n, 2) is the complement of the Kneser graph K(n, 2).
Graph-theoretic properties
J(n, k) is isomorphic to J(n, n − k).
For all 0 ≤ j ≤ k, any pair of vertices at distance j share k − j elements in common.
J(n, k) is Hamilton-connected, meaning that every pair of vertices forms the endpoints of a Hamiltonian path in the graph. In particular this means that it has a Hamiltonian cycle.
It is also known that the Johnson graph J(n, k) is k(n − k)-vertex-connected.
J(n, k) forms the graph of vertices and edges of an (n − 1)-dimensional polytope, called a hypersimplex.
The clique number of J(n, k) is given by an expression in terms of its least and greatest eigenvalues: ω(J(n, k)) = 1 − λ_max/λ_min.
The chromatic number of J(n, k) is at most n.
Each Johnson graph is locally grid, meaning that the induced subgraph of the neighbors of any vertex is a rook's graph. More precisely, in the Johnson graph J(n, k), each neighborhood is a k × (n − k) rook's graph.
Automorphism group
There is a distance-transitive subgroup of the automorphism group of J(n, k) isomorphic to the symmetric group S_n. In fact, the full automorphism group is isomorphic to S_n, except that when n = 2k, it is isomorphic to S_n × S_2.
Intersection array
As a consequence of being distance-transitive, J(n, k) is also distance-regular. Letting d denote its diameter, the intersection array of J(n, k) is given by
{b_0, b_1, …, b_{d−1}; c_1, c_2, …, c_d},
where:
b_j = (k − j)(n − k − j) and c_j = j², with d = min(k, n − k).
It turns out that unless J(n, k) is J(8, 2), its intersection array is not shared with any other distinct distance-regular graph; the intersection array of J(8, 2) is shared with three other distance-regular graphs that are not Johnson graphs.
Eigenvalues and eigenvectors
The characteristic polynomial of J(n, k) is given by
∏_{j=0}^{d} (x − ((k − j)(n − k − j) − j))^{m_j},
where
d = min(k, n − k) and m_j = C(n, j) − C(n, j − 1) (with C(n, −1) = 0), so that the eigenvalue (k − j)(n − k − j) − j occurs with multiplicity m_j.
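As a quick numerical check (not part of the article), the spectrum of a small case can be computed with numpy and compared against the eigenvalues (k − j)(n − k − j) − j implied by the characteristic polynomial above.

```python
import numpy as np
from itertools import combinations

n, k = 5, 2
verts = [frozenset(c) for c in combinations(range(n), k)]
A = np.array([[1 if u != w and len(u & w) == k - 1 else 0 for w in verts]
              for u in verts])

predicted = sorted({(k - j) * (n - k - j) - j for j in range(min(k, n - k) + 1)})
computed = sorted({int(x) for x in np.round(np.linalg.eigvalsh(A))})
print(predicted, computed)  # both [-2, 1, 6] for J(5, 2)
```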
The eigenvectors of J(n, k) have an ex |
https://en.wikipedia.org/wiki/Gauge%20covariant%20derivative | In physics, the gauge covariant derivative is a means of expressing how fields vary from place to place, in a way that respects how the coordinate systems used to describe a physical phenomenon can themselves change from place to place. The gauge covariant derivative is used in many areas of physics, including quantum field theory and fluid dynamics, and, in a very special way, general relativity.
If a physical theory is independent of the choice of local frames, the group of local frame changes, the gauge transformations, acts on the fields in the theory while leaving unchanged the physical content of the theory. Ordinary differentiation of field components is not invariant under such gauge transformations, because they depend on the local frame. However, when gauge transformations act on fields and the gauge covariant derivative simultaneously, they preserve properties of theories that do not depend on frame choice and hence are valid descriptions of physics. Like the covariant derivative used in general relativity (which is a special case), the gauge covariant derivative is an expression for a connection in local coordinates after choosing a frame for the fields involved, often in the form of index notation.
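As a concrete illustration (a standard textbook example, not a claim about this article's later derivation), the U(1) case for a field of charge q in one common sign convention reads:

```latex
% U(1) example: the gauge covariant derivative of a charged field \psi
D_\mu \psi = \left(\partial_\mu - i q A_\mu\right)\psi ,
\qquad
\psi \to e^{i q \alpha(x)}\psi ,
\qquad
A_\mu \to A_\mu + \partial_\mu \alpha(x) .
% Under these transformations D_\mu\psi \to e^{i q \alpha(x)} D_\mu\psi,
% i.e. it transforms like \psi itself, whereas \partial_\mu\psi picks up
% an extra i q (\partial_\mu\alpha)\psi term and is therefore not covariant.
```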
Overview
There are many ways to understand the gauge covariant derivative. The approach taken in this article is based on the historically traditional notation used in many physics textbooks. Another approach is to understand the gauge covariant derivative as a kind of connection, and more specifically, an affine connection. The affine connection is interesting because it does not require any concept of a metric tensor to be defined; the curvature of an affine connection can be understood as the field strength of the gauge potential. When a metric is available, then one can go in a different direction, and define a connection on a frame bundle. This path leads directly to general relativity; however, it requires a metric, which particle physics gauge theories |
https://en.wikipedia.org/wiki/Sutton%27s%20law | Sutton's law states that when diagnosing, one should first consider the obvious. It suggests that one should first conduct those tests which could confirm (or rule out) the most likely diagnosis. It is taught in medical schools to suggest to medical students that they might best order tests in that sequence which is most likely to result in a quick diagnosis, hence treatment, while minimizing unnecessary costs. It is also applied in pharmacology: when choosing a drug to treat a specific disease, the drug should be able to reach the site of the disease. It is applicable to any process of diagnosis, e.g. debugging computer programs. Computer-aided diagnosis provides a statistical and quantitative approach.
A more thorough analysis will consider the false positive rate of the test and the possibility that a less likely diagnosis might have more serious consequences. A competing principle is the idea of performing simple tests before more complex and expensive tests, moving from bedside tests to blood results and simple imaging such as ultrasound, then to more complex imaging such as MRI, and then to specialty imaging. The law can also be applied in prioritizing tests when resources are limited, so that a test for a treatable condition is performed before a test for an equally probable but less treatable condition.
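As a toy illustration (not from the article) of the kind of more thorough analysis just mentioned, the probability of the leading diagnosis can be updated from a test's sensitivity and specificity with Bayes' rule; all numbers below are hypothetical.

```python
def posterior(prior, sensitivity, specificity, positive=True):
    """Bayes' rule for a single diagnostic test result."""
    if positive:
        numerator = sensitivity * prior
        denominator = numerator + (1 - specificity) * (1 - prior)
    else:
        numerator = (1 - sensitivity) * prior
        denominator = numerator + specificity * (1 - prior)
    return numerator / denominator

# Hypothetical numbers: the most likely diagnosis (prior 0.60) is tested first.
print(round(posterior(0.60, 0.90, 0.85, positive=True), 2))   # 0.90 after a positive result
print(round(posterior(0.60, 0.90, 0.85, positive=False), 2))  # 0.15 after a negative result
```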
The law is named after the bank robber Willie Sutton, who reputedly replied to a reporter's inquiry as to why he robbed banks by saying "because that's where the money is." In Sutton's 1976 book Where the Money Was, Sutton denies having said this, but added that "If anybody had asked me, I'd have probably said it. That's what almost anybody would say... it couldn't be more obvious."
A similar idea is contained in the physician's adage, "When you hear hoofbeats, think horses, not zebras."
See also
Occam's razor |
https://en.wikipedia.org/wiki/Scalene%20muscles | The scalene muscles are a group of three muscles on each side of the neck, identified as the anterior, the middle, and the posterior. They are innervated by the third to the eighth cervical spinal nerves (C3-C8).
The anterior and middle scalene muscles lift the first rib and bend the neck to the side they are on. The posterior scalene lifts the second rib and tilts the neck to the same side.
The muscles are named from the Greek word skalēnos, meaning "uneven".
Structure
The scalene muscles are attached at one end to bony protrusions on vertebrae C2 to C7 and at the other end to the first and second ribs.
Anterior scalene
The anterior scalene muscle (scalenus anterior) lies deep at the side of the neck, behind the sternocleidomastoid muscle. It arises from the anterior tubercles of the transverse processes of the third, fourth, fifth, and sixth cervical vertebrae, and, descending almost vertically, is inserted by a narrow, flat tendon into the scalene tubercle on the inner border of the first rib, and into the ridge on the upper surface of the rib in front of the subclavian groove. It is supplied by the anterior rami of cervical nerves C5 and C6.
Middle scalene
The middle scalene (scalenus medius) is the largest and longest of the three scalene muscles. The middle scalene arises from the posterior tubercles of the transverse processes of the lower six cervical vertebrae. It descends along the side of the vertebral column to insert by a broad attachment into the upper surface of the first rib, posterior to the subclavian groove. The brachial plexus and the subclavian artery pass anterior to it.
Posterior scalene
The posterior scalene (scalenus posterior) is the smallest and most deeply seated of the scalene muscles. It arises, by two or three separate tendons, from the posterior tubercles of the transverse processes of the lower two or three cervical vertebrae, and is inserted by a thin tendon into the outer surface of the second rib, behind the attachment of the serratus anterior. It is supplied by cervical nerves C5, C6 and C7. It is occasio |
https://en.wikipedia.org/wiki/Parotid%20duct | The parotid duct, or Stensen duct, is a salivary duct. It is the route that saliva takes from the major salivary gland, the parotid gland, into the mouth. It opens into the mouth opposite the second upper molar tooth.
Structure
The parotid duct is formed when several interlobular ducts, the largest ducts inside the parotid gland, join. It emerges from the parotid gland. It runs forward along the lateral side of the masseter muscle for around 7 cm. In this course, the duct is surrounded by the buccal fat pad. It takes a steep turn at the border of the masseter and passes through the buccinator muscle, opening into the vestibule of the mouth, the region of the mouth between the cheek and the gums, at the parotid papilla, which lies across the second maxillary (upper) molar tooth. The exits of the parotid ducts can be felt as small bumps (papillae) on both sides of the mouth, usually positioned next to the maxillary second molar.
The buccinator acts as a valve that prevents air from being forced into the duct, which would otherwise cause pneumoparotitis.
Relations
The parotid duct lies close to the buccal branch of the facial nerve (VII). It is also close to the transverse facial artery.
Running along with the duct superiorly are the transverse facial artery and the upper buccal nerve; the lower buccal nerve runs inferiorly along the duct.
Clinical significance
Blockage, whether caused by salivary duct stones or external compression, may cause pain and swelling of the parotid gland (parotitis).
Koplik's spots which are pathognomonic of measles are found near the opening of the parotid duct.
The parotid duct may be cannulated by inserting a tube through the internal orifice in the mouth. Dye may be injected to allow for imaging of the parotid duct.
History
The parotid duct is named after Nicolas Steno (1638–1686), also known as Niels Stensen, a Danish anatomist (albeit best known as a geologist) credited with its detailed description in 1660. This is where the alternative name |
https://en.wikipedia.org/wiki/Erythema%20nodosum | Erythema nodosum (EN) is an inflammatory condition characterized by inflammation of the fat cells under the skin, resulting in tender red nodules or lumps that are usually seen on both shins. It can be caused by a variety of conditions, and typically resolves spontaneously within 30 days. It is common in young people aged 12–20 years.
Signs and symptoms
Pre-eruptive phase
The first signs of erythema nodosum are often flu-like symptoms such as a fever, cough, malaise, and aching joints. Some people also experience stiffness or swelling in the joints and weight loss.
Eruptive stage
Erythema nodosum is characterised by nodules (rounded lumps) below the skin surface, usually on the shins. These subcutaneous nodules can appear anywhere on the body, but the most common sites are the shins, arms, thighs, and torso. Each nodule typically disappears after around two weeks, though new ones may continue to form for up to six or eight weeks. A new nodule usually appears red and is hot and firm to the touch. The redness then starts to fade, and the nodule gradually becomes softer and smaller until it disappears, usually healing completely without scarring. Joint pain and inflammation sometimes continue for several weeks or months after the nodules appear.
Less common variants of erythema nodosum include:
Ulcerating forms, seen in Crohn's disease
Erythema contusiforme, when a subcutaneous hemorrhage (bleeding under the skin) occurs with an erythema nodosum lesion, causing the lesion to look like a contusion (bruise)
Erythema nodosum migrans (also known as subacute nodular migratory panniculitis), a rare form of chronic erythema nodosum characterized by asymmetrical nodules that are mildly tender and migrate over time.
Causes
EN is associated with a wide variety of conditions.
Idiopathic
About 30–50% of EN cases are idiopathic (of an unknown cause).
Infection
Infections associated with EN include:
Streptococcal infection which, in childr |
https://en.wikipedia.org/wiki/Logical%20access%20control | In computers, logical access controls are tools and protocols used for identification, authentication, authorization, and accountability in computer information systems. Logical access is often needed for remote access of hardware and is often contrasted with the term "physical access", which refers to interactions (such as a lock and key) with hardware in the physical environment, where equipment is stored and used.
Models
Logical access controls enforce access control measures for systems, programs, processes, and information. The controls can be embedded within operating systems, applications, add-on security packages, or database and telecommunication management systems.
The line between logical access and physical access can be blurred when physical access is controlled by software. For example, entry to a room may be controlled by a chip and PIN card and an electronic lock controlled by software. Only those in possession of an appropriate card, with an appropriate security level and knowledge of the PIN, are permitted entry to the room after swiping the card through the card reader and entering the correct PIN code.
Logical controls, also called logical access controls and technical controls, protect data and the systems, networks, and environments that protect them. In order to authenticate, authorize, or maintain accountability a variety of methodologies are used such as password protocols, devices coupled with protocols and software, encryption, firewalls, or other systems that can detect intruders and maintain security, reduce vulnerabilities and protect the data and systems from threats.
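A minimal sketch (not from the article) of one such control, a role-based authorization check with a simple audit trail for accountability; the users, roles and permissions are hypothetical.

```python
ROLE_PERMISSIONS = {
    "admin": {"read", "write", "delete"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}
USER_ROLES = {"alice": "admin", "bob": "viewer"}
audit_log = []

def is_authorized(user, action):
    """Authorization check: look up the user's role and its permissions,
    recording every decision for accountability."""
    role = USER_ROLES.get(user)
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    audit_log.append((user, action, allowed))
    return allowed

print(is_authorized("bob", "write"))     # False
print(is_authorized("alice", "delete"))  # True
```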
Businesses, organizations and other entities use a wide spectrum of logical access controls to protect hardware from unauthorized remote access. These can include sophisticated password programs, advanced biometric security features, or any other setups that effectively identify and screen users at any administrative level.
The particular logical access controls use |
https://en.wikipedia.org/wiki/BlackDog | The BlackDog is a pocket-sized, self-contained computer with a built-in biometric fingerprint reader, developed in 2005 by Realm Systems. It plugs into and is powered by the USB port of a host computer, using the host's peripheral devices for input and output.
It is a mobile personal server that allows a user to run Linux and use one's own applications and data on any computer with a USB port. The host machine's monitor, keyboard, mouse, and Internet connection are used by the BlackDog for the duration of the session. As the system is self-contained and isolated from the host, requiring no additional installation, it is possible to make use of untrusted computers while still working in a secure system. Various hardware iterations exist, and the original developer Realm Systems closed down in 2007, with the product being picked up by its successor, Inaura, Inc.
Hardware history
Original Black Dog & Project BlackDog Skills Contest
Identified as the BlackDog, the Project BlackDog, or Original BlackDog, the first hardware version was touted as "unlike any other mobile computing device, BlackDog contains its own processor, memory and storage, and is completely powered by the USB port of a host computer with no external power adapter required."
It was created in conjunction with Realm System's Project BlackDog Skills Contest (announced on Oct 27, 2005) which was supposed to raise interest, and create a developer community surrounding the product. The BlackDog was publicly available for purchase from the Project BlackDog website in September 2005 for those who wished to enter the contest or to experiment with the platform. Production ended in mid January 2006 when the contest closed.
On 7 February 2006, the winners of the contest were announced for the categories: Security (Michael Chenetz), Entertainment (Michael King), Productivity (Terry Bayne) and "Dogpile" (Paul Chandler). On Feb 15, 2006, during the Open Source Business Conference, San Francisco, Terry Bayne was announced the grand prize wi |
https://en.wikipedia.org/wiki/Accession%20number%20%28bioinformatics%29 | An accession number, in bioinformatics, is a unique identifier given to a DNA or protein sequence record to allow for tracking of different versions of that sequence record and the associated sequence over time in a single data repository. Because of its relative stability, accession numbers can be utilized as foreign keys for referring to a sequence object, but not necessarily to a unique sequence. All sequence information repositories implement the concept of "accession number" but might do so with subtle variations.
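A small sketch (not from the article) of splitting a versioned identifier of the widely used "accession.version" form, so that the stable accession can serve as a key while the version tracks updates to the underlying sequence; the example identifiers are illustrative.

```python
def split_accession(identifier):
    """Split an 'ACCESSION.VERSION' identifier; the version part is optional."""
    accession, _, version = identifier.partition(".")
    return accession, int(version) if version else None

print(split_accession("NM_000518.5"))  # ('NM_000518', 5): accession plus version
print(split_accession("LRG_1"))        # ('LRG_1', None): no version suffix
```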
LRG
Locus Reference Genomic (LRG) records have unique accession numbers starting with LRG_ followed by a number. They are recommended in the Human Genome Variation Society Nomenclature guidelines as stable genomic reference sequences to report sequence variants in LSDBs and the literature.
Notes and references
Bioinformatics |
https://en.wikipedia.org/wiki/RealMagic | RealMagic (or ReelMagic), from Sigma Designs, was one of the first fully compliant MPEG playback boards on the market in the mid-1990s.
RealMagic is a hardware-accelerated MPEG decoder that mixes its video stream into a computer video card's output through the video card's feature connector. It is also a SoundBlaster-compatible sound card.
Successors
Sigma Designs' RealMagic was superseded by:
Realmagic Hollywood+
Realmagic XCard
Realmagic NetStream2000 - 4000
Several software companies in 1993 promised to support the card, including Access, Interplay, and Sierra. Software written for RealMagic includes:
Under a Killing Moon - Access Software
Gabriel Knight
Escape from Cybercity
King's Quest VI - Sierra Online
Dragon's Lair
Police Quest IV - Sierra Online
Return to Zork - Infocom
Lord of the Rings - Interplay Entertainment
Note: the above titles were on a REELMAGIC demo CD that came with the hardware. The CD also contained corporate promotion videos, training videos, news footage of John F. Kennedy and the Apollo Moon mission. Also included in the bundle was a complete version of The Horde, published by Crystal Dynamics (1994)
Other software includes:
The Psychotron (an interactive mystery movie) - Merit Software |
https://en.wikipedia.org/wiki/Integrated%20circuit%20design | Integrated circuit design, or IC design, is a sub-field of electronics engineering, encompassing the particular logic and circuit design techniques required to design integrated circuits, or ICs. ICs consist of miniaturized electronic components built into an electrical network on a monolithic semiconductor substrate by photolithography.
IC design can be divided into the broad categories of digital and analog IC design. Digital IC design is to produce components such as microprocessors, FPGAs, memories (RAM, ROM, and flash) and digital ASICs. Digital design focuses on logical correctness, maximizing circuit density, and placing circuits so that clock and timing signals are routed efficiently. Analog IC design also has specializations in power IC design and RF IC design. Analog IC design is used in the design of op-amps, linear regulators, phase locked loops, oscillators and active filters. Analog design is more concerned with the physics of the semiconductor devices such as gain, matching, power dissipation, and resistance. Fidelity of analog signal amplification and filtering is usually critical, and as a result analog ICs use larger area active devices than digital designs and are usually less dense in circuitry.
Modern ICs are enormously complicated. An average desktop computer chip, as of 2015, has over 1 billion transistors. The rules for what can and cannot be manufactured are also extremely complex. Common IC processes of 2015 have more than 500 rules. Furthermore, since the manufacturing process itself is not completely predictable, designers must account for its statistical nature. The complexity of modern IC design, as well as market pressure to produce designs rapidly, has led to the extensive use of automated design tools in the IC design process. In short, the design of an IC using EDA software is the design, test, and verification of the instructions that the IC is to carry out.
Fundamentals
Integrated circuit design involves the creation of ele |
https://en.wikipedia.org/wiki/Towed%20array%20sonar | A towed array sonar is a system of hydrophones towed behind a submarine or a surface ship on a cable. Trailing the hydrophones behind the vessel, on a cable that can be kilometers long, keeps the array's sensors away from the ship's own noise sources, greatly improving its signal-to-noise ratio, and hence the effectiveness of detecting and tracking faint contacts, such as quiet, low noise-emitting submarine threats, or seismic signals.
A towed array offers superior resolution and range compared with hull-mounted sonar. It also covers the baffles, the blind spot of hull-mounted sonar. However, effective use of the system limits a vessel's speed and care must be taken to protect the cable from damage.
History
During World War I, a towed sonar array known as the "Electric Eel" was developed by Harvey Hayes, a U.S. Navy physicist. This system is believed to be the first towed sonar array design. It employed two cables, each with a dozen hydrophones attached. The project was discontinued after the war.
The U.S. Navy resumed development of towed array technology during the 1960s in response to the development of nuclear-powered submarines by the Soviet Union.
Current use of towed arrays
On surface ships, towed array cables are normally stored in drums, then spooled out behind the vessel when in use. U.S. Navy submarines typically store towed arrays inside an outboard tube, mounted along the vessel's hull, with an opening on the starboard tail. There is also equipment located in a ballast tank (free flood area) while the cabinet used to operate the system is inside the submarine.
Hydrophones in a towed array system are placed at specific distances along the cable, with the end elements far enough apart to gain a basic ability to triangulate on a sound source. Similarly, various elements are angled up or down, giving an ability to triangulate an estimated vertical depth of a target. Alternatively, three or more arrays are used to aid in depth detection.
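A minimal sketch (not from the article) of the time-difference-of-arrival idea behind such triangulation: for a distant source, the bearing relative to the array axis follows from the delay between two hydrophones a known distance apart. The spacing, delay and sound speed below are hypothetical.

```python
import math

def bearing_from_delay(delay_s, spacing_m, sound_speed=1500.0):
    """Estimate the angle (degrees, measured from the array axis) of a far-field
    source from the arrival-time difference between two hydrophones."""
    cos_theta = max(-1.0, min(1.0, sound_speed * delay_s / spacing_m))
    return math.degrees(math.acos(cos_theta))

# Hypothetical: elements 10 m apart and a measured inter-element delay of 3.3 ms.
print(round(bearing_from_delay(3.3e-3, 10.0), 1))  # about 60.3 degrees off the array axis
```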
On the first few hun |
https://en.wikipedia.org/wiki/Caf%C3%A9%20Central | Café Central is a traditional Viennese café located at Herrengasse 14 in the Innere Stadt first district of Vienna, Austria. The café occupies the ground floor of the former Bank and Stockmarket Building, today called the Palais Ferstel after its architect Heinrich von Ferstel.
History
The café was opened in 1876, and in the late 19th century it became a key meeting place of the Viennese intellectual scene. Key regulars included: Peter Altenberg, Theodor Herzl, Alfred Adler, Egon Friedell, Hugo von Hofmannsthal, Anton Kuh, Adolf Loos, Leo Perutz, Robert Musil, Stefan Zweig, Alfred Polgar, Adolf Hitler, and Leon Trotsky. In January 1913 alone, Josip Broz Tito, Sigmund Freud, and Stalin were patrons of the establishment. Tarot games of the Tarock family were played regularly here and Tapp Tarock was especially popular between the wars.
The café was often referred to as the "Chess school" (Die Schachhochschule) because of the presence of many chess players who used the first floor for their games.
Members of the Vienna Circle of logical positivists held many meetings at the café before and after World War I.
A well known story is that when Victor Adler objected to Count Berchtold, foreign minister of Austria-Hungary, that war would provoke revolution in Russia, even if not in the Habsburg monarchy, he replied: "And who will lead this revolution? Perhaps Mr. Bronstein (Leon Trotsky) sitting over there at the Cafe Central?"
The café closed at the end of World War II. In 1975, the Palais Ferstel was renovated and the Central was newly opened, although in a different part of the building. In 1986, it was fully renovated once again.
Today it is both a tourist spot and a popular café marked by its place in literary history.
Gallery |
https://en.wikipedia.org/wiki/Session-based%20testing | Session-based testing is a software test method that aims to combine accountability and exploratory testing to provide rapid defect discovery, creative on-the-fly test design, management control and metrics reporting. The method can also be used in conjunction with scenario testing. Session-based testing was developed in 2000 by Jonathan and James Marcus Bach.
Session-based testing can be used to introduce measurement and control to an immature test process and can form a foundation for significant improvements in productivity and error detection. Session-based testing can offer benefits when formal requirements are not present, incomplete, or changing rapidly.
Elements of session-based testing
Mission
The mission in Session Based Test Management identifies the purpose of the session, helping to focus the session while still allowing for exploration of the system under test. According to Jon Bach, one of the co-founders of the methodology, the mission explains "what we are testing or what problems we are looking for."
Charter
A charter is a goal or agenda for a test session. Charters are created by the test team prior to the start of testing, but they may be added or changed at any time. Often charters are created from a specification, test plan, or by examining results from previous sessions.
Session
An uninterrupted period of time spent testing, ideally lasting one to two hours. Each session is focused on a charter, but testers can also explore new opportunities or issues during this time. The tester creates and executes tests based on ideas, heuristics or whatever framework guides them, and records their progress. This might be through written notes, video capture tools or whatever other method the tester deems appropriate.
Session report
The session report records the test session. Usually this includes:
Charter.
Area tested.
Detailed notes on how testing was conducted.
A list of any bugs found.
A list of issues (open questions, product |
https://en.wikipedia.org/wiki/Cram%C3%A9r%E2%80%93Wold%20theorem | In mathematics, the Cramér–Wold theorem in measure theory states that a Borel probability measure on R^k is uniquely determined by the totality of its one-dimensional projections. It is used as a method for proving joint convergence results. The theorem is named after Harald Cramér and Herman Ole Andreas Wold.
Let
X_n = (X_{n1}, …, X_{nk})
and
X = (X_1, …, X_k)
be random vectors of dimension k. Then X_n converges in distribution to X if and only if:
t_1 X_{n1} + ⋯ + t_k X_{nk} converges in distribution to t_1 X_1 + ⋯ + t_k X_k
for each (t_1, …, t_k) ∈ R^k, that is, if every fixed linear combination of the coordinates of X_n converges in distribution to the corresponding linear combination of the coordinates of X.
If X_n takes values in R_+^k, then the statement is also true with t ∈ R_+^k.
Footnotes |
https://en.wikipedia.org/wiki/Exploratory%20testing | Exploratory testing is an approach to software testing that is concisely described as simultaneous learning, test design and test execution. Cem Kaner, who coined the term in 1984, defines exploratory testing as "a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the quality of his/her work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project."
While the software is being tested, the tester learns things that together with experience and creativity generates new good tests to run. Exploratory testing is often thought of as a black box testing technique. Instead, those who have studied it consider it a test approach that can be applied to any test technique, at any stage in the development process. The key is not the test technique nor the item being tested or reviewed; the key is the cognitive engagement of the tester, and the tester's responsibility for managing his or her time.
History
Exploratory testing has always been performed by skilled testers. In the early 1990s, ad hoc was too often synonymous with sloppy and careless work. As a result, a group of test methodologists (now calling themselves the Context-Driven School) began using the term "exploratory" seeking to emphasize the dominant thought process involved in unscripted testing, and to begin to develop the practice into a teachable discipline. This new terminology was first published by Cem Kaner in his book Testing Computer Software and expanded upon in Lessons Learned in Software Testing. Exploratory testing can be as disciplined as any other intellectual activity.
Description
Exploratory testing seeks to find out how the software actually works, and to ask questions about how it will handle difficult and easy cases. The quality of the testing is dependent on the tester's skill of |
https://en.wikipedia.org/wiki/Notarikon | Notarikon ( Noṭriqōn) is a Talmudic and Kabbalistic method of deriving a word, by using each of its initial (Hebrew: ) or final letters () to stand for another, to form a sentence or idea out of the words. Another variation uses the first and last letters, or the two middle letters of a word, in order to form another word. The word "notarikon" is borrowed from the Greek language (νοταρικόν), and was derived from the Latin word "notarius" meaning "shorthand writer."
Notarikon is one of the three ancient methods used by the Kabbalists (the other two are gematria and temurah) to rearrange words and sentences. These methods were used in order to derive the esoteric substratum and deeper spiritual meaning of the words in the Bible. Notarikon was also used in alchemy.
The term is mostly used in the context of Kabbalah. Common Hebrew abbreviations are described by ordinary linguistic terms.
Usage in the Talmud
Until the end of the Talmudic period, notarikon is understood in Judaism as a common method of Scripture interpretation by which the letters of individual words in the Bible text indicate the first letters of independent words.
Usage in Kabbalah
A common usage of notarikon in the practice of Kabbalah, is to form sacred names of God derived from religious or biblical verses. AGLA, an acronym for Atah Gibor Le-olam Adonai, translated, "You, O Lord, are mighty forever," is one of the most famous examples of notarikon. Dozens of examples are found in the Berit Menuchah, as is referenced in the following passage:
The Sefer Gematriot of Judah ben Samuel of Regensburg is another book where many examples of notarikon for use on talismans are given from Biblical verses.
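The basic initial-letter operation is simple to state mechanically; the toy sketch below (not from the article) works on Latin transliterations rather than Hebrew orthography, using the AGLA example above.

```python
def initial_letters(phrase):
    """Form an acronym from the initial letter of each word (notarikon-style)."""
    return "".join(word[0] for word in phrase.split()).upper()

print(initial_letters("Atah Gibor Le-olam Adonai"))  # AGLA
```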
See also
AGLA, notarikon for Atah Gibor Le-olam Adonai
Bible code, a purported set of secret messages encoded within the Torah.
Biblical and Talmudic units of measurement
Chol HaMoed, the intermediate days during Passover and Sukkot.
Chronology of the Bible
Counting of the Omer
Gematria, Jewi |
https://en.wikipedia.org/wiki/Absorption%20%28acoustics%29 | In acoustics, absorption refers to the process by which a material, structure, or object takes in sound energy when sound waves are encountered, as opposed to reflecting the energy. Part of the absorbed energy is transformed into heat and part is transmitted through the absorbing body. The energy transformed into heat is said to have been 'lost'.
When sound from a loudspeaker collides with the walls of a room, part of the sound's energy is reflected back into the room, part is transmitted through the walls, and part is absorbed into the walls. Just as the acoustic energy was transmitted through the air as pressure differentials (or deformations), the acoustic energy travels through the material which makes up the wall in the same manner. Deformation causes mechanical losses via conversion of part of the sound energy into heat, resulting in acoustic attenuation, mostly due to the wall's viscosity. Similar attenuation mechanisms apply for the air and any other medium through which sound travels.
The fraction of sound absorbed is governed by the acoustic impedances of both media and is a function of frequency and the incident angle. Size and shape can influence the sound wave's behavior if they interact with its wavelength, giving rise to wave phenomena such as standing waves and diffraction.
Acoustic absorption is of particular interest in soundproofing. Soundproofing aims to absorb as much sound energy (often in particular frequencies) as possible, converting it into heat or transmitting it away from a certain location.
In general, soft, pliable, or porous materials (like cloths) serve as good acoustic insulators - absorbing most sound, whereas dense, hard, impenetrable materials (such as metals) reflect most.
How well a room absorbs sound is quantified by the effective absorption area of the walls, also named total absorption area. This is calculated using its dimensions and the absorption coefficients of the walls. The total absorption is expressed in Sabins a |
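A small sketch (not from the article) of the calculation just described: the total absorption is the sum over surfaces of area times absorption coefficient (here in metric sabins, i.e. m²), and it also feeds Sabine's reverberation-time formula. The room dimensions and coefficients are hypothetical.

```python
# Hypothetical 5 m x 4 m x 3 m room with per-surface absorption coefficients.
surfaces = [
    ("floor", 5 * 4, 0.10),
    ("ceiling", 5 * 4, 0.60),
    ("walls", 2 * (5 * 3) + 2 * (4 * 3), 0.05),
]

total_absorption = sum(area * alpha for _, area, alpha in surfaces)  # metric sabins (m^2)
volume = 5 * 4 * 3                                                   # m^3
rt60 = 0.161 * volume / total_absorption                             # Sabine's formula, in seconds

print(round(total_absorption, 2), "m^2 sabins")  # 16.7
print(round(rt60, 2), "s")                       # about 0.58
```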
https://en.wikipedia.org/wiki/Light-weight%20process | In computer operating systems, a light-weight process (LWP) is a means of achieving multitasking. In the traditional meaning of the term, as used in Unix System V and Solaris, a LWP runs in user space on top of a single kernel thread and shares its address space and system resources with other LWPs within the same process. Multiple user-level threads, managed by a thread library, can be placed on top of one or many LWPs - allowing multitasking to be done at the user level, which can have some performance benefits.
In some operating systems, there is no separate LWP layer between kernel threads and user threads. This means that user threads are implemented directly on top of kernel threads. In those contexts, the term "light-weight process" typically refers to kernel threads and the term "threads" can refer to user threads. On Linux, user threads are implemented by allowing certain processes to share resources, which sometimes leads to these processes being called "light weight processes". Similarly, in SunOS version 4 onwards (prior to Solaris) "light weight process" referred to user threads.
Kernel threads
Kernel threads are handled entirely by the kernel. They need not be associated with a process; a kernel can create them whenever it needs to perform a particular task. Kernel threads cannot execute in user mode. LWPs (in systems where they are a separate layer) bind to kernel threads and provide a user-level context. This includes a link to the shared resources of the process to which the LWP belongs. When a LWP is suspended, it needs to store its user-level registers until it resumes, and the underlying kernel thread must also store its own kernel-level registers.
Performance
LWPs are slower and more expensive to create than user threads. Whenever an LWP is created a system call must first be made to create a corresponding kernel thread, causing a switch to kernel mode. These mode switches would typically involve copying parameters between kernel and user spa |
https://en.wikipedia.org/wiki/Novobiocin | Novobiocin, also known as albamycin or cathomycin, is an aminocoumarin antibiotic that is produced by the actinomycete Streptomyces niveus, which has recently been identified as a subjective synonym for S. spheroides, a member of the class Actinomycetia. Other aminocoumarin antibiotics include clorobiocin and coumermycin A1. Novobiocin was first reported in the mid-1950s (then called streptonivicin).
Clinical use
It is active against Staphylococcus epidermidis and may be used to differentiate it from the other coagulase-negative Staphylococcus saprophyticus, which is resistant to novobiocin, in culture.
Novobiocin was licensed for clinical use under the tradename Albamycin (Upjohn) in the 1960s. Its efficacy has been demonstrated in
preclinical and clinical trials. The oral form of the drug has since been withdrawn from the market due to lack of efficacy. A combination product of novobiocin and tetracycline, sold by Upjohn under brand names such as Panalba and Albamycin-T, was in particular the subject of intense FDA scrutiny before it was finally taken off the market. Novobiocin is an effective antistaphylococcal agent used in the treatment of MRSA.
Mechanism of action
The molecular basis of action of novobiocin, and other related drugs clorobiocin and coumermycin A1 has been examined. Aminocoumarins are very potent inhibitors of bacterial DNA gyrase and work by targeting the GyrB subunit of the enzyme involved in energy transduction. Novobiocin as well as the other aminocoumarin antibiotics act as competitive inhibitors of the ATPase reaction catalysed by GyrB. The potency of novobiocin is considerably higher than that of the fluoroquinolones that also target DNA gyrase, but at a different site on the enzyme. The GyrA subunit is involved in the DNA nicking and ligation activity.
Novobiocin has been shown to weakly inhibit the C-terminus of the eukaryotic Hsp90 protein (high micromolar IC50). Modification of the novobiocin scaffold has led to more selective Hsp |
https://en.wikipedia.org/wiki/Acoustically%20Navigated%20Geological%20Underwater%20Survey | The Acoustically Navigated Geological Underwater Survey (ANGUS) was a deep-towed still-camera sled operated by the Woods Hole Oceanographic Institute (WHOI) in the early 1970s. It was the first unmanned research vehicle made by WHOI. ANGUS was encased in a large steel frame designed to explore rugged volcanic terrain and able to withstand high impact collisions. It was fitted with three 35 mm color cameras with of film. Together, its three cameras were able to photograph a strip of the sea floor with a width up to . Each camera was equipped with strobe lights allowing them to photograph the ocean floor from above. On the bottom of the body was a downward-facing sonar system to monitor the sled's height above the ocean floor. It was capable of working in depths up to and could therefore reach roughly 98% of the sea floor. ANGUS could remain in the deep ocean for work sessions of 12 to 14 hours at a time, taking up to 16,000 photographs in one session. ANGUS was often used to scout locations of interest to later be explored and sampled by other vehicles such as Argo or Alvin.
ANGUS has been used to search for and photograph underground geysers and the creatures living near them, and it was equipped with a heat sensor to alert the tether-ship when it passed over one. It was used on expeditions such as Project FAMOUS (French-American Mid Ocean Undersea Study 1973–1974), the Discovery expedition with Argo to survey the wreckage of the Titanic. (1985), and again in the return mission to the Titanic (1986). ANGUS was the only ROV used on both dives to the Titanic.
On Project FAMOUS, ANGUS helped change scientists' views of the ocean floor. It showed them how different geological formations and chemical compositions of sediments can be, disproving previous assumptions of ocean floor uniformity. The project also provided new insight into the theory of seafloor spreading by observing and sampling the rock formations around ridges and the horizontal formation of layers par |
https://en.wikipedia.org/wiki/Cotton%20effect | The Cotton effect in physics, is the characteristic change in optical rotatory dispersion and/or circular dichroism in the vicinity of an absorption band of a substance.
In a wavelength region where the light is absorbed, the absolute magnitude of the optical rotation at first varies rapidly with wavelength, crosses zero at absorption maxima and then again varies rapidly with wavelength but in the opposite direction. This phenomenon was discovered in 1895 by the French physicist Aimé Cotton (1869–1951).
The Cotton effect is called positive if the optical rotation first increases as the wavelength decreases (as first observed by Cotton), and negative if the rotation first decreases.
A protein structure such as a beta sheet shows a negative Cotton effect.
See also
Cotton–Mouton effect |
https://en.wikipedia.org/wiki/Optical%20rotatory%20dispersion | Optical rotatory dispersion is the variation in the optical rotation of a substance with a change in the wavelength of light. Optical rotatory dispersion can be used to find the absolute configuration of metal complexes. For example, when plane-polarized white light from an overhead projector is passed through a cylinder of sucrose solution, a spiral rainbow is observed perpendicular to the cylinder.
Principles of operation
When plane-polarized white light passes through an optically active medium such as the sucrose solution, the extent to which the light is rotated depends on its wavelength. Short wavelengths are rotated more than longer wavelengths, per unit of distance. Because the wavelength of light determines its color, the variation of color with distance through the tube is observed. This dependence of specific rotation on wavelength is called optical rotatory dispersion.
In all materials the rotation varies with wavelength. The variation is caused by two quite different phenomena. The first accounts in most cases for the majority of the variation in rotation and should not strictly be termed rotatory dispersion. It depends on the fact that optical activity is actually circular birefringence. In other words, a substance which is optically active transmits right circularly polarized light with a different velocity from left circularly polarized light.
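One common way to express the circular-birefringence picture just described (a standard relation, not taken from this article; conventions vary):

```latex
% Rotation angle for path length d and wavelength \lambda, in terms of the
% refractive indices for left and right circularly polarized light:
\alpha \;=\; \frac{\pi d}{\lambda}\,\bigl(n_{\mathrm{L}} - n_{\mathrm{R}}\bigr)
% Because n_L and n_R themselves vary with wavelength, \alpha varies with
% wavelength too, which is the dispersion discussed in the surrounding text.
```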
In addition to this pseudodispersion which depends on the material thickness, there is a true rotatory dispersion which depends on the variation with wavelength of the indices of refraction for right and left circularly polarized light.
For wavelengths that are absorbed by the optically active sample, the two circularly polarized components will be absorbed to differing extents. This unequal absorption is known as circular dichroism. Circular dichroism causes incident linearly polarized light to become elliptically polarized. The two phenomena are closely related, just as are ordinary absorption and dispersion. If the entire optical rotatory dispersion spectrum is known, |
https://en.wikipedia.org/wiki/Ivan%20Ivanovich%20%28Vostok%20programme%29 | Ivan Ivanovich (Иван Иванович, the Russian equivalent of "John Doe") was the name given to a mannequin used in testing the Soviet Vostok spacecraft in preparation for its crewed missions.
Ivan Ivanovich was made to look as lifelike as possible, with eyes, eyebrows, eyelashes, and a mouth. He was dressed in a cosmonaut suit and strongly resembled a dead person; for this reason, a sign reading "МАКЕТ" (Russian for "dummy") was placed under his visor so that anyone who found him after his missions would not think he was a corpse or an alien.
First spaceflight
Ivan first flew into space on Korabl-Sputnik 4 on 9 March 1961, accompanied by a dog named Chernushka, various reptiles, and 80 mice and guinea pigs, some of which were placed inside his body. To test the spacecraft's communication systems, an automatic recording of a choir was placed in Ivan's body – this way, any radio stations who heard the recording would understand it was not a real person. Ivan was also used to test the landing system upon return to Earth, when he was successfully ejected from the capsule and parachuted to the ground.
His second space flight, Korabl-Sputnik 5, on 26 March 1961, was similar – he was again accompanied by a dog, Zvyozdochka, and other animals, he had a recording of a choir (and also a recipe for cabbage soup to confuse any listeners) inside him, and he safely returned to Earth. These flights paved the way for Vostok 1, the first crewed flight into space on 12 April 1961.
Other uses
In 1993, Ivan was auctioned at Sotheby's, with the winning bid coming from a foundation belonging to US businessman Ross Perot. He fetched $189,500. Since 1997, he has been on loan to the National Air and Space Museum, where he was on display, still in his spacesuit, until 2017 when he was moved back into the private collection of Ross Perot.
In 2006, the name Ivan Ivanovich was used as a nickname for SuitSat-1, a satellite made from a disused spacesuit, ejected from the International Space |
https://en.wikipedia.org/wiki/Timeline%20of%20entomology%20%E2%80%93%201850%E2%80%931900 | 1850
Edmond de Sélys Longchamps . 6:1–408.
Victor Ivanovitsch Motschulsky . I. Insecta Carabica. Russian beetles, Carabidae, Moscow: Gautier, published.
1851
Johann Fischer von Waldheim and Eduard Friedrich Eversmann publish vol. 5 of Johann Fischer von Waldheim's series, a seminal work on Russian Lepidoptera.
Louis Agassiz. On the classification of insects from embryological data. Washington, published.
Francis Walker. Insecta Britannica Diptera 3 vols. London 1851-1856. The characters and synoptical tables of the order by Alexander Henry Haliday made this a seminal work of Dipterology.
Hans Hermann Behr emigrates from Germany to California.
1852
Achille Guenée . Paris, 1852–1857, published.
1853
Leopold Heinrich Fischer publishes Orthoptera Europaea. Lipsiae (Leipzig): G. Engelmann, 1853. With 18 lithographed plates, of which one is partly coloured, this is a seminal work on Orthoptera.
Frederick Smith Catalogue of Hymenopterous Insects (7 parts, 1853–1859)
1854
Jean Théodore Lacordaire, . 9 vols published at Paris, 1854–1869 (completed by Félicien Chapuis, vols. 10–12, 1872–1876).
Carl Ludwig Koch , etc. Nurnburg commenced – completed 1857.
Ignaz Rudolph Schiner , 1–4 Verh. Zool. Bot. Ver. Wien. 4–8 263pp.(1854–1858) commenced.
Émile Blanchard (1819–1900) writes , a work on pest species. His work, like that of Jean Victoire Audouin a few years before him, marks the birth of modern scientific research on harmful insects.
Asa Fitch becomes the first professional entomologist of the New York State Agricultural Society.
1855
Camillo Rondani 1–5. Parma: Stochi 1146 pp. commenced (completed 1862)
Eduard Friedrich Eversmann first volume (completed 1859)
Henry Tibbats Stainton, Philipp Christoph Zeller, John William Douglas and Heinrich Frey commence The Natural History of the Tineina (13 volumes, 2000 pages). A monumental 13-volume monograph, it is one of the most significant lepidopterological works of the century.
1856
|
https://en.wikipedia.org/wiki/Layer%202%20MPLS%20VPN | A Layer 2 MPLS VPN is a term in computer networking. It is a method that Internet service providers use to segregate their network for their customers, to allow them to transmit data over an IP network. This is often sold as a service to businesses.
Layer 2 VPNs are a type of Virtual Private Network (VPN) that uses MPLS labels to transport data. The communication occurs between routers that are known as Provider Edge routers (PEs), as they sit on the edge of the provider's network, next to the customer's network.
Internet providers who have an existing Layer 2 network (such as ATM or Frame Relay) may choose to use these VPNs instead of the other common MPLS VPN, Layer 3. There is no one IETF standard for Layer 2 MPLS VPNs. Instead, two methodologies may be used. Both methods use a standard MPLS header to encapsulate data. However, they differ in their signaling protocols.
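As a small illustration (not from either draft specification), the 32-bit MPLS label-stack entry used for this encapsulation packs a 20-bit label, a 3-bit traffic-class field, a bottom-of-stack bit and an 8-bit TTL; the label value below is hypothetical.

```python
import struct

def mpls_label_entry(label, traffic_class=0, bottom_of_stack=True, ttl=64):
    """Pack one 32-bit MPLS label stack entry (RFC 3032 layout):
    20-bit label, 3-bit TC, 1-bit bottom-of-stack flag, 8-bit TTL."""
    word = ((label & 0xFFFFF) << 12) | ((traffic_class & 0x7) << 9) \
           | ((1 if bottom_of_stack else 0) << 8) | (ttl & 0xFF)
    return struct.pack("!I", word)

# A hypothetical VPN label placed in front of a customer Layer 2 frame:
print(mpls_label_entry(label=100001, ttl=255).hex())  # 186a11ff
```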
Types of Layer 2 MPLS VPNs
BGP-based
The BGP-based type is based on a draft specification by Kireeti Kompella, from Juniper Networks. It uses the Border Gateway Protocol (BGP) as the mechanism for PE routers to communicate with each other about their customer connections. Each router connects to a central cloud, using BGP. This means that when new customers are added (usually to new routers), the existing routers will communicate with each other, via BGP, and automatically add the new customers to the service.
LDP-based
The second type is based on a draft specification by Chandan Mishra from Cisco Systems. This method is also known as a Layer 2 circuit. It uses the Label Distribution Protocol (LDP) to communicate between PE routers. In this case, every LDP-speaking router will exchange FECs (forwarding equivalence classes) and establish LSPs with every other LDP-speaking router on the network (or just the other PE router, in the case when LDP is tunnelled over RSVP-TE), which differs from the BGP-based methodology. The LDP-based style of layer 2 VPN defines new TLVs and parameters for L |
https://en.wikipedia.org/wiki/Retene | Retene, methyl isopropyl phenanthrene or 1-methyl-7-isopropyl phenanthrene, C18H18, is a polycyclic aromatic hydrocarbon present in the coal tar fraction, boiling above 360 °C. It occurs naturally in the tars obtained by the distillation of resinous woods. It crystallizes in large plates, which melt at 98.5 °C and boil at 390 °C. It is readily soluble in warm ether and in hot glacial acetic acid. Sodium and boiling amyl alcohol reduce it to a tetrahydroretene, but if it is heated with phosphorus and hydriodic acid to 260 °C, a dodecahydride is formed. Chromic acid oxidizes it to retene quinone, phthalic acid and acetic acid. It forms a picrate that melts at 123-124 °C.
Retene is derived by degradation of specific diterpenoids biologically produced by conifer trees. The presence of traces of retene in the air is an indicator of forest fires; it is a major product of pyrolysis of conifer trees. It is also present in effluents from wood pulp and paper mills.
Retene, together with cadalene, simonellite and ip-iHMN, is a biomarker of vascular plants, which makes it useful for paleobotanic analysis of rock sediments. The ratio of retene/cadalene in sediments can reveal the ratio of the genus Pinaceae in the biosphere.
Health effects
A recent study has shown retene, which is a component of the Amazonian organic PM10, is cytotoxic to human lung cells. |
https://en.wikipedia.org/wiki/Ipfirewall | ipfirewall or ipfw is a FreeBSD IP stateful firewall, packet filter and traffic accounting facility. Its ruleset logic is similar to many other packet filters except IPFilter. ipfw is authored and maintained by FreeBSD volunteer staff members. Its syntax enables use of sophisticated filtering capabilities and thus enables users to satisfy advanced requirements. It can either be used as a loadable kernel module or incorporated into the kernel; use as a loadable kernel module where possible is highly recommended. ipfw was the built-in firewall of Mac OS X until Mac OS X 10.7 Lion in 2011 when it was replaced with the OpenBSD project's PF. Like FreeBSD, ipfw is open source. It is used in many FreeBSD-based firewall products, including m0n0wall and FreeNAS.
A port of an early version of ipfw was used from Linux 1.1 onwards as the first firewall implementation available for Linux, until it was replaced by ipchains.
A modern port of ipfw and the dummynet traffic shaper is available for Linux (including a prebuilt package for OpenWrt) and Microsoft Windows. wipfw is a Windows port of an old (2001) version of ipfw.
Alternative user interfaces for ipfw
See also
netfilter/iptables, a Linux-based descendant of ipchains
NPF, a NetBSD packet filter
PF, another widely deployed BSD firewall solution |
https://en.wikipedia.org/wiki/Hooking | In computer programming, the term hooking covers a range of techniques used to alter or augment the behaviour of an operating system, of applications, or of other software components by intercepting function calls or messages or events passed between software components. Code that handles such intercepted function calls, events or messages is called a hook.
Hook methods are of particular importance in the Template Method Pattern where common code in an abstract class can be augmented by custom code in a subclass. In this case each hook method is defined in the abstract class with an empty implementation which then allows a different implementation to be supplied in each concrete subclass.
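A minimal sketch (not from the article) of hook methods in the Template Method pattern: the base class fixes the overall flow and supplies an empty (identity) hook, and a subclass customizes behaviour by overriding only that hook; the class and method names are illustrative.

```python
class ReportGenerator:
    """Base class in the pattern: the common flow lives in generate()."""

    def generate(self):
        body = self.build_body()
        return self.decorate(body)  # hook invoked as part of the fixed flow

    def build_body(self):
        return "report body"

    def decorate(self, body):
        """Hook method: deliberately trivial so subclasses may override it."""
        return body

class HtmlReportGenerator(ReportGenerator):
    def decorate(self, body):  # custom behaviour supplied via the hook
        return f"<html><body>{body}</body></html>"

print(ReportGenerator().generate())      # report body
print(HtmlReportGenerator().generate())  # <html><body>report body</body></html>
```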
Hooking is used for many purposes, including debugging and extending functionality.
Examples might include intercepting keyboard or mouse event messages before they reach an application, or intercepting operating system calls in order to monitor behavior or modify the function of an application or other component. It is also widely used in benchmarking programs, for example frame rate measuring in 3D games, where the output and input is done through hooking.
Hooking can also be used by malicious code. For example, rootkits, pieces of software that try to make themselves invisible by faking the output of API calls that would otherwise reveal their existence, often use hooking techniques.
Methods
Typically hooks are inserted while software is already running, but hooking is a tactic that can also be employed prior to the application being started. Both these techniques are described in greater detail below.
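A minimal run-time example (not from the article) of the first tactic: an existing function is wrapped so that every call is intercepted and logged before the original runs, and the wrapper is installed by rebinding the name the rest of the program uses.

```python
import functools
import time

def hook(func):
    """Wrap a callable so each call is intercepted before the original executes."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"[hook] {func.__name__} called with {args} {kwargs}")
        return func(*args, **kwargs)
    return wrapper

# Install the hook: callers of time.sleep now go through the wrapper first.
time.sleep = hook(time.sleep)
time.sleep(0.01)  # prints "[hook] sleep called with (0.01,) {}" and then sleeps
```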
Source modification
Hooking can be achieved by modifying the source of the executable or library before an application is running, through techniques of reverse engineering. This is typically used to intercept function calls to either monitor or replace them entirely.
For example, by using a disassembler, the entry point of a function within a module can be found. It can t |
https://en.wikipedia.org/wiki/Canopy%20%28biology%29 | In biology, the canopy is the aboveground portion of a plant cropping or crop, formed by the collection of individual plant crowns. In forest ecology, canopy refers to the upper layer or habitat zone, formed by mature tree crowns and including other biological organisms (epiphytes, lianas, arboreal animals, etc.). The communities that inhabit the canopy layer are thought to be involved in maintaining forest diversity, resilience, and functioning. Shade trees normally have a dense canopy that blocks light from lower growing plants.
Observation
Early observations of canopies were made from the ground using binoculars or by examining fallen material. Researchers would sometimes erroneously rely on extrapolation by using more reachable samples taken from the understory. In some cases, they would use unconventional methods such as chairs suspended on vines or hot-air dirigibles, among others. Modern technology, including adapted mountaineering gear, has made canopy observation significantly easier and more accurate, allowed for longer and more collaborative work, and broadened the scope of canopy study.
Structure
Canopy structure is the organization or spatial arrangement (three-dimensional geometry) of a plant canopy. Leaf area index, leaf area per unit ground area, is a key measure used to understand and compare plant canopies. The canopy is taller than the understory layer. The canopy holds 90% of the animals in the rainforest. Canopies can cover vast distances and appear to be unbroken when observed from an airplane. However, despite overlapping tree branches, rainforest canopy trees rarely touch each other. Rather, they are usually separated by a few feet.
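As a worked example of leaf area index (the figures are illustrative): a plot with 12 m² of one-sided leaf area above 3 m² of ground has LAI = 12 / 3 = 4. The same check in Python:

leaf_area_m2 = 12.0    # total one-sided leaf area over the plot (illustrative value)
ground_area_m2 = 3.0   # ground area of the plot (illustrative value)
print(leaf_area_m2 / ground_area_m2)   # leaf area index = 4.0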
Dominant and co-dominant canopy trees form the uneven canopy layer. Canopy trees are able to photosynthesize relatively rapidly with abundant light, so the canopy supports the majority of primary productivity in forests. The canopy layer provides protection from strong winds and storms while also intercepting sunlig |
https://en.wikipedia.org/wiki/Active%20Directory%20Rights%20Management%20Services | Active Directory Rights Management Services (AD RMS, known as Rights Management Services or RMS before Windows Server 2008) is a server software for information rights management shipped with Windows Server. It uses encryption and a form of selective functionality denial for limiting access to documents such as corporate e-mails, Microsoft Word documents, and web pages, and the operations authorized users can perform on them. Companies can use this technology to encrypt information stored in such document formats, and through policies embedded in the documents, prevent the protected content from being decrypted except by specified people or groups, in certain environments, under certain conditions, and for certain periods of time. Specific operations like printing, copying, editing, forwarding, and deleting can be allowed or disallowed by content authors for individual pieces of content, and RMS administrators can deploy RMS templates that group these rights together into predefined rights that can be applied en masse.
RMS debuted in Windows Server 2003, with client API libraries made available for Windows 2000 and later. The Rights Management Client is included in Windows Vista and later, and is available for Windows XP, Windows 2000 and Windows Server 2003. In addition, there is an implementation of AD RMS in Office for Mac to use rights protection in OS X and some third-party products are available to use rights protection on Android, Blackberry OS, iOS and Windows RT.
Attacks against policy enforcement capabilities
In April 2016, an alleged attack on RMS implementations (including Azure RMS) was published and reported to Microsoft. The published code allows an authorized user that has been granted the right to view an RMS protected document to remove the protection and preserve the file formatting. This sort of manipulation requires that the user has been granted rights to decrypt the content to be able to view it. While Rights Management Services makes certain s |
https://en.wikipedia.org/wiki/Talus%20bone | The talus (; Latin for ankle or ankle bone; : tali), talus bone, astragalus (), or ankle bone is one of the group of foot bones known as the tarsus. The tarsus forms the lower part of the ankle joint. It transmits the entire weight of the body from the lower legs to the foot.
The talus has joints with the two bones of the lower leg, the tibia and thinner fibula. These leg bones have two prominences (the lateral and medial malleoli) that articulate with the talus. At the foot end, within the tarsus, the talus articulates with the calcaneus (heel bone) below, and with the curved navicular bone in front; together, these foot articulations form the ball-and-socket-shaped talocalcaneonavicular joint.
The talus is the second largest of the tarsal bones; it is also one of the bones in the human body with the highest percentage of its surface area covered by articular cartilage. It is also unusual in that it has a retrograde blood supply, i.e. arterial blood enters the bone at the distal end.
In humans, no muscles attach to the talus, unlike most bones, and its position therefore depends on the position of the neighbouring bones.
In humans
Though irregular in shape, the talus can be subdivided into three parts.
Facing anteriorly, the head carries the articulate surface of the navicular bone, and the neck, the roughened area between the body and the head, has small vascular channels.
The body features several prominent articulate surfaces: On its superior side is the trochlea tali, which is semi-cylindrical, and it is flanked by the articulate facets for the two malleoli. The ankle mortise, the fork-like structure of the malleoli, holds these three articulate surfaces in a steady grip, which guarantees the stability of the ankle joint. However, because the trochlea is wider in front than at the back (approximately 5–6 mm), the stability of the joint varies with the position of the foot: with the foot dorsiflexed (toes pulled upward) the ligaments of the joint are kep |
https://en.wikipedia.org/wiki/Iterative%20learning%20control | Iterative Learning Control (ILC) is a method of tracking control for systems that work in a repetitive mode. Examples of systems that operate in a repetitive manner include robot arm manipulators, chemical batch processes and reliability testing rigs. In each of these tasks the system is required to perform the same action over and over again with high precision. This action is represented by the objective of accurately tracking a chosen reference signal on a finite time interval.
Repetition allows the system to improve tracking accuracy from repetition to repetition, in effect learning the required input needed to track the reference exactly. The learning process uses information from previous repetitions to improve the control signal, ultimately enabling a suitable control action to be found iteratively. The internal model principle yields conditions under which perfect tracking can be achieved but the design of the control algorithm still leaves many decisions to be made to suit the application. A typical, simple control law is of the form:
u_{p+1}(t) = u_p(t) + K e_p(t),
where u_p is the input to the system during the pth repetition, e_p is the tracking error during the pth repetition and K is a design parameter representing operations on e_p. Achieving perfect tracking through iteration is represented by the mathematical requirement of convergence of the input signals as p becomes large, whilst the rate of this convergence represents the desirable practical need for the learning process to be rapid. There is also the need to ensure good algorithm performance even in the presence of uncertainty about the details of process dynamics. The operation K is crucial to achieving design objectives and ranges from simple scalar gains to sophisticated optimization computations. |
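A minimal numerical sketch of this learning loop, assuming a toy first-order discrete-time plant and the simple proportional update above; the plant model, horizon and gain are illustrative choices, not taken from the text:

import numpy as np

def plant(u):
    """Toy first-order discrete-time plant: y[t+1] = 0.8*y[t] + 0.5*u[t]."""
    y = np.zeros(len(u) + 1)
    for t in range(len(u)):
        y[t + 1] = 0.8 * y[t] + 0.5 * u[t]
    return y[1:]

T = 50
ref = np.sin(np.linspace(0, np.pi, T))   # reference to track on a finite interval
u = np.zeros(T)                          # input for the current repetition
K = 0.9                                  # learning gain (design parameter)

for p in range(30):                      # repetitions
    y = plant(u)
    e = ref - y                          # tracking error of the pth repetition
    u = u + K * e                        # ILC update: u_{p+1} = u_p + K*e_p
    print(f"repetition {p}: max |error| = {np.max(np.abs(e)):.4f}")

With this gain the error contracts from one repetition to the next, so the printed maximum error shrinks toward zero.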
https://en.wikipedia.org/wiki/Secondary%20malignant%20neoplasm | Secondary malignant neoplasm is a malignant tumor whose cause is the treatment (usually radiation or chemotherapy) which was used for a prior tumor. It must be distinguished from Metastasis from the prior tumor or a relapse from it since a secondary malignant neoplasm is a different tumor. |
https://en.wikipedia.org/wiki/Kali%20%28software%29 | Kali is an IPX network emulator for DOS and Windows, enabling legacy multiplayer games to work over a modern TCP/IP network such as the Internet. Later versions of the software also functioned as a server browser for games that natively supported TCP/IP. Versions were also created for OS/2 and Mac, but neither version was well polished. Today, Kali's network is still operational but development has largely ceased.
Kali also features an Internet Game Browser for TCP/IP native games, a buddy system, a chat system, and supports 400+ games including Doom 3, many of the Command & Conquer games, the Mechwarrior 2 series, Unreal Tournament 2004, Battlefield Vietnam, Counter-Strike: Condition Zero, and Master of Orion II.
The Kali software is free to download, and once had a time-based cap for unregistered versions. For a one-time $20 fee, the time restriction was removed. However, as of January 2023, Kali.net offers the download and a registration code generator on the website, so registration is currently free.
History
The original MS-DOS version of Kali was created by Scott Coleman, Alex Markovich and Jay Cotton in the spring of 1995. It was the successor to a program called iDOOM (later Frag) that Cotton wrote so he could play id Software's DOS game DOOM over the Internet. After the release of Descent, Coleman, Markovich and Cotton wrote a new program to allow Descent, or any other game which supported LAN play using the IPX protocol, to be played over the Internet; this new program was named Kali. In the summer of 1995, Coleman went off to work for Interplay Productions, Markovich left the project and Cotton formed a new company, Kali Inc., to develop and market Kali. Cotton and his team developed the first Windows version (Kali95) and all subsequent versions.
Initially Kali appealed only to hardcore computer tinkerers, due to the difficulty of getting TCP/IP running on MS-DOS. Kali95 took advantage of the greater network support of Windows 95, allowing Kali to achi |
https://en.wikipedia.org/wiki/Acceptance%20and%20commitment%20therapy | Acceptance and commitment therapy (ACT, typically pronounced as the word "act") is a form of psychotherapy, as well as a branch of clinical behavior analysis. It is an empirically based psychological intervention that uses acceptance and mindfulness strategies along with commitment and behavior-change strategies to increase psychological flexibility.
This approach was originally termed comprehensive distancing. Steven C. Hayes developed the treatment starting around 1982 in order to create an approach that integrated both key features of cognitive therapy and behavior analysis, especially behavior analytic data on the often negative effects of verbal rules and how they might be ameliorated.
There are a variety of protocols for ACT, depending on the target behavior and setting. For example, in behavioral health areas, a brief version of ACT is called focused acceptance and commitment therapy (FACT).
The objective of ACT is not elimination of difficult feelings; rather, it is to be present with what life brings and to "move toward valued behavior". Acceptance and commitment therapy invites people to open up to unpleasant feelings, learn not to overreact to them, and not avoid situations where they are invoked.
Its therapeutic effect aims to be a positive spiral where a greater understanding of one's emotions leads to a better understanding of the truth. In ACT, "truth" is measured through the concept of "workability", or what works to take another step toward what matters (e.g., values, meaning).
Technique
Basics
ACT is developed within a pragmatic philosophy called functional contextualism. ACT is based on relational frame theory (RFT), a comprehensive theory of language and cognition that is derived from behavior analysis. Both ACT and RFT are based on B. F. Skinner's philosophy of radical behaviorism.
ACT differs from some other kinds of cognitive behavioral therapy (CBT) in that rather than trying to teach people to better control their thoughts, feeling |
https://en.wikipedia.org/wiki/Lysogenic%20cycle | Lysogeny, or the lysogenic cycle, is one of two cycles of viral reproduction (the lytic cycle being the other). Lysogeny is characterized by integration of the bacteriophage nucleic acid into the host bacterium's genome or formation of a circular replicon in the bacterial cytoplasm. In this condition the bacterium continues to live and reproduce normally, while the bacteriophage lies in a dormant state in the host cell. The genetic material of the bacteriophage, called a prophage, can be transmitted to daughter cells at each subsequent cell division, and later events (such as UV radiation or the presence of certain chemicals) can release it, causing proliferation of new phages via the lytic cycle. Lysogenic cycles can also occur in eukaryotes, although the method of DNA incorporation is not fully understood. For instance the AIDS viruses can either infect humans (or some other primates) lytically, or lay dormant (lysogenic) as part of the infected cells' genome, keeping the ability to return to lysis at a later time. The rest of this article is about lysogeny in bacterial hosts.
The difference between lysogenic and lytic cycles is that, in lysogenic cycles, the spread of the viral DNA occurs through the usual prokaryotic reproduction, whereas a lytic cycle is more immediate in that it results in many copies of the virus being created very quickly and the cell is destroyed. One key difference between the lytic cycle and the lysogenic cycle is that the latter does not lyse the host cell straight away. Phages that replicate only via the lytic cycle are known as virulent phages while phages that replicate using both lytic and lysogenic cycles are known as temperate phages.
In the lysogenic cycle, the phage DNA first integrates into the bacterial chromosome to produce the prophage. When the bacterium reproduces, the prophage is also copied and is present in each of the daughter cells. The daughter cells can continue to replicate with the prophage present or the prophag |
https://en.wikipedia.org/wiki/Glomeromycota | Glomeromycota (often referred to as glomeromycetes, as they include only one class, Glomeromycetes) are one of eight currently recognized divisions within the kingdom Fungi, with approximately 230 described species. Members of the Glomeromycota form arbuscular mycorrhizas (AMs) with the thalli of bryophytes and the roots of vascular land plants. Not all species have been shown to form AMs, and one, Geosiphon pyriformis, is known not to do so. Instead, it forms an endocytobiotic association with Nostoc cyanobacteria. The majority of evidence shows that the Glomeromycota are dependent on land plants (Nostoc in the case of Geosiphon) for carbon and energy, but there is recent circumstantial evidence that some species may be able to lead an independent existence. The arbuscular mycorrhizal species are terrestrial and widely distributed in soils worldwide where they form symbioses with the roots of the majority of plant species (>80%). They can also be found in wetlands, including salt-marshes, and associated with epiphytic plants.
According to multigene phylogenetic analyses, this taxon is located as a member of the phylum Mucoromycota. Currently the phylum name Glomeromycota may be invalid, and the subphylum Glomeromycotina or the class Glomeromycetes is preferable to describe this taxon.
Reproduction
The Glomeromycota have generally coenocytic (occasionally sparsely septate) mycelia and reproduce asexually through blastic development of the hyphal tip to produce spores (Glomerospores) with diameters of 80–500 μm. In some, complex spores form within a terminal saccule. Recently it was shown that Glomus species contain 51 genes encoding all the tools necessary for meiosis. Based on these and related findings, it was suggested that Glomus species may have a cryptic sexual cycle.
Colonization
New colonization of AM fungi largely depends on the amount of inoculum present in the soil. Although pre-existing hyphae and infected root fragments have been shown to succes |
https://en.wikipedia.org/wiki/Satplan | Satplan (better known as Planning as Satisfiability) is a method for automated planning. It converts the planning problem instance into an instance of the Boolean satisfiability problem, which is then solved using a method for establishing satisfiability such as the DPLL algorithm or WalkSAT.
Given a problem instance in planning, with a given initial state, a given set of actions, a goal, and a horizon length, a formula is generated so that the formula is satisfiable if and only if there is a plan with the given horizon length. This is similar to simulation of Turing machines with the satisfiability problem in the proof of Cook's theorem. A plan can be found by testing the satisfiability of the formulas for different horizon lengths. The simplest way of doing this is to go through horizon lengths sequentially, 0, 1, 2, and so on.
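A sketch of that outer loop in Python. The encode, solve and decode functions are placeholders standing in for a real CNF encoding and an off-the-shelf SAT solver; they are assumptions for illustration, not an existing API:

def encode(problem, horizon):
    """Placeholder: build a CNF formula that is satisfiable iff `problem`
    has a plan of exactly `horizon` steps (initial state, action and frame
    axioms, and the goal, copied once per time step)."""
    raise NotImplementedError

def solve(cnf):
    """Placeholder for any satisfiability procedure (DPLL, WalkSAT, a CDCL
    solver, ...). Returns a satisfying assignment, or None if unsatisfiable."""
    raise NotImplementedError

def decode(assignment, horizon):
    """Placeholder: read off the action variables set to true at each step."""
    raise NotImplementedError

def satplan(problem, max_horizon):
    # Try horizon lengths sequentially: 0, 1, 2, ...; the first satisfiable
    # encoding yields a plan of that length.
    for horizon in range(max_horizon + 1):
        assignment = solve(encode(problem, horizon))
        if assignment is not None:
            return decode(assignment, horizon)
    return None   # no plan within max_horizon steps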
See also
Graphplan |
https://en.wikipedia.org/wiki/Grid-leak%20detector | A grid leak detector is an electronic circuit that demodulates an amplitude modulated alternating current and amplifies the recovered modulating voltage. The circuit utilizes the non-linear cathode-to-control-grid conduction characteristic and the amplification factor of a vacuum tube. Invented by Lee De Forest around 1912, it was used as the detector (demodulator) in the first vacuum tube radio receivers until the 1930s.
History
Early applications of triode tubes (Audions) as detectors usually did not include a resistor in the grid circuit. First use of a resistance in the grid circuit of a vacuum tube detector circuit may have been by Sewall Cabot in 1906. Cabot wrote that he made a pencil mark to discharge the grid condenser, after finding that touching the grid terminal of the tube would cause the detector to resume operation after having stopped.
Edwin H. Armstrong, in 1915, describes the use of "a resistance of several hundred thousand ohms placed across the grid condenser" for the purpose of discharging the grid condenser.
The heyday for grid leak detectors was the 1920s, when battery operated, multiple dial tuned radio frequency receivers using low amplification factor triodes with directly heated cathodes were the contemporary technology. The Zenith Models 11, 12, and 14 are examples of these kinds of radios. After screen-grid tubes became available for new designs in 1927, most manufacturers switched to plate detectors, and later to diode detectors.
The grid leak detector has been popular for many years with amateur radio operators and shortwave listeners who construct their own receivers.
Functional overview
The stage performs two functions:
Detection: The control grid and cathode operate as a diode. At small radio frequency signal (carrier) amplitudes, square-law detection takes place due to non-linear curvature of the grid current versus grid voltage characteristic. Detection transitions at larger carrier amplitudes to linear detection behavior |
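A small numerical illustration of square-law detection, not of the tube circuit itself: squaring an amplitude-modulated carrier produces a baseband term that follows the (squared) modulation envelope, which a low-pass average recovers. NumPy is assumed and all frequencies are illustrative.

import numpy as np

fs = 1_000_000                       # sample rate, Hz (illustrative)
t = np.arange(0, 0.02, 1 / fs)       # 20 ms of signal
fc, fm = 100_000, 1_000              # carrier and modulating frequencies, Hz
m = 0.5                              # modulation index

envelope = 1 + m * np.sin(2 * np.pi * fm * t)
am = envelope * np.cos(2 * np.pi * fc * t)          # AM carrier

detected = am ** 2                                   # square-law element

# crude low-pass filter: moving average over one carrier period
w = int(fs / fc)
baseband = np.convolve(detected, np.ones(w) / w, mode="same")

# the recovered baseband varies at fm, following the squared envelope
print(np.corrcoef(baseband, envelope ** 2)[0, 1])    # close to 1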
https://en.wikipedia.org/wiki/Social-desirability%20bias | In social science research, social-desirability bias is a type of response bias that is the tendency of survey respondents to answer questions in a manner that will be viewed favorably by others. It can take the form of over-reporting "good behavior" or under-reporting "bad", or undesirable behavior. The tendency poses a serious problem with conducting research with self-reports. This bias interferes with the interpretation of average tendencies as well as individual differences.
Topics subject to social-desirability bias
Topics where socially desirable responding (SDR) is of special concern are self-reports of abilities, personality, sexual behavior, and drug use. When confronted with the question "How often do you masturbate?," for example, respondents may be pressured by the societal taboo against masturbation, and either under-report the frequency or avoid answering the question. Therefore, the mean rates of masturbation derived from self-report surveys are likely to be severely underestimated.
When confronted with the question, "Do you use drugs/illicit substances?" the respondent may be influenced by the fact that controlled substances, including the more commonly used marijuana, are generally illegal. Respondents may feel pressured to deny any drug use or rationalize it, e.g. "I only smoke marijuana when my friends are around." The bias can also influence reports of number of sexual partners. In fact, the bias may operate in opposite directions for different subgroups: Whereas men tend to inflate the numbers, women tend to underestimate theirs. In either case, the mean reports from both groups are likely to be distorted by social desirability bias.
Other topics that are sensitive to social-desirability bias include:
Self-reported personality traits will correlate strongly with social desirability bias
Personal income and earnings, often inflated when low and deflated when high
Feelings of low self-worth and/or powerlessness, often denied
Excretory functi |
https://en.wikipedia.org/wiki/Vascular%20bundle | A vascular bundle is a part of the transport system in vascular plants. The transport itself happens in vascular tissue, which exists in two forms: xylem and phloem. Both these tissues are present in a vascular bundle, which in addition will include supporting and protective tissues. There is also a tissue between the xylem and phloem: the cambium.
The xylem typically lies towards the axis (adaxial) with phloem positioned away from the axis (abaxial). In a stem or root this means that the xylem is closer to the centre of the stem or root while the phloem is closer to the exterior. In a leaf, the adaxial surface of the leaf will usually be the upper side, with the abaxial surface the lower side.
The sugars synthesized by the plant with sunlight are transported by the phloem, which is closer to the lower surface. Aphids and leaf hoppers feed on these sugars by tapping into the phloem. This is why aphids and leaf hoppers are typically found on the underside of a leaf rather than on the top.
The position of vascular bundles relative to each other may vary considerably: see stele.
Bundle-sheath cells
The bundle-sheath cells are the photosynthetic cells arranged into a tightly packed sheath around the vein of a leaf. The sheath forms a protective covering on the leaf vein and consists of one or more cell layers, usually parenchyma. Loosely arranged mesophyll cells lie between the bundle sheath and the leaf surface. The Calvin cycle is confined to the chloroplasts of these bundle sheath cells in C4 plants. C2 plants also use a variation of this structure. |
https://en.wikipedia.org/wiki/Alloplant | Alloplant is a biomaterial made from cartilage used for eye surgery in some clinics in Russia. Experts warn that claims about its alleged regenerative properties by its inventor have not been documented in medical literature.
Origin
The compound was developed by Russian ophthalmologist and mystic Ernst Muldashev in the 1970s. Muldashev claims to have invented Alloplant after an expedition in Tibet gave him innate and unprecedented understanding of certain worldly ideas and concepts beyond peer-reviewed medicine.
According to Muldashev, Alloplant is biomaterial harvested from recently deceased donors. He used Alloplant in eye surgery to reduce the rejection rate of tissue grafts. After a high-profile case in 2000 where a patient gained some eye function despite an allegedly incurable condition, Muldashev eventually gained government support to use his compound at an ophthalmology and plastic surgery clinic in his home city of Ufa.
Claims
Muldashev has claimed variously that Alloplant works by regenerating dead tissue it is in contact with, by attracting stem cells that proceed to differentiate and rebuild damaged structures in the eye, or by preventing further spread of the condition.
He claims this material, surgically implanted in the eye, will help cure or stop the progression of a vast array of diseases and conditions, such as retinitis pigmentosa, diabetic retinopathy, age-related macular degeneration, optic nerve atrophy, glaucoma, progressive myopia and retinopathy of prematurity.
Expert ophthalmologists contacted by the media say the scientific literature includes no studies validating Muldashev's claims about Alloplant. In 2009, the ophthalmologist-in-chief for the University Health Network insisted the treatment had not been presented or discussed in medical conferences. The chief of clinical trials at the U.S. National Institutes of Health's Eye Institute indicated she was unaware of Alloplant. Two ophthalmologists from Nevada had published a paper in 2008 |
https://en.wikipedia.org/wiki/Sherman%20paradox | The Sherman paradox was a term used to describe the anomalous pattern of inheritance found in fragile X syndrome. The phenomenon is also referred to as anticipation or dynamic mutation.
Background
The paradox was named in the late 1980s after American geneticist Stephanie Sherman, who studied the inheritance patterns of people with fragile X syndrome. Sherman observed that the effects of fragile X syndrome seemed to occur more frequently with each passing generation. This observation became known as the Sherman paradox.
The paradox was ultimately explained by insights into the mutation process that gives rise to the syndrome. Sherman theorized that the gene responsible for fragile X syndrome becomes mutated through a two-step process. The first mutation, called the 'premutation', doesn't cause any clinical symptoms. A second mutation was required to convert the 'premutation' into a 'full mutation' capable of causing the clinical symptoms associated with fragile X syndrome. Additionally, premutations must pass through females in order to transform into the full mutation.
Fragile X syndrome is so named because of the appearance of the X chromosome in individuals with fragile X. Under an electron microscope, a region on the long arm of the chromosome resembles a thin string. Investigation showed that this region consists of a CGG repeat triplet in both normal and diseased individuals. The difference between normal and diseased is the length of the repeat; the repeat is longer where fragile X syndrome is present. When the length of the repeat surpasses a critical threshold, symptoms of the disorder appear and they increase in likelihood and severity with further length. Even below this threshold there is a range where the repeat becomes unstable during meiosis.
In normal individuals, an insertion of extra CGGs is unlikely. However, as the length of the repeat increases, the probability of additional triplet insertions increases. When the expansion reaches the dange |
https://en.wikipedia.org/wiki/Whitespace%20character | In computer programming, whitespace is any character or series of characters that represent horizontal or vertical space in typography. When rendered, a whitespace character does not correspond to a visible mark, but typically does occupy an area on a page. For example, the common whitespace symbol U+0020 SPACE (also ASCII 32) represents a blank space punctuation character in text, used as a word divider in Western scripts.
Overview
With many keyboard layouts, a whitespace character may be entered by pressing the space bar. Horizontal whitespace may also be entered on many keyboards with the Tab key, although the length of the space may vary. Vertical whitespace may be input by typing the Enter or Return key, which creates a 'newline' code sequence in most programs. In some systems the two keys have separate meanings, but in others the two are conflated. Many early computer games used whitespace characters to draw a screen (e.g. Kingdom of Kroz).
The term "whitespace" is based on the appearance of the characters on ordinary paper. However, within an application, whitespace characters can be processed in the same way as any other character code and different programs may define their own semantics for the characters.
Unicode
The table below lists the twenty-five characters defined as whitespace ("WSpace=Y", "WS") characters in the Unicode Character Database. Seventeen use a definition of whitespace consistent with the algorithm for bidirectional writing ("Bidirectional Character Type=WS") and are known as "Bidi-WS" characters. The remaining characters may also be used, but are not of this "Bidi" type.
Note: Depending on the browser and fonts used to view the following table, not all spaces may be displayed properly.
Substitute images
Unicode also provides some visible characters that can be used to represent various whitespace characters, in contexts where a visible symbol must be displayed:
Exact space
The Cambridge Z88 provided a special "exact space" (code point 160 aka 0xA0), invokable by a key shortcut, displayed |
https://en.wikipedia.org/wiki/LibATA | libATA is a library used inside the Linux kernel to support ATA host controllers and devices. libATA provides an ATA driver API, class transports for ATA and ATAPI devices, and SCSI / ATA Translation for ATA devices according to the T10 SAT specification. Features include power management, Self-Monitoring, Analysis, and Reporting Technology, PATA/SATA, ATAPI, port multiplier, hot swapping and Native Command Queuing. |
https://en.wikipedia.org/wiki/Axonal%20transport | Axonal transport, also called axoplasmic transport or axoplasmic flow, is a cellular process responsible for movement of mitochondria, lipids, synaptic vesicles, proteins, and other organelles to and from a neuron's cell body, through the cytoplasm of its axon called the axoplasm. Since some axons are on the order of meters long, neurons cannot rely on diffusion to carry products of the nucleus and organelles to the end of their axons. Axonal transport is also responsible for moving molecules destined for degradation from the axon back to the cell body, where they are broken down by lysosomes.
Movement toward the cell body is called retrograde transport and movement toward the synapse is called anterograde transport.
Mechanism
The vast majority of axonal proteins are synthesized in the neuronal cell body and transported along axons. Some mRNA translation has been demonstrated within axons. Axonal transport occurs throughout the life of a neuron and is essential to its growth and survival. Microtubules (made of tubulin) run along the length of the axon and provide the main cytoskeletal "tracks" for transportation. Kinesin and dynein are motor proteins that move cargoes in the anterograde (forwards from the soma to the axon tip) and retrograde (backwards to the soma (cell body)) directions, respectively. Motor proteins bind and transport several different cargoes including mitochondria, cytoskeletal polymers, autophagosomes, and synaptic vesicles containing neurotransmitters.
Axonal transport can be fast or slow, and anterograde (away from the cell body) or retrograde (conveying materials from the axon back to the cell body).
Fast and slow transport
Vesicular cargoes move relatively fast (50–400 mm/day) whereas transport of soluble (cytosolic) and cytoskeletal proteins takes much longer (moving at less than 8 mm/day). The basic mechanism of fast axonal transport has been understood for decades but the mechanism of slow axonal transport is only recently becoming clear, as a resul |
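As a rough worked example of what these rates imply (the axon length and the particular speeds are illustrative values within the ranges quoted above):

axon_length_mm = 1000.0          # roughly a 1 m axon, e.g. a long human motor neuron
fast_rate_mm_per_day = 200.0     # within the 50-400 mm/day range for vesicular cargo
slow_rate_mm_per_day = 4.0       # below the ~8 mm/day figure for cytoskeletal proteins

print(axon_length_mm / fast_rate_mm_per_day)   # ~5 days by fast transport
print(axon_length_mm / slow_rate_mm_per_day)   # ~250 days by slow transport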
https://en.wikipedia.org/wiki/Ascending%20colon | In the anatomy of humans and homologous primates, the ascending colon is the part of the colon located between the cecum and the transverse colon.
Characteristics and structure
The ascending colon is smaller in calibre than the cecum from where it starts. It passes upward, opposite the colic valve, to the under surface of the right lobe of the liver, on the right of the gall-bladder, where it is lodged in a shallow depression, the colic impression; here it bends abruptly forward and to the left, forming the right colic flexure (hepatic) where it becomes the transverse colon.
It is retained in contact with the posterior wall of the abdomen by the peritoneum, which covers its anterior surface and sides, its posterior surface being connected by loose areolar tissue with the iliacus, quadratus lumborum, aponeurotic origin of transversus abdominis, and with the front of the lower and lateral part of the right kidney.
Sometimes the peritoneum completely invests it and forms a distinct but narrow mesocolon.
It is in relation, in front, with the convolutions of the ileum and the abdominal walls.
Parasympathetic innervation to the ascending colon is supplied by the vagus nerve. Sympathetic innervation is supplied by the thoracic splanchnic nerves.
Location
The ascending colon is on the right side of the body (barring any malformations). The term right colon is hypernymous to ascending colon in precise use; many casual mentions of the right colon chiefly concern the ascending colon.
Additional images
See also
Descending colon |
https://en.wikipedia.org/wiki/233%20%28number%29 | 233 (two hundred [and] thirty-three) is the natural number following 232 and preceding 234.
Additionally:
233 is a prime number,
233 is a Sophie Germain prime, a Pillai prime, and a Ramanujan prime.
It is a Fibonacci number, one of the Fibonacci primes.
There are exactly 233 maximal planar graphs with ten vertices, and 233 connected topological spaces with four points. |
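Several of these properties can be checked directly with a short brute-force script (Python; the helper functions are written only for this check):

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def fibonacci_up_to(limit: int):
    a, b = 0, 1
    while a <= limit:
        yield a
        a, b = b, a + b

print(is_prime(233))                       # True: 233 is prime
print(233 in set(fibonacci_up_to(233)))    # True: 233 is a Fibonacci number
print(is_prime(2 * 233 + 1))               # True: 467 is prime, so 233 is a Sophie Germain prime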
https://en.wikipedia.org/wiki/Epidermis%20%28botany%29 | The epidermis (from the Greek ἐπιδερμίς, meaning "over-skin") is a single layer of cells that covers the leaves, flowers, roots and stems of plants. It forms a boundary between the plant and the external environment. The epidermis serves several functions: it protects against water loss, regulates gas exchange, secretes metabolic compounds, and (especially in roots) absorbs water and mineral nutrients. The epidermis of most leaves shows dorsoventral anatomy: the upper (adaxial) and lower (abaxial) surfaces have somewhat different construction and may serve different functions. Woody stems and some other stem structures such as potato tubers produce a secondary covering called the periderm that replaces the epidermis as the protective covering.
Description
The epidermis is the outermost cell layer of the primary plant body. In some older works the cells of the leaf epidermis have been regarded as specialized parenchyma cells, but the established modern preference has long been to classify the epidermis as dermal tissue, whereas parenchyma is classified as ground tissue. The epidermis is the main component of the dermal tissue system of leaves (diagrammed below), and also stems, roots, flowers, fruits, and seeds; it is usually transparent (epidermal cells have fewer chloroplasts or lack them completely, except for the guard cells.)
The cells of the epidermis are structurally and functionally variable. Most plants have an epidermis that is a single cell layer thick. Some plants like Ficus elastica and Peperomia, which have a periclinal cellular division within the protoderm of the leaves, have an epidermis with multiple cell layers. Epidermal cells are tightly linked to each other and provide mechanical strength and protection to the plant. The walls of the epidermal cells of the above-ground parts of plants contain cutin, and are covered with a cuticle. The cuticle reduces water loss to the atmosphere, it is sometimes covered with wax in smooth sheets, granules, p |
https://en.wikipedia.org/wiki/Direct%20simulation%20Monte%20Carlo | Direct simulation Monte Carlo (DSMC) method uses probabilistic Monte Carlo simulation to solve the Boltzmann equation for finite Knudsen number fluid flows.
The DSMC method was proposed by Graeme Bird, emeritus professor of aeronautics, University of Sydney. DSMC is a numerical method for modeling rarefied gas flows, in which the mean free path of a molecule is of the same order as (or greater than) a representative physical length scale (i.e. the Knudsen number Kn is greater than 1). In supersonic and hypersonic flows rarefaction is characterized by Tsien's parameter, which is equivalent to the product of Knudsen number and Mach number (KnM) or M/Re, where Re is the Reynolds number. In these rarefied flows, the Navier-Stokes equations can be inaccurate. The DSMC method has been extended to model continuum flows (Kn < 1) and the results can be compared with Navier-Stokes solutions.
The DSMC method models fluid flows using probabilistic simulation molecules to solve the Boltzmann equation. Molecules are moved through a simulation of physical space in a realistic manner that is directly coupled to physical time such that unsteady flow characteristics can be modeled. Intermolecular collisions and molecule-surface collisions are calculated using probabilistic, phenomenological models. Common molecular models include the hard sphere model, the variable hard sphere (VHS) model, and the variable soft sphere (VSS) model. Various collision models are presented in the literature.
Currently, the DSMC method has been applied to the solution of flows ranging from estimation of the Space Shuttle re-entry aerodynamics to the modeling of microelectromechanical systems (MEMS).
DSMC Algorithm
The direct simulation Monte Carlo algorithm is like molecular dynamics in that the state of the system is given by the positions and velocities of the particles, {r_i, v_i}, for i = 1, ..., N. Unlike molecular dynamics, each particle in a DSMC simulation represents F_N molecules in the physical system that have roughly the same |
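A heavily simplified single-cell sketch of a DSMC collision step for identical hard-sphere molecules (Python with NumPy assumed). The particle weight, cell volume, time step and the no-time-counter pair-selection scheme used here are illustrative modelling choices, not details given in the text above:

import numpy as np

rng = np.random.default_rng(0)

N = 500                      # simulation particles in one cell
F_N = 1e10                   # real molecules represented by each particle (illustrative)
d = 3.7e-10                  # molecular diameter, m (roughly N2)
sigma = np.pi * d**2         # hard-sphere cross section
V_cell = 1e-9                # cell volume, m^3 (illustrative)
dt = 1e-6                    # time step, s (illustrative)

v = rng.normal(0.0, 300.0, size=(N, 3))   # particle velocities, m/s

# No-time-counter estimate of candidate collision pairs in this cell
cr_max = 2000.0              # running over-estimate of the maximum relative speed, m/s
n_cand = int(0.5 * N * (N - 1) * F_N * sigma * cr_max * dt / V_cell)

for _ in range(n_cand):
    i, j = rng.choice(N, size=2, replace=False)
    cr = np.linalg.norm(v[i] - v[j])                 # relative speed of the pair
    if rng.random() < cr / cr_max:                   # acceptance-rejection on cr
        # hard-sphere collision: isotropic scattering in the centre-of-mass frame
        vcm = 0.5 * (v[i] + v[j])
        cos_t = 2.0 * rng.random() - 1.0
        sin_t = np.sqrt(1.0 - cos_t**2)
        phi = 2.0 * np.pi * rng.random()
        cr_new = cr * np.array([cos_t, sin_t * np.cos(phi), sin_t * np.sin(phi)])
        v[i] = vcm + 0.5 * cr_new
        v[j] = vcm - 0.5 * cr_new

print("mean speed after collisions:", np.linalg.norm(v, axis=1).mean())

Each accepted pair is scattered isotropically in its centre-of-mass frame, which conserves momentum and kinetic energy for equal-mass molecules.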
https://en.wikipedia.org/wiki/Forward%20declaration | In computer programming, a forward declaration is a declaration of an identifier (denoting an entity such as a type, a variable, a constant, or a function) for which the programmer has not yet given a complete definition.
It is required for a compiler to know certain properties of an identifier (size for memory allocation, data type for type checking, such as type signature of functions), but not other details, like the particular value it holds (in case of variables or constants) or definition (in the case of functions). This is particularly useful for one-pass compilers and separate compilation.
Forward declaration is used in languages that require declaration before use; it is necessary for mutual recursion in such languages, as it is impossible to define such functions (or data structures) without a forward reference in one definition: one of the functions (respectively, data structures) must be defined first. It is also useful to allow flexible code organization, for example if one wishes to place the main body at the top, and called functions below it.
In other languages forward declarations are not necessary, which generally requires instead a multi-pass compiler and for some compilation to be deferred to link time. In these cases identifiers must be defined (variables initialized, functions defined) before they can be employed during runtime without the need for pre-definition in the source code for either compilation or interpretation: identifiers do not need to be immediately resolved to an existing entity.
Examples
A basic example in C is:
void printThisInteger(int);
In C and C++, the line above represents a forward declaration of a function and is the function's prototype. After processing this declaration, the compiler would allow the program code to refer to the entity printThisInteger in the rest of the program. The definition for a function must be provided somewhere (same file or other, where it would be the responsibility of the linker to corr |
https://en.wikipedia.org/wiki/Tinning | Tinning is the process of thinly coating sheets of wrought iron or steel with tin, and the resulting product is known as tinplate. The term is also widely used for the different process of coating a metal with solder before soldering.
It is most often used to prevent rust, but is also commonly applied to the ends of stranded wire used as electrical conductors to prevent oxidation (which increases electrical resistance), and to keep them from fraying or unraveling when used in various wire connectors like twist-ons, binding posts, or terminal blocks, where stray strands can cause a short circuit.
While once more widely used, the primary use of tinplate now is the manufacture of tin cans. Formerly, tinplate was used for cheap pots, pans, and other holloware. This kind of holloware was also known as tinware and the people who made it were tinplate workers.
The untinned sheets employed in the manufacture are known as black plates. They are now made of steel, either Bessemer steel or open-hearth. Formerly iron was used, and was of two grades, coke iron and charcoal iron; the latter, being the better, received a heavier coating of tin, and this circumstance is the origin of the terms coke plates and charcoal plates by which the quality of tinplate is still designated, although iron is no longer used. Tinplate was consumed in enormous quantities for the manufacture of the tin cans in which preserved meat, fish, fruit, biscuits, cigarettes, and numerous other products are packed, and also for the household utensils of various kinds made by the tinsmith.
History
The practice of tinning ironware to protect it against rust is an ancient one. According to Pliny the Elder tinning was invented by the Gallic Bituriges tribe (based near modern Bourges), who boiled copper objects in a tin solution in order to make them look as if they were made from silver. The first detailed account of the process appears in Zosimus of Panopolis, Book 6.62, part of a work on alchemy written in |
https://en.wikipedia.org/wiki/Molecular%20chaos | In the kinetic theory of gases in physics, the molecular chaos hypothesis (also called Stosszahlansatz in the writings of Paul Ehrenfest) is the assumption that the velocities of colliding particles are uncorrelated, and independent of position. This means the probability that a pair of particles with given velocities will collide can be calculated by considering each particle separately and ignoring any correlation between the probability for finding one particle with velocity v and the probability for finding another particle with velocity v' in a small region δr. James Clerk Maxwell introduced this approximation in 1867 although its origins can be traced back to his first work on the kinetic theory in 1860.
The assumption of molecular chaos is the key ingredient that allows proceeding from the BBGKY hierarchy to Boltzmann's equation, by reducing the 2-particle distribution function showing up in the collision term to a product of 1-particle distributions. This in turn leads to Boltzmann's H-theorem of 1872, which attempted to use kinetic theory to show that the entropy of a gas prepared in a state of less than complete disorder must inevitably increase, as the gas molecules are allowed to collide. This drew the objection from Loschmidt that it should not be possible to deduce an irreversible process from time-symmetric dynamics and a time-symmetric formalism: something must be wrong (Loschmidt's paradox). The resolution (1895) of this paradox is that the velocities of two particles after a collision are no longer truly uncorrelated. By asserting that it was acceptable to ignore these correlations in the population at times after the initial time, Boltzmann had introduced an element of time asymmetry through the formalism of his calculation.
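In symbols, the assumption replaces the two-particle distribution appearing in the collision term by a product of one-particle distributions (a standard way of writing the Stosszahlansatz, not a formula quoted from this text):
f_2(r, v_1, v_2, t) ≈ f_1(r, v_1, t) · f_1(r, v_2, t),
so the probability of finding a colliding pair factorises into the separate probabilities of finding each particle with its own velocity at the same position.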
Though the Stosszahlansatz is usually understood as a physically grounded hypothesis, it was recently highlighted that it could also be interpreted as a heuristic hypothesis. This interpretation allows using the principle of maximum entropy i |
https://en.wikipedia.org/wiki/Marsden%20square | Marsden square mapping or Marsden squares is a system that divides a world map with latitude-longitude gridlines (e.g. plate carrée projection, Mercator or other) between 80°N and 70°S latitudes (or 90°N and 80°S: refer chart at Ocean Teacher’s Ocean Geography page) into grid cells of 10° latitude by 10° longitude, each with a geocode, a unique numeric identifier. The method was devised by William Marsden (b. 1754, d. 1836), when first secretary of the British Admiralty, for collecting and combining geographically based information about the oceans.
Structure and design
On the plate carrée projection the grid cells appear square, although if the Mercator projection is used, the grid cells appear "stretched" vertically nearer the tops and bottoms of the map. On the actual surface of the globe, the cells are approximately "square" only adjacent to the equator, and become progressively narrower and tapered (also with curved northern and southern boundaries) as they approach the poles, and cells adjoining the poles are unique in possessing three faces rather than four. Each of the 540 10°x10° squares is allocated a unique number from 1 to 288 and from 300 to 551 (see image to the right), plus the sequence extends to 936 in higher latitudes; individual squares can also be subdivided into 100 one-degree squares numbered from 00 to 99 in order to improve precision.
Use
Marsden squares have mostly been used for identifying the geographic position of meteorological data, and are described further in various publications of the World Meteorological Organization (WMO). The 10°x10° square identifiers typically use a minimal number of characters (between 1 and 3 digits) which was/is an operational advantage for low bandwidth transmission systems.
However the rules for allocating numbers to squares do not follow a consistent pattern, so that reverse-engineering (decoding) the relevant square boundaries from any particular Marsden Square identifier is not particularly straig |
https://en.wikipedia.org/wiki/Fluent%20calculus | The fluent calculus is a formalism for expressing dynamical domains in first-order logic. It is a variant of the situation calculus; the main difference is that situations are considered representations of states. A binary function symbol ∘ is used to concatenate the terms that represent facts that hold in a situation. For example, that the box is on the table in the situation s is represented by the formula ∃t (s = on(box, table) ∘ t). The frame problem is solved by asserting that the situation after the execution of an action is identical to the one before but for the conditions changed by the action. For example, the action of moving the box from the table to the floor is formalized as:
State(do(move(box, table, floor), s)) ∘ on(box, table) = State(s) ∘ on(box, floor)
This formula states that the state after the move has the term on(box, floor) added and the term on(box, table) removed. Axioms specifying that ∘ is commutative and non-idempotent are necessary for such axioms to work.
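A toy Python rendering of this state-update idea, with a frozenset of fluent strings standing in for the first-order terms and the ∘ concatenation (so the commutativity and non-idempotence axioms are only mimicked by the set, not written out); the fluent names follow the box/table/floor example above:

def do_move(state: frozenset, obj: str, src: str, dst: str) -> frozenset:
    """State after moving obj from src to dst: the fluent on(obj,src) is removed
    and on(obj,dst) is added; every other fluent is left unchanged, which is the
    fluent-calculus answer to the frame problem."""
    return (state - {f"on({obj},{src})"}) | {f"on({obj},{dst})"}

s = frozenset({"on(box,table)", "on(cup,shelf)"})
print(sorted(do_move(s, "box", "table", "floor")))
# ['on(box,floor)', 'on(cup,shelf)']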
See also
Fluent (artificial intelligence)
Frame problem
Situation calculus
Event calculus |
https://en.wikipedia.org/wiki/Spatial%20reference%20system | A spatial reference system (SRS) or coordinate reference system (CRS) is a framework used to precisely measure locations on the surface of Earth as coordinates. It is thus the application of the abstract mathematics of coordinate systems and analytic geometry to geographic space. A particular SRS specification (for example, "Universal Transverse Mercator WGS 84 Zone 16N") comprises a choice of Earth ellipsoid, horizontal datum, map projection (except in the geographic coordinate system), origin point, and unit of measure. Thousands of coordinate systems have been specified for use around the world or in specific regions and for various purposes, necessitating transformations between different SRS.
Although they date to the Hellenic Period, spatial reference systems are now a crucial basis for the sciences and technologies of Geoinformatics, including cartography, geographic information systems, surveying, remote sensing, and civil engineering. This has led to their standardization in international specifications such as the EPSG codes and ISO 19111:2019 Geographic information—Spatial referencing by coordinates, prepared by ISO/TC 211, also published by the Open Geospatial Consortium as Abstract Specification, Topic 2: Spatial referencing by coordinate.
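For example, a transformation between two registered systems can be carried out with the pyproj library (assumed to be installed); EPSG:4326 is the WGS 84 geographic system and EPSG:32616 is the UTM WGS 84 Zone 16N specification mentioned above, and the sample point is arbitrary:

from pyproj import Transformer

# Build a transformer from geographic WGS 84 (EPSG:4326) to UTM Zone 16N (EPSG:32616).
# always_xy=True makes the input/output order (longitude, latitude) / (easting, northing).
transformer = Transformer.from_crs("EPSG:4326", "EPSG:32616", always_xy=True)

lon, lat = -86.75, 36.15          # an illustrative point inside UTM zone 16N
easting, northing = transformer.transform(lon, lat)
print(f"easting={easting:.1f} m, northing={northing:.1f} m")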
Types of systems
The thousands of spatial reference systems used today are based on a few general strategies, which have been defined in the EPSG, ISO, and OGC standards:
Geographic coordinate system (or geodetic)
A spherical coordinate system measuring locations directly on the Earth (modeled as a sphere or ellipsoid) using latitude (degrees north or south of the equator) and longitude (degrees west or east of a prime meridian).
Geocentric coordinate system (or Earth-centered Earth-fixed)
A three-dimensional cartesian coordinate system that models the Earth as a three-dimensional object, measuring locations from a center point, usually the center of mass of the Earth, along x, y, and z axes aligned with the equ |
https://en.wikipedia.org/wiki/Security%20operations%20center | A security operations center (SOC) is responsible for protecting an organization against cyber threats. SOC analysts perform round-the-clock monitoring of an organization’s network and investigate any potential security incidents. If a cyberattack is detected, the SOC analysts are responsible for taking any steps necessary to remediate it. It comprises the three building blocks for managing and enhancing an organization's security posture: people, processes, and technology. Thereby, governance and compliance provide a framework, tying together these building blocks. A SOC within a building or facility is a central location from which staff supervises the site using data processing technology. Typically, a SOC is equipped for access monitoring and control of lighting, alarms, and vehicle barriers.
IT
An information security operations center (ISOC) is a dedicated site where enterprise information systems (web sites, applications, databases, data centers and servers, networks, desktops and other endpoints) are monitored, assessed, and defended.
The United States government
The Transportation Security Administration in the United States has implemented security operations centers for most airports that have federalized security. The primary function of TSA security operations centers is to act as a communication hub for security personnel, law enforcement, airport personnel and various other agencies involved in the daily operations of airports. SOCs are staffed 24-hours a day by SOC watch officers. Security operations center watch officers are trained in all aspects of airport and aviation security and are often required to work abnormal shifts. SOC watch officers also ensure that TSA personnel follow proper protocol in dealing with airport security operations. The SOC is usually the first to be notified of incidents at airports such as the discovery of prohibited items/contraband, weapons, explosives, hazardous materials as well as incidents regarding fligh |
https://en.wikipedia.org/wiki/Relativistic%20electromagnetism | Relativistic electromagnetism is a physical phenomenon explained in electromagnetic field theory due to Coulomb's law and Lorentz transformations.
Electromechanics
After Maxwell proposed the differential equation model of the electromagnetic field in 1873, the mechanism of action of fields came into question, for instance in Kelvin's master class held at Johns Hopkins University in 1884, commemorated a century later.
The requirement that the equations remain consistent when viewed from various moving observers led to special relativity, a geometric theory of 4-space where intermediation is by light and radiation. The spacetime geometry provided a context for technical description of electric technology, especially generators, motors, and lighting at first. The Coulomb force was generalized to the Lorentz force. For example, with this model transmission lines and power grids were developed and radio frequency communication explored.
An effort to mount a full-fledged electromechanics on a relativistic basis is seen in the work of Leigh Page, from the project outline in 1912 to his textbook Electrodynamics (1940). The interplay (according to the differential equations) of the electric and magnetic fields as viewed by moving observers is examined. What is charge density in electrostatics becomes proper charge density and generates a magnetic field for a moving observer.
A revival of interest in this method for education and training of electrical and electronics engineers broke out in the 1960s after Richard Feynman’s textbook.
Rosser’s book Classical Electromagnetism via Relativity was popular, as was Anthony French’s treatment in his textbook which illustrated diagrammatically the proper charge density. One author proclaimed, "Maxwell — Out of Newton, Coulomb, and Einstein".
The use of retarded potentials to describe electromagnetic fields from source-charges is an expression of relativistic electromagnetism.
Principle
The question of how an electric field |
https://en.wikipedia.org/wiki/Blattner%27s%20conjecture | In mathematics, Blattner's conjecture or Blattner's formula is a description of the discrete series representations of a general semisimple group G in terms of their restricted representations to a maximal compact subgroup K (their so-called K-types). It is named after Robert James Blattner, despite not being formulated as a conjecture by him.
Statement
Blattner's formula says that if a discrete series representation with infinitesimal character λ is restricted to a maximal compact subgroup K, then the representation of K with highest weight μ occurs with multiplicity
Σ_{w ∈ WK} ε(w) Q(w(μ + ρc) - λ - ρn)
where
Q is the number of ways a vector can be written as a sum of non-compact positive roots
WK is the Weyl group of K
ρc is half the sum of the compact roots
ρn is half the sum of the non-compact roots
ε is the sign character of WK.
Blattner's formula is what one gets by formally restricting the Harish-Chandra character formula for a discrete series representation to the maximal torus of a maximal compact group. The problem in proving the Blattner formula is that this only gives the character on the regular elements of the maximal torus, and one also needs to control its behavior on the singular elements. For non-discrete irreducible representations the formal restriction of Harish-Chandra's character formula need not give the decomposition under the maximal compact subgroup: for example, for the principal series representations of SL2 the character is identically zero on the non-singular elements of the maximal compact subgroup, but the representation is not zero on this subgroup. In this case the character is a distribution on the maximal compact subgroup with support on the singular elements.
History
Harish-Chandra orally attributed the conjecture to Robert James Blattner as a question Blattner raised, not a conjecture made by Blattner. Blattner did not publish it in any form. It first appeared in print in , where it was first referred to as "Blattner's Conjecture," despite the results of that |
https://en.wikipedia.org/wiki/Mutacin%201140 | Mutacin 1140 is a bacteriocin produced by Streptococcus mutans. It has activity against a broad spectrum of Gram-positive bacteria. It is a member of the class of compounds known as lantibiotics.
Mutacin 1140 belongs to the epidermin subset of type A(I) lantibiotics. Molecules belonging to this family bind to lipid II, which is a precursor in bacterial cell wall synthesis.
While the effects mutacin 1140 has against Gram-positive bacteria are known, it remains difficult to study because it demonstrates poor pharmacokinetics. Besides the poor pharmacokinetics, it is vulnerable to proteolytic degradation, in which proteases cleave the peptide bonds of the molecule. |
https://en.wikipedia.org/wiki/Barracuda%20Networks | Barracuda Networks, Inc. is a company providing security, networking and storage products based on network appliances and cloud services. The company's security products include products for protection against email, web surfing, web hackers and instant messaging threats such as spam, spyware, trojans, and viruses. The company's networking and storage products include web filtering, load balancing, application delivery controllers, message archiving, NG firewalls, backup services and data protection.
History
Barracuda Networks was founded in 2003 by Dean Drako (founding CEO), Michael Perone, and Zach Levow; the company introduced the Barracuda Spam and Virus Firewall in the same year. In 2007 the company moved its headquarters to Campbell, California, and opened an office in Ann Arbor, Michigan.
In January 2006, it closed its first outside investment of $40 million from Sequoia Capital and Francisco Partners.
On January 29, 2008, Barracuda Networks was sued by Trend Micro over their use of the open source anti-virus software Clam AntiVirus, which Trend Micro claimed to be in violation of their patent on 'anti-virus detection on an SMTP or FTP gateway'. In addition to providing samples of prior art in an effort to render Trend Micro's patent invalid, in July 2008 Barracuda launched a countersuit against Trend Micro claiming Trend Micro violated several antivirus patents Barracuda Networks had acquired from IBM.
In December 2008, the company launched the BRBL (Barracuda Reputation Block List), its proprietary and dynamic list of known spam servers, for free and public use in blocking spam at the gateway.
Soon after opening BRBL many IP addresses got blacklisted without apparent reason and without any technical explanation.
As of October 2009, Barracuda had over 85,000 customers. As of November 2011, Barracuda had more than 130,000 customers. As of January 2014, Barracuda has more than 150,000 customers worldwide.
In 2012, the company became a co-sponsor of the |
https://en.wikipedia.org/wiki/Captive%20breeding | Captive breeding, also known as captive propagation, is the process of keeping plants or animals in controlled environments, such as wildlife reserves, zoos, botanic gardens, and other conservation facilities. It is sometimes employed to help species that are being threatened by the effects of human activities such as climate change, habitat loss, fragmentation, overhunting or fishing, pollution, predation, disease, and parasitism.
For many species, relatively little is known about the conditions needed for successful breeding. Information about a species' reproductive biology may be critical to the success of a captive breeding program. In some cases a captive breeding program can save a species from extinction, but for success, breeders must consider many factors, including genetic, ecological, behavioral, and ethical issues. Most successful attempts involve the cooperation and coordination of many institutions. Captive breeding efforts can also aid conservation education, because species in captivity are closer to the public than their wild conspecifics. Success in breeding species over many generations in captivity is further aided by extensive research efforts, both ex situ and in situ.
History
Captive breeding techniques began with the first human domestication of animals such as goats, and of plants like wheat, at least 10,000 years ago. These practices expanded with the rise of the first zoos, which began as royal menageries such as the one at Hierakonpolis, the capital of Egypt in the Predynastic Period.
The first actual captive breeding programs were only started in the 1960s. These programs, such as the Arabian Oryx breeding program from the Phoenix Zoo in 1962, were aimed at the reintroduction of these species into the wild. These programs expanded under The Endangered Species Act of 1973 of the Nixon Administration which focused on protecting endangered species and their habitats to preserve biodiversity. Since th |
https://en.wikipedia.org/wiki/Time%20reversal%20signal%20processing | Time reversal signal processing is a signal processing technique that has three main uses: creating an optimal carrier signal for communication, reconstructing a source event, and focusing high-energy waves to a point in space. A Time Reversal Mirror (TRM) is a device that can focus waves using the time reversal method. TRMs are also known as time reversal mirror arrays, since they are usually arrays of transducers. TRMs are well known and have been used for decades in the optical domain; they are also used in the ultrasonic domain.
Overview
If the source is passive, i.e. some type of isolated reflector, an iterative technique can be used to focus energy on it. The TRM transmits a plane wave which travels toward the target and is reflected off it. The reflected wave returns to the TRM, where it looks as if the target has emitted a (weak) signal. The TRM reverses and retransmits the signal as usual, and a more focused wave travels toward the target. As the process is repeated, the waves become more and more focused on the target.
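A minimal numerical sketch of this iterative focusing, under illustrative assumptions that are not from the article (a single frequency, point scatterers, single scattering, free-space propagation): on each round the array records the echo, phase-conjugates it (the single-frequency analogue of time reversal), and retransmits, and the emitted field progressively concentrates on the strongest reflector.

```python
# Sketch of iterative time-reversal focusing at a single frequency.
# Assumptions (illustrative only): point scatterers, single scattering,
# free-space propagation; phase conjugation stands in for time reversal.
import numpy as np

rng = np.random.default_rng(0)
k = 2 * np.pi                               # wavenumber for unit wavelength

array_pts = np.stack([np.linspace(-8, 8, 33), np.zeros(33)], axis=1)
scatterers = np.array([[-6.0, 40.0], [0.0, 41.0], [6.0, 39.0]])
reflectivity = np.array([0.3, 1.0, 0.5])    # the middle target is brightest

def greens(a, b):
    """Monochromatic propagation factors between two sets of points."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return np.exp(1j * k * d) / d

G = greens(array_pts, scatterers)           # array -> scatterers
K = G @ np.diag(reflectivity) @ G.T         # one transmit/echo round trip

e = rng.standard_normal(33) + 1j * rng.standard_normal(33)   # first ping
for _ in range(20):                         # transmit, record, conjugate, repeat
    e = np.conj(K @ e)
    e /= np.linalg.norm(e)

field_on_targets = np.abs(G.T @ e)
print(field_on_targets / field_on_targets.max())
# The largest entry should be the brightest scatterer (index 1).
```

Seen this way, the iteration is simply power iteration on the array's round-trip operator, which is why it singles out the best-reflecting target.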
Yet another variation is to use a single transducer and an ergodic cavity. Intuitively, an ergodic cavity is one that will allow a wave originating at any point to reach any other point. An example of an ergodic cavity is an irregularly shaped swimming pool: if someone dives in, eventually the entire surface will be rippling with no clear pattern. If the propagation medium is lossless and the boundaries are perfect reflectors, a wave starting at any point will reach all other points an infinite number of times. This property can be exploited by using a single transducer and recording for a long time to get as many reflections as possible.
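The single-transducer, ergodic-cavity idea can be sketched the same way; here the cavity's long reverberant response is modeled as a random decaying impulse response (an illustrative assumption, not a physical cavity model). Re-emitting the time-reversed recording produces the autocorrelation of that response at the source, which peaks sharply at the refocusing time even though the recording itself looks like noise.

```python
# Sketch of one-channel time reversal through a reverberant cavity.
# The cavity response h is modeled as random decaying noise (assumption).
import numpy as np

rng = np.random.default_rng(1)
n = 4000
h = rng.standard_normal(n) * np.exp(-np.arange(n) / 1500.0)  # cavity response

recorded = np.convolve(h, [1.0])            # a short pulse heard at the transducer
refocused = np.convolve(h, recorded[::-1])  # re-emit the time-reversed recording

peak = np.abs(refocused).max()
background = np.median(np.abs(refocused))
print(f"refocusing peak is {peak / background:.0f}x the background level")
```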
Theory
The time reversal technique is based upon a feature of the wave equation known as reciprocity: given a solution to the wave equation, then the time reversal (using a negative time) of that solution is also a solution. This occurs because the standard wave equation only contains even order |
https://en.wikipedia.org/wiki/EFront | eFront was an affiliate marketing network which purchased successful websites, such as Penny Arcade, SquareGamer, and BetaNews, and pooled traffic to those sites to command higher prices for advertising during an industrywide ad revenue slowdown. In 2001, there was a scandal when ICQ instant messaging logs between the CEO Sam P. Jain and other employees were leaked onto the internet through Fuckedcompany.com. The logs detailed activities such as not paying websites that had hosted their banner ads, sending legal threats to websites that spoke poorly of eFront, and threatening to "rape her and spit on her" (referring to a female webmaster angry about not receiving her check from the company). The logs also detailed how eFront attempted to hire, though never ended up paying, Something Awful founder and webmaster Richard "Lowtax" Kyanka, ostensibly to have him generate a positive buzz for the company.
Richard Kyanka stated during a presentation at the University of Illinois in October 2005 that he was still owed $40,000 by eFront, and that the company ran a number of competitions to attract clients, yet the prizes were awarded to employees.
As of July 2006, the company's former efront.com domain is owned by an unrelated French software firm, eFront Alternative Investment Solutions. |
https://en.wikipedia.org/wiki/Markov%20number | A Markov number or Markoff number is a positive integer x, y or z that is part of a solution to the Markov Diophantine equation
x² + y² + z² = 3xyz,
first studied by Andrey Markov.
The first few Markov numbers are
1, 2, 5, 13, 29, 34, 89, 169, 194, 233, 433, 610, 985, 1325, ...
appearing as coordinates of the Markov triples
(1, 1, 1), (1, 1, 2), (1, 2, 5), (1, 5, 13), (2, 5, 29), (1, 13, 34), (1, 34, 89), (2, 29, 169), (5, 13, 194), (1, 89, 233), (5, 29, 433), (1, 233, 610), (2, 169, 985), (13, 34, 1325), ...
There are infinitely many Markov numbers and Markov triples.
Markov tree
There are two simple ways to obtain a new Markov triple from an old one (x, y, z). First, one may permute the 3 numbers x, y, z, so in particular one can normalize the triples so that x ≤ y ≤ z. Second, if (x, y, z) is a Markov triple then so is (x, y, 3xy − z). Applying this operation twice returns the same triple one started with. Joining each normalized Markov triple to the 1, 2, or 3 normalized triples one can obtain from this gives a graph starting from (1, 1, 1) as in the diagram. This graph is connected; in other words every Markov triple can be connected to (1, 1, 1) by a sequence of these operations. If we start, as an example, with (1, 5, 13) we get its three neighbors (5, 13, 194), (1, 13, 34) and (1, 2, 5) in the Markov tree if z is set to 1, 5 and 13, respectively. For instance, starting with (1, 2, 5) and trading y and z before each iteration of the transform lists Markov triples with Fibonacci numbers. Starting with that same triplet and trading x and z before each iteration gives the triples with Pell numbers.
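As a sketch (not part of the article), the tree description above translates directly into a short search: start from (1, 1, 1), repeatedly replace one coordinate c of a normalized triple by 3ab − c, and keep whatever stays under an arbitrary cutoff. Every triple reached this way satisfies the defining equation.

```python
# Enumerate normalized Markov triples by walking the Markov tree described
# above. The cutoff of 1500 is an arbitrary illustrative choice.
LIMIT = 1500

def neighbors(x, y, z):
    """The up-to-three normalized triples obtained by one tree move."""
    for a, b, c in ((x, y, z), (x, z, y), (y, z, x)):
        new = tuple(sorted((a, b, 3 * a * b - c)))
        if new[2] <= LIMIT:
            yield new

seen = {(1, 1, 1)}
stack = [(1, 1, 1)]
while stack:
    triple = stack.pop()
    for nb in neighbors(*triple):
        if nb not in seen:
            seen.add(nb)
            stack.append(nb)

for x, y, z in seen:
    assert x * x + y * y + z * z == 3 * x * y * z   # the Markov equation

print(sorted({n for t in seen for n in t}))
# [1, 2, 5, 13, 29, 34, 89, 169, 194, 233, 433, 610, 985, 1325]
```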
All the Markov numbers on the regions adjacent to 2's region are odd-indexed Pell numbers (or numbers n such that 2n² − 1 is a square), and all the Markov numbers on the regions adjacent to 1's region are odd-indexed Fibonacci numbers. Thus, there are infinitely many Markov triples of the form
(1, F₂ₖ₋₁, F₂ₖ₊₁),
where Fₖ is the kth Fibonacci number. Likewise, there are infinitely many Markov triples of the form
(2, P₂ₖ₋₁, P₂ₖ₊₁),
where Pₖ is the kth Pell number.
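A quick numeric check of these two families (using the reconstructed forms above, with the conventions F₁ = F₂ = 1 and P₁ = 1, P₂ = 2):

```python
# Check that (1, F(2k-1), F(2k+1)) and (2, P(2k-1), P(2k+1)) satisfy the
# Markov equation, with F1 = F2 = 1 and P1 = 1, P2 = 2.
def fib(n):
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

def pell(n):
    a, b = 1, 2
    for _ in range(n - 1):
        a, b = b, 2 * b + a
    return a

def is_markov(x, y, z):
    return x * x + y * y + z * z == 3 * x * y * z

for k in range(1, 10):
    assert is_markov(1, fib(2 * k - 1), fib(2 * k + 1))
    assert is_markov(2, pell(2 * k - 1), pell(2 * k + 1))
print("both families satisfy the Markov equation for the first nine cases")
```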
Proof that this generate |
https://en.wikipedia.org/wiki/Teardrop%20tattoo | The teardrop tattoo or tear tattoo is a symbolic tattoo of a tear that is placed underneath the eye. The teardrop is one of the most widely recognised prison tattoos and has various meanings.
It can signify that the wearer has spent time in prison, or more specifically that the wearer was raped while incarcerated and tattooed by the rapist as a "property" mark and for humiliation, since facial tattoos cannot be concealed.
The tattoo is sometimes worn by the female companions of prisoners in solidarity with their loved ones. Amy Winehouse had a teardrop drawn on her face in eyeliner after her husband Blake entered the Pentonville prison hospital following a suspected drug overdose.
It can acknowledge the loss of a friend or family member: Basketball player Amar'e Stoudemire has had a teardrop tattoo since 2012 honouring his older brother Hazell Jr., who died in a car accident.
In West Coast gang culture in the United States, the tattoo may signify that the wearer has killed someone, and in some of those circles the meaning can vary: an empty outline indicates that the wearer attempted murder.
Sometimes the exact meaning of the tattoo is known only by the wearer:
Portuguese footballer Ricardo Quaresma has never explained his teardrop tattoos.
See also
Criminal tattoo
Prison rape
Prison tattooing |