https://en.wikipedia.org/wiki/Rahul%20Pandharipande
Rahul Pandharipande (born 1969) is a mathematician who is currently a professor of mathematics at the Swiss Federal Institute of Technology Zürich (ETH), working in algebraic geometry. His particular interests concern moduli spaces, enumerative invariants associated to moduli spaces, such as Gromov–Witten invariants and Donaldson–Thomas invariants, and the cohomology of the moduli space of curves. His father Vijay Raghunath Pandharipande was a renowned theoretical physicist who worked in the area of nuclear physics. Educational and professional history He received his A.B. from Princeton University in 1990 and his PhD from Harvard University in 1994 with a thesis entitled "A Compactification over the Moduli Space of Stable Curves of the Universal Moduli Space of Slope-Semistable Vector Bundles". His thesis advisor at Harvard was Joe Harris. After teaching at the University of Chicago and the California Institute of Technology, he joined the faculty as Professor of Mathematics at Princeton University in 2002. In 2011, he accepted a professorship at ETH Zürich. In 2022, he was awarded an honorary Doctor of Science degree from the University of Illinois Urbana-Champaign.
https://en.wikipedia.org/wiki/Operational%20calculus
Operational calculus, also known as operational analysis, is a technique by which problems in analysis, in particular differential equations, are transformed into algebraic problems, usually the problem of solving a polynomial equation. History The idea of representing the processes of calculus, differentiation and integration, as operators has a long history that goes back to Gottfried Wilhelm Leibniz. The mathematician Louis François Antoine Arbogast was one of the first to manipulate these symbols independently of the function to which they were applied. This approach was further developed by François-Joseph Servois, who introduced convenient notations. Servois was followed by a school of British and Irish mathematicians including Charles James Hargreave, George Boole, Bownin, Carmichael, Doukin, Graves, Murphy, William Spottiswoode and Sylvester. Treatises describing the application of operator methods to ordinary and partial differential equations were written by Robert Bell Carmichael in 1855 and by Boole in 1859. The technique was fully developed by the physicist Oliver Heaviside in 1893, in connection with his work in telegraphy. Guided greatly by intuition and his wealth of knowledge on the physics behind his circuit studies, Heaviside developed the operational calculus now ascribed to his name. At the time, Heaviside's methods were not rigorous, and his work was not further developed by mathematicians. Operational calculus first found applications in electrical engineering problems, for the calculation of transients in linear circuits after 1910, under the impulse of Ernst Julius Berg, John Renshaw Carson and Vannevar Bush. A rigorous mathematical justification of Heaviside's operational methods came only after the work of Bromwich that related operational calculus with Laplace transformation methods (see the books by Jeffreys, by Carslaw or by MacLachlan for a detailed exposition). Other ways of justifying the operational methods of Heaviside
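As a minimal sketch of the idea described above (an illustration using sympy; not part of the original article): writing the differentiation operator D = d/dt as an algebraic symbol turns the linear ODE y'' + 3y' + 2y = 0 into the problem of factoring a polynomial, whose roots give the exponential modes of the solution.

```python
import sympy as sp

t, s = sp.symbols('t s')  # s stands in for the differentiation operator D = d/dt

# Operational form of y'' + 3y' + 2y = 0: a polynomial in the operator.
p = s**2 + 3*s + 2
modes = sp.roots(p, s)          # roots -1 and -2: each root r gives a mode exp(r*t)
print(modes)

# Cross-check against a direct symbolic solution of the same ODE.
y = sp.Function('y')
ode = sp.Eq(y(t).diff(t, 2) + 3*y(t).diff(t) + 2*y(t), 0)
print(sp.dsolve(ode, y(t)))     # a combination of exp(-t) and exp(-2*t)
```

The algebraic step (factoring p) replaces the analytic step (integrating the ODE), which is the core of the operational method.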
https://en.wikipedia.org/wiki/Jack%20Drummond
Sir Jack Cecil Drummond FRIC, FRS (12 January 1891 – 4/5 August 1952), known as a child as Jack Cecil Spinks, was a biochemist, noted for his work on nutrition as applied to the British diet under rationing during the Second World War. He was murdered, together with his wife and 10-year-old daughter, in what became known as the Dominici affair, on the night of 4–5 August 1952 near Lurs, a village or commune in the Basses-Alpes department (now Alpes-de-Haute-Provence) of Southern France. Early life and family background Jack Drummond was born either in Leicester, or London, likely Newington or Kennington. He was the son of Colonel John Drummond of the Royal Horse Artillery and Nora Gertrude McQuie, who had resided at 65 Howard Road, Clarendon Park, Leicester. John Drummond died at age 55, only three months after Jack's birth. Jack was adopted and raised by a paternal aunt, Maria Spinks, who lived in nearby Charlton. Maria's husband, George, was a retired captain quartermaster, who had seen action in the Crimea. According to author/biographer James Fergusson, life could not have been much fun for the solitary boy in the elderly couple's home. He attended The John Roan School in Greenwich and King's College School. Drummond's family origins remain unclear. No birth certificate exists for him in the Family Records Office. His father John, the major, describes himself as a bachelor in his will, which makes no mention of a son. In the 1891 census, Jack's name was given as "Cecil", his mother's as "Gertrude Drummond", and her age as 29. It is not known what happened to Gertrude (presumably Nora Gertrude McQuie) or whether she was ever married to John. In the 1901 census, his name is recorded as Jack Cecil Spinks, taking his adoptive mother's surname. It is likely that as a boy Jack used the surname Spinks to avoid social embarrassment to his adoptive parents, but reverted to the surname Drummond sometime during his teens. On 17 July 1915, Drummond married Mable Helen S
https://en.wikipedia.org/wiki/Fast%20Analog%20Computing%20with%20Emergent%20Transient%20States
Fast Analog Computing with Emergent Transient States or FACETS is a European project to research the properties of the human brain. Established and funded by the European Union in September 2005, the five-year project involves approximately 80 scientists from Austria, France, Germany, Hungary, Sweden, Switzerland and the United Kingdom. The main project goal is to address questions about how the brain computes. Another objective is to create microchip hardware emulating approximately 200,000 neurons with 50 million synapses on a single silicon wafer. Current prototypes run 100,000 times faster than their biological counterparts, which would make them the fastest analog computing devices ever built for neuronal computations. The institutions involved are the University of Heidelberg, the French National Centre for Scientific Research (CNRS) of Gif-sur-Yvette, the CNRS of Marseille, the Institut national de recherche en informatique et en automatique, the University of Freiburg, the University of Graz, the École Polytechnique Fédérale de Lausanne, the Swedish Royal Institute of Technology, the University of London, the University of Plymouth, the University of Bordeaux, the University of Debrecen, the University of Dresden and the Institute for Theoretical Computer Science at Technische Universität Graz. External links FACETS website a quick introduction Computational neuroscience Neurophysiology
https://en.wikipedia.org/wiki/Frontiers%20%281989%20TV%20series%29
Frontiers is an eight-part BBC television series, and accompanying book, that explored the geographic boundaries between countries. Eight writers and journalists in a variety of countries investigated the economic, political, geographical and historical reasons that account for why people are divided. The series was aired in 1989, just a few months before the fall of the Berlin Wall, which was featured in one episode. Episodes "Natural Break": Frederic Raphael explored the Pyrenees, the frontier between France and Spain, which at the time was preparing to join the (then) European Economic Community. "Gone Tomorrow": John Wells covered the Iron Curtain that split East and West Germans. "Gold and the Gun": Nadine Gordimer visited the war-torn border area between Mozambique and her native South Africa. "Night and Day": Richard Rodriguez showed how the rich North and poor South converged at the US/Mexican border. "Long Division": Ronald Eyre looked at the people living on both sides of the border in Ireland that splits the Republic from Ulster. "Big Brother's Bargain": Nigel Hamilton hiked up the boundary between Russia and Finland. "Border Run": Jon Swain visited the Thai/Cambodian border where thousands of Cambodian refugees had been stranded for over ten years. "Cyprus: Stranded in Time": Christopher Hitchens investigated the divided island of Cyprus. Further reading Frontiers, published in 1990 by BBC Books, External links 1989 British television series debuts 1989 British television series endings 1980s British documentary television series BBC television documentaries Borders English-language television shows
https://en.wikipedia.org/wiki/Mycovirus
Mycoviruses (Ancient Greek: μύκης ("fungus") + Latin virus), also known as mycophages, are viruses that infect fungi. The majority of mycoviruses have double-stranded RNA (dsRNA) genomes and isometric particles, but approximately 30% have positive-sense, single-stranded RNA (+ssRNA) genomes. True mycoviruses demonstrate an ability to be transmitted to infect other healthy fungi. Many double-stranded RNA elements that have been described in fungi do not fit this description, and in these cases they are referred to as virus-like particles or VLPs. Preliminary results indicate that most mycoviruses co-diverge with their hosts, i.e. their phylogeny is largely congruent with that of their primary hosts. However, many virus families containing mycoviruses have only been sparsely sampled. Mycovirology is the study of mycoviruses. It is a special subdivision of virology and seeks to understand and describe the taxonomy, host range, origin and evolution, transmission and movement of mycoviruses, and their impact on host phenotype. History The first record of an economic impact of mycoviruses on fungi was in cultivated mushrooms (Agaricus bisporus) in the late 1940s; the condition was called La France disease. Hollings found more than three different types of viruses in the abnormal sporophores. This report essentially marks the beginning of mycovirology. La France disease is also known as X disease, watery stripe, dieback and brown disease. Symptoms include reduced yield; slow and aberrant mycelial growth; waterlogging of tissue; malformation; premature maturation; and increased post-harvest deterioration (reduced shelf life). Mushrooms have shown no resistance to the virus, so control has been limited to hygienic practices to stop its spread. Perhaps the best known mycovirus is Cryphonectria parasitica hypovirus 1 (CHV1). CHV1 is exceptional within mycoviral research for its success as a biocontrol agent against the fungus C. parasitica, the causative ag
https://en.wikipedia.org/wiki/Rice%20flour
Rice flour (also rice powder) is a form of flour made from finely milled rice. It is distinct from rice starch, which is usually produced by steeping rice in lye. Rice flour is a common substitute for wheat flour. It is also used as a thickening agent in recipes that are refrigerated or frozen since it inhibits liquid separation. Rice flour may be made from either white rice or brown rice. To make the flour, the husk of rice or paddy is removed and raw rice is obtained, which is then ground to flour. Types and names By rice Rice flour can be made from indica, japonica, and wild rice varieties. Usually, rice flour refers to flour made from non-glutinous white rice. When made with glutinous rice (or sweet rice), it is called glutinous rice flour or sweet rice flour (Japanese: 白玉粉, shiratamako). In Japan, the glutinous rice flour produced from ground cooked glutinous rice, used to make mochi, is called mochigomeko (or mochiko for short). In contrast to glutinous rice flour, non-glutinous rice flour (Japanese: 上新粉, jōshinko) can be specified as such. When made with brown rice with only the inedible outer hull removed, it is called brown rice flour. Flours made from black, red, and green rice are called black rice flour, red rice flour, and green rice flour, respectively. In contrast to brown rice flour, white rice flour can be specified as such. By milling methods Different milling methods also produce different types of rice flour. Rice flour can be dry-milled from dry rice grains, or wet-milled from rice grains that were soaked in water prior to milling. Usually, "rice flour" refers to dry-milled rice flour, which can be stored on a shelf. In Korea, wet-milled rice flour is made from rice that was soaked in water, drained, ground using a stone mill, and then optionally sifted. Like moderately moist sand, wet-milled rice flour forms an easily breakable lump when squeezed by hand. It is usu
https://en.wikipedia.org/wiki/Equiareal%20map
In differential geometry, an equiareal map, sometimes called an authalic map, is a smooth map from one surface to another that preserves the areas of figures. Properties If M and N are two Riemannian (or pseudo-Riemannian) surfaces, then an equiareal map f from M to N can be characterized by any of the following equivalent conditions: The surface area of f(U) is equal to the area of U for every open set U on M. The pullback of the area element μN on N is equal to μM, the area element on M. At each point p of M and all tangent vectors v and w to M at p, |df(v) ∧ df(w)| = |v ∧ w|, where ∧ denotes the Euclidean wedge product of vectors and df denotes the pushforward along f. Example An example of an equiareal map, due to Archimedes of Syracuse, is the projection from the unit sphere to the unit cylinder outward from their common axis. An explicit formula is f(x, y, z) = (x/√(x² + y²), y/√(x² + y²), z) for (x, y, z) a point on the unit sphere. Linear transformations Every Euclidean isometry of the Euclidean plane is equiareal, but the converse is not true. In fact, shear mapping and squeeze mapping are counterexamples to the converse. Shear mapping takes a rectangle to a parallelogram of the same area. Written in matrix form, a shear mapping along the x-axis sends (x, y) to (x + vy, y), with matrix rows (1, v) and (0, 1). Squeeze mapping lengthens and contracts the sides of a rectangle in a reciprocal manner so that the area is preserved. Written in matrix form, with λ > 1 the squeeze sends (x, y) to (λx, y/λ), with matrix rows (λ, 0) and (0, 1/λ). A linear transformation multiplies areas by the absolute value of its determinant. Gaussian elimination shows that every equiareal linear transformation (rotations included) can be obtained by composing at most two shears along the axes, a squeeze and (if the determinant is negative) a reflection. In map projections In the context of geographic maps, a map projection is called equal-area, equivalent, authalic, equiareal, or area-preserving, if areas are preserved up to a constant factor; embedding the target map, usually considered a subset of R2, in the obvious way in R3, the requirement ab
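The Archimedes projection mentioned above can be checked symbolically. A hedged sketch (sympy assumed; the spherical parametrization and the helper area_element are illustrative choices, not from the article): both parametrized surfaces turn out to have the same area element, which is exactly the equiareal condition.

```python
import sympy as sp

theta, phi = sp.symbols('theta phi', real=True)

# Unit sphere, and Archimedes' axial projection of it onto the unit cylinder:
# same azimuth phi, same height z = cos(theta).
sphere   = sp.Matrix([sp.sin(theta)*sp.cos(phi), sp.sin(theta)*sp.sin(phi), sp.cos(theta)])
cylinder = sp.Matrix([sp.cos(phi), sp.sin(phi), sp.cos(theta)])

def area_element(surface):
    """Magnitude of d(surface)/dtheta x d(surface)/dphi: the area scaling factor."""
    n = surface.diff(theta).cross(surface.diff(phi))
    return sp.simplify(sp.sqrt(n.dot(n)))

# Equal area elements mean corresponding regions have equal area.
print(area_element(sphere), area_element(cylinder))
```

Since the two area elements agree identically, any region on the sphere and its cylindrical image have the same area, recovering Archimedes' result.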
https://en.wikipedia.org/wiki/Guild%20hosting%20service
A guild hosting or clan hosting service is a specialized type of web hosting service designed to support online gaming communities, generally referred to as guilds or clans. They differ from game server hosting in that the focus of such companies is to provide applications and communication tools outside the gaming environments themselves. Guild hosting services address a guild's basic need to have an online presence and allow guild members to communicate with each other outside of the game. While it is possible for any guild to do this on their own, setting up and maintaining a site requires constant maintenance, upgrades and integration of new software. One of the key reasons for the popularity of guild hosting services is their focus on relieving the guild from this overhead and freeing them up to spend more time playing the game. Typical features The services typically offered by such a service include: Public and/or private forums for members to communicate among themselves, or other communication tools such as instant messaging or chat servers. Tools for tracking the roster of characters that a player might have in an MMORPG. An application for scheduling and organizing raids, tournaments and other gaming events. Applications for tracking treasure, items, or points accrued toward redeeming treasure (often referred to as a DKP system). History Originally, most people who decided to create a website for their guild used bulletin board software such as vBulletin and phpBB on traditional web hosting services. However, as the complexity of online games increased, many guilds sought more advanced management features and turned to specialized services to accommodate their needs. Nevertheless, there is still a considerable base of users who employ the older method, as it can be cheaper and allows them the flexibility to be creative in their efforts and, in some cases, to transfer their guild sites between hosting services. Many of the
https://en.wikipedia.org/wiki/List%20of%20computer%20system%20manufacturers
A computer system is a nominally complete computer that includes the hardware, operating system (main software), and the means to use peripheral equipment needed and used for full or mostly full operation. Such systems may constitute personal computers (including desktop computers, portable computers, laptops, all-in-ones, and more), mainframe computers, minicomputers, servers, and workstations, among other classes of computing. The following is a list of notable manufacturers and sellers of computer systems, both present and past. Current Inactive See also Market share of personal computer vendors List of computer hardware manufacturers List of laptop brands and manufacturers List of touch-solution manufacturers Notes
https://en.wikipedia.org/wiki/Flatness%20%28systems%20theory%29
Flatness in systems theory is a system property that extends the notion of controllability from linear systems to nonlinear dynamical systems. A system that has the flatness property is called a flat system. Flat systems have a (fictitious) flat output, which can be used to explicitly express all states and inputs in terms of the flat output and a finite number of its derivatives. Definition A nonlinear system ẋ = f(x, u) is flat if there exists an output y that satisfies the following conditions: The signals y are representable as functions of the states x and inputs u and a finite number of their derivatives with respect to time t: y = h(x, u, u̇, …, u^(α)). The states x and inputs u are representable as functions of the output y and of a finite number of its derivatives with respect to time: x = φ(y, ẏ, …, y^(β)), u = ψ(y, ẏ, …, y^(β)). The components of y are differentially independent, that is, they satisfy no differential equation of the form χ(y, ẏ, …, y^(γ)) = 0. If these conditions are satisfied at least locally, then the (possibly fictitious) output y is called a flat output, and the system is flat. Relation to controllability of linear systems A linear system ẋ = Ax + Bu with the same signal dimensions for x, u, y as the nonlinear system is flat if and only if it is controllable. For linear systems both properties are equivalent, hence interchangeable. Significance The flatness property is useful for both the analysis of and controller synthesis for nonlinear dynamical systems. It is particularly advantageous for solving trajectory planning problems and asymptotic setpoint following control. Literature M. Fliess, J. L. Lévine, P. Martin and P. Rouchon: Flatness and defect of non-linear systems: introductory theory and examples. International Journal of Control 61(6), pp. 1327–1361, 1995 A. Isidori, C. H. Moog and A. De Luca: A Sufficient Condition for Full Linearization via Dynamic State Feedback. 25th CDC IEEE, Athens, Greece, pp. 203–208, 1986 See also Control theory Control engineering Controller (control theory) Flat pseudospectral method Control theory
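A minimal worked example of the definition (mine, not from the article; sympy assumed): for the double integrator with states x1, x2 and input u, where x1' = x2 and x2' = u, the position x1 is a flat output, and an input trajectory follows from the flat output by pure differentiation, with no integration of the dynamics.

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')(t)       # candidate flat output: the position x1

# Double integrator: x1' = x2, x2' = u.
# All states and the input are expressed from y and its derivatives alone:
x1 = y
x2 = sp.diff(y, t)
u  = sp.diff(y, t, 2)

# The system equations hold identically, so y is indeed a flat output.
assert sp.simplify(sp.diff(x1, t) - x2) == 0
assert sp.simplify(sp.diff(x2, t) - u) == 0

# Trajectory planning: pick any smooth y(t) meeting the boundary data,
# e.g. a rest-to-rest transfer y(0)=0, y(1)=1 with zero end velocities.
y_traj = 3*t**2 - 2*t**3
u_traj = sp.diff(y_traj, t, 2)   # required input recovered by differentiation
print(u_traj)                    # 6 - 12*t
```

This is the practical content of flatness for trajectory planning: the designer chooses the flat output trajectory, and states and inputs fall out algebraically.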
https://en.wikipedia.org/wiki/Computational%20auditory%20scene%20analysis
Computational auditory scene analysis (CASA) is the study of auditory scene analysis by computational means. In essence, CASA systems are "machine listening" systems that aim to separate mixtures of sound sources in the same way that human listeners do. CASA differs from the field of blind signal separation in that it is (at least to some extent) based on the mechanisms of the human auditory system, and thus uses no more than two microphone recordings of an acoustic environment. It is related to the cocktail party problem. Principles Since CASA serves to model functional parts of the auditory system, it is necessary to view parts of the biological auditory system in terms of known physical models. Consisting of three areas, the outer, middle and inner ear, the auditory periphery acts as a complex transducer that converts sound vibrations into action potentials in the auditory nerve. The outer ear consists of the external ear, ear canal and the ear drum. The outer ear, like an acoustic funnel, helps locate the sound source. The ear canal acts as a resonant tube (like an organ pipe) to amplify frequencies between 2–5.5 kHz with a maximum amplification of about 11 dB occurring around 4 kHz. As the organ of hearing, the cochlea contains two membranes, Reissner's membrane and the basilar membrane. The basilar membrane responds to an audio stimulus where the stimulus frequency matches the resonant frequency of a particular region of the membrane. The movement of the basilar membrane displaces the inner hair cells in one direction, which encodes a half-wave rectified signal of action potentials in the spiral ganglion cells. The axons of these cells make up the auditory nerve, encoding the rectified stimulus. The auditory nerve responses are frequency-selective, similar to the basilar membrane. For lower frequencies, the fibers exhibit "phase locking". Neurons in higher auditory pathway centers are tuned to specific stimulus features, such as periodicity, soun
https://en.wikipedia.org/wiki/Selenography
Selenography is the study of the surface and physical features of the Moon (also known as geography of the Moon, or selenodesy). Like geography and areography, selenography is a subdiscipline within the field of planetary science. Historically, the principal concern of selenographists was the mapping and naming of the lunar terrain: identifying maria, craters, mountain ranges, and other various features. This task was largely finished when high-resolution images of the near and far sides of the Moon were obtained by orbiting spacecraft during the early space era. Nevertheless, some regions of the Moon remain poorly imaged (especially near the poles) and the exact locations of many features (like crater depths) are uncertain by several kilometers. Today, selenography is considered to be a subdiscipline of selenology, which itself is most often referred to as simply "lunar science." The word selenography is derived from the Greek word Σελήνη (Selene, meaning Moon) and γράφω (graphō, meaning to write). History The idea that the Moon is not perfectly smooth dates back to at least , when Democritus asserted that the Moon's "lofty mountains and hollow valleys" were the cause of its markings. However, not until the end of the 15th century AD did serious study of selenography begin. Around AD 1603, William Gilbert made the first lunar drawing based on naked-eye observation. Others soon followed, and when the telescope was invented, initial drawings of poor accuracy were made, but soon thereafter improved in tandem with optics. In the early 18th century, the librations of the Moon were measured, which revealed that more than half of the lunar surface was visible to observers on Earth. In 1750, Johann Meyer produced the first reliable set of lunar coordinates that permitted astronomers to locate lunar features. Lunar mapping became systematic in 1779 when Johann Schröter began meticulous observation and measurement of lunar topography. In 1834 Johann Heinrich von Mädler pub
https://en.wikipedia.org/wiki/Gravitation%20of%20the%20Moon
The acceleration due to gravity on the surface of the Moon is approximately 1.625 m/s2, about 16.6% that on Earth's surface, or 0.166 g. Over the entire surface, the variation in gravitational acceleration is about 0.0253 m/s2 (1.6% of the acceleration due to gravity). Because weight is directly dependent upon gravitational acceleration, things on the Moon will weigh only 16.6% (about 1/6) of what they weigh on the Earth. Gravitational field The gravitational field of the Moon has been measured by tracking the radio signals emitted by orbiting spacecraft. The principle used depends on the Doppler effect, whereby the line-of-sight spacecraft acceleration can be measured by small shifts in frequency of the radio signal, and on the measurement of the distance from the spacecraft to a station on Earth. Since the gravitational field of the Moon affects the orbit of a spacecraft, one can use this tracking data to detect gravity anomalies. Most low lunar orbits are unstable. Detailed data collected have shown that for low lunar orbit the only "stable" orbits are at inclinations near 27°, 50°, 76°, and 86°. Because of the Moon's synchronous rotation it is not possible to track spacecraft from Earth much beyond the limbs of the Moon, so until the recent Gravity Recovery and Interior Laboratory (GRAIL) mission the far-side gravity field was not well mapped. The missions with accurate Doppler tracking that have been used for deriving gravity fields are listed in the accompanying table. The table gives the mission spacecraft name, a brief designation, the number of mission spacecraft with accurate tracking, the country of origin, and the time span of the Doppler data. Apollos 15 and 16 released subsatellites. The Kaguya/SELENE mission had tracking between 3 satellites to get far-side tracking. GRAIL had very accurate tracking between 2 spacecraft and tracking from Earth. The accompanying table below lists lunar gravity fields. The table lists the designation of the gravity field, the highe
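The 16.6% figure above is simply the ratio of the two surface accelerations. A tiny sketch of the arithmetic (the Earth value 9.81 m/s2 and the 70 kg mass are assumptions for illustration, not from the article):

```python
g_moon  = 1.625   # m/s^2, lunar surface gravity quoted in the text
g_earth = 9.81    # m/s^2, standard Earth surface gravity (assumed here)

ratio = g_moon / g_earth
print(f"g_moon/g_earth = {ratio:.3f}")      # ~0.166, i.e. about 1/6

# Weight W = m*g, so the same ratio applies to any mass.
mass = 70.0  # kg, a hypothetical astronaut
print(f"Earth weight: {mass * g_earth:.0f} N, Moon weight: {mass * g_moon:.0f} N")
```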
https://en.wikipedia.org/wiki/Wheedle
The Wheedle is the title character of a popular children's book by author Stephen Cosgrove. The character eventually evolved into a popular mascot generally associated with the city of Seattle. Children's book character Original story Wheedle on the Needle (Serendipity Books, 1974), written by Stephen Cosgrove and illustrated by Robin James, is about a large, round, furry creature called the Wheedle who lived in the Northwest. Bothered by the whistling of workers first settling the city of Seattle, the creature was unable to sleep and became irritable, eventually moving to Mount Rainier to escape the noise. The Wheedle slept there peacefully for many years, his red nose blinking, until the region's growth brought people – and their whistling – to his doorstep once again. In an effort to silence the noise, the Wheedle gathered clouds in a large sack atop Mt. Rainier, returned to Seattle, climbed atop the Space Needle, and threw them into the sky to make it rain. With their lips wet from precipitation, the city's residents were unable to whistle, and the creature once again had some peace and quiet. Upset, the people sent the mayor to try to convince the Wheedle to stop the rain; when the creature explained his problem, the mayor had a giant pair of earmuffs constructed to muffle the disagreeable warbling. When they were presented to him, "The Wheedle placed them over his ears, and smiled for the first time in years." In appreciation, the Wheedle gathered up all the clouds, put them back in his bag, and fell fast asleep – and once again, his big red nose began to blink. The book ends with a short poem: There's a Wheedle/On the Needle/I know just what/You're thinking/But if you look up/Late at night/You'll see/His red nose blinking. Later editions In 2002 a second edition of the book was published. The story was significantly rewritten, generally matching the existing illustrations, but eliminating environmental themes present in the original story and altering it
https://en.wikipedia.org/wiki/Dr.%20Oetker
Dr. Oetker is a German multinational company that produces baking powder, cake mixes, frozen pizza, pudding, cake decoration, cornflakes, birthday candles, and various other products. The company is a wholly owned branch of the Oetker Group, headquartered in Bielefeld, Germany. Portfolio The portfolio includes more than 300 individual companies in five different businesses, among them food (including Dr. Oetker GmbH and Coppenrath & Wiese KG), breweries (Radeberger Group), sparkling wine and spirits (Henkell & Co. Sektkellerei), banking (Bankhaus Lampe), and "further interests" (among them chemicals, financing and participation, and a number of high-class hotels all over Europe). History Formation The company was founded by August Oetker in 1891. The first product developed was Backin, a measured amount of baking powder that, when mixed with flour and other ingredients, produced a cake. First World War Oetker's son Rudolf and his wife Ida had two children, Rudolf-August and Ursula; however, the senior Rudolf was later killed in the First World War. His widow Ida remarried Richard Kaselowsky, and they had four more children, with Kaselowsky raising Rudolf-August and Ursula as his own. Kaselowsky was the manager of the company from 1920 until his death. Second World War During the 1930s and 1940s, Rudolf-August Oetker was an active member of the Waffen-SS of the Third Reich. The company supported the war effort by providing pudding mixes and munitions to German troops. The business used slave labour in some of its facilities. A bronze bust of Richard Kaselowsky still sits within the company headquarters in Bielefeld. Kaselowsky was killed during an air raid on Bielefeld in 1944. International expansion Rudolf August Oetker, the grandson of August Oetker, led the company between 1944 and 1981, when it achieved its highest growth. The Oetker family's private bank also employed as a director Rudolf von Ribbentrop (1921–2019), son of Joachim von Ribbentrop
https://en.wikipedia.org/wiki/POP%20before%20SMTP
POP before SMTP or SMTP after POP is a method of authentication used by mail server software that allows users to send e-mail from any location, as long as they can demonstrably also fetch their mail from the same place. The POP before SMTP approach has been superseded by SMTP Authentication. Technically, users are allowed to use SMTP from an IP address as long as they have previously made a successful login into the POP service at the same mail hosting provider, from the same address, within a predefined timeout period. The main advantage of this process is that it is generally transparent to the average user connecting with an email client, which almost always attempts to fetch new mail before sending new mail. The disadvantages include a potentially complex setup for the mail hosting provider (requiring some sort of communication channel between the POP service and the SMTP service) and uncertainty as to how much time users will take to connect via SMTP (to send mail) after connecting to POP. Users not handled by this method need to resort to other authorization methods. Also, in cases where users come from externally controlled, dynamically assigned addresses, the SMTP server must be careful not to give too much leeway when allowing unauthorized connections, because of the possibility of race conditions leaving an open mail relay unintentionally exposed. See also Simple Mail Transfer Protocol SMTP AUTH, specified in Mail submission protocol, specified in Email authentication Computer access control protocols
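A minimal sketch of the bookkeeping described above (the class name, method names, and the 15-minute timeout are illustrative assumptions, not a real mail server API): the POP service records successful logins by IP address, and the SMTP service consults that record before permitting relay from the same address.

```python
import time

TIMEOUT = 15 * 60  # seconds an IP stays authorized after a POP login (assumed value)

class PopBeforeSmtp:
    """Shared state between a POP service and an SMTP service (sketch only)."""

    def __init__(self, timeout=TIMEOUT):
        self.timeout = timeout
        self._last_pop_login = {}  # ip -> time of last successful POP login

    def record_pop_login(self, ip):
        """Called by the POP service on each successful authentication."""
        self._last_pop_login[ip] = time.monotonic()

    def may_relay(self, ip):
        """Called by the SMTP service before accepting relay from ip."""
        seen = self._last_pop_login.get(ip)
        return seen is not None and time.monotonic() - seen < self.timeout

auth = PopBeforeSmtp()
print(auth.may_relay("203.0.113.7"))   # False: no prior POP login from this IP
auth.record_pop_login("203.0.113.7")
print(auth.may_relay("203.0.113.7"))   # True, until the timeout lapses
```

A real deployment would persist this table in a database or drop files readable by both daemons; the expiry check is what prevents a once-authorized dynamic IP from remaining an open relay indefinitely.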
https://en.wikipedia.org/wiki/Work-at-home%20scheme
A work-at-home scheme is a get-rich-quick scam in which a victim is lured by an offer to be employed at home, very often doing some simple task in a minimal amount of time with a large amount of income that far exceeds the market rate for the type of work. The true purpose of such an offer is for the perpetrator to extort money from the victim, either by charging a fee to join the scheme, or requiring the victim to invest in products whose resale value is misrepresented. Overview Remote work schemes have been recorded since the early 20th century; the earliest studied "envelope stuffing" scam originated in the United States during the Great Depression in the 1920s and 1930s. In this scam, the worker is offered entry to a scheme where they can earn $2 for every envelope they fill. After paying a small $2 fee to join the scheme, the victim is sent a flyer template for the self-same work-from-home scheme, and instructed to post these advertisements around their local area – the victim is simply "stuffing envelopes" with flyer templates that perpetuate the scheme. Originally found as printed adverts in newspapers and magazines, variants of this scam have expanded into more modern media, such as television and radio adverts, and forum posts on the Internet. In some countries, law enforcement agencies work to fight work-at-home schemes. In 2006, the United States Federal Trade Commission (FTC) established Project False Hopes, a federal and state law enforcement sweep that targets bogus business opportunities and work-at-home scams. The crackdown involved more than 100 law enforcement actions by the FTC, the Department of Justice, the United States Postal Inspection Service, and law enforcement agencies in eleven states. Home-based business and remote work are a legitimate avenue for employment, but anyone seeking such an employment opportunity can be scammed by accepting home employment offers from individuals or unknown companies. A 2007 report in the United States su
https://en.wikipedia.org/wiki/European%20Spallation%20Source
The European Spallation Source ERIC (ESS) is a multi-disciplinary research facility that, when completed, will be the world's most powerful pulsed neutron source. The ESS is currently under construction in Lund, Sweden and its Data Management and Software Centre (DMSC) is located in Copenhagen, Denmark. The 13 European member countries are partners in the construction and operation of ESS. ESS will begin its scientific user program in 2023, and its construction phase is scheduled for completion by 2025. ESS will enable scientists to observe and understand basic atomic structures and forces at lengths and time scales unachievable from other neutron sources. The research facility is located close to the Max IV Laboratory, which conducts synchrotron radiation research. The colocation of powerful neutron and synchrotron facilities (other examples are the Institut Laue–Langevin with the European Synchrotron Radiation Facility, and the ISIS Neutron and Muon Source with the Diamond Light Source) is efficient because much of the knowledge, technical infrastructure, and scientific methods associated with the technologies are similar. The construction of the facility began in the summer of 2014 and the first science results are planned for 2023. During the construction phase, scientists and engineers from more than 100 partner laboratories, universities, and research institutes are collaborating to optimise the technical design of the ESS facility and maximise its research potential, with contributions of human resources, knowledge, and equipment. ESS will use nuclear spallation, a process in which neutrons are liberated from heavy elements by high energy protons. This is intrinsically a much safer process than uranium fission. Unlike existing facilities, the ESS is neither a "short pulse" (microseconds) spallation source, nor a continuous source like the SINQ facility in Switzerland, but the first example of a "long pulse" source (milliseconds). The facility consi
https://en.wikipedia.org/wiki/Declared%20Rare%20and%20Priority%20Flora%20List
The Declared Rare and Priority Flora List is the system by which Western Australia's conservation flora are given a priority. Developed by the Government of Western Australia's Department of Environment and Conservation, it was used extensively within the department, including the Western Australian Herbarium. The herbarium's journal, Nuytsia, which has published over a quarter of the state's conservation taxa, requires a conservation status to be included in all publications of new Western Australian taxa that appear to be rare or endangered. The system defines six levels of priority taxa: X: Threatened (Declared Rare Flora) – Presumed Extinct Taxa These are taxa that are thought to be extinct, either because they have not been collected for over 50 years despite thorough searching, or because all known wild populations have been destroyed. They have been declared as such in accordance with the Wildlife Conservation Act 1950, and are therefore afforded legislative protection under that act. T: Threatened (Declared Rare Flora) – Extant Taxa These are taxa that have been thoroughly surveyed, and determined to be rare, in danger of extinction, or otherwise in need of special protection. They have been declared rare in accordance with the Wildlife Conservation Act 1950, and are therefore afforded legislative protection under that act. The code for this category was previously 'R'. P1: Priority One – Poorly Known Taxa These are taxa that are known from only a few (generally fewer than five) populations, all of which are under immediate threat. They are candidates for declaration as rare flora, but are in need of further survey. P2: Priority Two – Poorly Known Taxa These are taxa that are known from only a few (generally fewer than five) populations, some of which are not thought to be under immediate threat. They are candidates for declaration as rare flora, but are in need of further survey. P3: Priority Three – Poorly Known Taxa These are taxa that are known from severa
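For illustration only (this is my own sketch, not an official product of the department), the conservation codes described above can be collected into a small lookup table; the excerpt is truncated, so only the categories visible above are included:

```python
# Sketch: the Declared Rare and Priority Flora codes as a simple mapping.
# Descriptions are paraphrased; only codes visible in this excerpt appear.
FLORA_CODES = {
    "X": "Threatened (Declared Rare Flora) - Presumed Extinct Taxa",
    "T": "Threatened (Declared Rare Flora) - Extant Taxa (formerly 'R')",
    "P1": "Priority One - Poorly Known Taxa",
    "P2": "Priority Two - Poorly Known Taxa",
    "P3": "Priority Three - Poorly Known Taxa",
}

def describe(code):
    """Look up a conservation code, case-insensitively."""
    return FLORA_CODES[code.upper()]
```

A lookup like `describe("t")` returns the Threatened (extant) description regardless of input case.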
https://en.wikipedia.org/wiki/Exonic%20splicing%20enhancer
In molecular biology, an exonic splicing enhancer (ESE) is a DNA sequence motif consisting of 6 bases within an exon that directs, or enhances, accurate splicing of heterogeneous nuclear RNA (hnRNA) or pre-mRNA into messenger RNA (mRNA). Introduction Short sequences of DNA are transcribed to RNA; then this RNA is translated to a protein. A gene located in DNA will contain introns and exons. Part of the process of preparing the RNA includes splicing out the introns, sections of RNA that do not code for the protein. The presence of exonic splicing enhancers is essential for proper identification of splice sites by the cellular machinery. Role in splicing SR proteins bind to and promote exon splicing in regions with ESEs, while heterogeneous ribonucleoprotein particles (hnRNPs) bind to and block exon splicing in regions with exonic splicing silencers. Both types of proteins are involved in the assembly and proper functioning of spliceosomes. During RNA splicing, U2 small nuclear RNA auxiliary factor 1 (U2AF35) and U2AF2 (U2AF65) interact with the branch site and the 3' splice site of the intron to form the lariat. It is thought that SR proteins that bind to ESEs promote exon splicing by increasing interactions with U2AF35 and U2AF65. Mutation of exonic splicing enhancer motifs is a significant contributor to genetic disorders and some cancers. Simple point mutations in ESEs can inhibit affinity for splicing factors and alter alternative splicing, leading to altered mRNA sequence and protein translation. A field of genetic research is dedicated to determining the location and significance of ESE motifs in vivo. Research Computational methods were used to identify 238 candidate ESEs. ESEs are clinically significant because synonymous point mutations previously thought to be silent mutations located in an ESE can lead to exon skipping and the production of a non-functioning protein. Disruption of an exon splicing enhancer in exon 3 of the MLH1 gene is the caus
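The idea of scanning an exon for candidate 6-base ESE motifs can be sketched as follows (a minimal illustration; the motif set here is hypothetical, while real candidate hexamers come from computational screens like the one that identified 238 candidates):

```python
# Sketch: scan a DNA sequence for candidate 6-base ESE motifs.
# The motif list is hypothetical -- it does not reproduce any published screen.
def find_ese_candidates(sequence, motifs):
    """Return (position, hexamer) pairs where a candidate motif occurs."""
    sequence = sequence.upper()
    hits = []
    for i in range(len(sequence) - 5):
        hexamer = sequence[i:i + 6]
        if hexamer in motifs:
            hits.append((i, hexamer))
    return hits

candidate_motifs = {"GAAGAA", "AAGAAG"}  # hypothetical example hexamers
print(find_ese_candidates("ttgaagaagct", candidate_motifs))
# -> [(2, 'GAAGAA'), (3, 'AAGAAG')]
```

A single point mutation inside a matched hexamer removes the hit, which mirrors how a "silent" synonymous mutation can still disrupt an ESE.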
https://en.wikipedia.org/wiki/WestGrid
WestGrid is a government-funded infrastructure program started in 2003, mainly in Western Canada, that provides institutional research faculty and students access to high performance computing and distributed data storage, using a combination of grid, networking, and collaboration tools. WestGrid is one of four partners within the umbrella organization, Compute Canada. Principal participants WestGrid has 14 partner institutions across four provinces - British Columbia, Alberta, Saskatchewan and Manitoba. The participating institutions include: Simon Fraser University University of British Columbia University of Victoria University of Northern British Columbia The Banff Centre University of Alberta University of Calgary University of Lethbridge Athabasca University University of Saskatchewan University of Regina University of Manitoba University of Winnipeg Brandon University WestGrid also works in partnership with each province's Optical Regional Advanced Network. WestGrid's network partners include: BCNET Cybera SRnet MRnet CANARIE
https://en.wikipedia.org/wiki/Evaporation%20%28deposition%29
Evaporation is a common method of thin-film deposition. The source material is evaporated in a vacuum. The vacuum allows vapor particles to travel directly to the target object (substrate), where they condense back to a solid state. Evaporation is used in microfabrication, and to make macro-scale products such as metallized plastic film. History Evaporation deposition was first observed in incandescent light bulbs during the late nineteenth century. The problem of bulb blackening was one of the main obstacles to making bulbs with long life, and received a great amount of study by Thomas Edison and his General Electric company, as well as many others working on their own lightbulbs. The phenomenon was first adapted to a process of vacuum deposition by Pohl and Pringsheim in 1912. However, it found little use until the 1930s, when people began experimenting with ways to make aluminum-coated mirrors for use in telescopes. Aluminum was far too reactive to be used in chemical wet deposition or electroplating methods. John D. Strong was successful in making the first aluminum telescope-mirrors in the 1930s using evaporation deposition. Because it produces an amorphous (glassy) coating rather than a crystalline one, with high uniformity and precise control of thickness, thereafter it became a common process for producing thin-film optical coatings from a variety of materials, both metal and non-metal (dielectric), and has been adopted for many other uses, such as coating plastic toys and automobile parts, the production of semiconductors and microchips, and Mylar films with uses ranging from capacitors to spacecraft thermal control. Physical principle Evaporation involves two basic processes: a hot source evaporates a material and it condenses on a colder substrate that is below its melting point. It resembles the familiar process by which liquid water appears on the lid of a boiling pot. However, the gaseous environment and heat source (see "Equipment" below) are
https://en.wikipedia.org/wiki/Bipolar%20neuron
A bipolar neuron, or bipolar cell, is a type of neuron that has two extensions (one axon and one dendrite). Many bipolar cells are specialized sensory neurons for the transmission of sensory information. As such, they are part of the sensory pathways for smell, sight, taste, hearing, touch, balance and proprioception. The other shape classifications of neurons include unipolar, pseudounipolar and multipolar. During embryonic development, pseudounipolar neurons begin as bipolar in shape but become pseudounipolar as they mature. Common examples are the retina bipolar cell, the ganglia of the vestibulocochlear nerve, the extensive use of bipolar cells to transmit efferent (motor) signals to control muscles, olfactory receptor neurons in the olfactory epithelium for smell (axons form the olfactory nerve), and neurons in the spiral ganglion for hearing (CN VIII). In the retina Often found in the retina, bipolar cells are crucial as they serve as both direct and indirect cell pathways. The specific location of the bipolar cells allows them to facilitate the passage of signals from where they start in the receptors to where they arrive at the amacrine and ganglion cells. Bipolar cells in the retina are also unusual in that they do not fire impulses like the other cells found within the nervous system. Rather, they pass the information by graded signal changes. Bipolar cells come in two varieties, having either an on-center or an off-center receptive field, each with a surround of the opposite sign. The off-center bipolar cells have excitatory synaptic connections with the photoreceptors, which fire continuously in the dark and are hyperpolarized (suppressed) by light. The excitatory synapses thus convey a suppressive signal to the off-center bipolar cells. On-center bipolar cells have inhibitory synapses with the photoreceptors and therefore are excited by light and suppressed in the dark. In the vestibular nerve Bipolar neurons exist within the vestibular nerve as it is responsibl
https://en.wikipedia.org/wiki/Category%20utility
Category utility is a measure of "category goodness" introduced by Gluck and Corter. It attempts to maximize both the probability that two objects in the same category have attribute values in common, and the probability that objects from different categories have different attribute values. It was intended to supersede more limited measures of category goodness such as "cue validity" and the "collocation index". It provides a normative information-theoretic measure of the predictive advantage gained by the observer who possesses knowledge of the given category structure (i.e., the class labels of instances) over the observer who does not possess knowledge of the category structure. In this sense the motivation for the category utility measure is similar to the information gain metric used in decision tree learning. In certain presentations, it is also formally equivalent to the mutual information, as discussed below. A review of category utility in its probabilistic incarnation, with applications to machine learning, has appeared in the machine learning literature. Probability-theoretic definition of category utility The probability-theoretic definition of category utility is as follows:

CU(C, F) = \frac{1}{p} \sum_{c_j \in C} p(c_j) \left[ \sum_{f_i \in F} \sum_{k=1}^{m} p(f_{ik} \mid c_j)^2 - \sum_{f_i \in F} \sum_{k=1}^{m} p(f_{ik})^2 \right]

where F = \{f_i\}, i = 1, \ldots, n, is a size-n set of m-ary features, and C = \{c_j\}, j = 1, \ldots, p, is a set of p categories. The term p(f_{ik}) designates the marginal probability that feature f_i takes on value k, and the term p(f_{ik} \mid c_j) designates the category-conditional probability that feature f_i takes on value k given that the object in question belongs to category c_j. The motivation and development of this expression for category utility, and the role of the multiplicand \frac{1}{p} as a crude overfitting control, is given in the original sources. Loosely speaking, the term \sum_{c_j} p(c_j) \sum_{f_i} \sum_{k} p(f_{ik} \mid c_j)^2 is the expected number of attribute values that can be correctly guessed by an observer using a probability-matching strategy together with knowledge of the category labels, while \sum_{f_i} \sum_{k} p(f_{ik})^2 is the expected number of attribute values that can be correctly guessed by an observer using the same strategy but without any knowledge of the category labels.
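As a minimal sketch of the definition above (the function name and data layout are my own), the probabilities can be estimated as empirical frequencies from a list of labelled instances with discrete features:

```python
# Sketch: category utility estimated from labelled instances.
# Each instance is a (category, feature_tuple) pair.
from collections import Counter

def category_utility(instances):
    n = len(instances)
    n_features = len(instances[0][1])
    cat_counts = Counter(cat for cat, _ in instances)
    p = len(cat_counts)  # number of categories

    # Marginal term: sum over features and values of p(f_ik)^2.
    marginal = 0.0
    for i in range(n_features):
        counts = Counter(feats[i] for _, feats in instances)
        marginal += sum((c / n) ** 2 for c in counts.values())

    cu = 0.0
    for cat, n_cat in cat_counts.items():
        # Conditional term: sum of p(f_ik | c_j)^2 within this category.
        conditional = 0.0
        for i in range(n_features):
            counts = Counter(feats[i] for label, feats in instances if label == cat)
            conditional += sum((c / n_cat) ** 2 for c in counts.values())
        cu += (n_cat / n) * (conditional - marginal)
    return cu / p
```

For a single binary feature perfectly predicted by two equally sized categories this gives 0.25; for a feature independent of the category labels it gives 0, reflecting zero predictive advantage.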
https://en.wikipedia.org/wiki/Sand%20table
A sand table uses constrained sand for modelling or educational purposes. The original version of a sand table may be the abax used by early Greek students. In the modern era, one common use for a sand table is to make terrain models for military planning and wargaming. Abax An abax was a table covered with sand commonly used by students, particularly in Greece, to perform studies such as writing, geometry, and calculations. An abax was the predecessor to the abacus. Objects, such as stones, were added for counting and then columns for place-valued arithmetic. The demarcation between an abax and an abacus seems to be poorly defined in history; moreover, modern definitions of the word abacus universally describe it as a frame with rods and beads and, in general, do not include the definition of "sand table". The sand table may well have been the predecessor to some board games. ("The word abax, or abacus, is used both for the reckoning-board with its counters and the play-board with its pieces, ..."). Abax is from the old Greek for "sand table". Ghubar An Arabic word for sand (or dust) is ghubar (or gubar), and Western numerals (the decimal digits 0–9) are derived from the style of digits written on ghubar tables in North-West Africa and Iberia, also described as the 'West Arabic' or 'gubar' style. Military use Sand tables have been used for military planning and wargaming for many years as a field expedient, small-scale map, and in training for military actions. In 1890 a Sand table room was built at the Royal Military College of Canada for use in teaching cadets military tactics; this replaced the old sand table room in a pre-college building, in which the weight of the sand had damaged the floor. The use of sand tables increasingly fell out of favour with improved maps, aerial and satellite photography, and later, with digital terrain simulations. More modern sand tables have incorporated Augmented Reality, such as the Augmented Reality Sandtable (ARES) deve
https://en.wikipedia.org/wiki/Cosine%20similarity
In data analysis, cosine similarity is a measure of similarity between two non-zero vectors defined in an inner product space. Cosine similarity is the cosine of the angle between the vectors; that is, it is the dot product of the vectors divided by the product of their lengths. It follows that the cosine similarity does not depend on the magnitudes of the vectors, but only on their angle. The cosine similarity always belongs to the interval [-1, 1]. For example, two proportional vectors have a cosine similarity of 1, two orthogonal vectors have a similarity of 0, and two opposite vectors have a similarity of -1. In some contexts, the component values of the vectors cannot be negative, in which case the cosine similarity is bounded in [0, 1]. For example, in information retrieval and text mining, each word is assigned a different coordinate and a document is represented by the vector of the numbers of occurrences of each word in the document. Cosine similarity then gives a useful measure of how similar two documents are likely to be, in terms of their subject matter, and independently of the length of the documents. The technique is also used to measure cohesion within clusters in the field of data mining. One advantage of cosine similarity is its low complexity, especially for sparse vectors: only the non-zero coordinates need to be considered. Other names for cosine similarity include Orchini similarity and Tucker coefficient of congruence; the Otsuka–Ochiai similarity (see below) is cosine similarity applied to binary data. Definition The cosine of two non-zero vectors can be derived by using the Euclidean dot product formula: A \cdot B = \|A\| \|B\| \cos\theta. Given two n-dimensional vectors of attributes, A and B, the cosine similarity, \cos\theta, is represented using a dot product and magnitude as

\cos\theta = \frac{A \cdot B}{\|A\| \|B\|} = \frac{\sum_{i=1}^{n} A_i B_i}{\sqrt{\sum_{i=1}^{n} A_i^2} \sqrt{\sum_{i=1}^{n} B_i^2}}

where A_i and B_i are the i-th components of vectors A and B, respectively. The resulting similarity ranges from -1 meaning exactly opposite, to 1 meaning exactly the same, with 0 indicating orthogonality or decorrelation, w
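The formula above transcribes directly into a few lines of code (a minimal sketch in plain Python):

```python
# Sketch: cosine similarity as dot product over the product of lengths.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two non-zero vectors a and b."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Proportional vectors give 1, orthogonal vectors 0, and opposite
# vectors -1, up to floating-point rounding.
```

For sparse vectors, iterating only over the non-zero coordinates of both inputs gives the low-complexity variant mentioned above.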
https://en.wikipedia.org/wiki/Relative%20permeability
In multiphase flow in porous media, the relative permeability of a phase is a dimensionless measure of the effective permeability of that phase. It is the ratio of the effective permeability of that phase to the absolute permeability. It can be viewed as an adaptation of Darcy's law to multiphase flow. For two-phase flow in porous media given steady-state conditions, we can write

q_i = -\frac{\kappa_i}{\mu_i} \nabla P_i

where q_i is the flux, \nabla P_i is the pressure drop, and \mu_i is the viscosity. The subscript i indicates that the parameters are for phase i. \kappa_i is here the phase permeability (i.e., the effective permeability of phase i), as observed through the equation above. Relative permeability, k_{ri}, for phase i is then defined from \kappa_i = k_{ri}\kappa, as

k_{ri} = \kappa_i / \kappa

where \kappa is the permeability of the porous medium in single-phase flow, i.e., the absolute permeability. Relative permeability must be between zero and one. In applications, relative permeability is often represented as a function of water saturation; however, owing to capillary hysteresis one often resorts to a function or curve measured under drainage and another measured under imbibition. Under this approach, the flow of each phase is inhibited by the presence of the other phases. Thus the sum of relative permeabilities over all phases is less than 1. However, apparent relative permeabilities larger than 1 have been obtained since the Darcean approach disregards the viscous coupling effects derived from momentum transfer between the phases (see assumptions below). This coupling could enhance the flow instead of inhibit it. This has been observed in heavy oil petroleum reservoirs when the gas phase flows as bubbles or patches (disconnected). Modelling assumptions The above form for Darcy's law is sometimes also called Darcy's extended law, formulated for horizontal, one-dimensional, immiscible multiphase flow in homogeneous and isotropic porous media. The interactions between the fluids are neglected, so this model assumes that the solid porous media and the other fluids form a new p
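The two defining relations above can be sketched directly (symbols and units are illustrative: permeabilities in m^2, viscosity in Pa·s, pressure gradient in Pa/m):

```python
# Sketch: relative permeability and the 1-D extended Darcy law for one phase.

def relative_permeability(k_eff, k_abs):
    """k_r = effective permeability of the phase / absolute permeability."""
    return k_eff / k_abs

def darcy_flux(k_r, k_abs, viscosity, pressure_gradient):
    """Extended Darcy law for one phase: q = -(k_r * k_abs / viscosity) * dP/dx."""
    return -(k_r * k_abs / viscosity) * pressure_gradient
```

Under the Darcean assumptions discussed above, `relative_permeability` returns a value in [0, 1]; apparent values above 1 signal viscous coupling that the model neglects.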
https://en.wikipedia.org/wiki/Software%20quality%20management
Software quality management (SQM) is a management process that aims to develop and manage the quality of software so as to best ensure that the product meets the quality standards expected by the customer while also meeting any necessary regulatory and developer requirements. Software quality managers require software to be tested before it is released to the market, and they do this using a cyclical process-based quality assessment in order to reveal and fix bugs before release. Their job is not only to ensure their software is in good shape for the consumer but also to encourage a culture of quality throughout the enterprise. Quality management activities Software quality management activities are generally split up into three core components: quality assurance, quality planning, and quality control. Some, like software engineer and author Ian Sommerville, do not use the term "quality control" (as quality control is often viewed as more a manufacturing term than a software development term), instead linking its associated concepts with the concept of quality assurance. However, the three core components otherwise remain the same. Quality assurance Software quality assurance sets up an organized and logical set of organizational processes and decides on software development standards, based on industry best practices, that should be paired with those organizational processes. By following them, software developers stand a better chance of producing higher-quality software. However, linking quality attributes such as "maintainability" and "reliability" to processes is more difficult in software development due to its creative design elements versus the mechanical processes of manufacturing. Additionally, "process standardization can sometimes stifle creativity, which leads to poorer rather than better quality software." This stage can include: encouraging documentation process standards, such as the creation of well-defined engineering documents using
https://en.wikipedia.org/wiki/Multibody%20system
Multibody system dynamics is the study of the dynamic behavior of interconnected rigid or flexible bodies, each of which may undergo large translational and rotational displacements. Introduction The systematic treatment of the dynamic behavior of interconnected bodies has led to a large number of important multibody formalisms in the field of mechanics. The simplest bodies or elements of a multibody system were treated by Newton (free particle) and Euler (rigid body). Euler introduced reaction forces between bodies. Later, a series of formalisms were derived, notably Lagrange's formalism based on minimal coordinates and a second formulation that introduces constraints. Basically, the motion of bodies is described by their kinematic behavior. The dynamic behavior results from the equilibrium of applied forces and the rate of change of momentum. Nowadays, the term multibody system is related to a large number of engineering fields of research, especially in robotics and vehicle dynamics. As an important feature, multibody system formalisms usually offer an algorithmic, computer-aided way to model, analyze, simulate and optimize the arbitrary motion of possibly thousands of interconnected bodies. Applications While single bodies or parts of a mechanical system are studied in detail with finite element methods, the behavior of the whole multibody system is usually studied with multibody system methods within the following areas: Aerospace engineering (helicopter, landing gears, behavior of machines under different gravity conditions) Biomechanics Combustion engine, gears and transmissions, chain drive, belt drive Dynamic simulation Hoist, conveyor, paper mill Military applications Particle simulation (granular media, sand, molecules) Physics engine Robotics Vehicle simulation (vehicle dynamics, rapid prototyping of vehicles, improvement of stability, comfort optimization, improvement of efficiency, ...) Example The following example shows a typical
https://en.wikipedia.org/wiki/Geodetic%20effect
The geodetic effect (also known as geodetic precession, de Sitter precession or de Sitter effect) represents the effect of the curvature of spacetime, predicted by general relativity, on a vector carried along with an orbiting body. For example, the vector could be the angular momentum of a gyroscope orbiting the Earth, as carried out by the Gravity Probe B experiment. The geodetic effect was first predicted by Willem de Sitter in 1916, who provided relativistic corrections to the Earth–Moon system's motion. De Sitter's work was extended in 1918 by Jan Schouten and in 1920 by Adriaan Fokker. It can also be applied to a particular secular precession of astronomical orbits, equivalent to the rotation of the Laplace–Runge–Lenz vector. The term geodetic effect has two slightly different meanings as the moving body may be spinning or non-spinning. Non-spinning bodies move in geodesics, whereas spinning bodies move in slightly different orbits. The difference between de Sitter precession and Lense–Thirring precession (frame dragging) is that the de Sitter effect is due simply to the presence of a central mass, whereas Lense–Thirring precession is due to the rotation of the central mass. The total precession is calculated by combining the de Sitter precession with the Lense–Thirring precession. Experimental confirmation The geodetic effect was verified to a precision of better than 0.5% by Gravity Probe B, an experiment which measures the tilting of the spin axis of gyroscopes in orbit about the Earth. The first results were announced on April 14, 2007 at the meeting of the American Physical Society. Formulae To derive the precession, assume the system is in a rotating Schwarzschild metric. The nonrotating metric is

ds^2 = \left(1 - \frac{2m}{r}\right) dt^2 - \frac{dr^2}{1 - 2m/r} - r^2 (d\theta^2 + \sin^2\theta \, d\varphi^2),

where c = G = 1. We introduce a rotating coordinate system, with an angular velocity \omega, such that a satellite in a circular orbit in the θ = π/2 plane remains at rest. This gives us

ds^2 = \left(1 - \frac{2m}{r} - r^2 \omega^2 \sin^2\theta\right) dt^2 - 2 r^2 \omega \sin^2\theta \, dt \, d\varphi - \frac{dr^2}{1 - 2m/r} - r^2 (d\theta^2 + \sin^2\theta \, d\varphi^2).

In this coordinate system, an observer at radial position r s
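As a hedged numerical sketch: the closed-form rate below is the standard weak-field result for a circular orbit, Omega = (3/2)(GM)^{3/2} / (c^2 r^{5/2}), which is not spelled out in the truncated text above; the constants and orbit radius are my own inputs. Applied to a Gravity Probe B-like orbit it lands near the measured value:

```python
# Sketch: de Sitter (geodetic) precession rate for a circular orbit,
# Omega = (3/2) * (GM)^(3/2) / (c^2 * r^(5/2)), in arcseconds per year.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # kg
C = 2.998e8          # speed of light, m/s
SECONDS_PER_YEAR = 3.156e7
ARCSEC_PER_RADIAN = 206265.0

def geodetic_precession_arcsec_per_year(r):
    """Geodetic precession rate for a circular orbit of radius r (meters)."""
    gm = G * M_EARTH
    omega = 1.5 * gm ** 1.5 / (C ** 2 * r ** 2.5)  # rad/s
    return omega * SECONDS_PER_YEAR * ARCSEC_PER_RADIAN

# For r ~ 7020 km (a ~642 km altitude orbit like Gravity Probe B's),
# this gives roughly 6.6 arcsec/year.
```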
https://en.wikipedia.org/wiki/World%20Association%20of%20Copepodologists
The World Association of Copepodologists (WAC) is a non-profit organization created to promote research on copepods by facilitating communication among interested specialists. WAC has about 130 members worldwide. Although the World Association of Copepodologists is composed primarily of university professors and professional researchers, "any person interested in any aspect of the study of Copepoda is eligible for membership." The business of the WAC is conducted primarily at a business meeting held every three years at the International Conference on Copepoda (ICOC). Since 1987, conferences have been held at venues in Africa, Asia, Europe, North America and South America. The 13th conference was held in July 2017 at the Cabrillo Marine Aquarium in San Pedro, California, United States. The 14th conference is to be held in July 2020 in Kruger National Park in South Africa. Recent conferences have attracted about 250-350 participants. The WAC assists the local organizers of the ICOCs in sponsoring and organizing workshops associated with these conferences. Most workshops train students in identification and other practical aspects of copepod studies. Between conferences, communication between members takes place through the society newsletter Monoculus and via such on-line resources as the society website hosted by the Senckenberg Research Institute in Wilhelmshaven, Germany, and Internet forums hosted by academic bodies such as the Asociación Latinoamericana de Carcinología and the Virginia Institute of Marine Science. Monoculus, the name of the newsletter, is Latin for "one-eyed" and refers to a shared feature of many members of subclass Copepoda. History In 1979, copepodologist Dov Por wrote a letter to the meiofauna newsletter Psammonalia suggesting the creation of both a symposium and a newsletter dedicated to the discussion of the Copepoda. There was a small response to this initiative, which increased when the suggestion was repeated in Crustaceana, a
https://en.wikipedia.org/wiki/Sympathetic%20resonance
Sympathetic resonance or sympathetic vibration is a harmonic phenomenon wherein a passive string or vibratory body responds to external vibrations to which it has a harmonic likeness. The classic example is demonstrated with two similarly-tuned tuning forks. When one fork is struck and held near the other, vibrations are induced in the unstruck fork, even though there is no physical contact between them. In similar fashion, strings will respond to the vibrations of a tuning fork when sufficient harmonic relations exist between them. The effect is most noticeable when the two bodies are tuned in unison or an octave apart (corresponding to the first and second harmonics, integer multiples of the inducing frequency), as there is the greatest similarity in vibrational frequency. Sympathetic resonance is an example of injection locking occurring between coupled oscillators, in this case coupled through vibrating air. In musical instruments, sympathetic resonance can produce both desirable and undesirable effects. According to The New Grove Dictionary of Music and Musicians: Sympathetic resonance in music instruments Sympathetic resonance has been applied to musical instruments from many cultures and time periods, and to string instruments in particular. In instruments with undamped strings (e.g. harps, guitars and kotos), strings will resonate at their fundamental or overtone frequencies when other nearby strings are sounded. For example, an A string at 440 Hz will cause an E string at 330 Hz to resonate, because they share an overtone of 1320 Hz (the third harmonic of A and fourth harmonic of E). Sympathetic resonance is a factor in the timbre of a string instrument. Tailed bridge guitars like the Fender Jaguar differ in timbre from guitars with short bridges, due to the resonance that occurs in their extended floating bridge. Certain instruments are built with sympathetic strings, auxiliary strings which are not directly played but sympathetically produce sound in
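The shared-overtone arithmetic in the A-string/E-string example above can be checked in a few lines (a toy sketch with ideal integer harmonics; real strings also exhibit some inharmonicity):

```python
# Sketch: overtones (integer multiples of the fundamental) shared by
# two strings, which is where sympathetic resonance is strongest.
def shared_overtones(f1, f2, max_harmonic=8):
    h1 = {f1 * n for n in range(1, max_harmonic + 1)}
    h2 = {f2 * n for n in range(1, max_harmonic + 1)}
    return sorted(h1 & h2)

# A at 440 Hz and E at 330 Hz share 1320 Hz (the 3rd harmonic of A and
# the 4th harmonic of E) within the first eight harmonics.
print(shared_overtones(440, 330))  # -> [1320, 2640]
```

Strings tuned in unison or an octave apart share every harmonic of the higher fundamental, which is why those intervals resonate most strongly.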
https://en.wikipedia.org/wiki/Slidex
Slidex was a hand-held, paper-based encryption system used at a low, front line (platoon, troop and section) level in the British Army during the Second World War and later the Cold War period. It was replaced by the BATCO tactical code, which, in turn, has been largely made obsolete by the Bowman secure voice radios. Design Slidex used a series of vocabulary cards arranged in a grid of 12 rows and 17 columns. Each of the 204 resulting cells has a word or phrase, as well as a letter or number. The latter allowed the system to spell out words and transmit numbers. The cards were stored in a folding case that had a pair of cursors to facilitate locating cells. Messages were encrypted and decrypted using code strips that could be placed in holders along the top and left side of the vocabulary card. Blank vocabulary cards were provided to allow units to create a word set for a specific mission. See also Encryption algorithm Military intelligence Further reading "The Slidex RT Code", Cryptologia 8(2), April 1984
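The general principle of a keyed coordinate grid can be illustrated with a toy sketch. This is strictly illustrative: the coordinate labels, keying, and lookup below do not reproduce the real Slidex cards or operating procedure, only the idea of addressing a 12 x 17 vocabulary grid through code strips along the top and left edges:

```python
# Toy sketch of a keyed coordinate grid, loosely inspired by the Slidex
# layout described above (12 rows x 17 columns). Not the real procedure.
import random

ROWS, COLS = 12, 17

def make_strips(seed):
    """Randomly keyed coordinate strips for the left and top edges."""
    rng = random.Random(seed)
    left = rng.sample("ABCDEFGHIJKL", ROWS)
    top = rng.sample("ABCDEFGHIJKLMNOPQ", COLS)
    return left, top

def encode(cell_positions, left, top):
    """Encode grid cells (row, col) as two-letter coordinate pairs."""
    return [left[r] + top[c] for r, c in cell_positions]

def decode(pairs, left, top):
    """Invert the encoding back to (row, col) cell positions."""
    return [(left.index(p[0]), top.index(p[1])) for p in pairs]

left, top = make_strips(seed=42)
coded = encode([(0, 0), (11, 16)], left, top)
# decode(coded, left, top) recovers [(0, 0), (11, 16)]
```

Re-keying the strips per mission (here, changing the seed) changes every coordinate pair while leaving the vocabulary card itself unchanged, mirroring how blank cards and fresh strips let units set up mission-specific keys.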
https://en.wikipedia.org/wiki/Allied%20Telesis
Allied Telesis is a network infrastructure and telecommunications company, formerly Allied Telesyn. The company is headquartered in Japan and has other branches in San Jose, California. The company was founded in 1987 as a global provider of secure Ethernet and IP access solutions and a deployer of IP triple play networks over copper and fiber access infrastructure. Company history March 1987, System Plus Co. is established with 1 million yen capital stock. September 1987, the company is renamed Allied Telesis K.K. April 1990, capital stock is increased to 99 million yen. February 1991, Allied Telesyn Intl. (Asia) Pte., Ltd. is established in Singapore. June 1995, Allied Telesyn Intl. Pty Ltd. is established in Australia. November 1995, Malaysia Sales Office opens. June 1997, capital stock is increased to 734 million yen. July 1997, Taiwan Representative Office is launched. May 1999, the company acquires a networking division from Teltrend Ltd., US. May 1999, Centrecom Systems Ltd. is established in the UK. June 2000, Allied Telesyn Europe Service S.r.l. is established in Italy. June 2000, Allied Telesyn Korea Co., Ltd. is established in the Republic of Korea. July 2000, Allied Telesis K.K. is listed on the Second Section of the Tokyo Stock Exchange. October 2000, Allied Telesyn Labs New Zealand Ltd., an R&D center, is established in Christchurch, New Zealand. March 2001, Allied Telesyn Philippines Inc. is established in the Philippines as a software development base. March 2001, Allied Telesyn International m.b.H is established in Austria. September 2001, Allied Telesis (Suzhou) Co., Ltd. is established in China. October 2001, Allied Telesyn Networks Inc., an R&D center, is established in North Carolina, US. January 2002, Allied Telesis International SA is established in Switzerland. February 2002, Allied Telesyn International S.L.U. is established in Spain. July 2004, Allied Telesis K.K. is renamed Allied Telesis Holdings K.K. March 2005, Allied Telesis K.K. acquires ROOT Inc, a wireless ne
https://en.wikipedia.org/wiki/Nootkatone
Nootkatone is a natural organic compound, a sesquiterpenoid, and a ketone that is the most important and expensive aromatic of grapefruit, and which also occurs in other organisms. Previously, nootkatone was thought to be one of the main chemical components of the smell and flavour of grapefruits. In high purity, it usually is found as colorless crystals. Crude extractives are liquid, viscous, and yellow. Nootkatone typically is extracted from grapefruit through the chemical or biochemical oxidation of valencene. It is found in Alaska yellow cedar trees, as well as in vetiver grass. Mechanism of action As is true of other plant terpenoids, nootkatone activates α-adrenergic type 1 octopamine receptor (PaOA1) in susceptible arthropods, causing fatal spasms. Natural origin Nootkatone was isolated from the wood of the Alaskan yellow cedar, Cupressus nootkatensis. The species name, nootkatensis, is derived from the language of the Nuu-Chah-Nulth people of Canada (formerly referred to as the Nootka people). Uses Nootkatone in spray form has been shown as an effective repellent or insecticide against deer ticks and lone star ticks. It is also an effective repellent or insecticide against mosquitos, and may repel bed bugs, head lice, Formosan termites, and other insects. It is an environmentally friendly insecticide because it is a volatile essential oil that does not persist in the environment. It was approved by the U.S. EPA for this use on August 10, 2020. Its ability to repel ticks, mosquitoes, and other insects may last for hours, in contrast to other plant-based oil repellants like citronella, peppermint oil, and lemongrass oil. It is nontoxic to humans, is an approved food additive, and is commonly used in foods, cosmetics, and pharmaceuticals. The CDC has licensed patents to two companies to produce an insecticide and an insect repellant. Allylix, of San Diego, California (Now Evolva), is one of these licensees, which has developed an enzyme fermentation proce
https://en.wikipedia.org/wiki/Wave%20equation%20analysis
Wave equation analysis is a numerical method of analysis for the behavior of driven foundation piles. It predicts the pile capacity versus blow count relationship (bearing graph) and pile driving stress. The model mathematically represents the pile driving hammer and all its accessories (ram, cap, and cap block), as well as the pile, as a series of lumped masses and springs in a one-dimensional analysis. The soil response for each pile segment is modeled as viscoelastic-plastic. The method was first developed in the 1950s by E.A. Smith of the Raymond Pile Driving Company. Wave equation analysis of piles has seen many improvements since the 1950s, such as the inclusion of a thermodynamic diesel hammer model and of residual stresses. Commercial software packages (such as AllWave-PDP and GRLWEAP) are now available to perform the analysis. One of the principal uses of this method is the performance of a driveability analysis to select the parameters for safe pile installation, including recommendations on cushion stiffness, hammer stroke and other driving system parameters that optimize blow counts and pile stresses during pile driving, for example when a soft or hard layer would otherwise cause excessive stresses or unacceptable blow counts.
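The lumped mass-spring idealization described above can be sketched as a toy one-dimensional simulation. All numbers below (segment masses, spring stiffnesses, the hammer impact velocity, the step count) are invented for illustration, and the viscoelastic-plastic soil model and hammer assembly of a real wave equation program are omitted:

```python
# Toy 1-D lumped-mass model of a driven pile, in the spirit of Smith's
# wave equation analysis. Parameters are illustrative placeholders only;
# real programs (e.g. GRLWEAP) also model the hammer, cushions and soil.

N = 10          # number of pile segments (lumped masses)
m = 1.0         # mass of each segment (arbitrary units)
k = 100.0       # stiffness of each connecting spring
dt = 0.01       # time step, small enough for stable explicit integration

u = [0.0] * N   # segment displacements
v = [0.0] * N   # segment velocities
v[0] = 1.0      # hammer blow: initial velocity imparted to the top segment

for _ in range(500):
    a = [0.0] * N
    for i in range(N):
        # net spring force from the neighbours above and below
        if i > 0:
            a[i] += k * (u[i - 1] - u[i]) / m
        if i < N - 1:
            a[i] += k * (u[i + 1] - u[i]) / m
    for i in range(N):
        v[i] += a[i] * dt
        u[i] += v[i] * dt

# the stress wave initiated by the blow has propagated to the pile toe
print(abs(u[-1]) > 1e-6)
```

Stepping the masses forward with an explicit scheme like this is what lets the method track how the hammer's stress wave travels down the pile, which is the quantity the bearing graph and driving-stress predictions are built from.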
https://en.wikipedia.org/wiki/Electronic%20switch
In electronics, an electronic switch is a switch controlled by an active electronic component or device. Switches that operate without moving parts are called solid-state switches, which distinguishes them from mechanical switches. Electronic switches are considered binary devices because they dramatically change the conductivity of a path in an electrical circuit between two extremes when switching between their two states of on and off. History By metonymy, a variety of devices that conceptually connect or disconnect signals and communication paths between electrical devices are called "switches", analogous to the way mechanical switches connect and disconnect paths for electrons to flow between two conductors. The traditional relay is an electromechanical switch that uses an electromagnet controlled by a current to operate a mechanical switching mechanism. Other operating principles are also used (for instance, solid-state relays, invented in 1971, control power circuits with no moving parts, instead using a semiconductor device to perform switching—often a silicon-controlled rectifier or triac). Early telephone systems used an electromagnetically operated Strowger switch to connect telephone callers; later telephone exchanges contained one or more electromechanical crossbar switches. Thus the term 'switched' is applied to telecommunications networks, signifying a network that is circuit switched, providing dedicated circuits for communication between end nodes, such as the public switched telephone network. The term switch has since spread to a variety of digital active devices, such as transistors and logic gates, whose function is to change their output state between logic states or to connect different signal lines. The common feature of all these usages is that they refer to devices that control a binary state: on or off, closed or open, connected or not connected, conducting or not conducting, low impedance or high impedance. Types The diode can be t
https://en.wikipedia.org/wiki/Zultys
Zultys, Inc. is a privately owned software company headquartered in Sunnyvale, California. It develops unified communications products and integrated desktop IP phones. Corporate history Headquartered in Sunnyvale, Zultys Technologies was founded by Iain Milnes in 2001 as a privately held company. The company launched its first product, the MX1200 IP phone system, in January 2003, followed quickly by its flagship product, the MX250. In the same year branch offices were opened in London and Sydney, and Network World magazine named Zultys to its 2003 list of Top 10 companies to watch. By 2005 the company had 235 employees and branch offices in 10 different countries. In September 2006, due to a sudden cancellation of planned capital investment, Zultys filed for Chapter 11 bankruptcy in Northern District of California court. In October 2006, Zultys Technologies' assets and intellectual property were put up for bankruptcy auction. The primary bidders in the auction were Nebraska-based InPath Devices, the Telrad Connegy-backed Pivot VoIP (a group composed of former Zultys engineers), and Iain Milnes. Pivot VoIP eventually won the bidding and acquired Zultys for US$2.65 million plus debt obligations, and operations and product portfolios were merged with Pivot. The new company was formed as Zultys, Inc., and Telrad Connegy's Chairman and owner, Avi Weinrib, was named President and CEO. In November 2009, Neil Lichtman was named CEO. Products Zultys' primary product is its line of cloud and premises-based Zultys MX IP PBXs, which are based on SIP open standards. Zultys IP phone systems offer features such as softphone, presence, secure chat, instant messaging, remote work, call centres, interactive voice response, automatic call distributor, automated and on-demand call-recording software, fax, integration of mobile devices into the office phone system and more. Client and administrative applications support Mac, Windows (32- or 64-bit), and Linux. In 2014, Zultys launch
https://en.wikipedia.org/wiki/Windows%20IoT
Windows IoT, short for Windows Internet of Things and formerly known as Windows Embedded, is a family of operating systems from Microsoft designed for use in embedded systems. Microsoft has three different subfamilies of operating systems for embedded devices targeting a wide market, ranging from small-footprint, real-time devices to point of sale (POS) devices like kiosks. Windows Embedded operating systems are available to original equipment manufacturers (OEMs), who make it available to end users preloaded with their hardware, in addition to volume license customers in some cases. In April 2018, Microsoft released Azure Sphere, another operating system designed for IoT applications running on the Linux kernel. The IoT family Microsoft rebranded "Windows Embedded" to "Windows IoT" starting with the release of embedded editions of Windows 10. Enterprise Windows IoT Enterprise branded editions, version 1809 and older, are binary identical to their respective Windows 10 Enterprise editions – Long-Term Servicing Branch (LTSB), Current Branch for Business (CBB), Semi-Annual Channel (SAC), and Long-Term Servicing Channel (LTSC) – but are licensed exclusively for use in embedded devices. This brand replaces the Embedded Industry, Embedded Standard, and "For Embedded Systems" (FES) brands/subfamilies. Plain unlabeled, Retail/Thin Client, Tablet, and Small Tablet SKUs are available, again differing only in licensing. It now contains a minor change that allows the use of smaller storage devices, with the possibility of more changes being made in the future. In addition, starting with the LTSC edition of version 21H2, Windows 10 IoT Enterprise LTSC will gain an extra five years of support compared to Windows 10 Enterprise LTSC. Windows 10 IoT Enterprise 2015 (value based pricing): SKU 6EU-00124 - Windows 10 IoT Enterprise 2015 LTSB - High End Edition (Intel Core i7 | Intel Xeon | AMD FX) SKU 6EU-00125 - Windows 10 IoT Enterprise 2015 LTSB - Value Edition (Intel Core
https://en.wikipedia.org/wiki/Content%20Vectoring%20Protocol
In computer networks, Content Vectoring Protocol (CVP) is a protocol for relaying data that crosses a firewall to an external scanning device. An example is virus-scanning all HTTP traffic before it is delivered to the user. The protocol is presented in Check Point training as one of the benefits of its products; it is not known whether it is a re-branded version of another protocol or a Check Point-specific design rather than a generic Internet protocol. By default it uses TCP port 18181. It is used by some servers implementing a firewall to inspect HTTP content. Whether all of the content is inspected is determined by the administrator managing the firewall, who can direct either the whole of the internet traffic, or only content coming from specific sources, to the content vectoring protocol for inspection.
https://en.wikipedia.org/wiki/Coercive%20function
In mathematics, a coercive function is a function that "grows rapidly" at the extremes of the space on which it is defined. Depending on the context, different exact definitions of this idea are in use. Coercive vector fields A vector field \(f : \mathbb{R}^n \to \mathbb{R}^n\) is called coercive if \(\frac{\langle f(x), x\rangle}{\|x\|} \to +\infty\) as \(\|x\| \to +\infty\), where "\(\langle \cdot, \cdot \rangle\)" denotes the usual dot product and \(\|x\|\) denotes the usual Euclidean norm of the vector x. A coercive vector field is in particular norm-coercive since \(\|f(x)\| \ge \langle f(x), x\rangle / \|x\|\) for \(x \ne 0\), by the Cauchy–Schwarz inequality. However a norm-coercive mapping is not necessarily a coercive vector field. For instance the rotation by 90°, \(f(x) = (-x_2, x_1)\), is a norm-coercive mapping which fails to be a coercive vector field, since \(\langle f(x), x\rangle = 0\) for every \(x\). Coercive operators and forms A self-adjoint operator \(A : H \to H\), where \(H\) is a real Hilbert space, is called coercive if there exists a constant \(c > 0\) such that \(\langle Ax, x\rangle \ge c\|x\|^2\) for all \(x\) in \(H.\) A bilinear form \(a : H \times H \to \mathbb{R}\) is called coercive if there exists a constant \(c > 0\) such that \(a(x, x) \ge c\|x\|^2\) for all \(x\) in \(H.\) It follows from the Riesz representation theorem that any symmetric (defined as \(a(x, y) = a(y, x)\) for all \(x, y\) in \(H\)), continuous (\(|a(x, y)| \le K\|x\|\,\|y\|\) for all \(x, y\) in \(H\) and some constant \(K > 0\)) and coercive bilinear form \(a\) has the representation \(a(x, y) = \langle Ax, y\rangle\) for some self-adjoint operator \(A : H \to H,\) which then turns out to be a coercive operator. Also, given a coercive self-adjoint operator \(A,\) the bilinear form \(a\) defined as above is coercive. If \(A\) is a coercive operator then it is a coercive mapping (in the sense of coercivity of a vector field, where one has to replace the dot product with the more general inner product); indeed, \(\langle Ax, x\rangle/\|x\| \ge c\|x\| \to +\infty\) as \(\|x\| \to +\infty.\) One can also show that the converse holds true if \(A\) is self-adjoint. The definitions of coercivity for vector fields, operators, and bilinear forms are closely related and compatible. Norm-coercive mappings A mapping \(f : X \to X'\) between two normed vector spaces \(X\) and \(X'\) is called norm-coercive if and only if \(\|f(x)\|' \to +\infty\) as \(\|x\| \to +\infty.\) More generally, a function \(f : X \to Y\) between two topological spaces \(X\) and \(Y\) is called coercive if for every compact subset of there exi
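The 90° rotation counterexample above can be checked numerically: the rotation preserves norms (so it is norm-coercive), yet its dot product with the input is identically zero (so it cannot be a coercive vector field):

```python
import math
import random

def rot90(x):
    # rotation by 90 degrees in the plane: (x1, x2) -> (-x2, x1)
    return (-x[1], x[0])

random.seed(0)
for _ in range(100):
    x = (random.uniform(-10, 10), random.uniform(-10, 10))
    fx = rot90(x)
    # <f(x), x> = 0 for every x, so <f(x), x>/|x| cannot tend to infinity
    assert abs(fx[0] * x[0] + fx[1] * x[1]) < 1e-9
    # yet |f(x)| = |x|, so the rotation is norm-coercive
    assert math.isclose(math.hypot(*fx), math.hypot(*x))
print("rotation is norm-coercive but not a coercive vector field")
```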
https://en.wikipedia.org/wiki/World%20Ocean%20Atlas
The World Ocean Atlas (WOA) is a data product of the Ocean Climate Laboratory of the National Oceanographic Data Center (U.S.). The WOA consists of a climatology of fields of in situ ocean properties for the World Ocean. It was first produced in 1994 (based on the earlier Climatological Atlas of the World Ocean, 1982), with later editions at roughly four year intervals in 1998, 2001, 2005, 2009, 2013, 2018, and 2023. Dataset The fields that make up the WOA dataset consist of objectively-analysed global grids at 1° spatial resolution. The fields are three-dimensional, and data are typically interpolated onto 33 standardised vertical intervals from the surface (0 m) to the abyssal seafloor (5500 m). In terms of temporal resolution, averaged fields are produced for annual, seasonal and monthly time-scales. The WOA fields include ocean temperature, salinity, dissolved oxygen, apparent oxygen utilisation (AOU), percent oxygen saturation, phosphate, silicic acid, and nitrate. Early editions of the WOA additionally included fields such as mixed layer depth and sea surface height. In addition to the averaged fields of ocean properties, the WOA also contains fields of statistical information concerning the constituent data that the averages were produced from. These include fields such as the number of data points the average is derived from, their standard deviation and standard error. A lower horizontal resolution (5°) version of the WOA is also available. The WOA dataset is primarily available as compressed ASCII, but since WOA 2005 a netCDF version has also been produced. Gallery See also CORA dataset European Atlas of the Seas Geochemical Ocean Sections Study (GEOSECS) Global Ocean Data Analysis Project (GLODAP) World Ocean Circulation Experiment (WOCE)
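The grid layout described above (1° horizontal resolution, 33 standard levels from 0 m to 5500 m) can be sketched as follows. The depth list used here is the classic Levitus standard-level set and is an assumption for illustration, not copied from any particular WOA edition:

```python
import numpy as np

# Global 1-degree grid: 180 latitude cells x 360 longitude cells,
# with 33 standard depth levels (classic Levitus set, assumed here).
lats = np.arange(-89.5, 90.0, 1.0)    # 1-degree cell centres
lons = np.arange(-179.5, 180.0, 1.0)
depths = np.array([0, 10, 20, 30, 50, 75, 100, 125, 150, 200,
                   250, 300, 400, 500, 600, 700, 800, 900, 1000, 1100,
                   1200, 1300, 1400, 1500, 1750, 2000, 2500, 3000, 3500,
                   4000, 4500, 5000, 5500])

# a three-dimensional field such as temperature, initialised to missing
field = np.full((depths.size, lats.size, lons.size), np.nan)
print(field.shape)  # (33, 180, 360)
```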
https://en.wikipedia.org/wiki/GreenNet
GreenNet is a not-for-profit Internet service provider based in London, England. It was established in 1985 "as an effective and cheap way for environmental activists to communicate". In 1987 the Joseph Rowntree Charitable Trust gave GreenNet a grant to enable it to bring a large number of peace groups online, and "After a few years they became one of the first internet service providers in Britain". GreenNet formed an international link with IGC and was a founder member of the Association for Progressive Communications, established in 1990. The registered charity GreenNet Charitable Trust was established in 1994 and owns GreenNet. GnFido GreenNet developed a Fido gateway, GnFido, which allowed access to basic internet facilities such as email using a store-and-forward system. It provided the only available cheap and accessible internet access for thousands of individuals and organisations in Africa, South Asia and Eastern Europe. 2013 DDoS Attack On 1 August 2013, GreenNet and the Association for Progressive Communications (APC) suffered an extensive DDoS attack. The attack was later described as a "DNS reflection attack", also known as a spoofed attack. Several sources initially suspected the attack was linked to the Zimbabwean elections, which had been held on the previous day. GreenNet's services were not fully operational again until 10.30 BST on Thursday 7 August. On 9 August there was a second attack, which, while affecting some systems, allowed GreenNet to discover the site which was being targeted. In October 2013, the target was revealed to be the site of investigative reporter Andrew Jennings. 2014 Legal Action on GCHQ Hacking In July 2014 Privacy International, GreenNet and five other Internet service providers took GCHQ, the UK signals intelligence agency, to the Investigatory Powers Tribunal, alleging breach of privacy and breaking into their networks. The case ultimately failed, but GCHQ were forced to admit clandestine hacking activities. GreenNet were sho
https://en.wikipedia.org/wiki/Null%20object%20pattern
In object-oriented computer programming, a null object is an object with no referenced value or with defined neutral (null) behavior. The null object design pattern, which describes the uses of such objects and their behavior (or lack thereof), was first published as "Void Value" and later in the Pattern Languages of Program Design book series as "Null Object". Motivation In most object-oriented languages, such as Java or C#, references may be null. These references need to be checked to ensure they are not null before invoking any methods, because methods typically cannot be invoked on null references. The Objective-C language takes another approach to this problem and does nothing when sending a message to nil; if a return value is expected, nil (for objects), 0 (for numeric values), NO (for BOOL values), or a zero-initialised struct (for struct types) is returned. Description Instead of using a null reference to convey the absence of an object (for instance, a non-existent customer), one uses an object which implements the expected interface but whose method bodies are empty. A key purpose of using a null object is to avoid conditionals of different kinds, resulting in code that is more focused and quicker to read and follow, i.e. improved readability. One advantage of this approach over a working default implementation is that a null object is very predictable and has no side effects: it does nothing. For example, a function may retrieve a list of files in a folder and perform some action on each. In the case of an empty folder, one response may be to throw an exception or return a null reference rather than a list. Thus, the code which expects a list must verify that it in fact has one before continuing, which can complicate the design. By returning a null object (i.e., an empty list) instead, there is no need to verify that the return value is in fact a list. The calling function may simply iterate the list
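The pattern described above can be sketched in a few lines. The class names here are illustrative, not taken from any particular library: a do-nothing implementation of the expected interface is substituted for a null reference, so call sites need no None checks:

```python
class Logger:
    """A working implementation of the expected interface."""
    def log(self, message: str) -> None:
        print(message)

class NullLogger(Logger):
    """Null object: same interface, defined neutral behavior (does nothing)."""
    def log(self, message: str) -> None:
        pass

def process(items, logger=None):
    # Substitute a NullLogger once instead of guarding every call
    # with "if logger is not None".
    logger = logger or NullLogger()
    total = 0
    for item in items:
        logger.log(f"processing {item}")
        total += item
    return total

print(process([1, 2, 3]))            # silent: the null object absorbs the calls
print(process([1, 2, 3], Logger()))  # logs each item; same result either way
```

Because the null object is predictable and side-effect free, the conditional logic disappears from every caller rather than being repeated at each call site.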
https://en.wikipedia.org/wiki/Dynamic%20mutation
In genetics, a dynamic mutation is an unstable heritable element where the probability of expression of a mutant phenotype is a function of the number of copies of the mutation. That is, the replication product (progeny) of a dynamic mutation has a different likelihood of mutation than its predecessor. These mutations, typically short sequences repeated many times, give rise to numerous known diseases, including the trinucleotide repeat disorders. Robert I. Richards and Grant R. Sutherland called these phenomena, in the framework of dynamical genetics, dynamic mutations. Triplet expansion is caused by slippage during DNA replication. Due to the repetitive nature of the DNA sequence in these regions , 'loop out' structures may form during DNA replication while maintaining complementary base pairing between the parent strand and daughter strand being synthesized. If the loop out structure is formed from sequence on the daughter strand this will result in an increase in the number of repeats. However, if the loop out structure is formed on the parent strand a decrease in the number of repeats occurs. It appears that expansion of these repeats is more common than reduction. Generally the larger the expansion the more likely they are to cause disease or increase the severity of disease. This property results in the characteristic of anticipation seen in trinucleotide repeat disorders. Anticipation describes the tendency of age of onset to decrease and severity of symptoms to increase through successive generations of an affected family due to the expansion of these repeats. Common features Most of these diseases have neurological symptoms. Anticipation/The Sherman paradox refers to progressively earlier or more severe expression of the disease in more recent generations. Repeats are usually polymorphic in copy number, with mitotic and meiotic instability. Copy number related to the severity and/or age of onset Imprinting effects Reverse mutation - The mutation can rev
https://en.wikipedia.org/wiki/AudioMulch
AudioMulch is modular audio software for making music and processing sound. The software can synthesize sound and process live and pre-recorded sound in real-time. AudioMulch has a patcher-style graphical user interface, in which modules called contraptions can be connected together to route audio and process sounds. Included are modules used in electronic dance music such as a bassline-style synthesizer and a drum machine, effects like ring modulation, flanging, reverb and delays, and other modules such as a delay-line granulator and stereo spatializer. As well as these internal contraptions, AudioMulch supports VST and VSTi plugins. History Origins of AudioMulch AudioMulch grew out of musician Ross Bencina's performance practice in the mid-1990s. At this time, live, computer-based sound processing systems were often expensive and restricted to use within research institutions. By 1995 however, the processing capabilities of the personal computer were sufficient that Bencina was able to create OverSYTE, a real-time performance granulator. OverSYTE was used by Bencina to process sound in his real-time performances with vocalists and instrumental musicians. AudioMulch grew out of the limitations of OverSYTE, which could process only one sound at a time. In contrast, AudioMulch can process multiple sounds sources at once. Development of AudioMulch AudioMulch has been in development since 1997. The first release made available for download on the Internet was beta version 0.7b1, in March 1998. There were 36 Beta releases prior to Version 1.0 of the software, which was released in February 2006. AudioMulch 1.0 was developed for Microsoft Windows in the C++ programming language, using the Borland C++ Builder development environment. Version 1.0 Version 1.0 was released on 21 February 2006. Version 2.0 AudioMulch 2.0 was released 5 June 2009. According to the website, this version is available for both Windows and Macintosh computers. Version 2.1 Version 2.1
https://en.wikipedia.org/wiki/Word%20mark%20%28computer%20hardware%29
In computer hardware, a word mark or flag is a bit in each memory location on some variable word length computers (e.g., IBM 1401, 1410, 1620) used to mark the end of a word. Sometimes the actual bit used as a word mark on a given machine is not called word mark, but has a different name (e.g., flag on the IBM 1620, because on this machine it is multipurpose). The term word mark should not be confused with group mark or with record mark, which are distinct characters.
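The variable-word-length scheme described above can be modeled in miniature: each memory location carries a digit plus a word-mark flag, and (following the convention stated in the text) a set flag marks the end of a word. The specific digits and layout below are invented for illustration:

```python
# Toy model of variable-word-length memory: (digit, word_mark) pairs.
memory = [(7, 0), (4, 0), (2, 1),   # word "742" (flag set on last digit)
          (9, 0), (1, 1),           # word "91"
          (5, 1)]                   # word "5"

def read_words(mem):
    """Scan memory, ending each word at a location whose flag bit is set."""
    words, current = [], []
    for digit, flag in mem:
        current.append(str(digit))
        if flag:                     # word mark: end of the current word
            words.append("".join(current))
            current = []
    return words

print(read_words(memory))  # ['742', '91', '5']
```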
https://en.wikipedia.org/wiki/Coding%20gain
In coding theory, telecommunications engineering and other related engineering problems, coding gain is the measure of the difference between the signal-to-noise ratio (SNR) levels of the uncoded system and the coded system required to reach the same bit error rate (BER) when an error correcting code (ECC) is used. Example If the uncoded BPSK system in an AWGN environment has a bit error rate (BER) of 10−2 at an SNR level of 4 dB, and the corresponding coded (e.g., BCH) system has the same BER at an SNR of 2.5 dB, then we say the coding gain = 4 dB − 2.5 dB = 1.5 dB, due to the code used (in this case BCH). Power-limited regime In the power-limited regime (where the nominal spectral efficiency \(\rho \le 2\) [b/2D or b/s/Hz], i.e. the domain of binary signaling), the effective coding gain \(\gamma_{\mathrm{eff}}(A)\) of a signal set \(A\) at a given target error probability per bit \(P_b(E)\) is defined as the difference in dB between the \(E_b/N_0\) required to achieve the target \(P_b(E)\) with \(A\) and the \(E_b/N_0\) required to achieve the target \(P_b(E)\) with 2-PAM or (2×2)-QAM (i.e. no coding). The nominal coding gain is defined as \(\gamma_c(A) = \frac{d^2_{\min}(A)}{4E_b}.\) This definition is normalized so that \(\gamma_c(A) = 1\) for 2-PAM or (2×2)-QAM. If the average number of nearest neighbors per transmitted bit \(K_b(A)\) is equal to one, the effective coding gain is approximately equal to the nominal coding gain \(\gamma_c(A)\). However, if \(K_b(A) > 1\), the effective coding gain is less than the nominal coding gain by an amount which depends on the steepness of the \(P_b(E)\) vs. \(E_b/N_0\) curve at the target \(P_b(E)\). This curve can be plotted using the union bound estimate (UBE) \(P_b(E) \approx K_b(A)\, Q\!\left(\sqrt{2\gamma_c(A)\, E_b/N_0}\right),\) where Q is the Gaussian probability-of-error function. For the special case of a binary linear block code with parameters \((n, k, d)\), the nominal spectral efficiency is \(\rho = 2k/n\) and the nominal coding gain is kd/n. Example The table below lists the nominal spectral efficiency, nominal coding gain and effective coding gain at the target \(P_b(E)\) for Reed–Muller codes of length \(n\): Bandwidth-limited regime In the bandwidth-limited regime (\(\rho > 2\) b/2D, i.e. the domain of non-binary signaling), the effective coding gain of a signal set at a given
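The two calculations in the text can be reproduced directly. The first is the dB difference from the BPSK example; the second evaluates the nominal coding gain kd/n for a binary linear block code, illustrated here with the (7, 4) Hamming code (minimum distance 3), which is our choice of example rather than one from the text:

```python
import math

# Coding gain from the example above: same BER at 4 dB uncoded
# versus 2.5 dB coded.
coding_gain_db = 4.0 - 2.5
print(coding_gain_db)  # 1.5

# Nominal coding gain k*d/n of a binary linear (n, k, d) block code,
# here the (7, 4) Hamming code with d = 3.
n, k, d = 7, 4, 3
gamma_c = k * d / n                       # 12/7, a ratio of powers
print(round(10 * math.log10(gamma_c), 2))  # about 2.34 dB
```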
https://en.wikipedia.org/wiki/RM4SCC
RM4SCC (Royal Mail 4-State Customer Code) is the name of the barcode character set based on the Royal Mail 4-State Bar Code symbology created by Royal Mail. The RM4SCC is used for the Royal Mail Cleanmail service. It enables UK postcodes as well as Delivery Point Suffixes (DPSs) to be easily read by a machine at high speed. This barcode is known as CBC (Customer Bar Code) within Royal Mail. PostNL uses a slightly modified version called KIX which stands for Klant index (Customer index); it differs from CBC in that it doesn't use the start and end symbols or the checksum, separates the house number and suffixes with an X, and is placed below the address. Singapore Post uses RM4SCC without alteration. There are strict guidelines governing usage of these barcodes, which allow for maximum readability by machines. They can be used with Royal Mail's Cleanmail system, as an alternative to OCR readable fonts, to allow businesses to easily and cheaply send large quantities of letters. Encoding and content An individual bar can be short, extend upwards, extend downwards, or extend both up and down. These four possibilities are reflected in the "four-state" name of the encoding. Each character is then made up of four of these bars. There are 36 possible combinations like this, and so 36 symbols: 0 to 9 and 26 letters. In addition, single-bar start and stop characters are defined. As the example shows, the complete barcode consists of a start character, the postcode, the Delivery Point Suffix (DPS), a checksum character, and a stop character. The DPS is a two-character code ranging from 1A to 9T, with codes 9U to 9Z being accepted as default codes when no DPS has been allocated. The DPS can be found in Royal Mail's Postcode Address File. Checksum For the purpose of calculating the checksum, the top and bottom halves of each character can be assigned the values shown in the table below. Each such value is derived by assigning weights of 4,2,1 and 0 to the extensions
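The count of 36 symbols quoted above follows from a constraint the excerpt does not spell out: in RM4SCC, each valid character carries exactly two ascenders and two descenders across its four bars, giving C(4,2) × C(4,2) = 36 combinations. A quick enumeration confirms the count (treating each bar as an ascender bit plus a descender bit: short = neither, up = ascender only, down = descender only, full = both):

```python
from itertools import product

# Each bar is a pair (ascender, descender) of bits; a character is four bars.
bar_states = [(a, d) for a in (0, 1) for d in (0, 1)]

valid = [bars for bars in product(bar_states, repeat=4)
         if sum(a for a, _ in bars) == 2      # exactly two ascenders
         and sum(d for _, d in bars) == 2]    # exactly two descenders

print(len(valid))  # 36, matching the 36 symbols (0-9 and A-Z)
```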
https://en.wikipedia.org/wiki/Leaf%20sensor
A leaf sensor is a phytometric device (measurement of plant physiological processes) that measures water loss or the water deficit stress (WDS) in plants by real-time monitoring the moisture level in plant leaves. The first leaf sensor was developed by LeafSens, an Israeli company granted a US patent for a mechanical leaf thickness sensing device in 2001. LeafSen has made strides incorporating their leaf sensory technology into citrus orchards in Israel. A solid state smart leaf sensor technology was developed by the University of Colorado at Boulder for NASA in 2007. It was designed to help monitor and control agricultural water demand. AgriHouse received a National Science Foundation (NSF) STTR grant in conjunction with the University of Colorado to further develop the solid state leaf sensor technology for precision irrigation control in 2007. Precision monitoring Water deficit stress measurements A Phase I research grant from the National Science Foundation in 2007 showed that the leaf sensor technology has the potential to save between 30% and 50% of irrigation water by reducing irrigation from once every 24 hours to about every 2 to 2.5 days by sensing impending water deficit stress. Leaf sensor technology developed by AgriHouse indicates water deficit stress by measuring the turgor pressure of a leaf, which decreases dramatically at the onset of leaf dehydration. Early detection of impending water deficit stress in plants can be used as an input parameter for precision irrigation control by allowing plants to communicate water requirements directly to humans and/or electronic interfaces. For example, a base system utilizing the wirelessly transmitted information of several sensors appropriately distributed over various sectors of a round field irrigated by a center-pivot irrigation system could tell the irrigation lever exactly when and what field sector needs to be irrigated. Irrigation control In a 2008 USDA sponsored field study AgriHouse's SG-1000
https://en.wikipedia.org/wiki/Cryogenic%20treatment
A cryogenic treatment is the process of treating workpieces to cryogenic temperatures (typically around −300 °F / −184 °C) in order to remove residual stresses and improve wear resistance in steels and other metal alloys, such as aluminum. In addition to seeking enhanced stress relief and stabilization, or wear resistance, cryogenic treatment is also sought for its ability to improve corrosion resistance by precipitating micro-fine eta carbides, which can be measured before and after in a part using a quantimet. The process has a wide range of applications, from industrial tooling to the improvement of musical signal transmission. Some of the benefits of cryogenic treatment include longer part life, less failure due to cracking, improved thermal properties, better electrical properties including less electrical resistance, reduced coefficient of friction, less creep and walk, improved flatness, and easier machining. Processes Cryogenic tempering Cryogenic tempering is a two-phase metal treatment involving a descent and an ascent phase, including a cryogenic treatment process (known as "cryogenic processing") in which the material is slowly cooled to ultra-low temperatures (typically around −300 °F / −184 °C) and then optionally reheated slowly (typically up to +325 °F / 162 °C). Materials do not "harden" during the temperature descent or ascent; rather, their molecular structures are compressed together tightly in uniformity through a computer-controlled process that typically uses liquid nitrogen to slowly lower temperatures. History The cryogenic treatment process was invented by Ed Busch (CryoTech) in Detroit, Michigan in 1966, inspired by NASA research. CryoTech later merged with 300 Below, Inc. in 2000 to become the world's largest and oldest commercial cryogenic processing company after Peter Paulin of Decatur, IL collaborated with process control engineers to invent the world's first comput
https://en.wikipedia.org/wiki/Quantum%20nonlocality
In theoretical physics, quantum nonlocality refers to the phenomenon by which the measurement statistics of a multipartite quantum system do not admit an interpretation in terms of a local realistic theory. Quantum nonlocality has been experimentally verified under different physical assumptions. Any physical theory that aims at superseding or replacing quantum theory should account for such experiments and therefore cannot fulfill local realism; quantum nonlocality is a property of the universe that is independent of our description of nature. Quantum nonlocality does not allow for faster-than-light communication, and hence is compatible with special relativity and its universal speed limit of objects. Thus, quantum theory is local in the strict sense defined by special relativity and, as such, the term "quantum nonlocality" is sometimes considered a misnomer. Still, it prompts many of the foundational discussions concerning quantum theory. History Einstein, Podolsky and Rosen In the 1935 EPR paper, Albert Einstein, Boris Podolsky and Nathan Rosen described "two spatially separated particles which have both perfectly correlated positions and momenta" as a direct consequence of quantum theory. They intended to use the classical principle of locality to challenge the idea that the quantum wavefunction was a complete description of reality, but instead they sparked a debate on the nature of reality. Afterwards, Einstein presented a variant of these ideas in a letter to Erwin Schrödinger, which is the version that is presented here. The state and notation used here are more modern, and akin to David Bohm's take on EPR. The quantum state of the two particles prior to measurement can be written as \(|\Psi_{AB}\rangle = \tfrac{1}{\sqrt{2}}\left(|0\rangle_A |1\rangle_B - |1\rangle_A |0\rangle_B\right),\) where \(|0\rangle\) and \(|1\rangle\) are orthonormal single-particle states. Here, subscripts "A" and "B" distinguish the two particles, though it is more convenient and usual to refer to these particles as being in the possession of two experimentalists called Alice and Bob. The rules of quantum theory give predictions for the outcomes of
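The no-signalling claim above, that nonlocal correlations cannot carry faster-than-light messages, can be checked numerically for the singlet state: Bob's outcome probabilities come out the same no matter which measurement Alice chooses, so her choice transmits no information. This is an illustrative calculation, not a derivation from the text:

```python
import numpy as np

def ket(theta):
    # qubit state cos(theta)|0> + sin(theta)|1>
    return np.array([np.cos(theta), np.sin(theta)])

# singlet state (|0>|1> - |1>|0>)/sqrt(2)
psi = (np.kron([1.0, 0.0], [0.0, 1.0]) -
       np.kron([0.0, 1.0], [1.0, 0.0])) / np.sqrt(2)

def bob_prob(alice_angle, bob_angle):
    """Probability of Bob's '+' outcome along bob_angle, marginalised
    over both outcomes of Alice's measurement along alice_angle."""
    p = 0.0
    for a in (alice_angle, alice_angle + np.pi / 2):  # Alice's full basis
        proj = np.kron(np.outer(ket(a), ket(a)),
                       np.outer(ket(bob_angle), ket(bob_angle)))
        p += psi @ proj @ psi
    return p

# Bob sees probability 1/2 whatever Alice measures: no signalling
print(round(bob_prob(0.0, 0.7), 6), round(bob_prob(1.2, 0.7), 6))  # 0.5 0.5
```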
https://en.wikipedia.org/wiki/Graceful%20labeling
In graph theory, a graceful labeling of a graph with edges is a labeling of its vertices with some subset of the integers from 0 to inclusive, such that no two vertices share a label, and each edge is uniquely identified by the absolute difference between its endpoints, such that this magnitude lies between 1 and inclusive. A graph which admits a graceful labeling is called a graceful graph. The name "graceful labeling" is due to Solomon W. Golomb; this type of labeling was originally given the name β-labeling by Alexander Rosa in a 1967 paper on graph labelings. A major conjecture in graph theory is the graceful tree conjecture or Ringel–Kotzig conjecture, named after Gerhard Ringel and Anton Kotzig, and sometimes abbreviated GTC. It hypothesizes that all trees are graceful. It is still an open conjecture, although a related but weaker conjecture known as "Ringel's conjecture" was partially proven in 2020. Kotzig once called the effort to prove the conjecture a "disease". Another weaker version of graceful labelling is near-graceful labeling, in which the vertices can be labeled using some subset of the integers on such that no two vertices share a label, and each edge is uniquely identified by the absolute difference between its endpoints (this magnitude lies on ). Another conjecture in graph theory is Rosa's conjecture, named after Alexander Rosa, which says that all triangular cacti are graceful or nearly-graceful. A graceful graph with edges 0 to is conjectured to have no fewer than vertices, due to sparse ruler results. This conjecture has been verified for all graphs with 213 or fewer edges. Selected results In his original paper, Rosa proved that an Eulerian graph with number of edges m ≡ 1 (mod 4) or m ≡ 2 (mod 4) cannot be graceful. Also in his original paper, Rosa proved that the cycle Cn is graceful if and only if n ≡ 0 (mod 4) or n ≡ 3 (mod 4). All path graphs and caterpillar graphs are graceful. All lobster graphs with a perfect matchi
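The defining condition (distinct vertex labels drawn from 0..m whose edge-wise absolute differences are exactly 1..m, where m is the number of edges) can be checked directly. A minimal Python sketch; the function name and graph representation are illustrative, not from the literature:

```python
def is_graceful(num_vertices, edges, labels):
    """Check the graceful condition: vertex labels are distinct values in
    {0, ..., m} (m = number of edges) and the edge differences
    |label(u) - label(v)| are exactly {1, ..., m}."""
    m = len(edges)
    values = list(labels.values())
    if len(set(values)) != num_vertices:
        return False                        # labels must be distinct
    if any(not 0 <= v <= m for v in values):
        return False                        # labels must lie in 0..m
    diffs = {abs(labels[u] - labels[v]) for u, v in edges}
    return diffs == set(range(1, m + 1))    # every difference appears exactly once

# The path P4 (4 vertices, 3 edges) with the graceful labeling 0-3-1-2:
p4 = [(0, 1), (1, 2), (2, 3)]
print(is_graceful(4, p4, {0: 0, 1: 3, 2: 1, 3: 2}))  # True
print(is_graceful(4, p4, {0: 0, 1: 1, 2: 2, 3: 3}))  # False (differences collide)
```

Since all path graphs are graceful, a labeling like the one above exists for every Pn; the checker only verifies a given labeling, it does not search for one.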
https://en.wikipedia.org/wiki/Fetal%20trimethadione%20syndrome
Fetal trimethadione syndrome (also known as paramethadione syndrome, German syndrome, tridione syndrome, among others) is a set of birth defects caused by the administration of the anticonvulsants trimethadione (also known as Tridione) or paramethadione to epileptic mothers during pregnancy. Fetal trimethadione syndrome is classified as a rare disease by the National Institutes of Health's Office of Rare Diseases, meaning it affects fewer than 200,000 individuals in the United States. The fetal loss rate while using trimethadione has been reported to be as high as 87%. Presentation Fetal trimethadione syndrome is characterized by the following major symptoms as a result of the teratogenic characteristics of trimethadione. Cranial and facial abnormalities, which include microcephaly, midfacial flattening, V-shaped eyebrows and a short nose Cardiovascular abnormalities Absent kidney and ureter Meningocele, a birth defect of the spine Omphalocele, a birth defect where portions of the abdominal contents project into the umbilical cord A delay in mental and physical development Diagnosis Treatment Surgery may help alleviate the effects of some physical defects, but prognosis is poor, especially for those with severe cardiovascular and cognitive problems. Speech and physical therapy, as well as special education, are required for surviving children.
https://en.wikipedia.org/wiki/Stochastic%20approximation
Stochastic approximation methods are a family of iterative methods typically used for root-finding problems or for optimization problems. The recursive update rules of stochastic approximation methods can be used, among other things, for solving linear systems when the collected data is corrupted by noise, or for approximating extreme values of functions which cannot be computed directly, but only estimated via noisy observations. In a nutshell, stochastic approximation algorithms deal with a function of the form which is the expected value of a function depending on a random variable . The goal is to recover properties of such a function without evaluating it directly. Instead, stochastic approximation algorithms use random samples of to efficiently approximate properties of such as zeros or extrema. Recently, stochastic approximations have found extensive applications in the fields of statistics and machine learning, especially in settings with big data. These applications range from stochastic optimization methods and algorithms, to online forms of the EM algorithm, reinforcement learning via temporal differences, and deep learning, and others. Stochastic approximation algorithms have also been used in the social sciences to describe collective dynamics: fictitious play in learning theory and consensus algorithms can be studied using their theory. The earliest, and prototypical, algorithms of this kind are the Robbins–Monro and Kiefer–Wolfowitz algorithms introduced respectively in 1951 and 1952. Robbins–Monro algorithm The Robbins–Monro algorithm, introduced in 1951 by Herbert Robbins and Sutton Monro, presented a methodology for solving a root finding problem, where the function is represented as an expected value. Assume that we have a function , and a constant , such that the equation has a unique root at . It is assumed that while we cannot directly observe the function , we can instead obtain measurements of the random variable where . The st
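The Robbins–Monro scheme described above can be sketched in a few lines: repeatedly observe the noisy function and step against the error with a decreasing gain. This is an illustrative Python implementation, not the original 1951 formulation; the toy target function, noise level, and gain constant are assumptions:

```python
import random

def robbins_monro(noisy_f, target, theta0, n_steps=10000, c=1.0):
    """Robbins-Monro iteration: seek theta* with E[noisy_f(theta*)] = target.

    The step sizes a_n = c/n satisfy the classic conditions
    sum a_n = infinity and sum a_n^2 < infinity, which (with a
    monotone mean function) give convergence to the root."""
    theta = theta0
    for n in range(1, n_steps + 1):
        theta -= (c / n) * (noisy_f(theta) - target)
    return theta

# Toy example (an assumption for illustration): the mean function is
# M(theta) = 2*theta + 1, observed with Gaussian noise; M(theta*) = 5
# has its unique root at theta* = 2.
random.seed(0)
noisy = lambda t: 2 * t + 1 + random.gauss(0.0, 0.5)
est = robbins_monro(noisy, target=5.0, theta0=0.0)
print(round(est, 2))  # close to 2.0
```

Only noisy evaluations of the function are ever used; the iteration never sees the underlying mean function itself, which is the point of the method.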
https://en.wikipedia.org/wiki/TaqMan
TaqMan probes are hydrolysis probes that are designed to increase the specificity of quantitative PCR. The method was first reported in 1991 by researcher Kary Mullis at Cetus Corporation, and the technology was subsequently developed by Hoffmann-La Roche for diagnostic assays and by Applied Biosystems (now part of Thermo Fisher Scientific) for research applications. The TaqMan probe principle relies on the 5´–3´ exonuclease activity of Taq polymerase to cleave a dual-labeled probe during hybridization to the complementary target sequence and fluorophore-based detection. As in other quantitative PCR methods, the resulting fluorescence signal permits quantitative measurements of the accumulation of the product during the exponential stages of the PCR; however, the TaqMan probe significantly increases the specificity of the detection. TaqMan probes were named after the videogame Pac-Man (Taq Polymerase + PacMan = TaqMan) as its mechanism is based on the Pac-Man principle. Principle TaqMan probes consist of a fluorophore covalently attached to the 5’-end of the oligonucleotide probe and a quencher at the 3’-end. Several different fluorophores (e.g. 6-carboxyfluorescein, acronym: FAM, or tetrachlorofluorescein, acronym: TET) and quenchers (e.g. tetramethylrhodamine, acronym: TAMRA) are available. The quencher molecule quenches the fluorescence emitted by the fluorophore when excited by the cycler’s light source via Förster resonance energy transfer (FRET). As long as the fluorophore and the quencher are in proximity, quenching inhibits any fluorescence signals. TaqMan probes are designed such that they anneal within a DNA region amplified by a specific set of primers. (Unlike the diagram, the probe binds to single stranded DNA.) TaqMan probes can be conjugated to a minor groove binder (MGB) moiety, dihydrocyclopyrroloindole tripeptide (DPI3), in order to increase its binding affinity to the target sequence; MGB-conjugated probes have a higher melting temperature (T
https://en.wikipedia.org/wiki/Fouling%20community
Fouling communities are communities of organisms found on artificial surfaces like the sides of docks, marinas, harbors, and boats. Settlement panels made from a variety of substances have been used to monitor settlement patterns and to examine several community processes (e.g., succession, recruitment, predation, competition, and invasion resistance). These communities are characterized by the presence of a variety of sessile organisms including ascidians, bryozoans, mussels, tube building polychaetes, sea anemones, sponges, barnacles, and more. Common predators on and around fouling communities include small crabs, starfish, fish, limpets, chitons, other gastropods, and a variety of worms. Ecology Fouling communities follow a distinct succession pattern in a natural environment. Environmental impact Impacts on Humans Fouling communities can have a negative economic impact on humans by damaging the bottoms of boats, docks, and other marine human-made structures. This effect is known as biofouling, and has been combated with anti-fouling paint, which is now known to introduce toxic metals to the marine environment. Fouling communities contain a variety of species, many of which are filter feeders, meaning that organisms in the fouling community can also improve water clarity. Invasive Species Fouling communities do grow on natural structures; however, these communities are largely made up of native species, whereas the communities growing on man-made structures have larger populations of invasive species. This difference in species diversity between human structures and natural substrate is likely dependent on human pollution, which is known to weaken native species and create a community and environment dominated by non-indigenous species. These largely non-indigenous communities living on docks and boats usually have a higher resistance to anthropogenic disturbances. This effect is sorely felt in untouched native marine communities, as non
https://en.wikipedia.org/wiki/Nonlinear%20conjugate%20gradient%20method
In numerical optimization, the nonlinear conjugate gradient method generalizes the conjugate gradient method to nonlinear optimization. For a quadratic function the minimum of is obtained when the gradient is 0: . Whereas linear conjugate gradient seeks a solution to the linear equation , the nonlinear conjugate gradient method is generally used to find the local minimum of a nonlinear function using its gradient alone. It works when the function is approximately quadratic near the minimum, which is the case when the function is twice differentiable at the minimum and the second derivative is non-singular there. Given a function of variables to minimize, its gradient indicates the direction of maximum increase. One simply starts in the opposite (steepest descent) direction: with an adjustable step length and performs a line search in this direction until it reaches the minimum of : , After this first iteration in the steepest direction , the following steps constitute one iteration of moving along a subsequent conjugate direction , where : Calculate the steepest direction: , Compute according to one of the formulas below, Update the conjugate direction: Perform a line search: optimize , Update the position: , With a pure quadratic function the minimum is reached within N iterations (excepting roundoff error), but a non-quadratic function will make slower progress. Subsequent search directions lose conjugacy requiring the search direction to be reset to the steepest descent direction at least every N iterations, or sooner if progress stops. However, resetting every iteration turns the method into steepest descent. The algorithm stops when it finds the minimum, determined when no progress is made after a direction reset (i.e. in the steepest descent direction), or when some tolerance criterion is reached. Within a linear approximation, the parameters and are the same as in the linear conjugate gradient method but have been obtaine
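The iteration listed above (steepest direction, a beta formula, conjugate-direction update, line search, position update) can be sketched as follows. This hedged Python example uses the Polak–Ribière choice of beta with the common nonnegativity reset and a simple backtracking (Armijo) line search; the test function is illustrative, and a production optimizer would use a stronger line search:

```python
def nonlinear_cg(f, grad, x0, n_iter=100):
    """Nonlinear conjugate gradient with Polak-Ribiere beta.
    beta is clipped at zero (an automatic reset to steepest descent),
    and the direction is also reset every N = len(x0) iterations."""
    n = len(x0)
    x = list(x0)
    g = grad(x)
    d = [-gi for gi in g]                      # first step: steepest descent
    for k in range(n_iter):
        # Backtracking line search satisfying the Armijo condition.
        alpha, fx = 1.0, f(x)
        g_dot_d = sum(gi * di for gi, di in zip(g, d))
        while f([xi + alpha * di for xi, di in zip(x, d)]) > fx + 1e-4 * alpha * g_dot_d:
            alpha *= 0.5
            if alpha < 1e-12:
                break
        x = [xi + alpha * di for xi, di in zip(x, d)]
        g_new = grad(x)
        # Polak-Ribiere: beta = g_new . (g_new - g) / (g . g)
        denom = max(sum(gi * gi for gi in g), 1e-30)
        beta = sum(gn * (gn - gi) for gn, gi in zip(g_new, g)) / denom
        beta = max(beta, 0.0)
        if (k + 1) % n == 0:
            beta = 0.0                         # periodic reset to steepest descent
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x

# Illustrative quadratic: f(x, y) = (x - 1)^2 + 10*(y + 2)^2, minimum at (1, -2).
f = lambda v: (v[0] - 1) ** 2 + 10 * (v[1] + 2) ** 2
grad = lambda v: [2 * (v[0] - 1), 20 * (v[1] + 2)]
x = nonlinear_cg(f, grad, [0.0, 0.0])
print([round(c, 3) for c in x])  # near [1.0, -2.0]
```

On this quadratic the method behaves like linear CG and converges in a handful of iterations; on a genuinely nonlinear function the resets become important, as the text notes.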
https://en.wikipedia.org/wiki/Vegard%27s%20law
In crystallography, materials science and metallurgy, Vegard's law is an empirical finding (heuristic approach) resembling the rule of mixtures. In 1921, Lars Vegard discovered that the lattice parameter of a solid solution of two constituents is approximately a weighted mean of the two constituents' lattice parameters at the same temperature: e.g., in the case of a mixed oxide of uranium and plutonium as used in the fabrication of MOX nuclear fuel: Vegard's law assumes that both components A and B in their pure form (i.e. before mixing) have the same crystal structure. Here, is the lattice parameter of the solid solution, and are the lattice parameters of the pure constituents, and is the molar fraction of B in the solid solution. Vegard's law is seldom perfectly obeyed; often deviations from the linear behavior are observed. A detailed study of such deviations was conducted by King. However, it is often used in practice to obtain rough estimates when experimental data are not available for the lattice parameter for the system of interest. For systems known to approximately obey Vegard's law, the approximation may also be used to estimate the composition of a solution from knowledge of its lattice parameters, which are easily obtained from diffraction data. For example, consider the semiconductor compound . A relation exists between the constituent elements and their associated lattice parameters, , such that: When variations in lattice parameter are very small across the entire composition range, Vegard's law becomes equivalent to Amagat's law. Relationship to band gaps in semiconductors In many binary semiconducting systems, the band gap in semiconductors is approximately a linear function of the lattice parameter. Therefore, if the lattice parameter of a semiconducting system follows Vegard's law, one can also write a linear relationship between the band gap and composition. Using as before, the band gap energy, , can be written as: Sometimes, the l
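The linear mixing rule, and its inversion to estimate composition from a measured lattice parameter, translate directly into code. A Python sketch; the GaAs/InAs lattice parameters below are commonly quoted approximate values used only for illustration:

```python
def vegard_lattice_parameter(a_A, a_B, x_B):
    """Vegard's law: a(x) = (1 - x) * a_A + x * a_B,
    with x_B the mole fraction of constituent B."""
    return (1 - x_B) * a_A + x_B * a_B

def composition_from_lattice(a_A, a_B, a_obs):
    """Invert Vegard's law: estimate the mole fraction of B from a
    measured lattice parameter (e.g. obtained from diffraction data)."""
    return (a_obs - a_A) / (a_B - a_A)

# Illustrative In(x)Ga(1-x)As alloy with approximate lattice parameters
# a(GaAs) ~ 5.653 A and a(InAs) ~ 6.058 A:
a = vegard_lattice_parameter(5.653, 6.058, 0.5)
print(round(a, 4))                                    # 5.8555
print(round(composition_from_lattice(5.653, 6.058, a), 3))  # 0.5
```

As the text cautions, real systems often deviate from this linear behavior, so both functions should be read as rough estimates rather than exact relations.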
https://en.wikipedia.org/wiki/Blaney%E2%80%93Criddle%20equation
The Blaney–Criddle equation (named after H. F. Blaney and W. D. Criddle) is a method for estimating reference crop evapotranspiration. Usage The Blaney–Criddle equation is a relatively simplistic method for calculating evapotranspiration. When sufficient meteorological data is available the Penman–Monteith equation is usually preferred. However, the Blaney–Criddle equation is ideal when only air-temperature datasets are available for a site. Given the coarse accuracy of the Blaney–Criddle equation, it is recommended that it be used to calculate evapotranspiration for periods of one month or greater. The equation calculates evapotranspiration for a 'reference crop', which is taken as actively growing green grass of 8–15 cm height. Equation ETo = p ·(0.457·Tmean + 8.128) Where: ETo is the reference evapotranspiration [mm day−1] (monthly) Tmean is the mean daily temperature [°C] given as Tmean = (Tmax + Tmin )/ 2 p is the mean daily percentage of annual daytime hours. Accuracy and bias Given the limited data input to the equation, the calculated evapotranspiration should be regarded as only broadly accurate. Rather than a precise measure of evapotranspiration, the output of the equation is better thought of as providing an order of magnitude. The inaccuracy of the equation is exacerbated by extreme variants of weather. In particular evapotranspiration is known to be exaggerated by up to 40% in calm, humid, clouded areas and depreciated by 60% in windy, dry, sunny areas. See also Jensen–Haise equation (M. E. Jensen and H. R. Haise, 1963) Penman–Monteith equation External links Rational Use of the FAO Blaney-Criddle Formula (Allen 1986) Potential Evapotranspiration Notes and references Agronomy Equations
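The equation above translates directly into code. A Python sketch using the coefficients given in the text (0.457 and 8.128); the example month's temperatures and the daytime-hours percentage p are illustrative assumptions, not tabulated data:

```python
def blaney_criddle_eto(t_max, t_min, p):
    """Reference crop evapotranspiration [mm/day] via the Blaney-Criddle
    form given in the text: ETo = p * (0.457 * Tmean + 8.128),
    with Tmean = (Tmax + Tmin) / 2 in degrees Celsius and p the mean
    daily percentage of annual daytime hours for the month."""
    t_mean = (t_max + t_min) / 2
    return p * (0.457 * t_mean + 8.128)

# Illustrative month: Tmax = 30 C, Tmin = 18 C, p = 0.29.
print(round(blaney_criddle_eto(30.0, 18.0, 29/100 * 100 / 100), 2))  # hypothetical inputs
print(round(blaney_criddle_eto(30.0, 18.0, 0.29), 2))  # 5.54 mm/day
```

Per the accuracy caveats above, such a figure is an order-of-magnitude monthly estimate, not a precise daily measurement.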
https://en.wikipedia.org/wiki/Twinkie%20the%20Kid
Twinkie the Kid is the mascot for Twinkies, Hostess's golden cream-filled snack cakes. He is a registered trademark of Hostess Brands. He made his debut in 1971. He has appeared on product packaging, in commercials and as related collectible merchandise, except for a brief period between 1988 and 1990. Description Twinkie the Kid is an anthropomorphized Twinkie appearing as a wrangler. He wears boots, gloves, a kerchief with hearts, and a ten-gallon hat with the words "Twinkie the Kid" on the band. He was created by Denny Lesser, a route delivery driver for Hostess in the San Fernando Valley. He designed the mascot and his wife made the costume that he used for a traveling promotional campaign. Animated commercial appearances The character appeared in animated TV advertisements for Twinkies in the 1970s, voiced by Allen Swift. See also Captain Cupcake Chauncey Chocodile Fruit Pie the Magician Notes Cartoon mascots Food advertising characters Male characters in advertising Fictional food characters Fictional cowboys and cowgirls Hostess Brands Mascots introduced in 1971
https://en.wikipedia.org/wiki/Bedtime
Bedtime (also called putting to bed or tucking in) is a ritual part of parenting to help children feel more secure and become accustomed to a more rigid schedule of sleep than they might prefer. The ritual of bedtime is aimed at facilitating the transition from wakefulness to sleep. It may involve bedtime stories, children's songs, nursery rhymes, bed-making and getting children to change into nightwear. In some religious households, prayers are said shortly before going to bed. Sleep training may be part of the bedtime ritual for babies and toddlers. In adult use, the term means simply "time for bed", similar to curfew, as in "It's past my bedtime". Some people are accustomed to drinking a nightcap or herbal tea at bedtime. Sleeping coaches are also used to help individuals reach their bedtime goals. Researchers studying sleep are finding patterns revealing that cell phone use at night disturbs going to sleep at one's bedtime and achieving a good night's sleep. Synonyms In boarding schools and on trips or holidays that involve young people, the equivalent of bedtime is lights out or lights-out - this term is also used in prisons, hospitals, in the military, and in sleep research. Newspapers In the pre-digital newspaper era, a newspaper, usually daily, was "put to bed" when editorial work on the issue had formally ceased, the content was fixed, and printing could begin. See also Crib talk Lullaby Sleep cycle
https://en.wikipedia.org/wiki/The%20Foundations%20of%20Arithmetic
The Foundations of Arithmetic () is a book by Gottlob Frege, published in 1884, which investigates the philosophical foundations of arithmetic. Frege refutes other theories of number and develops his own theory of numbers. The Grundlagen also helped to motivate Frege's later works in logicism. The book was not well received and was not read widely when it was published. It did, however, draw the attentions of Bertrand Russell and Ludwig Wittgenstein, who were both heavily influenced by Frege's philosophy. An English translation was published (Oxford, 1950) by J. L. Austin, with a second edition in 1960. Criticisms of predecessors Psychologistic accounts of mathematics Frege objects to any account of mathematics based on psychologism, that is, the view that mathematics and numbers are relative to the subjective thoughts of the people who think of them. According to Frege, psychological accounts appeal to what is subjective, while mathematics is purely objective: mathematics is completely independent from human thought. Mathematical entities, according to Frege, have objective properties regardless of humans thinking of them: it is not possible to think of mathematical statements as something that evolved naturally through human history and evolution. He sees a fundamental distinction between logic (and its extension, according to Frege, math) and psychology. Logic explains necessary facts, whereas psychology studies certain thought processes in individual minds. Kant Frege greatly appreciates the work of Immanuel Kant. He criticizes him mainly on the grounds that numerical statements are not synthetic-a priori, but rather analytic-a priori. Kant claims that 7+5=12 is an unprovable synthetic statement. No matter how much we analyze the idea of 7+5 we will not find there the idea of 12. We must arrive at the idea of 12 by application to objects in the intuition. Kant points out that this becomes all the more clear with bigger numbers. Frege, on this point precisely
https://en.wikipedia.org/wiki/ISCSI%20Extensions%20for%20RDMA
The iSCSI Extensions for RDMA (iSER) is a computer network protocol that extends the Internet Small Computer System Interface (iSCSI) protocol to use Remote Direct Memory Access (RDMA). RDMA can be provided by iWARP (the Transmission Control Protocol with RDMA services), which runs over existing Ethernet infrastructure and therefore requires no large hardware investment; by RoCE (RDMA over Converged Ethernet), which omits the TCP layer and therefore offers lower latency; or by InfiniBand. iSER permits data to be transferred directly into and out of SCSI computer memory buffers (which connect computers to storage devices) without intermediate data copies and without much CPU intervention. History An RDMA consortium was announced on May 31, 2002, with a goal of product implementations by 2003. The consortium released their proposal in July 2003. The protocol specifications were published as drafts in September 2004 in the Internet Engineering Task Force and issued as RFCs in October 2007. The OpenIB Alliance was renamed in 2007 to the OpenFabrics Alliance, which then released an open source software package. Description The motivation for iSER is to use RDMA to avoid unnecessary data copying on the target and initiator. The Datamover Architecture (DA) defines an abstract model in which the movement of data between iSCSI end nodes is logically separated from the rest of the iSCSI protocol; iSER is one Datamover protocol. The interface between iSCSI and a Datamover protocol, iSER in this case, is called the Datamover Interface (DI). The main difference between standard iSCSI and iSCSI over iSER is the execution of SCSI read/write commands. With iSER the target drives all data transfer (with the exception of iSCSI unsolicited data) by issuing RDMA write/read operations, respectively. When the iSCSI layer issues an iSCSI command PDU, it calls the Send_Control primitive, which is part of the DI. The Send_Control primitive sends the STag with the PDU. The iSER layer in
https://en.wikipedia.org/wiki/Chlorophyllum%20molybdites
Chlorophyllum molybdites, commonly known as the green-spored parasol, false parasol, green-spored lepiota and vomiter, is a widespread mushroom. Poisonous and producing severe gastrointestinal symptoms of vomiting and diarrhea, it is commonly confused with the shaggy parasol (Chlorophyllum rhacodes) or shaggy mane (Coprinus comatus), and is the most commonly misidentified poisonous mushroom in North America. Its large size and similarity to the edible parasol mushroom (Macrolepiota procera), as well as its habit of growing in areas near human habitation, are reasons cited for this. The nature of the poisoning is predominantly gastrointestinal. Description It is an imposing mushroom with a pileus (cap) ranging from in diameter, hemispherical and with a flattened top. The cap is whitish in colour with coarse brownish scales. The gills are free and white, usually turning dark and green with maturity. It has a rare green spore print. The stipe ranges from tall and bears a double-edged ring. Its stem lacks the snakeskin pattern that is generally present on the parasol mushroom. Flesh thick, firm at first, soft with age, white, unchanging or sporadically becoming reddish-brown to pale reddish-pink, almost orange in the base of the foot when cut or crushed. Distribution and habitat Chlorophyllum molybdites grows in lawns and parks across eastern North America and California, as well as temperate and subtropical regions around the world. Fruiting bodies generally appear after summer and autumn rains. It appears to have spread to other countries, with reports from Scotland, Australia, and Cyprus. Toxicity Chlorophyllum molybdites is the most frequently eaten poisonous mushroom in North America. The symptoms are predominantly gastrointestinal in nature, with vomiting, diarrhea and colic, often severe, occurring 1–3 hours after consumption. Although these poisonings can be severe, particularly in children, none have yet resulted in death. Professor James Kimbrough write
https://en.wikipedia.org/wiki/Dirichlet%20algebra
In mathematics, a Dirichlet algebra is a particular type of algebra associated to a compact Hausdorff space X. It is a closed subalgebra of C(X), the uniform algebra of bounded continuous functions on X, whose real parts are dense in the algebra of bounded continuous real functions on X. The concept was introduced by . Example Let be the set of all rational functions that are continuous on ; in other words functions that have no poles in . Then is a *-subalgebra of , and of . If is dense in , we say is a Dirichlet algebra. It can be shown that if an operator has as a spectral set, and is a Dirichlet algebra, then has a normal boundary dilation. This generalises Sz.-Nagy's dilation theorem, which can be seen as a consequence of this by letting
https://en.wikipedia.org/wiki/Spectral%20set
In operator theory, a set is said to be a spectral set for a (possibly unbounded) linear operator on a Banach space if the spectrum of is in and von-Neumann's inequality holds for on - i.e. for all rational functions with no poles on This concept is related to the topic of analytic functional calculus of operators. In general, one wants to get more details about the operators constructed from functions with the original operator as the variable. For a detailed discussion of spectral sets and von Neumann's inequality, see. Functional analysis
https://en.wikipedia.org/wiki/Load%20dump
Load dump means the disconnection of a powered load. It can cause two problems: failure of supply to equipment or customers large voltage spikes from the inductive generator(s) In automotive electronics, it refers to the disconnection of the vehicle battery from the alternator while the battery is being charged. Due to such a disconnection of the battery, other loads connected to the alternator experience a surge in the voltage on the battery bus. This surge may be as high as 120 volts and may take up to 400 ms to decay. It is typically clamped to 40 V in 12 V vehicles and to about 60 V in 24 V systems. Overview The field winding of an alternator has a large inductance. When the vehicle battery is being charged, the alternator generates a large current, the magnitude of which is controlled by the current in the field winding. If the battery becomes disconnected while it is being charged, the load on the alternator suddenly decreases. However, the vehicle's voltage regulator cannot reduce the field current quickly enough, so the alternator continues to generate a large current. This large current causes the voltage on the vehicle bus to rise significantly, well above the normal regulated level. All the loads connected to the alternator see this high voltage spike. The strength of the spike depends on many factors, including the speed at which the alternator is rotating and the current that was being supplied to the battery before it was disconnected. These spikes may peak at as high as 120 V and may take up to 400 ms to decay. This kind of spike would damage many semiconductor devices, e.g. ECUs, that may be connected to the alternator. Special protection devices, such as TVS diodes or varistors, which can withstand and absorb the energy of these spikes, may be added to protect such semiconductor devices. Various automotive standards such as ISO 7637-2 and SAE J1113-11 specify a standard shape of the load dump pulse against which
https://en.wikipedia.org/wiki/Transaction-level%20modeling
Transaction-level modeling (TLM) is an approach to modelling complex digital systems by using electronic design automation software. TLM language (TLML) is a hardware description language, usually, written in C++ and based on SystemC library. TLMLs are used for modelling where details of communication among modules are separated from the details of the implementation of functional units or of the communication architecture. It's used for modelling of systems that involve complex data communication mechanisms. Components such as buses or FIFOs are modeled as channels, and are presented to modules using SystemC interface classes. Transaction requests take place by calling interface functions of these channel models, which encapsulate low-level details of the information exchange. At the transaction level, the emphasis is more on the functionality of the data transfers – what data are transferred to and from what locations – and less on their actual implementation, that is, on the actual protocol used for data transfer. This approach makes it easier for the system-level designer to experiment, for example, with different bus architectures (all supporting a common abstract interface) without having to recode models that interact with any of the buses, provided these models interact with the bus through the common interface. However, the application of transaction-level modeling is not specific to the SystemC language and can be used with other languages. The concept of TLM first appears in system level language and modeling domain. Transaction-level models are used for high-level synthesis of register-transfer level (RTL) models for a lower-level modelling and implementation of system components. RTL is usually represented by a hardware description language source code (e.g. VHDL, SystemC, Verilog). History In 2000, Thorsten Grötker, R&D Manager at Synopsys was preparing a presentation on the communication mechanism in what was to become the SystemC 2.0 standard, a
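The core idea (modules exchange data by calling transaction functions on an abstract channel interface, so a channel model can be swapped without recoding the modules) can be illustrated outside SystemC. A minimal Python sketch of the abstraction, not SystemC/C++ TLM code; the class and method names are invented for illustration:

```python
import queue

class FifoChannel:
    """Transaction-level channel model: callers see only high-level
    write/read transactions, never pin- or protocol-level detail."""
    def __init__(self, depth):
        self._q = queue.Queue(maxsize=depth)

    def write(self, data):
        self._q.put(data)      # one call = one complete transaction request

    def read(self):
        return self._q.get()   # likewise for the read transaction

class Producer:
    def __init__(self, channel):
        self.channel = channel
    def run(self, items):
        for item in items:
            self.channel.write(item)

class Consumer:
    def __init__(self, channel):
        self.channel = channel
    def run(self, n):
        return [self.channel.read() for _ in range(n)]

# A different channel model (say, a bus with arbitration) could replace
# FifoChannel without any change to Producer or Consumer, as long as the
# write/read interface is preserved -- the point of transaction-level modeling.
ch = FifoChannel(depth=4)
Producer(ch).run([1, 2, 3])
print(Consumer(ch).run(3))  # [1, 2, 3]
```

In SystemC proper, the channel would implement `sc_interface`-derived interface classes and the modules would bind to it through ports; the sketch only mirrors that separation of communication from computation.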
https://en.wikipedia.org/wiki/Desoxyribonucleate
"Desoxyribonucleic acid" and "desoxyribonucleate" are archaic terms for DNA, deoxyribonucleic acid, and its salts, respectively. The terms are used in this sense in various classic papers in genetics, such as Avery, MacLeod, and McCarty (1944).
https://en.wikipedia.org/wiki/Variational%20Monte%20Carlo
In computational physics, variational Monte Carlo (VMC) is a quantum Monte Carlo method that applies the variational method to approximate the ground state of a quantum system. The basic building block is a generic wave function depending on some parameters . The optimal values of the parameters are then found by minimizing the total energy of the system. In particular, given the Hamiltonian , and denoting with a many-body configuration, the expectation value of the energy can be written as: Following the Monte Carlo method for evaluating integrals, we can interpret as a probability distribution function, sample it, and evaluate the energy expectation value as the average of the so-called local energy . Once is known for a given set of variational parameters , an optimization is performed in order to minimize the energy and obtain the best possible representation of the ground-state wave-function. VMC is no different from any other variational method, except that the many-dimensional integrals are evaluated numerically. Monte Carlo integration is particularly crucial in this problem since the dimension of the many-body Hilbert space, comprising all the possible values of the configurations , typically grows exponentially with the size of the physical system. Other approaches to the numerical evaluation of the energy expectation values would therefore, in general, limit applications to much smaller systems than those analyzable thanks to the Monte Carlo approach. The accuracy of the method then largely depends on the choice of the variational state. The simplest choice typically corresponds to a mean-field form, where the state is written as a factorization over the Hilbert space. This particularly simple form is typically not very accurate since it neglects many-body effects. One of the largest gains in accuracy over writing the wave function separably comes from the introduction of the so-called Jastrow factor. In this case the wave function is writt
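A minimal concrete instance of the procedure described above: sample the squared wave function with Metropolis moves and average the local energy. This Python sketch uses the 1D harmonic oscillator H = -1/2 d^2/dx^2 + 1/2 x^2 with a Gaussian trial state psi(x) = exp(-alpha x^2), for which the local energy works out to E_L(x) = alpha + x^2 (1/2 - 2 alpha^2); the sampler step size and sample count are illustrative choices:

```python
import math
import random

def vmc_energy(alpha, n_samples=20000, step=1.0, seed=0):
    """Variational Monte Carlo energy estimate for the 1D harmonic
    oscillator with trial state psi(x) = exp(-alpha x^2).
    Metropolis sampling of |psi(x)|^2 = exp(-2 alpha x^2)."""
    rng = random.Random(seed)
    x = 0.0
    e_sum = 0.0
    for _ in range(n_samples):
        x_new = x + step * (2.0 * rng.random() - 1.0)
        # Accept with probability min(1, |psi(x_new)|^2 / |psi(x)|^2).
        if rng.random() < math.exp(-2.0 * alpha * (x_new * x_new - x * x)):
            x = x_new
        # Local energy E_L(x) = alpha + x^2 * (1/2 - 2 alpha^2).
        e_sum += alpha + x * x * (0.5 - 2.0 * alpha * alpha)
    return e_sum / n_samples

# alpha = 0.5 gives the exact ground state: E_L = 1/2 everywhere,
# so the estimator has zero variance.
print(vmc_energy(0.5))           # 0.5
# Away from the optimum the estimate rises, as the variational principle requires.
print(vmc_energy(0.3) > 0.5)     # True
```

The zero-variance property at the exact wave function is a useful sanity check for any VMC code: the closer the trial state is to an eigenstate, the smaller the statistical fluctuations of the local energy.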
https://en.wikipedia.org/wiki/Diffusion%20Monte%20Carlo
Diffusion Monte Carlo (DMC) or diffusion quantum Monte Carlo is a quantum Monte Carlo method that uses a Green's function to solve the Schrödinger equation. DMC is potentially numerically exact, meaning that it can find the exact ground state energy within a given error for any quantum system. When actually attempting the calculation, one finds that for bosons, the algorithm scales as a polynomial with the system size, but for fermions, DMC scales exponentially with the system size. This makes exact large-scale DMC simulations for fermions impossible; however, DMC employing a clever approximation known as the fixed-node approximation can still yield very accurate results. The projector method To motivate the algorithm, let's look at the Schrödinger equation for a particle in some potential in one dimension: We can condense the notation a bit by writing it in terms of an operator equation, with . So then we have where we have to keep in mind that is an operator, not a simple number or function. There are special functions, called eigenfunctions, for which , where is a number. These functions are special because no matter where we evaluate the action of the operator on the wave function, we always get the same number . These functions are called stationary states, because the time derivative at any point is always the same, so the amplitude of the wave function never changes in time. Since the overall phase of a wave function is not measurable, the system does not change in time. We are usually interested in the wave function with the lowest energy eigenvalue, the ground state. We're going to write a slightly different version of the Schrödinger equation that will have the same energy eigenvalue, but, instead of being oscillatory, it will be convergent. Here it is: . We've removed the imaginary number from the time derivative and added in a constant offset of , which is the ground state energy. We don't actually know the ground state energy, but t
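A minimal numerical sketch of this projector idea, again for the one-dimensional harmonic oscillator (ħ = m = ω = 1, exact ground-state energy 1/2): a population of walkers diffuses in imaginary time and branches according to the local potential. The branching rule, the population-control feedback, the estimator, and all parameters below are assumptions for a toy example without importance sampling:

```python
import math
import random

def dmc_ground_energy(n_target=500, n_steps=2000, dt=0.05, seed=0):
    """Crude diffusion Monte Carlo for V(x) = x^2 / 2."""
    rng = random.Random(seed)
    walkers = [0.0] * n_target
    e_trial = 0.0  # energy offset used to stabilize the population
    samples = []
    for step in range(n_steps):
        new_walkers = []
        for x in walkers:
            x += rng.gauss(0.0, math.sqrt(dt))                # diffusion step
            weight = math.exp(-(0.5 * x * x - e_trial) * dt)  # branching weight
            copies = int(weight + rng.random())               # stochastic rounding
            new_walkers.extend([x] * min(copies, 3))          # cap the branching
        walkers = new_walkers or [0.0]
        v_mean = sum(0.5 * x * x for x in walkers) / len(walkers)
        # feedback: nudge the offset so the walker count stays near n_target
        e_trial = v_mean + 0.1 * (1.0 - len(walkers) / n_target)
        if step >= n_steps // 2:
            samples.append(v_mean)
    return sum(samples) / len(samples)
```

Here the average potential over the walker population serves as the energy estimator; after equilibration the estimate hovers near the exact value 1/2, up to time-step and population-control bias.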
https://en.wikipedia.org/wiki/Equity-indexed%20annuity
An indexed annuity (the word equity previously tied to indexed annuities has been removed to help prevent the assumption of stock market investing being present in these products) in the United States is a type of tax-deferred annuity whose credited interest is linked to an equity index—typically the S&P 500 or an international index. It guarantees a minimum interest rate (typically between 1% and 3%) if held to the end of the surrender term and protects against a loss of principal. An equity index annuity is a contract with an insurance or annuity company. The returns may be higher than fixed instruments such as certificates of deposit (CDs), money market accounts, and bonds but not as high as market returns. Equity index annuities are insured by each state's Guarantee Fund; coverage is not as strong as the insurance provided by the FDIC. For example, in California the fund will cover "80%, not to exceed $250,000." The guarantees in the contract are backed by the relative strength of the insurer. The contracts may be suitable for a portion of the asset portfolio for those who want to avoid risk and are in retirement or nearing retirement age. The objective of purchasing an equity index annuity is to realize greater gains than those provided by CDs, money markets or bonds, while still protecting principal. The long-term ability of equity index annuities to beat the returns of other fixed instruments is a matter of debate. Indexed annuities represented about 25.3% of all fixed annuity sales in 2020, according to My Annuity Store, Inc. Equity-indexed annuities may also be referred to as fixed indexed annuities or simply indexed annuities. The mechanics of equity-indexed annuities are often complex, and the returns can vary greatly depending on the month and year the annuity is purchased. Like many other types of annuities, equity-indexed annuities usually carry a surrender charge for early withdrawal. These "surrender periods" range between 3 and 16 years; typically
https://en.wikipedia.org/wiki/Gaussian%20quantum%20Monte%20Carlo
Gaussian Quantum Monte Carlo is a quantum Monte Carlo method that offers a potential solution to the fermion sign problem without the deficiencies of alternative approaches. Instead of the Hilbert space, this method works in the space of density matrices, which can be spanned by an over-complete basis of Gaussian operators using only positive coefficients. Because these operators contain only quadratic forms of the fermionic operators, no anti-commuting variables occur, and any quantum state can be expressed as a real probability distribution.
https://en.wikipedia.org/wiki/Lateral%20sural%20cutaneous%20nerve
The lateral sural cutaneous nerve of the lumbosacral plexus supplies the skin on the posterior and lateral surfaces of the leg. The lateral sural cutaneous nerve originates from the common fibular nerve (L4–S2) and is the terminal branch of the common fibular nerve. Sural communicating branch One branch, the sural communicating nerve, known colloquially as the peroneal anastomotic nerve (n. communicans fibularis), arises from sciatic origins near the head of the fibula, crosses the lateral head of the gastrocnemius to the middle of the leg, and joins with the medial sural cutaneous nerve to form the sural nerve. Variation Another branch, mentioned only in passing in previous literature, is the medial branch of the lateral sural cutaneous nerve. In a 2021 study by Steele et al. (Annals of Anatomy), a medial branch of the lateral sural cutaneous nerve was observed in approximately 36% of lower extremities dissected (n=208), with an average diameter of 1.47 ± 0.655 mm and a 95% CI of 1.31–1.625 mm. This branch was noted to travel in a subcutaneous plane over the sural nerve to the posteromedial aspect of the ankle. "The lateral branch of the LSCN traveled the expected course over the fibula in the superficial fascia of the posterolateral compartment of the leg, while the medial branch terminates into the lower posteromedial aspect of ankle." Additional images
https://en.wikipedia.org/wiki/Medial%20sural%20cutaneous%20nerve
The medial sural cutaneous nerve (L4–S3) is a sensory nerve of the leg. It supplies cutaneous innervation to the posteromedial leg. Structure The medial sural cutaneous nerve originates from the posterior aspect of the tibial nerve of the sciatic nerve. It descends between the two heads of the gastrocnemius muscle. Around the middle of the back of the leg, it pierces the deep fascia to become superficial. It unites with the lateral sural cutaneous nerve to form the sural nerve. Morphometric properties According to a large cadaveric study by Steele et al., in which 208 sural nerves were dissected in their native position, the medial sural cutaneous nerve was consistently present in most lower extremities. This finding aligns with other research as well. Only one sample in Steele et al. did not contain a medial sural cutaneous nerve. The diameter (at the medial sural cutaneous nerve origin) was found to be 2.74 ± 0.93 mm (2.62–2.86) in 207 samples. Two new variations (as of 2021) of the sural nerve complex were observed in which the MSCN travels to the lateral ankle and provides the branches for the lateral calcaneal nerves of the lateral ankle. Normally the sural nerve serves this purpose. Additional images
https://en.wikipedia.org/wiki/Path%20integral%20Monte%20Carlo
Path integral Monte Carlo (PIMC) is a quantum Monte Carlo method used to solve quantum statistical mechanics problems numerically within the path integral formulation. The application of Monte Carlo methods to path integral simulations of condensed matter systems was first pursued in a key paper by John A. Barker. The method is typically (but not necessarily) applied under the assumption that symmetry or antisymmetry under exchange can be neglected, i.e., identical particles are assumed to be quantum Boltzmann particles, as opposed to fermions and bosons. The method is often applied to calculate thermodynamic properties such as the internal energy, heat capacity, or free energy. As with all Monte Carlo-based approaches, a large number of points must be calculated. In principle, as more path descriptors are used (these can be "replicas," "beads," or "Fourier coefficients," depending on what strategy is used to represent the paths), the more quantum (and the less classical) the result becomes. However, for some properties the corrections may initially make model predictions less accurate than neglecting them when only a small number of path descriptors is included. At some point the number of descriptors is sufficiently large and the corrected model begins to converge smoothly to the correct quantum answer. Because it is a statistical sampling method, PIMC can take anharmonicity fully into account, and because it is quantum, it takes into account important quantum effects such as tunneling and zero-point energy (while neglecting the exchange interaction in some cases). The basic framework was originally formulated within the canonical ensemble, but has since been extended to include the grand canonical ensemble and the microcanonical ensemble. Its use has been extended to fermion systems as well as systems of bosons. An early application was to the study of liquid helium. Numerous applications have been made to other systems, including liquid wat
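To make the bead picture concrete, here is a toy PIMC estimate of ⟨x²⟩ for a single quantum particle in a harmonic well (ħ = m = ω = 1) at inverse temperature β, using the primitive discretized action, single-bead Metropolis moves, and a rigid shift of the whole ring. The bead count, step sizes, and sweep counts are illustrative assumptions, not a production algorithm:

```python
import math
import random

def pimc_x2(beta=1.0, n_beads=16, n_sweeps=10000, step=0.5, seed=0):
    """Estimate <x^2> for V(x) = x^2 / 2 with a ring polymer of n_beads."""
    rng = random.Random(seed)
    P, tau = n_beads, beta / n_beads
    x = [0.0] * P

    def bead_action(j, xj):
        # spring terms to both neighbours plus this bead's potential weight
        left, right = x[(j - 1) % P], x[(j + 1) % P]
        return ((xj - left) ** 2 + (xj - right) ** 2) / (2.0 * tau) \
            + tau * 0.5 * xj * xj

    total, count = 0.0, 0
    for sweep in range(n_sweeps):
        for j in range(P):                      # single-bead Metropolis moves
            new = x[j] + rng.uniform(-step, step)
            if rng.random() < math.exp(bead_action(j, x[j]) - bead_action(j, new)):
                x[j] = new
        d = rng.uniform(-step, step)            # rigid shift (springs unchanged)
        dV = tau * 0.5 * sum((xi + d) ** 2 - xi * xi for xi in x)
        if rng.random() < math.exp(-dV):
            x = [xi + d for xi in x]
        if sweep >= n_sweeps // 4:
            total += sum(xi * xi for xi in x) / P
            count += 1
    return total / count
```

For this model the exact quantum value is (1/2)·coth(β/2) ≈ 1.082 at β = 1, against the classical value 1.0; increasing the bead count reduces the discretization (Trotter) error, while a single bead reproduces the classical limit, mirroring the convergence behaviour described above.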
https://en.wikipedia.org/wiki/Reptation%20Monte%20Carlo
Reptation Monte Carlo is a quantum Monte Carlo method. It is similar to diffusion Monte Carlo, except that it works with paths rather than points. This has some advantages when calculating certain properties of the system under study that diffusion Monte Carlo has difficulty with. In both diffusion Monte Carlo and reptation Monte Carlo, the method first aims to solve the time-dependent Schrödinger equation in the imaginary time direction. When you propagate the Schrödinger equation in time, you get the dynamics of the system under study. When you propagate it in imaginary time, you get a system that tends towards the ground state of the system. When substituting in place of , the Schrödinger equation becomes identical to a diffusion equation. Diffusion equations can be solved by imagining a huge population of particles (sometimes called "walkers"), each diffusing in a way that solves the original equation. This is how diffusion Monte Carlo works. Reptation Monte Carlo works in a very similar way, but is focused on the paths that the walkers take, rather than the density of walkers. In particular, a path may be mutated using a Metropolis algorithm which tries a change (normally at one end of the path) and then accepts or rejects the change based on a probability calculation. The update step in diffusion Monte Carlo would be moving the walkers slightly, and then duplicating and removing some of them. By contrast, the update step in reptation Monte Carlo mutates a path, and then accepts or rejects the mutation.
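The following sketch illustrates the reptation move for a single particle in a harmonic well (ħ = m = ω = 1): a path of imaginary-time slices is mutated by growing a new head sampled from the free-particle kernel and dropping the tail, with a Metropolis test on the potential part of the action. The trapezoidal action and all parameters are assumptions for this toy example; for long paths the middle slice is distributed as the squared ground-state wave function, so ⟨x²⟩ → 1/2:

```python
import math
import random

def reptation_x2(n_links=60, tau=0.1, n_moves=100000, seed=0):
    """Sample open imaginary-time paths for V(x) = x^2 / 2 by reptation."""
    rng = random.Random(seed)
    V = lambda x: 0.5 * x * x
    path = [0.0] * (n_links + 1)
    total, count = 0.0, 0
    for move in range(n_moves):
        if rng.random() < 0.5:
            path.reverse()                      # reptate from the other end
        # grow a new head from the exact free-particle (kinetic) kernel
        new = path[-1] + rng.gauss(0.0, math.sqrt(tau))
        # change in trapezoidal potential action: add head link, drop tail link
        d_s = 0.5 * tau * (V(new) + V(path[-1]) - V(path[0]) - V(path[1]))
        if rng.random() < math.exp(-d_s):
            path.append(new)
            path.pop(0)
        if move >= n_moves // 2:
            total += path[len(path) // 2] ** 2
            count += 1
    return total / count
```

Because the kinetic part of the new link is drawn exactly from the free-particle kernel, only the potential action enters the accept/reject test, which is the "probability calculation" the text refers to.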
https://en.wikipedia.org/wiki/Anthropological%20Index%20Online
The Anthropological Index Online is an academic journal indexing service for anthropology. Overview The service indexes the journals received by The Anthropology Library at The British Museum (formerly at the Museum of Mankind), which receives periodicals in all branches of anthropology from academic institutions and publishers around the world. It is a collaboration between the Royal Anthropological Institute of Great Britain and Ireland and the Anthropology Department at the University of Kent. It is also available under licence from EBSCO Information Services as part of Anthropology Plus. There are several hundred thousand records to date, the earliest from the late 1950s. Subject coverage is cultural anthropology/social anthropology, physical anthropology, archaeology and linguistics. The index is regularly updated. See also List of academic databases and search engines
https://en.wikipedia.org/wiki/NuSMV
In computer science, NuSMV is a reimplementation and extension of the SMV symbolic model checker, the first model checking tool based on binary decision diagrams (BDDs). The tool has been designed as an open architecture for model checking. It is aimed at reliable verification of industrially sized designs, for use as a backend for other verification tools and as a research tool for formal verification techniques. NuSMV has been developed as a joint project between ITC-IRST (in Trento), Carnegie Mellon University, the University of Genoa and the University of Trento. NuSMV 2, version 2 of NuSMV, inherits all the functionalities of NuSMV. Furthermore, it combines BDD-based model checking with SAT-based model checking. It is maintained by Fondazione Bruno Kessler, the successor organization of ITC-IRST. Functionalities NuSMV supports the analysis of specifications expressed in CTL and LTL. It can be run in batch mode, or interactively with a textual user interface. Running NuSMV Interactively The interaction shell of NuSMV is activated from the system prompt as follows: [system_prompt]$ NuSMV -int NuSMV> go NuSMV> NuSMV first tries to read and execute commands from an initialization file if such a file exists and is readable, unless -s was passed on the command line. The file master.nusmvrc is looked for in the directories defined in the environment variable NUSMV_LIBRARY_PATH, or in the default library path if no such variable is defined. If no such file exists there, the user's home directory and the current directory will also be checked. Commands in the initialization file are executed consecutively. When the initialization phase is completed, the NuSMV shell prompt is displayed and the system is ready to execute user commands. A NuSMV command usually consists of a command name and arguments to the invoked command. It is possible to make NuSMV read and execute a sequence of commands from a file, through the command line option -source: [system_prompt]$ NuSMV -source cmd_fil
https://en.wikipedia.org/wiki/Coding%20%28social%20sciences%29
In the social sciences, coding is an analytical process in which data, in either quantitative form (such as questionnaire results) or qualitative form (such as interview transcripts), are categorized to facilitate analysis. One purpose of coding is to transform the data into a form suitable for computer-aided analysis. This categorization of information is an important step, for example, in preparing data for computer processing with statistical software. Prior to coding, an annotation scheme is defined. It consists of codes or tags. During coding, coders manually add codes into data where required features are identified. The coding scheme ensures that the codes are added consistently across the data set and allows for verification of previously tagged data. Some studies will employ multiple coders working independently on the same data. This also minimizes the chance of errors from coding and is believed to increase the reliability of data. Directive One code should apply to only one category and categories should be comprehensive. There should be clear guidelines for coders (individuals who do the coding) so that coding is consistent. Quantitative approach For quantitative analysis, data are usually coded and recorded as nominal or ordinal variables. Questionnaire data can be pre-coded (the process of assigning codes to expected answers on a designed questionnaire), field-coded (the process of assigning codes as soon as data are available, usually during fieldwork), post-coded (coding of open questions on completed questionnaires) or office-coded (done after fieldwork). Note that some of the above are not mutually exclusive. In the social sciences, spreadsheets such as Excel and more advanced software packages such as R, Matlab, PSPP/SPSS, DAP/SAS, MiniTab and Stata are often used. Qualitative approach For disciplines in which a qualitative format is preferential, including ethnography, humanistic geography or phenomenological psychology, a varied approach to co
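The pre-coding of closed questionnaire answers described above can be sketched in a few lines. The codebook entries and the missing-value convention below are hypothetical, invented purely for illustration:

```python
# Hypothetical codebook assigning numeric nominal codes to expected answers.
CODEBOOK = {
    "employed full-time": 1,
    "employed part-time": 2,
    "unemployed": 3,
    "student": 4,
}

def code_responses(responses, codebook, missing=-9):
    """Map raw answers to codes; unmatched answers get a 'missing' code
    so they can be flagged for post-coding by a human coder."""
    return [codebook.get(r.strip().lower(), missing) for r in responses]
```

For example, code_responses(["Student", "unemployed", "retired"], CODEBOOK) yields [4, 3, -9], leaving the unexpected answer flagged for manual review — a small-scale version of combining pre-coding with post-coding.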
https://en.wikipedia.org/wiki/Center%20for%20Biological%20Diversity
The Center for Biological Diversity is a nonprofit membership organization known for its work protecting endangered species through legal action, scientific petitions, creative media and grassroots activism. It was founded in 1989 by Kieran Suckling, Peter Galvin, Todd Schulke and Robin Silver. The center is based in Tucson, Arizona, with its headquarters in the historic Owls club building, and has offices and staff in New Mexico, Nevada, California, Oregon, Illinois, Minnesota, Alaska, Vermont, Florida and Washington, D.C. Background Given a small grant by the Fund For Wild Nature, the organization started in 1989 as a small group by the name of Greater Gila Biodiversity Project, with the objective to protect endangered species and critical habitat in the Southwestern United States. The organization grew and became the Center for Biological Diversity. Kieran Suckling, Peter Galvin, and Todd Schulke founded the organization in response to what they perceived as a failure on the part of the United States Forest Service to protect imperiled species from logging, grazing, and mining. As surveyors in New Mexico, the three men discovered "a rare Mexican spotted owl nest in an old-growth tree", but their discovery was ignored and the Forest Service continued with plans to lease the land to timber companies; Suckling, Galvin, and Schulke believed that it was within the Forest Service's mission to save sensitive species like the Mexican spotted owl from harm, and that the government had not performed its duty in deference to corporate interests. Suckling, Galvin and Schulke went to the media to register their outrage with success: the old-growth tree was protected from harm, and this success led to the founding of the Center for Biological Diversity. Suckling, Galvin and Schulke assert that in 1990 they discovered the Forest Service was allowing commercial logging within the protected habitat of the owl nests. Speaking to the New York Times in 2010 a spokeswoman for the
https://en.wikipedia.org/wiki/Kan%20fibration
In mathematics, Kan complexes and Kan fibrations are part of the theory of simplicial sets. Kan fibrations are the fibrations of the standard model category structure on simplicial sets and are therefore of fundamental importance. Kan complexes are the fibrant objects in this model category. The name is in honor of Daniel Kan. Definitions Definition of the standard n-simplex For each n ≥ 0, recall that the standard -simplex, , is the representable simplicial set Applying the geometric realization functor to this simplicial set gives a space homeomorphic to the topological standard -simplex: the convex subspace of ℝn+1 consisting of all points such that the coordinates are non-negative and sum to 1. Definition of a horn For each k ≤ n, this has a subcomplex , the k-th horn inside , corresponding to the boundary of the n-simplex, with the k-th face removed. This may be formally defined in various ways, as for instance the union of the images of the n maps corresponding to all the other faces of . Horns of the form sitting inside look like the black V at the top of the adjacent image. If is a simplicial set, then maps correspond to collections of -simplices satisfying a compatibility condition, one for each . Explicitly, this condition can be written as follows. Write the -simplices as a list and require that for all with . These conditions are satisfied for the -simplices of sitting inside . Definition of a Kan fibration A map of simplicial sets is a Kan fibration if, for any and , and for any maps and such that (where is the inclusion of in ), there exists a map such that and . Stated this way, the definition is very similar to that of fibrations in topology (see also homotopy lifting property), whence the name "fibration". Technical remarks Using the correspondence between -simplices of a simplicial set and morphisms (a consequence of the Yoneda lemma), this definition can be written in terms of simplices. The image of the map ca
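Since the extraction has dropped the symbols from the definition above, it may help to restate the lifting condition in standard notation (Δⁿ for the standard n-simplex, Λⁿₖ for its k-th horn, ι for the horn inclusion); this is a reconstruction using conventional symbols, not the article's own notation:

```latex
% A map f : X \to Y is a Kan fibration if every horn has a filler:
% for all n \ge 1, 0 \le k \le n, and maps s, y making the square commute,
\begin{array}{ccc}
\Lambda^n_k & \xrightarrow{\;s\;} & X \\
\downarrow{\scriptstyle \iota} & & \downarrow{\scriptstyle f} \\
\Delta^n & \xrightarrow{\;y\;} & Y
\end{array}
\qquad
\exists\, x : \Delta^n \to X \ \text{with}\ x \circ \iota = s \ \text{and}\ f \circ x = y .
```

Taking Y to be a point recovers the condition for X to be a Kan complex: every horn in X extends to a full simplex.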
https://en.wikipedia.org/wiki/Relational%20space
The relational theory of space is a metaphysical theory according to which space is composed of relations between objects, with the implication that it cannot exist in the absence of matter. Its opposite is the container theory. A relativistic physical theory implies a relational metaphysics, but not the other way round: even if space is composed of nothing but relations between observers and events, it would be conceptually possible for all observers to agree on their measurements, whereas relativity implies they will disagree. Newtonian physics can be cast in relational terms, but Newton insisted, for philosophical reasons, on absolute (container) space. The subject was famously debated by Gottfried Wilhelm Leibniz and Samuel Clarke, a supporter of Newton, in the Leibniz–Clarke correspondence. An absolute approach can also be applied to time, with, for instance, the implication that there might have been vast epochs of time before the first event. See also René Descartes Philosophy of space and time Spacetime
https://en.wikipedia.org/wiki/Fibonacci%20search%20technique
In computer science, the Fibonacci search technique is a method of searching a sorted array using a divide and conquer algorithm that narrows down possible locations with the aid of Fibonacci numbers. Compared to binary search, where the sorted array is divided into two equal-sized parts, one of which is examined further, Fibonacci search divides the array into two parts whose sizes are consecutive Fibonacci numbers. On average, this leads to about 4% more comparisons being executed, but it has the advantage that one only needs addition and subtraction to calculate the indices of the accessed array elements, while classical binary search needs bit-shift (see Bitwise operation), division or multiplication, operations that were less common at the time Fibonacci search was first published. Fibonacci search has an average- and worst-case complexity of O(log n) (see Big O notation). The Fibonacci sequence has the property that a number is the sum of its two predecessors. Therefore, the sequence can be computed by repeated addition. The ratio of two consecutive numbers approaches the golden ratio, 1.618... Binary search works by dividing the seek area into equal parts (1:1). Fibonacci search can divide it into parts approaching 1:1.618 while using the simpler operations. If the elements being searched have non-uniform access memory storage (i.e., the time needed to access a storage location varies depending on the location accessed), the Fibonacci search may have an advantage over binary search in slightly reducing the average time needed to access a storage location. If the machine executing the search has a direct-mapped CPU cache, binary search may lead to more cache misses because the elements that are accessed often tend to gather in only a few cache lines; this is mitigated by splitting the array into parts that do not tend to be powers of two. If the data is stored on a magnetic tape where seek time depends on the current head position, a tradeoff between lo
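A sketch of the search in Python, following the common formulation (variable names are my own). Note that every probe index is derived purely by addition and subtraction of Fibonacci numbers, as the text describes:

```python
def fibonacci_search(arr, target):
    """Return the index of target in sorted arr, or -1 if absent."""
    n = len(arr)
    fm2, fm1 = 0, 1            # F(k-2), F(k-1)
    fm = fm2 + fm1             # F(k): grow to the smallest Fibonacci >= n
    while fm < n:
        fm2, fm1 = fm1, fm
        fm = fm2 + fm1
    offset = -1                # rightmost index already eliminated
    while fm > 1:
        i = min(offset + fm2, n - 1)
        if arr[i] < target:    # discard the left part: step all three down one
            fm, fm1, fm2 = fm1, fm2, fm1 - fm2
            offset = i
        elif arr[i] > target:  # discard the right part: step all three down two
            fm, fm1, fm2 = fm2, fm1 - fm2, fm2 - (fm1 - fm2)
        else:
            return i
    if fm1 and offset + 1 < n and arr[offset + 1] == target:
        return offset + 1      # one candidate may remain after the loop
    return -1
```

The two branches shrink the remaining range to the larger or smaller of the two Fibonacci-sized parts, giving the roughly 1:1.618 split mentioned above.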
https://en.wikipedia.org/wiki/1chipMSX
The One chip MSX, or 1chipMSX (the D4 Enterprise distribution name for the ESE MSX System 3), is a re-implementation of an MSX-2 home computer that uses a single FPGA to implement all the electronics (except the RAM) of an MSX-2, including the MSX-MUSIC and SCC+ audio extensions. The system is housed in a transparent blue plastic box, and can be used with a standard monitor (or TV) and a PC keyboard. Original MSX cartridges can be inserted, as well as SD and MMC memory cards as an external storage medium. Even though it lacks a 3.5" disk drive, disks are supported through emulation on a memory card, including support for booting MSX-DOS. Because its hardware is programmable in VHDL, it is possible to give the device new hardware extensions by simply running a reconfiguration program under MSX-DOS. The 1chipMSX is equipped with two USB connectors, which can be used after adding some supporting VHDL code. Availability The ESE MSX System 3 is designed by ESE Artists' Factory and distributed as 1chipMSX by D4 Enterprise, and was supposed to be distributed outside Japan by Bazix. However, due to RoHS regulations in Europe, it was claimed that it could not be distributed to Europe in its original form, and the European market had to wait for an adapted version which would be produced and distributed by Bazix. However, no violation of RoHS has ever been proven, with all identifiable components of the PCB and power supply being RoHS-compliant. Bazix stopped being the representative of MSX Association and thus did not bring the 1chipMSX to the Western market. In the end, MSX Association was dissolved due to a dispute with other parties involved, resulting in a shift of all intellectual property rights concerning MSX to MSX Licensing Corporation. Bazix also dissolved, because this dispute put an end to their efforts and ambitions to bring the 1chipMSX to the Western market (along with other projects that were also dependent on the Japanese partners). Hard
https://en.wikipedia.org/wiki/Hypsometric%20equation
The hypsometric equation, also known as the thickness equation, relates an atmospheric pressure ratio to the equivalent thickness of an atmospheric layer considering the layer mean of virtual temperature, gravity, and occasionally wind. It is derived from the hydrostatic equation and the ideal gas law. Formulation The hypsometric equation is expressed as: where: = thickness of the layer [m], = geometric height [m], = specific gas constant for dry air, = mean virtual temperature in Kelvin [K], = gravitational acceleration [m/s2], = pressure [Pa]. In meteorology, and are isobaric surfaces. In radiosonde observation, the hypsometric equation can be used to compute the height of a pressure level given the height of a reference pressure level and the mean virtual temperature in between. Then, the newly computed height can be used as a new reference level to compute the height of the next level given the mean virtual temperature in between, and so on. Derivation The hydrostatic equation: where is the density [kg/m3], is used to generate the equation for hydrostatic equilibrium, written in differential form: This is combined with the ideal gas law: to eliminate : This is integrated from to : R and g are constant with z, so they can be brought outside the integral. If temperature varies linearly with z (e.g., given a small change in z), it can also be brought outside the integral when replaced with , the average virtual temperature between and . Integration gives simplifying to Rearranging: or, eliminating the natural log: Correction The Eötvös effect can be taken into account as a correction to the hypsometric equation. Physically, using a frame of reference that rotates with Earth, an air mass moving eastward effectively weighs less, which corresponds to an increase in thickness between pressure levels, and vice versa. The corrected hypsometric equation follows: where the correction due to the Eötvös effect, A, can be expressed as follows
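As a numerical illustration of the thickness equation h = (R·T̄v/g)·ln(p₁/p₂), the snippet below computes the thickness of the 1000–500 hPa layer. The constants are standard values; the function name is my own:

```python
import math

R_D = 287.05    # specific gas constant for dry air, J/(kg K)
G_0 = 9.80665   # standard gravitational acceleration, m/s^2

def thickness(p_lower, p_upper, mean_virtual_temp):
    """Thickness in metres of the layer between isobaric surfaces
    p_lower > p_upper (both in Pa), given the layer-mean virtual
    temperature in kelvins."""
    return (R_D * mean_virtual_temp / G_0) * math.log(p_lower / p_upper)
```

For a layer-mean virtual temperature of 273.15 K, thickness(100000, 50000, 273.15) comes out to roughly 5.5 km; since thickness grows linearly with the mean virtual temperature, the 1000–500 hPa thickness is a standard proxy for the mean temperature of the lower troposphere.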
https://en.wikipedia.org/wiki/IBM%20remote%20batch%20terminals
The IBM 2780 and the IBM 3780 are devices developed by IBM to perform remote job entry (RJE) and other batch functions over telephone lines; they communicate with the mainframe via Binary Synchronous Communications (BSC or Bisync) and replaced older terminals using synchronous transmit-receive (STR). In addition, IBM developed workstation programs for the 1130, 360/20, 2922, System/360 models other than the 360/20, System/370 and System/3. 2780 Data Transmission Terminals The 2780 Data Transmission Terminal first shipped in 1967. It consists of: A line printer similar to the IBM 1443 that can print up to 240 lines per minute (lpm), or 300 lpm using an extremely restricted character set. A card reader/punch unit, similar to an IBM 1442, that can read up to 400 cards per minute (cpm) and can punch up to 355 cpm. A line buffer that stores data received or to be transmitted over the communications line. A binary synchronous adapter which controls the flow of data over the communications line. The 2780 is capable of local (offline) card-to-print operation. It comes in four models: Model 1: Can read punched cards and transmit the data to a remote host computer, and can receive and print data sent by the host. Model 2: Same as Model 1 but adds the ability to punch card data received from the host. Model 3: Can only print data received from the host, but not send data to it. Model 4: Can read and punch card data, but has no printing capabilities. The 2780 uses a dedicated communication line at speeds of 1200, 2000, 2400 or 4800 bits per second. It is a half-duplex device, although full-duplex lines can be used with some increase in throughput. It can communicate in Transcode (a 6-bit code), 8-bit EBCDIC, or 7-bit ASCII. 2770 Data Communication System The 2770, announced in 1969, "was said to surpass all other IBM terminals in the variety of available input-output devices." The 2770 was developed by the IBM General Products Division (GPD) in Roches
https://en.wikipedia.org/wiki/Brass%20model
Brass models, made of brass or similar alloys, are scale models typically of railroad equipment, bridges and, occasionally, buildings. Although die-cast and plastic models made considerable advances in the late 1990s and continue to improve, brass models offer finer details. Brass models, considered collector's pieces with a museum-quality finish, are often used for display purposes rather than model railroad operations. However, they can be made fully operational, and many railroaders do use them on their model railroads. They are generally considerably more expensive than other types of models due to limited production quantities and the "handmade" nature of the product itself. History In the late 1950s, Japan was known for producing low-cost toys and products for export. The first brass model trains were born during the occupation of Japan by Allied forces. Members of the Allied forces saw some of the models built by various craftsmen and procured photos of American steam locomotive prototypes for these artisans to model. These were the early hand-built high-quality brass models, built with relatively crude equipment in comparison to the tools that became available later. Some people in the model railroad industry took note of what was being done and started importing these models to the United States. The scale of imports increased with time. Bill Ryan of PFM (Pacific Fast Mail) was one of the early importers, and to this day the name PFM is synonymous with brass model trains. The quality of Japanese models continued to improve, but with an improving domestic economy, manufacturing costs also increased. Eventually, importers moved their operations to Korea for cost benefits. Although the quality suffered considerably in the early years of this transition, within a few years some very fine brass models were being built. Korea continues to produce fine models; Boo-Rim Precision of Korea is among the most renowned producers of brass models. Thousands of brass model tr
https://en.wikipedia.org/wiki/List%20of%20graphs
This partial list of graphs contains definitions of graphs and graph families. For collected definitions of graph theory terms that do not refer to individual graph types, such as vertex and path, see Glossary of graph theory. For links to existing articles about particular kinds of graphs, see Category:Graphs. Some of the finite structures considered in graph theory have names, sometimes inspired by the graph's topology, and sometimes after their discoverer. A famous example is the Petersen graph, a concrete graph on 10 vertices that appears as a minimal example or counterexample in many different contexts. Individual graphs Highly symmetric graphs Strongly regular graphs The strongly regular graph on v vertices and degree k is usually denoted srg(v,k,λ,μ). Symmetric graphs A symmetric graph is one in which there is a symmetry (graph automorphism) taking any ordered pair of adjacent vertices to any other ordered pair; the Foster census lists all small symmetric 3-regular graphs. Every strongly regular graph is symmetric, but not vice versa. Semi-symmetric graphs Graph families Complete graphs The complete graph on vertices is often called the -clique and usually denoted , from German komplett. Complete bipartite graphs The complete bipartite graph is usually denoted . For see the section on star graphs. The graph equals the 4-cycle (the square) introduced below. Cycles The cycle graph on vertices is called the n-cycle and usually denoted . It is also called a cyclic graph, a polygon or the n-gon. Special cases are the triangle , the square , and then several with Greek naming pentagon , hexagon , etc. Friendship graphs The friendship graph Fn can be constructed by joining n copies of the cycle graph C3 with a common vertex. Fullerene graphs In graph theory, the term fullerene refers to any 3-regular, planar graph with all faces of size 5 or 6 (including the external face). It follows from Euler's polyhedron formula, V – E + F = 2 (where V, E, F indic
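The families above are easy to realize as explicit edge sets; a small sketch (the function names are my own, and vertices are labelled 0, 1, …):

```python
def complete_graph(n):
    """Edge set of the complete graph K_n on vertices 0..n-1."""
    return {(i, j) for i in range(n) for j in range(i + 1, n)}

def complete_bipartite(m, n):
    """Edge set of K_{m,n} with parts {0..m-1} and {m..m+n-1}."""
    return {(i, m + j) for i in range(m) for j in range(n)}

def friendship_graph(n):
    """F_n: n copies of the triangle C_3 sharing the common vertex 0."""
    edges = set()
    for k in range(n):
        a, b = 2 * k + 1, 2 * k + 2
        edges |= {(0, a), (0, b), (a, b)}
    return edges
```

These constructions make the edge counts immediate: K_n has n(n−1)/2 edges, K_{m,n} has m·n, and F_n has 3n edges on 2n+1 vertices.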
https://en.wikipedia.org/wiki/What%20Is%20Mathematics%3F
What Is Mathematics? is a mathematics book written by Richard Courant and Herbert Robbins, published in England by Oxford University Press. It is an introduction to mathematics, intended both for the mathematics student and for the general public. First published in 1941, it discusses number theory, geometry, topology and calculus. A second edition was published in 1996 with an additional chapter on recent progress in mathematics, written by Ian Stewart. Authorship The book was based on Courant's course material. Although Robbins assisted in writing a large part of the book, he had to fight for authorship. Nevertheless, Courant alone held the copyright for the book. This resulted in Robbins receiving a smaller share of the royalties. Title Michael Katehakis recalls Robbins' interest in literature, and in Tolstoy in particular, and is convinced that the title of the book is most likely due to Robbins, who was inspired by the title of Leo Tolstoy's essay What Is Art?. Robbins did the same in the book Great Expectations: The Theory of Optimal Stopping, which he co-authored with Yuan-Shih Chow and David Siegmund, where one cannot miss the connection with the title of the novel Great Expectations by Charles Dickens. According to Constance Reid, Courant finalized the title after a conversation with Thomas Mann. Translations The first Russian translation Что такое математика? was published in 1947; five more translations have appeared since, the last one in 2010. The first Italian translation, Che cos'è la matematica?, was published in 1950. A translation of the second edition was issued in 2000. The first German translation Was ist Mathematik? by Iris Runge was published in 1962. A Spanish translation of the second edition, ¿Qué Son Las Matemáticas?, was published in 2002. The first Bulgarian translation, Що е математика?, was published in 1967. A second translation appeared in 1985. The first Romanian translation, Ce este matematica?, was published in 1969. The first
https://en.wikipedia.org/wiki/SIMMON
SIMMON (Simulation Monitor) was a proprietary software testing system developed in the late 1960s in the IBM Product Test Laboratory, then at Poughkeepsie, New York. It was designed for the then-new line of System/360 computers as a vehicle for testing the software that IBM was developing for that architecture. SIMMON was first described at the IBM SimSymp 1968 symposium, held at Rye, New York. SIMMON was a hypervisor, similar to the IBM CP-40 system that was being independently developed at the Cambridge Scientific Center at about the same time. The chief difference from CP-40 was that SIMMON supported a single virtual machine for testing a single guest program, whereas CP-40 supported many virtual machines for time-sharing production work. CP-40 evolved by many stages into the present VM/CMS operating system. SIMMON was a useful test vehicle for many years. SIMMON was designed to dynamically include independently developed programs (test tools) for testing the target guest program. The SIMMON kernel maintained control over the hardware (and the guest) and coordinated invocation of the test tools. Processing modes Two modes of operation were provided: full simulation and interrupt. Full simulation mode In this mode, each instruction in the guest program was simulated without ever passing control directly to the guest. As an instruction set simulator, SIMMON was unusual in that it simulated the same architecture as that on which it was running, i.e. that of the IBM System/360/370. While an order of magnitude slower than interrupt mode (below), it allowed close attention to the operation of the guest. This would be the mode used by various instruction-trace test tools. Interrupt mode Interrupt mode (also known as Bump mode) constrained the guest program to run in user program state, with the SIMMON kernel handling all hardware interrupts and simulating all privileged instructions the guest attempted to execute. This mode could be used, for example, by a test tool
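Full-simulation mode, in which the monitor decodes every guest instruction itself and never passes control to the guest, can be illustrated with a toy interpreter. This is a hypothetical three-opcode mini-ISA for illustration only, not SIMMON's actual System/360 decoder:

```python
def simulate(program, trace):
    """Interpret `program`, a list of (opcode, operand) pairs for a toy
    one-register machine, recording every step in `trace` -- the hook a
    SIMMON-style instruction-trace test tool would attach to."""
    acc, pc = 0, 0
    while pc < len(program):
        op, arg = program[pc]
        trace.append((pc, op, arg, acc))   # full visibility into each step
        if op == "LOAD":
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "HALT":
            break
        pc += 1
    return acc

log = []
result = simulate([("LOAD", 2), ("ADD", 40), ("HALT", 0)], log)
```

The cost of this visibility is exactly the slowdown the article describes: every guest instruction becomes several host operations, which is why interrupt mode (running unprivileged instructions natively) was an order of magnitude faster.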
https://en.wikipedia.org/wiki/Partitioned%20global%20address%20space
In computer science, partitioned global address space (PGAS) is a parallel programming model paradigm. PGAS is typified by communication operations involving a global memory address space abstraction that is logically partitioned, where a portion is local to each process, thread, or processing element. The novelty of PGAS is that the portions of the shared memory space may have an affinity for a particular process, thereby exploiting locality of reference in order to improve performance. A PGAS memory model is featured in various parallel programming languages and libraries, including: Coarray Fortran, Unified Parallel C, Split-C, Fortress, Chapel, X10, UPC++, Coarray C++, Global Arrays, DASH and SHMEM. The PGAS paradigm is now an integrated part of the Fortran language, as of Fortran 2008 which standardized coarrays. The various languages and libraries offering a PGAS memory model differ widely in other details, such as the base programming language and the mechanisms used to express parallelism. Many PGAS systems combine the advantages of a SPMD programming style for distributed memory systems (as employed by MPI) with the data referencing semantics of shared memory systems. In contrast to message passing, PGAS programming models frequently offer one-sided communication operations such as Remote Memory Access (RMA), whereby one processing element may directly access memory with affinity to a different (potentially remote) process, without explicit semantic involvement by the passive target process. PGAS offers more efficiency and scalability than traditional shared-memory approaches with a flat address space, because hardware-specific data locality can be explicitly exposed in the semantic partitioning of the address space. A variant of the PGAS paradigm, asynchronous partitioned global address space (APGAS) augments the programming model with facilities for both local and remote asynchronous task creation. Two programming languages that use this model are Chap
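The core PGAS idea (a single global index space in which every element has an explicit owning "place", plus one-sided get/put access) can be sketched with a toy single-process model. This is purely illustrative: real PGAS runtimes such as SHMEM or UPC++ implement these semantics over distributed memory and a network, and `ToyPGAS` is a hypothetical class invented here:

```python
class ToyPGAS:
    """Toy global array of `size` elements block-partitioned over
    `nplaces` places. owner() exposes affinity; get()/put() mimic
    one-sided RMA in that the owning place takes no active part."""

    def __init__(self, size, nplaces):
        assert size % nplaces == 0, "simplifying assumption: even blocks"
        self.block = size // nplaces
        # each place physically holds only its local partition
        self.local = [[0] * self.block for _ in range(nplaces)]

    def owner(self, i):
        """Which place global index i has affinity to."""
        return i // self.block

    def put(self, i, value):
        """One-sided remote write into the owner's partition."""
        self.local[self.owner(i)][i % self.block] = value

    def get(self, i):
        """One-sided remote read from the owner's partition."""
        return self.local[self.owner(i)][i % self.block]

g = ToyPGAS(size=8, nplaces=4)
g.put(5, 99)               # any place may write into place 2's partition
owner_of_5 = g.owner(5)    # affinity is explicit and queryable
value = g.get(5)
```

The point of the sketch is the locality contract: indices map deterministically to owners, so a program can arrange for most accesses to hit its own partition, which is exactly the performance lever PGAS exposes over a flat shared address space.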
https://en.wikipedia.org/wiki/Quantum%20amplifier
In physics, a quantum amplifier is an amplifier that uses quantum mechanical methods to amplify a signal; examples include the active elements of lasers and optical amplifiers. The main properties of the quantum amplifier are its amplification coefficient and uncertainty. These parameters are not independent; the higher the amplification coefficient, the higher the uncertainty (noise). In the case of lasers, the uncertainty corresponds to the amplified spontaneous emission of the active medium. The unavoidable noise of quantum amplifiers is one of the reasons for the use of digital signals in optical communications and can be deduced from the fundamentals of quantum mechanics. Introduction An amplifier increases the amplitude of whatever goes through it. While classical amplifiers take in classical signals, quantum amplifiers take in quantum signals, such as coherent states. This does not necessarily mean that the output is a coherent state; indeed, typically it is not. The form of the output depends on the specific amplifier design. Besides amplifying the intensity of the input, quantum amplifiers can also increase the quantum noise present in the signal. Exposition The physical electric field in a paraxial single-mode pulse can be approximated with a superposition of modes; the electric field of a single mode can be written (schematically) as E(x) = e a e^{ikz} + h.c., where x is the spatial coordinate vector, with z giving the direction of motion, e is the polarization vector of the pulse, k is the wave number in the z direction, and a is the annihilation operator of the photon in the given mode. The analysis of the noise in the system is made with respect to the mean value of the annihilation operator. To obtain the noise, one solves for the real and imaginary parts of the projection of the field onto the given mode. Spatial coordinates do not appear in the solution. Assume that the mean value ⟨a⟩ of the initial field is nonzero. Physically, the initial state corresponds to the coherent pulse at the input of the
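The gain–noise trade-off stated above can be made concrete with the standard textbook model of a phase-insensitive linear amplifier (a Caves-type argument, not specific to any one device described in this article):

```latex
% Phase-insensitive linear amplifier with intensity gain G >= 1.
% Preserving the output commutator [\hat b, \hat b^\dagger] = 1 forces
% an added-noise contribution, e.g. via an auxiliary (idler) mode \hat c:
\hat b = \sqrt{G}\,\hat a + \sqrt{G-1}\,\hat c^{\dagger}
% With the idler in vacuum, the mean output photon number is
\langle \hat b^{\dagger}\hat b \rangle
  = G\,\langle \hat a^{\dagger}\hat a \rangle + (G-1),
% so at least G-1 noise quanta accompany the amplified signal: the
% larger the amplification coefficient, the larger the uncertainty.
```

One can check the consistency directly: [b, b†] = G[a, a†] + (G−1)[c†, c] = G − (G−1) = 1, so the added-noise term is not optional but required by the commutation relations, which is the precise sense in which amplifier noise "can be deduced from the fundamentals of quantum mechanics".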
https://en.wikipedia.org/wiki/Conference%20room%20pilot
Conference room pilot (CRP) is a term used in software procurement and software acceptance testing. A CRP may be used during the selection and implementation of a software application in an organisation or company. The purpose of the conference room pilot is to validate a software application against the business processes of end-users of the software, by allowing end-users to use the software to carry out typical or key business processes using the new software. A commercial advantage of a conference room pilot is that it may allow the customer to prove that the new software will do the job (meets business requirements and expectations) before committing to buying the software, thus avoiding buying an inappropriate application. The term is most commonly used in the context of 'out of the box' (OOTB) or 'commercial off-the-shelf' (COTS) software. Compared to user acceptance testing Although a conference room pilot shares some features of user acceptance testing (UAT), it should not be considered a testing process – it validates that a design or solution is fit for purpose at a higher level than functional testing. Shared features of CRP and UAT include:
- End-to-end business processes are used as a "business input" for both
- Functionality demonstrations
- Non-functional validation (e.g. performance testing)
Differences between a conference room pilot and a formal UAT:
- A CRP attempts to identify how well the application meets business needs, and to identify gaps, whilst still in the design phase of the project
- There is an expectation that changes will be required before acceptance of the solution
- The software is 'on trial' and may be rejected completely in favour of another solution.
https://en.wikipedia.org/wiki/Blind%20equalization
Blind equalization is a digital signal processing technique in which the transmitted signal is inferred (equalized) from the received signal, while making use only of the transmitted signal's statistics; hence the word blind in the name. Blind equalization is essentially blind deconvolution applied to digital communications. Nonetheless, the emphasis in blind equalization is on the online estimation of the equalization filter, which is the inverse of the channel impulse response, rather than on the estimation of the channel impulse response itself. This is due to blind deconvolution's common mode of usage in digital communications systems, as a means to extract the continuously transmitted signal from the received signal, with the channel impulse response being of secondary intrinsic importance. The estimated equalizer is then convolved with the received signal to yield an estimate of the transmitted signal. Problem statement Noiseless model Assuming a linear time-invariant channel with impulse response h[n], the noiseless model relates the received signal x[n] to the transmitted signal s[n] via x[n] = (h ∗ s)[n]. The blind equalization problem can now be formulated as follows: given the received signal x[n], find a filter w[n], called an equalization filter, such that ŝ[n] = (w ∗ x)[n], where ŝ[n] is an estimate of s[n]. The solution to the blind equalization problem is not unique. In fact, it may be determined only up to a signed scale factor and an arbitrary time delay. That is, if ŝ[n] and ĥ[n] are estimates of the transmitted signal and channel impulse response, respectively, then c·ŝ[n − d] and ĥ[n + d]/c give rise to the same received signal x[n] for any real scale factor c and integral time delay d. In fact, by symmetry, the roles of s and h are interchangeable. Noisy model In the noisy model, an additional term v[n], representing additive noise, is included. The model is therefore x[n] = (h ∗ s)[n] + v[n]. Algorithms Many algorithms for the solution of the blind equalization problem have been suggested over the years. However, as one usually has access to only a finite number of samples
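One classic algorithm of this kind (not named in the excerpt, but among the most widely used) is the constant modulus algorithm (CMA), which adapts the equalizer online using only the statistical fact that the transmitted constellation has constant modulus. A minimal NumPy sketch with illustrative parameter choices:

```python
import numpy as np

def cma_equalize(x, num_taps=7, mu=1e-3, R=1.0):
    """Constant modulus algorithm: adapt an FIR equalizer w online so the
    output y has approximately constant modulus sqrt(R), using only the
    received samples x -- no training sequence, no channel estimate."""
    w = np.zeros(num_taps, dtype=complex)
    w[num_taps // 2] = 1.0                    # center-spike initialization
    y = np.zeros(len(x), dtype=complex)
    for n in range(num_taps, len(x)):
        xv = x[n - num_taps:n][::-1]          # regressor, most recent first
        y[n] = np.dot(w, xv)                  # equalizer output
        e = y[n] * (np.abs(y[n]) ** 2 - R)    # CM cost-gradient term
        w -= mu * e * np.conj(xv)             # stochastic gradient step
    return y, w

rng = np.random.default_rng(0)
s = rng.choice([-1.0, 1.0], size=5000)        # BPSK source (constant modulus)
h = np.array([1.0, 0.3, 0.1])                 # toy channel impulse response
x = np.convolve(s, h)[: len(s)].astype(complex)
y, w = cma_equalize(x)
```

Note that CMA converges, at best, to a scaled, delayed (and for complex constellations, rotated) version of s[n], which is exactly the ambiguity described in the problem statement above.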
https://en.wikipedia.org/wiki/Soak%20testing
Soak testing involves testing a system with a typical production load, over a continuous availability period, to validate system behavior under production use. It may be required to extrapolate the results, if not possible to conduct such an extended test. For example, if the system is required to process 10,000 transactions over 100 hours, it may be possible to complete processing the same 10,000 transactions in a shorter duration (say 50 hours) as representative (and conservative estimate) of the actual production use. A good soak test would also include the ability to simulate peak loads as opposed to just average loads. If manipulating the load over specific periods of time is not possible, alternatively (and conservatively) allow the system to run at peak production loads for the duration of the test. For example, in software testing, a system may behave exactly as expected when tested for one hour. However, when it is tested for three hours, problems such as memory leaks cause the system to fail or behave unexpectedly. Soak tests are used primarily to check the reaction of a subject under test under a possible simulated environment for a given duration and for a given threshold. Observations made during the soak test are used to improve the characteristics of the subject under further tests. In electronics, soak testing may involve testing a system up to or above its maximum ratings for a long period of time. Some companies may soak test a product for a period of many months, while also applying external stresses such as elevated temperatures. This falls under load testing.
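A miniature software soak harness can be sketched as follows: run the workload repeatedly and watch for monotone memory growth across iterations. The leak here is deliberately injected for demonstration, and `tracemalloc` is Python's standard-library allocation tracer:

```python
import tracemalloc

leak_store = []          # deliberate "leak": references survive every iteration

def workload(leak=False):
    data = [i * i for i in range(10_000)]     # normal transient allocation
    if leak:
        leak_store.append(data)               # leaked reference

def soak(iterations, leak):
    """Run the workload repeatedly; return per-iteration memory samples (bytes)."""
    tracemalloc.start()
    samples = []
    for _ in range(iterations):
        workload(leak)
        current, _peak = tracemalloc.get_traced_memory()
        samples.append(current)
    tracemalloc.stop()
    return samples

healthy = soak(20, leak=False)
leaky = soak(20, leak=True)
# Crude soak verdict: memory at the end vs. shortly after warm-up.
healthy_growth = healthy[-1] - healthy[2]
leaky_growth = leaky[-1] - leaky[2]
```

The healthy run frees each iteration's allocations, so its samples plateau; the leaky run grows roughly linearly, which is the signature a longer-duration soak test is designed to expose and which a one-hour test (as in the example above) can easily miss.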
https://en.wikipedia.org/wiki/CLC%20bio
CLC bio was a bioinformatics software company that developed a software suite subsequently purchased by QIAGEN. History CLC bio started commercial activities on January 1, 2005 headquartered in Aarhus, Denmark. Its product's development was also partly funded by collaborating with researchers on grant-funded projects. By 2012, it had additional offices in Cambridge, Massachusetts, Tokyo, Taipei and Delhi, with staff largely from research backgrounds (30% having a PhD) and had built a userbase of around 250,000 users in both academic institutions and biotechnology companies. CLC bio was acquired by QIAGEN in 2013 and merged into its bioinformatics research and development division with several other purchased platforms in 2014. Software CLC bio's main activities were in software development for desktop (Mac OS X, Windows, and Linux), enterprise, and cloud software for analysis of biological data. CLC bio developed some of their own open source algorithms, as well as their own SIMD-accelerated implementations of several existing popular applications. In 2010, CLC bio was notable as the first commercial platform for bioinformatics analysis that utilized a graphical user interface for building, managing, and deploying analysis workflows as well as command-line tools, a SOAP and REST API, and later, the ability to run containerized tools. As additional capabilities were added to the software platform, it was eventually split into several themed Workbenches and plugins with collections of features relevant to different applications (e.g. pathway analysis, genomics, and other omics). Features include read mapping and de novo assembly of high-throughput sequencing data, whole-genome detection of SNPs and structural variations, ChIP-seq, RNA-Seq, small RNA analysis, genome finishing, microbial genomics, structural biology, and functions to analyze, visualize, and compare genomic, transcriptomic, and epigenomic data. Cloud Computing In 2017, CLC bio launched their CL
https://en.wikipedia.org/wiki/Illumina%2C%20Inc.
Illumina, Inc. is an American biotechnology company, headquartered in San Diego, California, and it serves more than 140 countries. Incorporated on April 1, 1998, Illumina develops, manufactures, and markets integrated systems for the analysis of genetic variation and biological function. The company provides a line of products and services that serves the sequencing, genotyping and gene expression, and proteomics markets. Illumina's technology had purportedly reduced the cost of sequencing a human genome to by 2014. Its customers include genomic research centers, pharmaceutical companies, academic institutions, clinical research organizations, and biotechnology companies. History Illumina was founded in April 1998 by David Walt, Larry Bock, John Stuelpnagel, Anthony Czarnik, and Mark Chee. While working with CW Group, a venture-capital firm, Bock and Stuelpnagel uncovered what would become Illumina's BeadArray technology at Tufts University and negotiated an exclusive license to that technology. In 1999, Illumina acquired Spyder Instruments (founded by Michal Lebl, Richard Houghten, and Jutta Eichler) for their technology of high-throughput synthesis. Illumina completed its initial public offering in July 2000. Illumina began offering single nucleotide polymorphism (SNP) genotyping services in 2001 and launched its first system, the Illumina BeadLab, in 2002, using GoldenGate Genotyping technology. Illumina currently offers microarray-based products and services for an expanding range of genetic analysis sequencing, including SNP genotyping, gene expression, and protein analysis. Illumina's technologies are used by a broad range of academic, government, pharmaceutical, biotechnology, and other leading institutions around the globe. On January 26, 2007, the company completed the acquisition of the British company Solexa, Inc. for ~$650M. Solexa was founded in June 1998 by Shankar Balasubramanian and David Klenerman to develop and commercialize genome-sequenci