https://en.wikipedia.org/wiki/Soil%20zoology
|
Soil zoology or pedozoology is the study of animals living fully or partially in the soil (soil fauna). The field of study was developed in the 1940s by Mercury Ghilarov in Russia. Ghilarov noted inverse relationships between size and numbers of soil organisms. He also suggested that soil included water, air and solid phases and that soil may have provided the transitional environment between aquatic and terrestrial life. The phrase was apparently first used in the English-speaking world at a conference of soil zoologists presenting their research at the University of Nottingham, UK, in 1955.
See also
Biogeochemical cycle
Soil ecology
Zoology
References
Bibliography
Safwat H. Shakir Hanna, ed., 2004, Soil Zoology for Sustainable Development in the 21st Century: A Festschrift in Honour of Prof. Samir I. Ghabbour on the Occasion of His 70th Birthday, Cairo.
External links
D. Keith McE. Kevan, Ethnoentomologist, Cultural Entomology Digest 3
Soil biology
Edaphology
Soil science
|
https://en.wikipedia.org/wiki/Lorenz%20system
|
The Lorenz system is a system of ordinary differential equations first studied by mathematician and meteorologist Edward Lorenz. It is notable for having chaotic solutions for certain parameter values and initial conditions. In particular, the Lorenz attractor is a set of chaotic solutions of the Lorenz system. In popular media the "butterfly effect" stems from the real-world implications of the Lorenz attractor, namely that tiny differences in initial conditions evolve in phase space into trajectories that diverge and never repeat, so the detailed long-term behavior of a chaotic system cannot be predicted. This underscores that chaotic systems can be completely deterministic and yet still be inherently unpredictable over long periods of time. Because small uncertainties in the state of such a system grow continually, we cannot predict its distant future well; e.g., even the small flap of a butterfly's wings could eventually set the atmosphere on a vastly different trajectory, such as one producing a hurricane. The shape of the Lorenz attractor itself, when plotted in phase space, may also be seen to resemble a butterfly.
Overview
In 1963, Edward Lorenz, with the help of Ellen Fetter, who was responsible for the numerical simulations and figures, and Margaret Hamilton, who helped in the initial numerical computations leading up to the findings of the Lorenz model, developed a simplified mathematical model for atmospheric convection. The model is a system of three ordinary differential equations now known as the Lorenz equations:
$$\frac{dx}{dt} = \sigma (y - x), \qquad \frac{dy}{dt} = x(\rho - z) - y, \qquad \frac{dz}{dt} = xy - \beta z.$$
The equations relate the properties of a two-dimensional fluid layer uniformly warmed from below and cooled from above. In particular, the equations describe the rate of change of three quantities with respect to time: $x$ is proportional to the rate of convection, $y$ to the horizontal temperature variation, and $z$ to the vertical temperature variation. The constants $\sigma$, $\rho$, and $\beta$ are system parameters proportional to the Prandtl number, Rayleigh number, and certain physical dimensions of the layer itself.
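To make the dynamics concrete, here is a minimal numerical sketch (ours, not from the article), assuming the classic parameter values σ = 10, ρ = 28, β = 8/3 that Lorenz studied; it uses a simple fixed-step Euler integration, which suffices for illustration though a higher-order scheme would be more accurate.

class LorenzDemo {
    // Classic Lorenz parameters: sigma = 10, rho = 28, beta = 8/3 (assumed values).
    static final double SIGMA = 10.0, RHO = 28.0, BETA = 8.0 / 3.0;

    public static void main(String[] args) {
        double x = 1.0, y = 1.0, z = 1.0;   // arbitrary initial condition
        double dt = 1e-3;                   // integration step
        for (int i = 0; i < 50_000; i++) {
            // Lorenz equations: dx/dt = sigma(y - x), dy/dt = x(rho - z) - y, dz/dt = xy - beta*z
            double dx = SIGMA * (y - x);
            double dy = x * (RHO - z) - y;
            double dz = x * y - BETA * z;
            x += dt * dx;
            y += dt * dy;
            z += dt * dz;
            if (i % 5_000 == 0) {
                System.out.printf("t=%.2f  x=%.4f  y=%.4f  z=%.4f%n", i * dt, x, y, z);
            }
        }
    }
}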
The Lorenz equations can arise in simplifi
|
https://en.wikipedia.org/wiki/Information%20theory%20and%20measure%20theory
|
This article discusses how information theory (a branch of mathematics studying the transmission, processing and storage of information) is related to measure theory (a branch of mathematics related to integration and probability).
Measures in information theory
Many of the concepts in information theory have separate definitions and formulas for continuous and discrete cases. For example, entropy is usually defined for discrete random variables, whereas for continuous random variables the related concept of differential entropy, written $h(X)$, is used (see Cover and Thomas, 2006, chapter 8). Both these concepts are mathematical expectations, but the expectation is defined with an integral for the continuous case, and a sum for the discrete case.
These separate definitions can be more closely related in terms of measure theory. For discrete random variables, probability mass functions can be considered density functions with respect to the counting measure. Thinking of both the integral and the sum as integration on a measure space allows for a unified treatment.
Consider the formula for the differential entropy of a continuous random variable $X$ with range $\mathbb{X}$ and probability density function $f(x)$:
$$h(X) = -\int_{\mathbb{X}} f(x) \log f(x) \, dx.$$
This can usually be interpreted as the following Riemann–Stieltjes integral:
$$h(X) = -\int_{\mathbb{X}} f(x) \log f(x) \, d\mu(x),$$
where $\mu$ is the Lebesgue measure.
If instead, $X$ is discrete, with range $\Omega$ a finite set, $f$ a probability mass function on $\Omega$, and $\nu$ the counting measure on $\Omega$, we can write:
$$H(X) = -\sum_{x \in \Omega} f(x) \log f(x) = -\int_{\Omega} f(x) \log f(x) \, d\nu(x).$$
The integral expression, and the general concept, are identical to those of the continuous case; the only difference is the measure used. In both cases the probability density function is the Radon–Nikodym derivative of the probability measure with respect to the measure against which the integral is taken.
If $P$ is the probability measure induced by $X$, then the integral can also be taken directly with respect to $P$:
$$h(X) = -\int_{\mathbb{X}} \log \frac{dP}{d\mu} \, dP.$$
If instead of the underlying measure μ we take another probability measure $Q$, we are led to the Kullback–Leibler divergence:
$$D_{\mathrm{KL}}(P \,\|\, Q) = \int_{\mathbb{X}} \log \frac{dP}{dQ} \, dP.$$
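As a quick sanity check of the unified treatment (a worked example added here, not part of the original article), take $X$ uniform on a finite set $\Omega$ with $n$ elements, so its density with respect to the counting measure $\nu$ is $f(x) = 1/n$:
$$H(X) = -\int_{\Omega} f \log f \, d\nu = -\sum_{x \in \Omega} \frac{1}{n} \log \frac{1}{n} = \log n,$$
which recovers the familiar entropy of a uniform discrete random variable.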
|
https://en.wikipedia.org/wiki/Gambling%20and%20information%20theory
|
Statistical inference might be thought of as gambling theory applied to the world around us. The myriad applications for logarithmic information measures tell us precisely how to take the best guess in the face of partial information. In that sense, information theory might be considered a formal expression of the theory of gambling. It is no surprise, therefore, that information theory has applications to games of chance.
Kelly Betting
Kelly betting or proportional betting is an application of information theory to investing and gambling. Its discoverer was John Larry Kelly, Jr.
Part of Kelly's insight was to have the gambler maximize the expectation of the logarithm of his capital, rather than the expected profit from each bet. This is important, since in the latter case, one would be led to gamble all he had when presented with a favorable bet, and if he lost, would have no capital with which to place subsequent bets. Kelly realized that it was the logarithm of the gambler's capital which is additive in sequential bets, and "to which the law of large numbers applies."
Side information
A bit is the amount of entropy in a bettable event with two possible outcomes and even odds. Obviously we could double our money if we knew beforehand for certain what the outcome of that event would be. Kelly's insight was that no matter how complicated the betting scenario is, we can use an optimum betting strategy, called the Kelly criterion, to make our money grow exponentially with whatever side information we are able to obtain. The value of this "illicit" side information is measured as the mutual information between the side information Y and the outcome X of the bettable event, given the state I of the bookmaker's knowledge. This mutual information is the average Kullback–Leibler divergence, or information gain, of the a posteriori probability distribution of X given the value of Y relative to the a priori distribution, or stated odds
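As a concrete illustration of proportional betting, the sketch below computes the textbook Kelly fraction $f^* = p - (1-p)/b$ for a binary bet at net odds $b$; the class name and the chosen parameters are ours, not from the article.

class KellyDemo {
    /** Kelly fraction for a bet won with probability p paying b-to-1 net odds. */
    static double kellyFraction(double p, double b) {
        return p - (1 - p) / b;   // f* = (bp - q) / b with q = 1 - p
    }

    public static void main(String[] args) {
        double p = 0.6;   // assumed probability of winning, e.g. from side information
        double b = 1.0;   // even odds: a win returns an amount equal to the stake
        double f = kellyFraction(p, b);
        // Expected exponential growth rate of capital per bet (in nats):
        double growth = p * Math.log(1 + f * b) + (1 - p) * Math.log(1 - f);
        System.out.printf("Kelly fraction: %.3f, growth rate per bet: %.4f nats%n", f, growth);
    }
}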
|
https://en.wikipedia.org/wiki/Yitzhak%20Katznelson
|
Yitzhak Katznelson (born 1934) is an Israeli mathematician.
Katznelson was born in Jerusalem. He received his doctoral degree from the University of Paris in 1956. He is a professor of mathematics at Stanford University.
He is the author of An Introduction to Harmonic Analysis, which won the Steele Prize for Mathematical Exposition in 2002.
In 2012 he became a fellow of the American Mathematical Society.
References
External links
An Introduction to Harmonic Analysis
1934 births
Living people
Israeli mathematicians
Jewish American scientists
Mathematical analysts
Stanford University Department of Mathematics faculty
Fellows of the American Mathematical Society
University of Paris alumni
|
https://en.wikipedia.org/wiki/Music%20and%20mathematics
|
Music theory analyzes the pitch, timing, and structure of music. It uses mathematics to study elements of music such as tempo, chord progression, form, and meter. The attempt to structure and communicate new ways of composing and hearing music has led to musical applications of set theory, abstract algebra and number theory.
While music theory has no axiomatic foundation in modern mathematics, the basis of musical sound can be described mathematically (using acoustics) and exhibits "a remarkable array of number properties".
History
Though ancient Chinese, Indians, Egyptians and Mesopotamians are known to have studied the mathematical principles of sound, the Pythagoreans (in particular Philolaus and Archytas) of ancient Greece were the first researchers known to have investigated the expression of musical scales in terms of numerical ratios, particularly the ratios of small integers. Their central doctrine was that "all nature consists of harmony arising out of numbers".
From the time of Plato, harmony was considered a fundamental branch of physics, now known as musical acoustics. Early Indian and Chinese theorists show similar approaches: all sought to show that the mathematical laws of harmonics and rhythms were fundamental not only to our understanding of the world but to human well-being. Confucius, like Pythagoras, regarded the small numbers 1, 2, 3, 4 as the source of all perfection.
Time, rhythm, and meter
Without the boundaries of rhythmic structure – a fundamental equal and regular arrangement of pulse repetition, accent, phrase and duration – music would not be possible. Modern musical use of terms like meter and measure also reflects the historical importance of music, along with astronomy, in the development of counting, arithmetic and the exact measurement of time and periodicity that is fundamental to physics.
The elements of musical form often build strict proportions or hypermetric structures (powers of the numbers 2 and 3).
Musical form
Musical
|
https://en.wikipedia.org/wiki/Earth-centered%2C%20Earth-fixed%20coordinate%20system
|
The Earth-centered, Earth-fixed coordinate system (acronym ECEF), also known as the geocentric coordinate system, is a Cartesian spatial reference system that represents locations in the vicinity of the Earth (including its surface, interior, atmosphere, and surrounding outer space) as X, Y, and Z measurements from its center of mass. Its most common use is in tracking the orbits of satellites and in satellite navigation systems for measuring locations on the surface of the Earth, but it is also used in applications such as tracking crustal motion.
The distance from a given point of interest to the center of Earth is called the geocentric distance, $R$, which is a generalization of the geocentric radius, $R_0$, not restricted to points on the reference ellipsoid surface.
The geocentric altitude is a type of altitude defined as the difference between the two aforementioned quantities: $h' = R - R_0$; it is not to be confused with the geodetic altitude.
Conversions between ECEF and geodetic coordinates (latitude and longitude) are discussed at geographic coordinate conversion.
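For concreteness, here is a minimal sketch (ours, not from the article) of the standard closed-form geodetic-to-ECEF conversion, assuming the WGS 84 ellipsoid constants; N(φ) is the prime vertical radius of curvature, and the method names are illustrative.

class GeodeticToEcef {
    // WGS 84 ellipsoid constants (assumed).
    static final double A = 6378137.0;            // semi-major axis, meters
    static final double F = 1.0 / 298.257223563;  // flattening
    static final double E2 = F * (2 - F);         // first eccentricity squared

    /** Converts geodetic latitude/longitude (radians) and height (m) to ECEF X, Y, Z (m). */
    static double[] toEcef(double lat, double lon, double h) {
        double sinLat = Math.sin(lat), cosLat = Math.cos(lat);
        // Prime vertical radius of curvature at this latitude.
        double n = A / Math.sqrt(1 - E2 * sinLat * sinLat);
        double x = (n + h) * cosLat * Math.cos(lon);
        double y = (n + h) * cosLat * Math.sin(lon);
        double z = (n * (1 - E2) + h) * sinLat;
        return new double[] { x, y, z };
    }

    public static void main(String[] args) {
        // Example: a point at 52 deg N, 5 deg E, 100 m above the ellipsoid.
        double[] p = toEcef(Math.toRadians(52), Math.toRadians(5), 100.0);
        System.out.printf("X=%.1f Y=%.1f Z=%.1f%n", p[0], p[1], p[2]);
    }
}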
Structure
As with any spatial reference system, ECEF consists of an abstract coordinate system (in this case, a conventional three-dimensional right-handed system), and a geodetic datum that binds the coordinate system to actual locations on the Earth. The ECEF that is used for the Global Positioning System (GPS) is the geocentric WGS 84, which currently includes its own ellipsoid definition. Other local datums such as NAD 83 may also be used. Due to differences between datums, the ECEF coordinates for a location will be different for different datums, although the differences between most modern datums are relatively small, within a few meters.
The ECEF coordinate system has the following parameters:
The origin at the center of the chosen ellipsoid. In WGS 84, this is the center of mass of the Earth.
The Z axis is the line between the North and South Poles, with positive values increasing northward. In WGS 84, this
|
https://en.wikipedia.org/wiki/Local%20tangent%20plane%20coordinates
|
Local tangent plane coordinates (LTP), also known as local ellipsoidal system, local geodetic coordinate system, or local vertical, local horizontal coordinates (LVLH), are a spatial reference system based on the tangent plane defined by the local vertical direction and the Earth's axis of rotation.
It consists of three coordinates: one represents the position along the northern axis, one along the local eastern axis, and one represents the vertical position.
Two right-handed variants exist: east, north, up (ENU) coordinates and north, east, down (NED) coordinates.
They serve for representing state vectors that are commonly used in aviation and marine cybernetics.
Axes
These frames are location dependent. For movements around the globe, like air or sea navigation, the frames are defined as tangent to the lines of geographical coordinates:
East–west tangent to parallels,
North–south tangent to meridians, and
Up–down in the direction normal to the oblate spheroid used as Earth's ellipsoid, which does not generally pass through the center of Earth.
Local east, north, up (ENU) coordinates
In many targeting and tracking applications the local East, North, Up (ENU) Cartesian coordinate system is far more intuitive and practical than ECEF or geodetic coordinates. The local ENU coordinates are formed from a plane tangent to the Earth's surface fixed to a specific location, and hence it is sometimes known as a "local tangent" or "local geodetic" plane. By convention the east axis is labeled $x$, the north $y$ and the up $z$.
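A short sketch (ours) of the conventional ECEF-to-ENU rotation about a reference point at geodetic latitude lat and longitude lon; the rotation matrix is the standard one, while the function and variable names are illustrative.

class EcefToEnu {
    /**
     * Rotates an ECEF displacement (dx, dy, dz) from a reference point at
     * geodetic latitude lat and longitude lon (radians) into local ENU axes.
     */
    static double[] toEnu(double dx, double dy, double dz, double lat, double lon) {
        double sLat = Math.sin(lat), cLat = Math.cos(lat);
        double sLon = Math.sin(lon), cLon = Math.cos(lon);
        double e = -sLon * dx + cLon * dy;                            // east
        double n = -sLat * cLon * dx - sLat * sLon * dy + cLat * dz;  // north
        double u =  cLat * cLon * dx + cLat * sLon * dy + sLat * dz;  // up
        return new double[] { e, n, u };
    }

    public static void main(String[] args) {
        // A 100 m displacement along ECEF +Z, seen from 45 deg N, 0 deg E,
        // splits evenly between local north and up.
        double[] enu = toEnu(0, 0, 100, Math.toRadians(45), 0);
        System.out.printf("E=%.2f N=%.2f U=%.2f%n", enu[0], enu[1], enu[2]);
    }
}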
Local north, east, down (NED) coordinates
In an airplane, most objects of interest are below the aircraft, so it is sensible to define down as a positive number. The North, East, Down (NED) coordinates allow this as an alternative to ENU. By convention, the north axis is labeled $x'$, the east $y'$ and the down $z'$. To avoid confusion between $x$ and $x'$, etc., in this article we will restrict the local coordinate frame to ENU.
The origin of this coordinate system i
|
https://en.wikipedia.org/wiki/AN/FSQ-32
|
The AN/FSQ-32 SAGE Solid State Computer (AN/FSQ-7A before December 1958, colloq. "Q-32") was a planned military computer central for deployment to Super Combat Centers in nuclear bunkers and to some above-ground military installations. In 1958, Air Defense Command planned to acquire 13 Q-32 centrals for several Air Divisions/Sectors.
Background
In 1956, ARDC sponsored "development of a transistorized, or solid-state, computer" by IBM and when announced in June 1958, the planned "SAGE Solid State Computer...was estimated to have a computing capability of seven times" the AN/FSQ-7. ADC's November 1958 plan to field—by April 1964—the 13 solid state AN/FSQ-7A was for each to network "a maximum of 20 long-range radar inputs [40 LRI telephone lines] and a maximum dimension of just over 1000 miles in both north-south and east-west directions." "Low rate Teletype data" could be accepted on 32 telephone lines (e.g., from "Alert Network Number 1"). On 17 November 1958, CINCNORAD "decided to request the solid state computer and hardened facilities", and the remaining vacuum-tube AN/FSQ-8 centrals for combat centers were cancelled (one was retrofitted to function as an AN/FSQ-7).
" AN/FSQ-32 computer would be"* used:
1. for "a combat center" (as with the vacuum-tube AN/FSQ-8),
2. to accept "radar and weapons connections" for weapons direction as with the AN/FSQ-7--e.g., for backup CIM-10 Bomarc guidance or manned interceptor GCI if above-ground Direction Center(s) could not function, and
3. for "air traffic control functions".
"Air Defense and Air Traffic Control Integration" was planned for airways modernization after the USAF, CAA, and AMB agreed on August 22, 1958, to "collocate air route traffic control centers and air defense facilities" (e.g., jointly use some Air Route Surveillance Radars at SAGE radar stations). The May 22, 1959, agreement between the USAF, DoD, and FAA designated emplacement of ATC facilities "in the hardened structure of the nine U. S. SCC'
|
https://en.wikipedia.org/wiki/Clean-in-place
|
Clean-in-place (CIP) is an automated method of cleaning the interior surfaces of pipes, vessels, equipment, filters and associated fittings, without major disassembly. CIP is commonly used for equipment such as piping, tanks, and fillers. CIP employs turbulent flow through piping, and/or spray balls for large surfaces. In some cases, CIP can also be accomplished with fill, soak and agitate.
Up to the 1950s, closed systems were disassembled and cleaned manually. The advent of CIP was a boon to industries that needed frequent internal cleaning of their processes. Industries that rely heavily on CIP are those requiring high levels of hygiene, and include: dairy, beverage, brewing, processed foods, pharmaceutical, and cosmetics.
The benefit to industries that use CIP is that the cleaning is faster, less labor-intensive and more repeatable, and poses less of a chemical exposure risk. CIP started as a manual practice involving a balance tank, centrifugal pump, and connection to the system being cleaned. Since the 1950s, CIP has evolved to include fully automated systems with programmable logic controllers, multiple balance tanks, sensors, valves, heat exchangers, data acquisition and specially designed spray nozzle systems. Simple, manually operated CIP systems can still be found in use today.
Depending on soil load and process geometry, the CIP design principle is one of the following:
deliver highly turbulent, high flow-rate solution to effect good cleaning (applies to pipe circuits and some filled equipment).
deliver solution as a low-energy spray to fully wet the surface (applies to lightly soiled vessels where a static spray ball may be used).
deliver a high energy impinging spray (applies to highly soiled or large diameter vessels where a dynamic spray device may be used).
Elevated temperature and chemical detergents are often employed to enhance cleaning effectiveness.
Factors affecting the effectiveness of the cleaning agents
Temperature of the cleaning so
|
https://en.wikipedia.org/wiki/Lute%20of%20Pythagoras
|
The lute of Pythagoras is a self-similar geometric figure made from a sequence of pentagrams.
Constructions
The lute may be drawn from a sequence of pentagrams.
The centers of the pentagrams lie on a line and (except for the first and largest of them) each shares two vertices with the next larger one in the sequence.
An alternative construction is based on the golden triangle, an isosceles triangle with base angles of 72° and apex angle 36°. Two smaller copies of the same triangle may be drawn inside the given triangle, having the base of the triangle as one of their sides. The two new edges of these two smaller triangles, together with the base of the original golden triangle, form three of the five edges of the polygon. Adding a segment between the endpoints of these two new edges cuts off a smaller golden triangle, within which the construction can be repeated.
Some sources add another pentagram, inscribed within the inner pentagon of the largest pentagram of the figure. The other pentagons of the figure do not have inscribed pentagrams.
Properties
The convex hull of the lute is a kite shape with three 108° angles and one 36° angle. The sizes of any two consecutive pentagrams in the sequence are in the golden ratio to each other, and many other instances of the golden ratio appear within the lute.
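For reference (added here, not part of the original article), the golden ratio that relates consecutive pentagrams is the positive solution of $\varphi^2 = \varphi + 1$:
$$\varphi = \frac{1 + \sqrt{5}}{2} \approx 1.618.$$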
History
The lute is named after the ancient Greek mathematician Pythagoras, but its origins are unclear. An early reference to it is in a 1990 book on the golden ratio by Boles and Newman.
See also
Spidron
References
Fractals
Golden ratio
|
https://en.wikipedia.org/wiki/List%20of%20limits
|
This is a list of limits for common functions such as elementary functions. In this article, the terms a, b and c are constants with respect to x.
Limits for general functions
Definitions of limits and related concepts
$\lim_{x \to c} f(x) = L$ if and only if for all $\varepsilon > 0$ there exists $\delta > 0$ such that $0 < |x - c| < \delta$ implies $|f(x) - L| < \varepsilon$. This is the (ε, δ)-definition of limit.
The limit superior and limit inferior of a sequence are defined as $\limsup_{n \to \infty} x_n = \lim_{n \to \infty} \left( \sup_{m \ge n} x_m \right)$ and $\liminf_{n \to \infty} x_n = \lim_{n \to \infty} \left( \inf_{m \ge n} x_m \right)$.
A function, $f(x)$, is said to be continuous at a point, c, if $\lim_{x \to c} f(x) = f(c)$.
Operations on a single known limit
If $\lim_{x \to c} f(x) = L$ then:
$\lim_{x \to c} \left[ f(x) \pm a \right] = L \pm a$ and $\lim_{x \to c} a f(x) = aL$ for a constant a.
$\lim_{x \to c} \frac{1}{f(x)} = \frac{1}{L}$ if L is not equal to 0.
$\lim_{x \to c} f(x)^n = L^n$ if n is a positive integer
$\lim_{x \to c} f(x)^{1/n} = L^{1/n}$ if n is a positive integer, and if n is even, then L > 0.
In general, if g(x) is continuous at L and $\lim_{x \to c} f(x) = L$ then $\lim_{x \to c} g(f(x)) = g(L)$.
Operations on two known limits
If $\lim_{x \to c} f(x) = L_1$ and $\lim_{x \to c} g(x) = L_2$ then:
$\lim_{x \to c} \left[ f(x) \pm g(x) \right] = L_1 \pm L_2$
$\lim_{x \to c} f(x) g(x) = L_1 L_2$
$\lim_{x \to c} \frac{f(x)}{g(x)} = \frac{L_1}{L_2}$ if $L_2 \ne 0$.
Limits involving derivatives or infinitesimal changes
In these limits, the infinitesimal change $h$ is often denoted $\Delta x$ or $\delta x$. If $f(x)$ is differentiable at $x$,
$\lim_{h \to 0} \frac{f(x+h) - f(x)}{h} = f'(x)$. This is the definition of the derivative. All differentiation rules can also be reframed as rules involving limits. For example, if g(x) is differentiable at x,
$\lim_{h \to 0} \frac{f(g(x+h)) - f(g(x))}{h} = f'(g(x))\, g'(x)$. This is the chain rule.
$\lim_{h \to 0} \frac{f(x+h) g(x+h) - f(x) g(x)}{h} = f'(x) g(x) + f(x) g'(x)$. This is the product rule.
If $f(x)$ and $g(x)$ are differentiable on an open interval containing c, except possibly c itself, and $\lim_{x \to c} f(x) = \lim_{x \to c} g(x) = 0$ or $\pm\infty$, L'Hôpital's rule can be used: $\lim_{x \to c} \frac{f(x)}{g(x)} = \lim_{x \to c} \frac{f'(x)}{g'(x)}$.
Inequalities
If $f(x) \le g(x)$ for all x in an interval that contains c, except possibly c itself, and the limit of $f(x)$ and $g(x)$ both exist at c, then $\lim_{x \to c} f(x) \le \lim_{x \to c} g(x)$.
If $\lim_{x \to c} g(x) = \lim_{x \to c} h(x) = L$ and $g(x) \le f(x) \le h(x)$ for all x in an open interval that contains c, except possibly c itself, then $\lim_{x \to c} f(x) = L$.
This is known as the squeeze theorem. This applies even in the cases that f(x) and g(x) take on different values at c, or are discontinuous at c.
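A classic application of the squeeze theorem, added here as an illustration: since $-x^2 \le x^2 \sin(1/x) \le x^2$ for $x \ne 0$ and both bounds tend to 0,
$$\lim_{x \to 0} x^2 \sin \frac{1}{x} = 0.$$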
Polynomials and functions of the form $x^a$
Polynomials in x
$\lim_{x \to c} x^n = c^n$ if n is a positive integer
In general, if $p(x)$ is a polynomial then, by the continuity of polynomials, $\lim_{x \to c} p(x) = p(c)$. This is also true for rational functions, as they are continuous on their domains.
Functions of the form $x^a$
$\lim_{x \to c} x^a = c^a$ for real a if $c > 0$. In particular, $\lim_{x \to c} \sqrt{x} = \sqrt{c}$.
$\lim_{x \to \infty} x^a = \infty$ if $a > 0$. In particular, $\lim_{x \to \infty} \sqrt{x} = \infty$.
Exponential functions
Functions of the form $a^{g(x)}$
$\lim_{x \to c} a^{g(x)} = a^{\lim_{x \to c} g(x)}$ for $a > 0$, due to the continuity of $a^x$.
Functions of the form $x^{g(x)}$
Functions of the form $f(x)^{g(x)}$
. This limit can be deriv
|
https://en.wikipedia.org/wiki/Ali%20Baba%20and%2040%20Thieves%20%28video%20game%29
|
Ali Baba and 40 Thieves is a maze arcade video game released by Sega in 1982. Players take the role of the famous Arabian hero who must fend off and kill the forty thieves who are trying to steal his money. The game is based on the folk tale of the same name. It was ported to the MSX platform, and then a Vector-06C port was made based on the MSX version.
Legacy
A clone for the ZX Spectrum was published by Suzy Soft in 1985 under the name Ali Baba.
References
External links
Ali Baba and 40 Thieves at Arcade History
Ali Baba and 40 Thieves playable at the Internet Archive
1982 video games
Arcade video games
Works based on Ali Baba
Maze games
MSX games
Sega arcade games
Video games based on Arabian mythology
Video games based on One Thousand and One Nights
Video games developed in Japan
|
https://en.wikipedia.org/wiki/Karl%20K%C3%BCpfm%C3%BCller
|
Karl Küpfmüller (6 October 1897 – 26 December 1977) was a German electrical engineer, who was prolific in the areas of communications technology, measurement and control engineering, acoustics, communication theory, and theoretical electro-technology.
Biography
Küpfmüller was born in Nuremberg, where he studied at the Ohm-Polytechnikum. After returning from military service in World War I, he worked at the telegraph research division of the German Post in Berlin as a co-worker of Karl Willy Wagner, and, from 1921, he was lead engineer at the central laboratory of Siemens & Halske AG in the same city.
In 1928 he became full professor of general and theoretical electrical engineering at the Technische Hochschule in Danzig, and later held the same position in Berlin. Küpfmüller joined the National Socialist Motor Corps in 1933. In the following year he also joined the SA. In 1937 Küpfmüller joined the NSDAP and became a member of the SS, where he reached the rank of Obersturmbannführer.
In 1937 Küpfmüller was appointed director of communication technology research and development at the Siemens-Wernerwerk for telegraphy, and from 1941 to 1945 he was director of the central R&D division at Siemens & Halske.
From 1952 until his retirement in 1963, he held the chair for general communications engineering at Technische Hochschule Darmstadt.
Later he was honorary professor at the Technische Hochschule Berlin. In 1968, he received the Werner von Siemens Ring for his contributions to the theory of telecommunications and other electro-technology.
He died at Darmstadt.
Studies in communication theory
About 1928, he did the same analysis that Harry Nyquist did, to show that not more than 2B independent pulses per second could be put through a channel of bandwidth B. He did this by quantifying the time-bandwidth product k of various communication signal types, and showing that k could never be less than 1/2. From his 1931 paper (rough translation from the German):
"The time law al
|
https://en.wikipedia.org/wiki/Conditional%20operator
|
The conditional operator is supported in many programming languages. This term usually refers to ?: as in C, C++, C#, and JavaScript. However, in Java, this term can also refer to && and ||.
&& and ||
In some programming languages, e.g. Java, the term conditional operator refers to short circuit boolean operators && and ||. The second expression is evaluated only when the first expression is not sufficient to determine the value of the whole expression.
Difference from bitwise operator
& and | are bitwise operators that occur in many programming languages. The major difference is that bitwise operators operate on the individual bits of a binary value, whereas conditional operators operate on boolean (logical) values. Additionally, the expressions before and after a bitwise operator are always evaluated.
if (expression1 || expression2 || expression3): if expression1 is true, expressions 2 and 3 are NOT checked.
if (expression1 | expression2 | expression3): this checks expressions 2 and 3, even if expression1 is true.
Short circuit operators can reduce run times by avoiding unnecessary calculations. They can also avoid null pointer exceptions when expression1 first checks whether an object is valid.
Usage in Java
class ConditionalDemo1 {
    public static void main(String[] args) {
        int value1 = 1;
        int value2 = 2;
        // && short-circuits: the second operand is evaluated only if the first is true.
        if ((value1 == 1) && (value2 == 2))
            System.out.println("value1 is 1 AND value2 is 2");
        // || short-circuits: the second operand is skipped if the first is already true.
        if ((value1 == 1) || (value2 == 1))
            System.out.println("value1 is 1 OR value2 is 1");
    }
}
"?:"
In most programming languages, ?: is called the conditional operator. It is a type of ternary operator; however, the term "ternary operator" usually refers specifically to ?: because, in most languages, it is the only operator that takes three operands.
Regular usage of "?:"
?: is used in conditional expressions. Programmers can rewrite an if-then-else expression in a more concise way by using the conditional operator.
Syntax
condition ? expression1 : expression2
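A brief illustration, added here, of rewriting an if-then-else statement with the conditional operator:

class ConditionalDemo2 {
    public static void main(String[] args) {
        int score = 72;
        // Verbose if-then-else form:
        String result;
        if (score >= 60) {
            result = "pass";
        } else {
            result = "fail";
        }
        // Equivalent, more concise conditional-operator form:
        String concise = (score >= 60) ? "pass" : "fail";
        System.out.println(result + " " + concise);
    }
}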
|
https://en.wikipedia.org/wiki/Polyglycerol%20polyricinoleate
|
Polyglycerol polyricinoleate (PGPR), E476, is an emulsifier made from glycerol and fatty acids (usually from castor bean, but also from soybean oil). In chocolate, compound chocolate and similar coatings, PGPR is mainly used with another substance like lecithin to reduce viscosity. It is used at low levels (below 0.5%), and works by decreasing the friction between the solid particles (e.g. cacao, sugar, milk) in molten chocolate, reducing the yield stress so that it flows more easily, approaching the behaviour of a Newtonian fluid. It can also be used as an emulsifier in spreads and in salad dressings, or to improve the texture of baked goods. It is made up of a short chain of glycerol molecules connected by ether bonds, with ricinoleic acid side chains connected by ester bonds.
PGPR is a yellowish, viscous liquid, and is strongly lipophilic: it is soluble in fats and oils and insoluble in water and ethanol.
Manufacture
Glycerol is heated to above 200 °C in a reactor in the presence of an alkaline catalyst to create polyglycerol. Castor oil fatty acids are separately heated to above 200 °C, to create interesterified ricinoleic fatty acids. The polyglycerol and the interesterified ricinoleic fatty acids are then mixed to create PGPR.
Use in chocolate
Because PGPR improves the flow characteristics of chocolate and compound chocolate, especially near the melting point, it can improve the efficiency of chocolate coating processes: chocolate coatings with PGPR flow better around shapes of enrobed and dipped products, and it also improves the performance of equipment used to produce solid molded products: the chocolate flows better into the mold, and surrounds inclusions and releases trapped air more easily. PGPR can also be used to reduce the quantity of cocoa butter needed in chocolate formulations: the solid particles in chocolate are suspended in the cocoa butter, and by reducing the viscosity of the chocolate, less cocoa butter is required, which saves costs, be
|
https://en.wikipedia.org/wiki/Expansion%20joint
|
An expansion joint, or movement joint, is an assembly designed to hold parts together while safely absorbing temperature-induced expansion and contraction of building materials. They are commonly found between sections of buildings, bridges, sidewalks, railway tracks, piping systems, ships, and other structures.
Building faces, concrete slabs, and pipelines expand and contract due to warming and cooling from seasonal variation, or due to other heat sources. Before expansion joint gaps were built into these structures, they would crack under the stress induced.
Bridge expansion joints
Bridge expansion joints are designed to allow for continuous traffic between structures while accommodating movement, shrinkage, and temperature variations on reinforced and prestressed concrete, composite, and steel structures. They stop the bridge from bending out of place in extreme conditions, and also allow enough vertical movement to permit bearing replacement without the need to dismantle the bridge expansion joint. There are various types, which can accommodate different ranges of movement, including joints for small movement (EMSEAL BEJS, XJS, JEP, WR, WOSd, and Granor AC-AR), medium movement (ETIC EJ, Wd), and large movement (WP, ETIC EJF/Granor SFEJ).
Modular expansion joints are used when the movements of a bridge exceed the capacity of a single gap joint or a finger type joint. Modular multiple-gap expansion joints can accommodate movements in all directions and rotations about every axis. They can be used for longitudinal movements of as little as 160mm, or for very large movements of over 3000 mm. The total movement of the bridge deck is divided among a number of individual gaps which are created by horizontal surface beams. The individual gaps are sealed by watertight elastomeric profiles, and surface beam movements are regulated by an elastic control system. The drainage of the joint is via the drainage system of the bridge deck. Certain joints feature so-called “sinus plates”
|
https://en.wikipedia.org/wiki/Prosigns%20for%20Morse%20code
|
Procedural signs or prosigns are shorthand signals used in Morse code telegraphy, for the purpose of simplifying and standardizing procedural protocols for land-line and radio communication. The procedural signs are distinct from conventional Morse code abbreviations, which consist mainly of brevity codes that convey messages to other parties with greater speed and accuracy. However, some codes are used both as prosigns and as single letters or punctuation marks, and for those, the distinction between a prosign and abbreviation is ambiguous, even in context.
Overview
In the broader sense prosigns are just standardised parts of short form radio protocol, and can include any abbreviation. Examples would be the codes for "okay, heard you, continue" or for "message, received". In a more restricted sense, "prosign" refers to something analogous to the nonprinting control characters in teleprinter and computer character sets, such as Baudot and ASCII. Unlike abbreviations, these are universally recognizable across language barriers as distinct and well-defined symbols.
At the coding level, prosigns admit any form the Morse code can take, unlike abbreviations which have to be sent as a sequence of individual letters, like ordinary text. On the other hand, most prosigns codes are much longer than typical codes for letters and numbers. They are individual and indivisible code points within the broader Morse code, fully at par with basic letters and numbers.
The development of prosigns began in the 1860s for wired telegraphy. Since telegraphy preceded voice communications by several decades, many of the much older Morse prosigns have acquired precisely equivalent prowords for use in more recent voice protocols.
Not all prosigns used by telegraphers are standard: There are regional and community-specific variations of the coding convention used in certain radio networks to manage transmission and formatting of messages, and many unofficial prosign conventions exist; some o
|
https://en.wikipedia.org/wiki/Morse%20code%20abbreviations
|
Morse code abbreviations are used to speed up Morse communications by foreshortening textual words and phrases. Morse abbreviations are short forms, representing normal textual words and phrases formed from some (fewer) characters taken from the word or phrase being abbreviated. Many are typical English abbreviations, or short acronyms for often-used phrases.
Distinct from prosigns and commercial codes
Morse code abbreviations are not the same as prosigns. Morse abbreviations are composed of (normal) textual alpha-numeric character symbols with normal Morse code inter-character spacing; the character symbols in abbreviations, unlike the delineated character groups representing Morse code prosigns, are not "run together" or concatenated in the way most prosigns are formed.
Although a few abbreviations (such as for "dollar") are carried over from former commercial telegraph codes, almost all Morse abbreviations are not commercial codes. From 1845 until well into the second half of the 20th century, commercial telegraphic code books were used to shorten telegrams, e.g. = "Locals have plundered everything from the wreck." However, these cyphers are typically "fake" words six characters long, or more, used for replacing commonly used whole phrases, and are distinct from single-word abbreviations.
Word and phrase abbreviations
The following Table of Morse code abbreviations and further references to Brevity codes such as 92 Code, Q code, Z code, and R-S-T system serve to facilitate fast and efficient Morse code communications.
{| class="wikitable"
|+Table of selected Morse code abbreviations
|-
! Abbreviation
! Meaning
! Defined in
! Type of abbreviation
|-
| AA
| All after (used after question mark to request a repetition)
| ITU-R M.1172
| operating signal
|-
| AB
| All before (similarly)
| ITU-R M.1172
| operating signal
|-
| ADR
| Address
| ITU-T Rec. F.1
| operating signal
|-
| ADS
| Address
| ITU-R M.1172
| operating signal
|-
| AGN
| Again
|
| operating signal
|-
| AM
| Ante
|
https://en.wikipedia.org/wiki/Multicopy%20single-stranded%20DNA
|
Multicopy single-stranded DNA (msDNA) is a type of extrachromosomal satellite DNA that consists of a single-stranded DNA molecule covalently linked via a 2'-5' phosphodiester bond to an internal guanosine of an RNA molecule. The resultant DNA/RNA chimera possesses two stem-loops joined by a branch similar to the branches found in RNA splicing intermediates. The coding region for msDNA, called a "retron", also encodes a type of reverse transcriptase, which is essential for msDNA synthesis.
Discovery
Before the discovery of msDNA in myxobacteria, a group of swarming, soil-dwelling bacteria, it was thought that the enzymes known as reverse transcriptases (RT) existed only in eukaryotes and viruses. The discovery led to an increase in research of the area. As a result, msDNA has been found to be widely distributed among bacteria, including various strains of Escherichia coli and pathogenic bacteria. Further research discovered similarities between HIV-encoded reverse transcriptase and an open reading frame (ORF) found in the msDNA coding region. Tests confirmed the presence of reverse transcriptase activity in crude lysates of retron-containing strains. Although an RNase H domain was tentatively identified in the retron ORF, it was later found that the RNase H activity required for msDNA synthesis is actually supplied by the host.
Retrons
The discovery of msDNA has led to broader questions regarding where reverse transcriptase originated, as genes encoding for reverse transcriptase (not necessarily associated with msDNA) have been found in prokaryotes, eukaryotes, viruses and even archaea. After a DNA fragment coding for the production of msDNA in E. coli was discovered, it was conjectured that bacteriophages might have been responsible for the introduction of the RT gene into E. coli. These discoveries suggest that reverse transcriptase played a role in the evolution of viruses from bacteria, with one hypothesis stating that, with the help of reverse transcriptas
|
https://en.wikipedia.org/wiki/The%20Regenerative%20Medicine%20Institute
|
The Regenerative Medicine Institute (REMEDI) was established in 2003 as a Centre for Science, Engineering & Technology in collaboration with the National University of Ireland, Galway. It obtained an award of €14.9 million from Science Foundation Ireland over five years.
It conducts basic research and applied research in regenerative medicine, an emerging field that combines the technologies of gene therapy and adult stem cell therapy. The goal is to use cells and genes to regenerate healthy tissues that can be used to repair or replace other tissues and organs in a minimally invasive approach.
Centres for Science, Engineering & Technology help link scientists and engineers in partnerships across academia and industry to address crucial research questions, foster the development of new and existing Irish-based technology companies, attract industry that could make an important contribution to Ireland and its economy, and expand educational and career opportunities in Ireland in science and engineering. CSETs must exhibit outstanding research quality, intellectual breadth, active collaboration, flexibility in responding to new research opportunities, and integration of research and education in the fields that SFI supports.
References
External links
Regenerative Medicine Institute (REMEDI)
Science Foundation Ireland
National University of Ireland, Galway
Medical research institutes in the Republic of Ireland
Molecular biology
Biotechnology organizations
Bioethics research organizations
2003 establishments in Ireland
Organizations established in 2003
|
https://en.wikipedia.org/wiki/Arkanoid%3A%20Revenge%20of%20Doh
|
Arkanoid: Revenge of Doh (a.k.a. Arkanoid 2) is an arcade game released by Taito in 1987 as a sequel to Arkanoid.
Plot
The mysterious enemy known as DOH has returned to seek vengeance on the Vaus space vessel. The player must once again take control of the Vaus (paddle) and overcome many challenges in order to destroy DOH once and for all. Revenge of Doh sees the player battle through 34 rounds, taken from a grand total of 64.
Gameplay
Revenge of Doh differs from its predecessor with the introduction of "Warp Gates". Upon completion of a level or when the Break ("B") pill is caught, two gates appear at the bottom of the play area, on either side. The player can choose to go through either one of the gates - the choice will affect which version of the next level is provided. The fire-button is only used when the Laser Cannons ("L") or Catch ("C") pill is caught.
The game has new power-ups and enemy types, and two new types of bricks. Notched silver bricks, like normal silver bricks, take several hits to destroy. However, after a short period of time after destruction, they regenerate at full strength. These bricks do not need to be destroyed in order to complete a level. In addition, some bricks move left to right as long as their sides are not obstructed by other bricks.
The US version has an entirely different layout for Level 1 that features an entire line of notched bricks, with all colored bricks above it moving from side to side.
On round 17, the player must defeat a giant brain as a mini-boss. After completing all 33 rounds, the player faces DOH in two forms as a final confrontation: its original, statue-like incarnation, then a creature with waving tentacles that break off and regenerate when struck.
Home versions include a level editor, which players can use to create their own levels or edit and replace existing levels.
Release
Revenge of Doh initially released in arcades in June 1987. In June 1989, versions for the Tandy, Atari ST, Apple IIGS, and Co
|
https://en.wikipedia.org/wiki/Logic%20level
|
In digital circuits, a logic level is one of a finite number of states that a digital signal can inhabit. Logic levels are usually represented by the voltage difference between the signal and ground, although other standards exist. The range of voltage levels that represent each state depends on the logic family being used.
A logic-level shifter can be used to allow compatibility between different circuits.
2-level logic
In binary logic the two levels are logical high and logical low, which generally correspond to binary numbers 1 and 0 respectively or truth values true and false respectively. Signals with one of these two levels can be used in Boolean algebra for digital circuit design or analysis.
Active state
The use of either the higher or the lower voltage level to represent either logic state is arbitrary. The two options are active high (positive logic) and active low (negative logic). Active-high and active-low states can be mixed at will: for example, a read only memory integrated circuit may have a chip-select signal that is active-low, but the data and address bits are conventionally active-high. Occasionally a logic design is simplified by inverting the choice of active level (see De Morgan's laws).
The name of an active-low signal is historically written with a bar above it to distinguish it from an active-high signal. For example, the name Q̄, read "Q bar" or "Q not", represents an active-low signal. The conventions commonly used are:
a bar above the name (Q̄)
a leading slash (/Q)
a lower-case n prefix or suffix (nQ or Q_n)
a trailing # (Q#), or
an _B or _L suffix (Q_B or Q_L).
Many control signals in electronics are active-low signals (usually reset lines, chip-select lines and so on). Logic families such as TTL can sink more current than they can source, so fanout and noise immunity increase. It also allows for wired-OR logic if the logic gates are open-collector/open-drain with a pull-up resistor. Examples of this are the I²C bus and the Controller Ar
|
https://en.wikipedia.org/wiki/Biological%20process
|
Biological processes are those processes that are vital for an organism to live, and that shape its capacities for interacting with its environment. Biological processes are made of many chemical reactions or other events that are involved in the persistence and transformation of life forms. Metabolism and homeostasis are examples.
Biological processes within an organism can also work as bioindicators. Scientists are able to look at an individual's biological processes to monitor the effects of environmental changes.
Regulation of biological processes occurs when any process is modulated in its frequency, rate or extent. Biological processes are regulated by many means; examples include the control of gene expression, protein modification or interaction with a protein or substrate molecule.
Homeostasis: regulation of the internal environment to maintain a constant state; for example, sweating to reduce temperature
Organization: being structurally composed of one or more cells – the basic units of life
Metabolism: transformation of energy by converting chemicals and energy into cellular components (anabolism) and decomposing organic matter (catabolism). Living things require energy to maintain internal organization (homeostasis) and to produce the other phenomena associated with life.
Growth: maintenance of a higher rate of anabolism than catabolism. A growing organism increases in size in all of its parts, rather than simply accumulating matter.
Response to stimuli: a response can take many forms, from the contraction of a unicellular organism to external chemicals, to complex reactions involving all the senses of multicellular organisms. A response is often expressed by motion; for example, the leaves of a plant turning toward the sun (phototropism), and chemotaxis.
Reproduction: the ability to produce new individual organisms, either asexually from a single parent organism or sexually from two parent organisms.
Interaction between organisms: the processes
|
https://en.wikipedia.org/wiki/Stewart%E2%80%93Walker%20lemma
|
The Stewart–Walker lemma provides necessary and sufficient conditions for the linear perturbation of a tensor field to be gauge-invariant. $\Delta \delta T = 0$ if and only if one of the following holds
1. $T_0 = 0$
2. $T_0$ is a constant scalar field
3. $T_0$ is a linear combination of products of delta functions $\delta_a^b$
Derivation
A 1-parameter family of manifolds denoted by $\mathcal{M}_\epsilon$ with $\mathcal{M}_0 = \mathcal{M}^4$ has metric $g_{ik} = \eta_{ik} + \epsilon h_{ik}$. These manifolds can be put together to form a 5-manifold $\mathcal{N}$. A smooth curve $\gamma$ can be constructed through $\mathcal{N}$ with tangent 5-vector $X$, transverse to $\mathcal{M}_\epsilon$. If $X$ is defined so that if $h_t$ is the family of 1-parameter maps which map $\mathcal{N} \to \mathcal{N}$ and $p_0 \in \mathcal{M}_0$, then a point $p_\epsilon \in \mathcal{M}_\epsilon$ can be written as $h_\epsilon(p_0)$. This also defines a pull back $h_\epsilon^*$ that maps a tensor field $T_\epsilon \in \mathcal{M}_\epsilon$ back onto $\mathcal{M}_0$. Given sufficient smoothness a Taylor expansion can be defined
$$h_\epsilon^*(T_\epsilon) = T_0 + \epsilon\, h_\epsilon^*(\mathcal{L}_X T_\epsilon) + O(\epsilon^2).$$
$\delta T = \epsilon\, h_\epsilon^*(\mathcal{L}_X T_\epsilon) \equiv \epsilon\, (\mathcal{L}_X T_\epsilon)_0$ is the linear perturbation of $T$. However, since the choice of $X$ is dependent on the choice of gauge, another gauge can be taken. Therefore the differences in gauge become $\Delta \delta T = \epsilon\, (\mathcal{L}_X T_\epsilon)_0 - \epsilon\, (\mathcal{L}_Y T_\epsilon)_0 = \epsilon\, (\mathcal{L}_{X-Y} T_\epsilon)_0$. Picking a chart where $X^a = (\xi^\mu, 1)$ and $Y^a = (0, 1)$, then $X^a - Y^a = (\xi^\mu, 0)$, which is a well defined vector in any $\mathcal{M}_\epsilon$ and gives the result
$$\Delta \delta T = \epsilon\, \mathcal{L}_\xi T_0.$$
Sources
Describes derivation of result in section on Lie derivatives
Tensors
Lemmas in analysis
|
https://en.wikipedia.org/wiki/Wireless%20application%20service%20provider
|
A wireless application service provider (WASP) is the generic name for a firm that provides remote services, typically to handheld devices, such as cellphones or PDAs, that connect to wireless data networks. WASPs are a specific category of application service providers (ASPs), though the latter term may more often be associated with standard web services. They can also be used for wireless bridging between different types of network topologies.
Wireless networking
|
https://en.wikipedia.org/wiki/Biracks%20and%20biquandles
|
In mathematics, biquandles and biracks are sets with binary operations that generalize quandles and racks. Biquandles take, in the theory of virtual knots, the place that quandles occupy in the theory of classical knots. Biracks and racks have the same relation, while a biquandle is a birack which satisfies some additional conditions.
Definitions
Biquandles and biracks have two binary operations on a set $X$ written $a^{b}$ and $a_{b}$. These satisfy the following three axioms:
1.
2.
3.
These identities appeared in 1992 in reference [FRS] where the object was called a species.
The superscript and subscript notation is useful here because it dispenses with the need for brackets. For example,
if we write $a^{b}$ and $a_{b}$ for the two operations, then the three axioms above become
1.
2.
3.
If in addition the two operations are invertible, that is, given $a, b$ in the set there are unique $x, y$ in the set such that $x^{b} = a$ and $y_{b} = a$, then the set together with the two operations defines a birack.
For example, if $X$, with the operation $a^{b}$, is a rack then it is a birack if we define the other operation to be the identity, $a_{b} = a$.
For a birack the function can be defined by
Then
1. is a bijection
2.
In the second condition, and are defined by and . This condition is sometimes known as the set-theoretic Yang-Baxter equation.
To see that 1. is true note that defined by
is the inverse to
To see that 2. is true let us follow the progress of the triple under . So
On the other hand, . Its progress under is
Any map satisfying 1. and 2. is said to be a switch (precursor of biquandles and biracks).
Examples of switches are the identity, the twist and where is the operation of a rack.
A switch will define a birack if the operations are invertible. Note that the identity switch does not do this.
Biquandles
A biquandle is a birack which satisfies some additional structure, as described by Nelson and Rische. The axioms of a biquandle are "minimal" in the sense that they are the weakest restrictions that can
|
https://en.wikipedia.org/wiki/Urea-to-creatinine%20ratio
|
In medicine, the urea-to-creatinine ratio (UCR), known in the United States as the BUN-to-creatinine ratio, is the ratio of the blood levels of urea (BUN) (mmol/L) and creatinine (Cr) (μmol/L). Because BUN reflects only the nitrogen content of urea (molecular weight 28) while the urea measurement reflects the whole molecule (molecular weight 60), urea is just over twice BUN (60/28 = 2.14). In the United States, both quantities are given in mg/dL. The ratio may be used to determine the cause of acute kidney injury or dehydration.
The principle behind this ratio is the fact that both urea (BUN) and creatinine are freely filtered by the glomerulus; however, urea reabsorbed by the renal tubules can be regulated (increased or decreased) whereas creatinine reabsorption remains the same (minimal reabsorption).
Definition
Urea and creatinine are nitrogenous end products of metabolism. Urea is the primary metabolite derived from dietary protein and tissue protein turnover. Creatinine is the product of muscle creatine catabolism. Both are relatively small molecules (60 and 113 daltons, respectively) that distribute throughout total body water. In Europe, the whole urea molecule is assayed, whereas in the United States only the nitrogen component of urea (the blood or serum urea nitrogen, i.e., BUN or SUN) is measured. The BUN, then, is roughly one-half (7/15 or 0.466) of the blood urea.
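As a worked conversion (our illustration): nitrogen contributes 28 of urea's molecular weight of 60, so
$$\text{urea (mg/dL)} = \text{BUN (mg/dL)} \times \frac{60}{28}, \qquad \text{urea (mmol/L)} \approx \text{BUN (mg/dL)} \times 0.357;$$
for example, a BUN of 14 mg/dL corresponds to a blood urea of about 5 mmol/L.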
The normal range of urea nitrogen in blood or serum is 5 to 20 mg/dl, or 1.8 to 7.1 mmol urea per liter. The range is wide because of normal variations due to protein intake, endogenous protein catabolism, state of hydration, hepatic urea synthesis, and renal urea excretion. A BUN of 15 mg/dl would represent significantly impaired function for a woman in the thirtieth week of gestation. Her higher glomerular filtration rate (GFR), expanded extracellular fluid volume, and anabolism in the developing fetus contribute to her relatively low BUN of 5 to 7 mg/dl. In contrast, the rugged rancher who eats in excess of 125 g protein each
|
https://en.wikipedia.org/wiki/Software%20security%20assurance
|
Software security assurance is a process that helps design and implement software that protects the data and resources contained in and controlled by that software. Software is itself a resource and thus must be afforded appropriate security.
What is software security assurance?
Software Security Assurance (SSA) is the process of ensuring that software is designed to operate at a level of security that is consistent with the potential harm that could result from the loss, inaccuracy, alteration, unavailability, or misuse of the data and resources that it uses, controls, and protects.
The software security assurance process begins by identifying and categorizing the information that is to be contained in, or used by, the software. The information should be categorized according to its sensitivity. For example, in the lowest category, the impact of a security violation is minimal (i.e. the impact on the software owner's mission, functions, or reputation is negligible). For a top category, however, the impact may pose a threat to human life; may have an irreparable impact on software owner's missions, functions, image, or reputation; or may result in the loss of significant assets or resources.
Once the information is categorized, security requirements can be developed. The security requirements should address access control, including network access and physical access; data management and data access; environmental controls (power, air conditioning, etc.) and off-line storage; human resource security; and audit trails and usage records.
What causes software security problems?
All security vulnerabilities in software are the result of security bugs, or defects, within the software. In most cases, these defects are created by two primary causes: (1) non-conformance, or a failure to satisfy requirements; and (2) an error or omission in the software requirements.
Non-conformance, or a failure to satisfy requirements
A non-conformance may be simple–the most common
|
https://en.wikipedia.org/wiki/Successive-approximation%20ADC
|
A successive-approximation ADC is a type of analog-to-digital converter that converts a continuous analog waveform into a discrete digital representation using a binary search through all possible quantization levels before finally converging upon a digital output for each conversion.
Algorithm
The successive-approximation analog-to-digital converter circuit typically consists of four chief subcircuits:
A sample-and-hold circuit to acquire the input voltage $V_{\mathrm{in}}$.
An analog voltage comparator that compares $V_{\mathrm{in}}$ to the output of the internal DAC and outputs the result of the comparison to the successive-approximation register (SAR).
A successive-approximation register subcircuit designed to supply an approximate digital code of $V_{\mathrm{in}}$ to the internal DAC.
An internal reference DAC that, for comparison with $V_{\mathrm{in}}$, supplies the comparator with an analog voltage equal to the digital code output of the SAR.
The successive approximation register is initialized so that the most significant bit (MSB) is equal to a digital 1. This code is fed into the DAC, which then supplies the analog equivalent of this digital code into the comparator circuit for comparison with the sampled input voltage. If this analog voltage exceeds $V_{\mathrm{in}}$, then the comparator causes the SAR to reset this bit; otherwise, the bit is left as 1. Then the next bit is set to 1 and the same test is done, continuing this binary search until every bit in the SAR has been tested. The resulting code is the digital approximation of the sampled input voltage and is finally output by the SAR at the end of the conversion (EOC).
Mathematically, let $V_{\mathrm{in}} = x\, V_{\mathrm{ref}}$, so $x$ in $[-1, 1]$ is the normalized input voltage. The objective is to approximately digitize $x$ to an accuracy of $1/2^n$. The algorithm proceeds as follows:
Initial approximation $x_0 = 0$.
$i$-th approximation $x_i = x_{i-1} - s(x_{i-1} - x)/2^i$, where $s(x)$ is the signum function ($s(x) = +1$ for $x \ge 0$, $s(x) = -1$ for $x < 0$). It follows using mathematical induction that $|x_n - x| \le 1/2^n$.
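A compact simulation of the SAR binary search (our sketch; the 8-bit resolution and unipolar input range [0, Vref) are illustrative assumptions, whereas the normalized algorithm above uses a signed range):

class SarAdcDemo {
    /** Simulates an n-bit SAR conversion of vIn against vRef (both volts, vIn in [0, vRef)). */
    static int convert(double vIn, double vRef, int nBits) {
        int code = 0;
        for (int bit = nBits - 1; bit >= 0; bit--) {
            int trial = code | (1 << bit);               // tentatively set this bit (MSB first)
            double dacOut = vRef * trial / (1 << nBits); // internal DAC output for the trial code
            if (dacOut <= vIn) {
                code = trial;                            // keep the bit if the DAC does not exceed vIn
            }
        }
        return code;
    }

    public static void main(String[] args) {
        int code = convert(1.65, 3.3, 8);                // half-scale input, 8-bit converter
        System.out.println("code = " + code);            // prints 128, i.e. mid-scale
    }
}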
As shown in the above algorithm, a SAR ADC requires:
An input voltage source $V_{\mathrm{in}}$.
A reference voltage source to nor
|
https://en.wikipedia.org/wiki/Antibody%20microarray
|
An antibody microarray (also known as antibody array) is a specific form of protein microarray. In this technology, a collection of capture antibodies are spotted and fixed on a solid surface such as glass, plastic, membrane, or silicon chip, and the interaction between the antibody and its target antigen is detected. Antibody microarrays are often used for detecting protein expression from various biofluids including serum, plasma and cell or tissue lysates. Antibody arrays may be used for both basic research and medical and diagnostic applications.
Background
The concept and methodology of antibody microarrays were first introduced by Tse Wen Chang in 1983 in a scientific publication and a series of patents, when he was working at Centocor in Malvern, Pennsylvania. Chang coined the term “antibody matrix” and discussed “array” arrangement of minute antibody spots on small glass or plastic surfaces. He demonstrated that a 10×10 (100 in total) and 20×20 (400 in total) grid of antibody spots could be placed on a 1×1 cm surface. He also estimated that if an antibody is coated at a 10 μg/mL concentration, which is optimal for most antibodies, 1 mg of antibody can make 2,000,000 dots of 0.25 mm diameter. Chang's invention focused on the employment of antibody microarrays for the detection and quantification of cells bearing certain surface antigens, such as CD antigens and HLA allotypic antigens, particulate antigens, such as viruses and bacteria, and soluble antigens. The principle of "one sample application, multiple determinations", assay configuration, and mechanics for placing absorbent dots described in the paper and patents should be generally applicable to different kinds of microarrays. When Tse Wen Chang and Nancy T. Chang were setting up Tanox, Inc. in Houston, Texas in 1986, they purchased the rights on the antibody matrix patents from Centocor as part of the technology base to build their new startup. Their first product in development was an assay, te
|
https://en.wikipedia.org/wiki/Nairi%20%28computer%29
|
The first Nairi computer was developed and launched into production in 1964 at the Yerevan Research Institute of Mathematical Machines (Yerevan, Armenia), and was chiefly designed by Hrachya Ye. Hovsepyan. In 1965 a modified version called Nairi-M was developed, followed in 1967 by the Nairi-S and Nairi-2. Nairi-3 and Nairi-3-1, which used hybrid integrated circuits, were developed in 1970. These computers were used for a wide class of tasks in a variety of areas, including mechanical engineering and economics.
In 1971, the developers of the Nairi computer were awarded the State Prize of the USSR.
Nairi-1
The development of the machine began in 1962 and was completed in 1964. The chief designer was Hrachya Yesaevich Hovsepyan; the leading design engineer was Mikhail Artavazdovich Khachatryan.
The architectural solution used in this machine has been patented in England, Japan, France and Italy.
Specification
The processor is 36-bit.
The clock frequency is 50 kHz.
ROM (called DZU, long-term memory, in the original documentation) was of a cassette type, each cassette holding 2048 words of 36 bits; it was used to store microprograms (2048 72-bit cells) and standard routines (12,288 36-bit cells). Part of the ROM was delivered "empty", allowing users to flash their most frequently used programs and thus avoid entering them from the console or punched tape.
The amount of RAM is 1024 words (8 cassettes of 128 cells), plus 5 registers.
Operation speeds: addition of fixed-point numbers, 2–3 thousand ops/s; multiplication, 100 ops/s; floating-point operations, 100 ops/s.
Since 1964, the machine has been produced at two factories in Armenia, as well as at the Kazan computer plant (from 1964 to 1970, about 500 machines were produced in total). In the spring of 1965, the computer was presented at a fair in Leipzig (Germany).
There were a number of modifications of the machine:
"Nairi-M" (1965) - the photoreader FS-1501 and the
|
https://en.wikipedia.org/wiki/Amylolytic%20process
|
Amylolytic process or amylolysis is the conversion of starch into sugar by the action of acids or enzymes such as amylase.
Starch accumulates inside the leaves of plants during periods of light, when it can be produced by photosynthesis; this ability disappears in the dark, since without illumination there is not enough energy to drive the reaction forward. The conversion of starch into sugar is carried out by the enzyme amylase.
Different pathways of amylase & location of amylase activity
The process by which amylase breaks down starch for sugar consumption is not the same in all organisms that use amylase to break down stored starch; there are different amylase pathways involved in starch degradation. Starch degradation into sugar by amylase was long thought to take place mainly in the chloroplast, but this has been shown to be incorrect. One example is the spinach plant, in which the chloroplast contains both alpha- and beta-amylase (different versions of amylase involved in the breakdown of starch, which differ in their substrate specificity). In spinach leaves, the extrachloroplastic region shows the highest level of amylolytic starch degradation. The difference between chloroplastic and extrachloroplastic starch degradation lies in the preferred amylase pathway, either beta- or alpha-amylase. In spinach leaves alpha-amylase is preferred, but in plants such as wheat, barley, and peas, beta-amylase is preferred.
Usage
The amylolytic process is used in the brewing of alcohol from grains. Since grains contain starches but little to no simple sugars, the sugar needed to produce alcohol is derived from starch via the amylolytic process. In beer brewing, this is done through malting. In sake brewing, the mold Aspergillus oryzae provides amylolysis, and in Tapai, Saccharomyces cerevisiae. The amylolytic process can also be used to allow
|
https://en.wikipedia.org/wiki/Advanced%20Message%20Queuing%20Protocol
|
The Advanced Message Queuing Protocol (AMQP) is an open standard application layer protocol for message-oriented middleware. The defining features of AMQP are message orientation, queuing, routing (including point-to-point and publish-and-subscribe), reliability and security.
AMQP mandates the behavior of the messaging provider and client to the extent that implementations from different vendors are interoperable, in the same way as SMTP, HTTP, FTP, etc. have created interoperable systems. Previous standardizations of middleware have happened at the API level (e.g. JMS) and were focused on standardizing programmer interaction with different middleware implementations, rather than on providing interoperability between multiple implementations. Unlike JMS, which defines an API and a set of behaviors that a messaging implementation must provide, AMQP is a wire-level protocol. A wire-level protocol is a description of the format of the data that is sent across the network as a stream of bytes. Consequently, any tool that can create and interpret messages that conform to this data format can interoperate with any other compliant tool irrespective of implementation language.
Overview
AMQP is a binary, application layer protocol, designed to efficiently support a wide variety of messaging applications and communication patterns. It provides flow-controlled, message-oriented communication with message-delivery guarantees such as at-most-once (each message is delivered once or never), at-least-once (each message is certain to be delivered, but may be delivered multiple times) and exactly-once (each message arrives exactly once), and authentication and/or encryption based on SASL and/or TLS. It assumes an underlying reliable transport layer protocol such as Transmission Control Protocol (TCP).
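As a small illustration of AMQP-style point-to-point messaging, the sketch below publishes one persistent message using the pika client, which implements AMQP 0-9-1 (an earlier version of the protocol than AMQP 1.0). The broker address, queue name, and message body are assumptions for illustration.

```python
import pika  # AMQP 0-9-1 client for Python

# Publish one persistent message through an AMQP broker such as
# RabbitMQ, assumed to be running on localhost.
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="task_queue", durable=True)
channel.basic_publish(
    exchange="",                  # default direct exchange
    routing_key="task_queue",     # routes straight to the named queue
    body=b"hello, AMQP",
    properties=pika.BasicProperties(delivery_mode=2),  # persist message
)
connection.close()
```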
The AMQP specification is defined in several layers: (i) a type system, (ii) a symmetric, asynchronous protocol for the transfer of messages fr
|
https://en.wikipedia.org/wiki/Apeirogon
|
In geometry, an apeirogon () or infinite polygon is a polygon with an infinite number of sides. Apeirogons are the two-dimensional case of infinite polytopes. In some literature, the term "apeirogon" may refer only to the regular apeirogon, with an infinite dihedral group of symmetries.
Definitions
Classical constructive definition
Given a point A0 in a Euclidean space and a translation S, define the point Ai to be the point obtained from i applications of the translation S to A0, so Ai = Si(A0). The set of vertices Ai with i any integer, together with edges connecting adjacent vertices, is a sequence of equal-length segments of a line, and is called the regular apeirogon as defined by H. S. M. Coxeter.
A regular apeirogon can be defined as a partition of the Euclidean line E1 into infinitely many equal-length segments. It generalizes the regular n-gon, which may be defined as a partition of the circle S1 into finitely many equal-length segments.
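In symbols, if the translation S is displacement by a fixed vector $\mathbf{v}$, the construction above reads

$$A_i = S^i(A_0) = A_0 + i\,\mathbf{v}, \qquad i \in \mathbb{Z},$$

so every edge $[A_i, A_{i+1}]$ has the same length $\lVert\mathbf{v}\rVert$ and the vertices partition the line through $A_0$ in the direction of $\mathbf{v}$ into equal segments.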
Modern abstract definition
An abstract polytope is a partially ordered set P (whose elements are called faces) with properties modeling those of the inclusions of faces of convex polytopes. The rank (or dimension) of an abstract polytope is determined by the length of the maximal ordered chains of its faces, and an abstract polytope of rank n is called an abstract n-polytope.
For abstract polytopes of rank 2, this means that: A) the elements of the partially ordered set are sets of vertices with either zero vertices (the empty set), one vertex, two vertices (an edge), or the entire vertex set (a two-dimensional face), ordered by inclusion of sets; B) each vertex belongs to exactly two edges; C) the undirected graph formed by the vertices and edges is connected.
An abstract polytope is called an abstract apeirotope if it has infinitely many elements; an abstract 2-apeirotope is called an abstract apeirogon.
In an abstract polytope, a flag is a collection of one face of each dimension, all incident to each other (that is
|
https://en.wikipedia.org/wiki/WAN%20optimization
|
WAN optimization is a collection of techniques for improving data transfer across wide area networks (WANs). In 2008, the WAN optimization market was estimated at $1 billion and was forecast to grow to $4.4 billion by 2014, according to Gartner, a technology research firm. In 2015 Gartner estimated the WAN optimization market to be a $1.1 billion market.
The most common measures of TCP data-transfer efficiencies (i.e., optimization) are throughput, bandwidth requirements, latency, protocol optimization, and congestion, as manifested in dropped packets. In addition, the WAN itself can be classified with regard to the distance between endpoints and the amounts of data transferred. Two common business WAN topologies are Branch to Headquarters and Data Center to Data Center (DC2DC). In general, "Branch" WAN links are closer, use less bandwidth, support more simultaneous connections, support smaller connections and more short-lived connections, and handle a greater variety of protocols. They are used for business applications such as email, content management systems, database applications, and Web delivery. In comparison, "DC2DC" WAN links tend to require more bandwidth, are more distant, and involve fewer connections, but those connections are bigger (100 Mbit/s to 1 Gbit/s flows) and of longer duration. Traffic on a "DC2DC" WAN may include replication, back up, data migration, virtualization, and other Business Continuity/Disaster Recovery (BC/DR) flows.
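As a back-of-the-envelope illustration of why distance (latency) matters on DC2DC links, a single TCP connection's throughput is bounded by its window size divided by the round-trip time. The numbers in the sketch below are hypothetical.

```python
# Rule-of-thumb bound on single-connection TCP throughput:
# throughput <= window_size / round_trip_time (values hypothetical).
def max_tcp_throughput_bps(window_bytes, rtt_seconds):
    return 8 * window_bytes / rtt_seconds

# A 64 KiB window over an 80 ms inter-datacenter link:
# max_tcp_throughput_bps(65536, 0.080) ~= 6.6 Mbit/s, far below the
# 100 Mbit/s to 1 Gbit/s flows mentioned above -- hence the interest
# in window scaling and protocol optimization.
```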
WAN optimization has been the subject of extensive academic research almost since the advent of the WAN. In the early 2000s, research in both the private and public sectors turned to improving the end-to-end throughput of TCP, and the target of the first proprietary WAN optimization solutions was the Branch WAN. In recent years, however, the rapid growth of digital data, and the concomitant needs to store and protect it, has presented a need for DC2DC WAN optimization. For example, such optimizations can be performed t
|
https://en.wikipedia.org/wiki/Demand%20characteristics
|
In social research, particularly in psychology, the term demand characteristic refers to an experimental artifact where participants form an interpretation of the experiment's purpose and subconsciously change their behavior to fit that interpretation. Typically, demand characteristics are considered an extraneous variable, exerting an effect on behavior other than that intended by the experimenter. Pioneering research was conducted on demand characteristics by Martin Orne.
A possible cause for demand characteristics is participants' expectation that they will somehow be evaluated, leading them to figure out a way to 'beat' the experiment and attain good scores in the alleged evaluation. Rather than giving an honest answer, participants may change some or all of their answers to match their interpretation of the experimenter's requirements; in this way demand characteristics can change participants' behaviour to appear more socially or morally responsible. Demand characteristics cannot be eliminated from experiments, but they can be studied to see their effect on such experiments.
Examples of common demand characteristics
Common demand characteristics include:
Rumors of the study – any information, true or false, circulated about the experiment outside of the experiment itself.
Setting of the laboratory – the location where the experiment is being performed, if it is significant.
Explicit or implicit communication – any communication between the participant and experimenter, whether it be verbal or non-verbal, that may influence their perception of the experiment.
Weber and Cook have described some demand characteristics as involving the participant taking on a role in the experiment. These roles include:
The good-participant role (also known as the please-you effect) in which the participant attempts to discern the experimenter's hypotheses and to confirm them. The participant does not want to "ruin" the experiment.
The negative-participant role (also known as the screw
|
https://en.wikipedia.org/wiki/Paleobiology
|
Paleobiology (or palaeobiology) is an interdisciplinary field that combines the methods and findings of both the earth sciences and the life sciences. Paleobiology is not to be confused with geobiology, which focuses more on the interactions between the biosphere and the physical Earth.
Paleobiological research uses biological field research of current biota and of fossils millions of years old to answer questions about the molecular evolution and the evolutionary history of life. In this scientific quest, macrofossils, microfossils and trace fossils are typically analyzed. However, the 21st-century biochemical analysis of DNA and RNA samples offers much promise, as does the biometric construction of phylogenetic trees.
An investigator in this field is known as a paleobiologist.
Important research areas
Paleobotany applies the principles and methods of paleobiology to flora, especially green land plants, but also including the fungi and seaweeds (algae). See also mycology, phycology and dendrochronology.
Paleozoology uses the methods and principles of paleobiology to understand fauna, both vertebrates and invertebrates. See also vertebrate and invertebrate paleontology, as well as paleoanthropology.
Micropaleontology applies paleobiologic principles and methods to archaea, bacteria, protists and microscopic pollen/spores. See also microfossils and palynology.
Paleovirology examines the evolutionary history of viruses on paleobiological timescales.
Paleobiochemistry uses the methods and principles of organic chemistry to detect and analyze molecular-level evidence of ancient life, both microscopic and macroscopic.
Paleoecology examines past ecosystems, climates, and geographies so as to better comprehend prehistoric life.
Taphonomy analyzes the post-mortem history (for example, decay and decomposition) of an individual organism in order to gain insight on the behavior, death and environment of the fossilized organism.
Paleoichnology analyzes the tracks, bo
|
https://en.wikipedia.org/wiki/GeeXboX
|
GeeXboX (stylized as GEExBox) is a free Linux distribution providing a media center software suite for personal computers. GeeXboX 2.0 and later uses XBMC for media playback and is implemented as Live USB and Live CD options. As such, the system does not need to be permanently installed to a hard drive, as most modern operating systems would. Instead, the computer can be booted with the GeeXboX CD when media playback is desired. It is based on the Debian distribution of Linux.
This is a reasonable approach for those who do not need media playback services while performing other tasks with the same computer, for users who wish to repurpose older computers as media centers, and for those seeking a free alternative to Windows XP Media Center Edition.
An unofficial port of GeeXboX 1.x also runs on the Wii.
History
See also
List of free television software
XBMC Media Center, the cross-platform open source media player software that GeeXboX 2.0 and later uses as a front end GUI.
References
External links
ARM operating systems
Embedded Linux distributions
Free media players
Linux distributions used in appliances
Linux-based devices
Linux distributions
|
https://en.wikipedia.org/wiki/Signed%20distance%20function
|
In mathematics and its applications, the signed distance function (or oriented distance function) is the orthogonal distance of a given point x to the boundary of a set Ω in a metric space, with the sign determined by whether or not x is in the interior of Ω. The function has positive values at points x inside Ω, it decreases in value as x approaches the boundary of Ω where the signed distance function is zero, and it takes negative values outside of Ω. However, the alternative convention is also sometimes taken instead (i.e., negative inside Ω and positive outside).
Definition
If Ω is a subset of a metric space X with metric d, then the signed distance function f is defined by

$$f(x) = \begin{cases} d(x, \partial\Omega) & \text{if } x \in \Omega, \\ -d(x, \partial\Omega) & \text{if } x \notin \Omega, \end{cases}$$

where $\partial\Omega$ denotes the boundary of $\Omega$. For any $x \in X$,

$$d(x, \partial\Omega) := \inf_{y \in \partial\Omega} d(x, y),$$

where $\inf$ denotes the infimum.
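As a concrete example under this sign convention (positive inside), the signed distance to a disk in the plane has a closed form; the Python sketch below is illustrative.

```python
import numpy as np

def signed_distance_disk(points, center, radius):
    """Signed distance to a disk: positive inside, zero on the
    boundary, negative outside (the convention used above)."""
    d = np.linalg.norm(np.asarray(points) - np.asarray(center), axis=-1)
    return radius - d

# e.g. signed_distance_disk([(0.0, 0.0), (2.0, 0.0)], (0.0, 0.0), 1.0)
# -> array([ 1., -1.])
```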
Properties in Euclidean space
If Ω is a subset of the Euclidean space Rn with piecewise smooth boundary, then the signed distance function is differentiable almost everywhere, and its gradient satisfies the eikonal equation $|\nabla f| = 1$.
If the boundary of Ω is Ck for k ≥ 2 (see Differentiability classes) then f is Ck on points sufficiently close to the boundary of Ω. In particular, on the boundary f satisfies $\nabla f(x) = N(x)$,
where N is the inward normal vector field. The signed distance function is thus a differentiable extension of the normal vector field. In particular, the Hessian of the signed distance function on the boundary of Ω gives the Weingarten map.
If, further, Γ is a region sufficiently close to the boundary of Ω that f is twice continuously differentiable on it, then there is an explicit formula involving the Weingarten map Wx for the Jacobian of changing variables in terms of the signed distance function and nearest boundary point. Specifically, if T(∂Ω, μ) is the set of points within distance μ of the boundary of Ω (i.e. the tubular neighbourhood of radius μ), and g is an absolutely integrable function on Γ, then

$$\int_{T(\partial\Omega,\mu)} g(x)\,dx = \int_{\partial\Omega}\int_{-\mu}^{\mu} g(u + \lambda N(u))\,\det(1 - \lambda W_u)\,d\lambda\,dS_u,$$

where $\det$ denotes the determinant and $dS_u$ indicates that we are taking the surface integral.
Algorithms
Algorit
|
https://en.wikipedia.org/wiki/Sergey%20Mergelyan
|
Sergey Mergelyan (19 May 1928 – 20 August 2008) was a Soviet and Armenian mathematician who made major contributions to approximation theory. Modern complex approximation theory is built on Mergelyan's classical work. He was a corresponding member of the Academy of Sciences of the Soviet Union (from 1953) and a member of the NAS ASSR (from 1956).
The surname "Mergelov" given at birth was changed for patriotic reasons to the more Armenian-sounding "Mergelyan" by the mathematician himself before his trip to Moscow.
He was a laureate of the Stalin Prize (1952) and the Order of St. Mesrop Mashtots (2008). He was the youngest Doctor of Sciences in the history of the USSR (at the age of 20), and the youngest corresponding member of the Academy of Sciences of the Soviet Union (the title was conferred at the age of 24). During his postgraduate studies, the 20-year-old Mergelyan solved one of the fundamental problems of the mathematical theory of functions, which had remained open for more than 70 years. His theorem on the possibility of uniform polynomial approximation of functions of a complex variable is known as the classical Mergelyan theorem and is included in courses on the theory of functions.
Although he himself was not a computer designer, Mergelyan was a pioneer in Soviet computational mathematics.
Biography
Early years
Sergey Mergelyan was born on 19 May 1928 in Simferopol to an Armenian family. His father, Nikita (Mkrtich) Ivanovich Mergelov, was a former private entrepreneur (Nepman); his mother, Lyudmila Ivanovna Vyrodova, was the daughter of the manager of the Azov-Black Sea bank, who was shot in 1918. In 1936 Sergey's father was building a paper mill in Yelets, but soon he and his family were deported to the Siberian settlement of Narym, Tomsk Oblast. In the Siberian frost Sergey suffered a serious illness and narrowly survived. In 1937 the mother and son were acquitted by a court decision and returned to Kerch, and in 1938 Lyudmila Ivanov
|
https://en.wikipedia.org/wiki/Progressive%20muscular%20atrophy
|
Progressive muscular atrophy (PMA), also called Duchenne–Aran disease and Duchenne–Aran muscular atrophy, is a disorder characterised by the degeneration of lower motor neurons, resulting in generalised, progressive loss of muscle function.
PMA is classified among motor neuron diseases (MND) where it is thought to account for around 4% of all MND cases.
PMA affects only the lower motor neurons, in contrast to amyotrophic lateral sclerosis (ALS), the most common MND, which affects both the upper and lower motor neurons, or primary lateral sclerosis, another MND, which affects only the upper motor neurons. The distinction is important because PMA is associated with a better prognosis than ALS.
Signs and symptoms
As a result of lower motor neuron degeneration, the symptoms of PMA include:
muscle weakness
muscle atrophy
fasciculations
Some patients have symptoms restricted only to the arms or legs (or in some cases just one of either). These cases are referred to as flail limb (either flail arm or flail leg) and are associated with a better prognosis.
Diagnosis
PMA is a diagnosis of exclusion: there is no specific test which can conclusively establish whether a patient has the condition. Instead, a number of other possibilities have to be ruled out, such as multifocal motor neuropathy or spinal muscular atrophy. Tests used in the diagnostic process include MRI, clinical examination, and EMG. EMG tests in patients who do have PMA usually show denervation (neuron death) in most affected body parts, and in some unaffected parts too.
It typically takes longer to be diagnosed with PMA than ALS, an average of 20 months for PMA vs 15 months in ALS.
Differential diagnosis
In contrast to amyotrophic lateral sclerosis or primary lateral sclerosis, PMA is distinguished by the absence of:
brisk reflexes
spasticity
Babinski's sign
emotional lability
Correctly recognizing progressive muscular atrophy as opposed to ALS is important for several reasons.
|
https://en.wikipedia.org/wiki/Immunoglobulin%20superfamily
|
The immunoglobulin superfamily (IgSF) is a large protein superfamily of cell surface and soluble proteins that are involved in the recognition, binding, or adhesion processes of cells. Molecules are categorized as members of this superfamily based on shared structural features with immunoglobulins (also known as antibodies); they all possess a domain known as an immunoglobulin domain or fold. Members of the IgSF include cell surface antigen receptors, co-receptors and co-stimulatory molecules of the immune system, molecules involved in antigen presentation to lymphocytes, cell adhesion molecules, certain cytokine receptors and intracellular muscle proteins. They are commonly associated with roles in the immune system. Otherwise, the sperm-specific protein IZUMO1, a member of the immunoglobulin superfamily, has also been identified as the only sperm membrane protein essential for sperm-egg fusion.
Immunoglobulin domains
Proteins of the IgSF possess a structural domain known as an immunoglobulin (Ig) domain. Ig domains are named after the immunoglobulin molecules. They contain about 70-110 amino acids and are categorized according to their size and function. Ig-domains possess a characteristic Ig-fold, which has a sandwich-like structure formed by two sheets of antiparallel beta strands. Interactions between hydrophobic amino acids on the inner side of the sandwich, together with highly conserved disulfide bonds formed between cysteine residues in the B and F strands, stabilize the Ig-fold.
Classification
The Ig like domains can be classified as IgV, IgC1, IgC2, or IgI.
Most Ig domains are either variable (IgV) or constant (IgC).
IgV: IgV domains with 9 beta strands are generally longer than IgC domains with 7 beta strands.
IgC1 and IgC2: Ig domains of some members of the IgSF resemble IgV domains in the amino acid sequence, yet are similar in size to IgC domains. These are called IgC2 domains, while standard IgC domains are called IgC1 domains.
IgI: Other Ig domains exi
|
https://en.wikipedia.org/wiki/Dry%20run%20%28testing%29
|
A dry run (or practice run) is a software testing process used to make sure that a system works correctly and will not result in severe failure. For example, rsync, a utility for transferring and synchronizing data between networked computers or storage drives, has a "dry-run" option users can use to check that their command-line arguments are valid and to simulate what would happen when actually copying the data.
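The same pattern is easy to express in code: a routine accepts a dry-run flag and reports the actions it would take instead of performing them. The Python sketch below is illustrative only and does not reproduce rsync's actual behavior.

```python
import shutil
from pathlib import Path

def sync_files(src: Path, dest: Path, dry_run: bool = True):
    """Copy files from src into dest, or, in a dry run, merely
    report what would be copied (illustrative sketch)."""
    for path in src.rglob("*"):
        if path.is_file():
            target = dest / path.relative_to(src)
            if dry_run:
                print(f"would copy {path} -> {target}")
            else:
                target.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(path, target)
```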
In acceptance procedures (such as factory acceptance testing, for example), a "dry run" is when the factory, a subcontractor, performs a complete test of the system it has to deliver before it is actually accepted by the customer.
Etymology
The term dry run appears to have originated from fire departments in the US. In order to practice, they would carry out dispatches of the fire brigade in which no water was pumped. A run with real fire and water was referred to as a wet run. The more general usage of the term seems to have arisen from widespread use by the United States Armed Forces during World War II.
See also
Code review
Pilot experiment
Preview (computing)
References
External links
World Wide Words: Dry Run
Wiktionary - dry run
Tests
Software testing
|
https://en.wikipedia.org/wiki/Lorcon
|
lorcon (acronym for Loss Of Radio CONnectivity) is an open source network tool. It is a library for injecting 802.11 (WLAN) frames, capable of injecting via multiple driver frameworks, without the need to change the application code. Lorcon is built by patching the third-party MadWifi driver for cards based on the Qualcomm Atheros wireless chipset.
The project is maintained by Joshua Wright and Michael Kershaw ("dragorn").
References
External links
Official Home Page
Network analyzers
Unix security-related software
Unix network-related software
Computer security exploits
IEEE 802.11
|
https://en.wikipedia.org/wiki/Quasi-open%20map
|
In topology, a branch of mathematics, a quasi-open map or quasi-interior map is a function which has properties similar to those of continuous maps.
However, neither property implies the other: a continuous map need not be quasi-open, and a quasi-open map need not be continuous.
Definition
A function $f : X \to Y$ between topological spaces $X$ and $Y$ is quasi-open if, for any non-empty open set $U \subseteq X$, the interior of $f(U)$ in $Y$ is non-empty.
Properties
Let $f : X \to Y$ be a map between topological spaces.
If $f$ is continuous, it need not be quasi-open; conversely, if $f$ is quasi-open, it need not be continuous.
If $f$ is open, then $f$ is quasi-open.
If $f$ is a local homeomorphism, then $f$ is quasi-open.
The composition of two quasi-open maps is again quasi-open.
See also
Notes
References
Topology
|
https://en.wikipedia.org/wiki/Gain%20%28projection%20screens%29
|
Gain is a property of a projection screen, and is one of the specifications quoted by projection screen manufacturers.
Interpretation
The number that is typically measured is called the peak gain at zero degrees viewing axis, and represents the gain value for a viewer seated along a line perpendicular to the screen's viewing surface. The gain value represents the ratio of brightness of the screen relative to a set standard (in this case, a sheet of magnesium carbonate). Screens with a higher brightness than this standard are rated with a gain higher than 1.0, while screens with lower brightness are rated from 0.0 to 1.0. Since a projection screen is designed to scatter the impinging light back to the viewers, the scattering can either be highly diffuse or highly concentrated. Highly concentrated scatter results in a higher screen gain (a brighter image) at the cost of a more limited viewing angle (as measured by the half-gain viewing angle), whereas highly diffuse scattering results in lower screen gain (a dimmer image) with the benefit of a wider viewing angle.
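Numerically, the definition amounts to a simple ratio of luminance measurements taken under identical illumination; the readings in the sketch below are hypothetical.

```python
# Peak screen gain as defined above: luminance of the screen divided
# by that of the magnesium carbonate reference (hypothetical values).
def screen_gain(screen_luminance, reference_luminance):
    return screen_luminance / reference_luminance

# e.g. screen_gain(156.0, 120.0) -> 1.3, a screen 30% brighter than
# the reference along the perpendicular viewing axis
```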
Sources
Display technology
|
https://en.wikipedia.org/wiki/Principal%20meridian
|
A principal meridian is a meridian used for survey control in a large region.
Canada
The Dominion Land Survey of Western Canada took its origin at the First (or Principal) Meridian, located at 97°27′28.41″ west of Greenwich, just west of Winnipeg, Manitoba. This line is exactly ten miles west of the Red River at the Canada–United States border.
Six other meridians were designated at four-degree intervals westward, with the seventh located in British Columbia; the second and fourth meridians form the general eastern border and the western border of Saskatchewan.
United States
In the United States Public Land Survey System, a principal meridian is the principal north-south line used for survey control in a large region, and which divides townships between east and west. The meridian meets its corresponding baseline at the point of origin, or initial point, for the land survey. For example, the Mount Diablo Meridian, used for surveys in California and Nevada, runs north-south through the summit of Mount Diablo.
Often, meridians are marked with roads, such as Meridian Avenue in San Jose, California, and Meridian Road in Vacaville, California, both on the Mount Diablo Meridian; Meridian Road in Wichita, Kansas, on the Sixth Principal Meridian; and Meridian Avenue in several western Washington counties, generally following the Willamette Meridian. Baseline Road or Base Line Street extends from Highland, California, east of San Bernardino, to La Verne, California, where it meets Foothill Boulevard.
See also
Cardo
Baseline (surveying)
List of principal and guide meridians and base lines of the United States
External links
The Principal Meridian Project (US)
History of the Rectangular Survey System Note: this is a large file, approximately 46MB. Searchable PDF prepared by the author, C. A. White.
Resources page of the U.S. Department of the Interior, Bureau of Land Management
Surveying
Meridians (geography)
|
https://en.wikipedia.org/wiki/De%20novo%20synthesis
|
In chemistry, de novo synthesis () refers to the synthesis of complex molecules from simple molecules such as sugars or amino acids, as opposed to recycling after partial degradation. For example, nucleotides are not needed in the diet as they can be constructed from small precursor molecules such as formate and aspartate. Methionine, on the other hand, is needed in the diet because while it can be degraded to and then regenerated from homocysteine, it cannot be synthesized de novo.
Nucleotide
De novo pathways of nucleotides do not use free bases: adenine (abbreviated as A), guanine (G), cytosine (C), thymine (T), or uracil (U). The purine ring is built up one atom or a few atoms at a time and attached to ribose throughout the process. The pyrimidine ring is synthesized as orotate, attached to ribose phosphate, and later converted to the common pyrimidine nucleotides.
Cholesterol
Cholesterol is an essential structural component of animal cell membranes. Cholesterol also serves as a precursor for the biosynthesis of steroid hormones, bile acid and vitamin D. In mammals cholesterol is either absorbed from dietary sources or is synthesized de novo. Up to 70-80% of de novo cholesterol synthesis occurs in the liver, and about 10% of de novo cholesterol synthesis occurs in the small intestine. Cancer cells require cholesterol for cell membranes, so cancer cells contain many enzymes for de novo cholesterol synthesis from acetyl-CoA.
Fatty-acid (de novo lipogenesis)
De novo lipogenesis (DNL) is the process by which carbohydrates (primarily, especially after a high-carbohydrate meal) from the circulation are converted into fatty acids, which can be further converted into triglycerides or other lipids. Acetate and some amino acids (notably leucine and isoleucine) can also be carbon sources for DNL.
Normally, de novo lipogenesis occurs primarily in adipose tissue. But in conditions of obesity, insulin resistance, or type 2 diabetes de novo lipogenesis is reduced in adipos
|
https://en.wikipedia.org/wiki/System%20of%20systems%20engineering
|
System of systems engineering (SoSE) is a set of developing processes, tools, and methods for designing, re-designing and deploying solutions to system-of-systems challenges.
Overview
System of Systems Engineering (SoSE) methodology is heavily used in U.S. Department of Defense applications, but is increasingly being applied to non-defense related problems such as architectural design of problems in air and auto transportation, healthcare, global communication networks, search and rescue, space exploration, industry 4.0 and many other System of Systems application domains. SoSE is more than systems engineering of monolithic, complex systems because design for System-of-Systems problems is performed under some level of uncertainty in the requirements and the constituent systems, and it involves considerations in multiple levels and domains. Whereas systems engineering focuses on building the system right, SoSE focuses on choosing the right system(s) and their interactions to satisfy the requirements.
System-of-Systems Engineering and Systems Engineering are related but different fields of study. Whereas systems engineering addresses the development and operations of monolithic products, SoSE addresses the development and operations of evolving programs. In other words, traditional systems engineering seeks to optimize an individual system (i.e., the product), while SoSE seeks to optimize a network of various interacting legacy and new systems brought together to satisfy multiple objectives of the program. SoSE should enable decision-makers to understand the implications of various choices on technical performance, costs, extensibility and flexibility over time; thus, an effective SoSE methodology should prepare decision-makers to design informed architectural solutions for System-of-Systems problems.
Due to varied methodology and domains of applications in existing literature, there does not exist a single unified consensus for processes involved in System-of-Sys
|
https://en.wikipedia.org/wiki/Laboratory%20automation
|
Laboratory automation is a multi-disciplinary strategy to research, develop, optimize and capitalize on technologies in the laboratory that enable new and improved processes. Laboratory automation professionals are academic, commercial and government researchers, scientists and engineers who conduct research and develop new technologies to increase productivity, elevate experimental data quality, reduce lab process cycle times, or enable experimentation that otherwise would be impossible.
The most widely known application of laboratory automation technology is laboratory robotics. More generally, the field of laboratory automation comprises many different automated laboratory instruments, devices (the most common being autosamplers), software algorithms, and methodologies used to enable, expedite and increase the efficiency and effectiveness of scientific research in laboratories.
The application of technology in today's laboratories is required to achieve timely progress and remain competitive. Laboratories devoted to activities such as high-throughput screening, combinatorial chemistry, automated clinical and analytical testing, diagnostics, large-scale biorepositories, and many others, would not exist without advancements in laboratory automation. Some universities offer entire programs that focus on lab technologies. For example, Indiana University-Purdue University at Indianapolis offers a graduate program devoted to Laboratory Informatics. Also, the Keck Graduate Institute in California offers a graduate degree with an emphasis on development of assays, instrumentation and data analysis tools required for clinical diagnostics, high-throughput screening, genotyping, microarray technologies, proteomics, imaging and other applications.
History
At least since 1875 there have been reports of automated devices for scientific investigation. These first devices were mostly built by scientists themselves in order to solve problems in the laboratory. After the s
|
https://en.wikipedia.org/wiki/Global%20Environment%20for%20Network%20Innovations
|
The Global Environment for Network Innovations (GENI) is a facility concept being explored by the United States computing community with support from the National Science Foundation. The goal of GENI is to enhance experimental research in computer networking and distributed systems, and to accelerate the transition of this research into products and services that will improve the economic competitiveness of the United States.
GENI planning efforts are organized around several focus areas, including facility architecture, the backbone network, distributed services, wireless/mobile/sensor subnetworks, and research coordination amongst these.
See also
Internet2
Future Internet
AKARI Project in Japan
References
External links
GENI home page
NSF GENI Initiative overview.
NSF GENI Project Office solicitation.
Foreign, independent presentation on GENI.
A news article describing GENI plans.
A news article referring to GENI.
Another news article regarding GENI.
Computer network organizations
|
https://en.wikipedia.org/wiki/Turboexpander
|
A turboexpander, also referred to as a turbo-expander or an expansion turbine, is a centrifugal or axial-flow turbine, through which a high-pressure gas is expanded to produce work that is often used to drive a compressor or generator.
Because work is extracted from the expanding high-pressure gas, the expansion is approximated by an isentropic process (i.e., a constant-entropy process), and the low-pressure exhaust gas from the turbine is at a very low temperature, −150 °C or less, depending upon the operating pressure and gas properties. Partial liquefaction of the expanded gas is not uncommon.
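For a rough feel for the numbers, a perfectly isentropic ideal-gas expansion obeys $T_2 = T_1 (P_2/P_1)^{(\gamma-1)/\gamma}$. The Python sketch below uses hypothetical conditions; a real design would apply an isentropic efficiency and real-gas properties.

```python
# Ideal-gas estimate of the outlet temperature of an isentropic
# expansion (illustrative; real turboexpanders are analyzed with an
# isentropic efficiency and real-gas equations of state).
def isentropic_outlet_temperature(t_in_k, p_in, p_out, gamma):
    return t_in_k * (p_out / p_in) ** ((gamma - 1.0) / gamma)

# e.g. a methane-rich gas expanded from 300 K, 60 bar to 20 bar with
# gamma ~ 1.3: isentropic_outlet_temperature(300.0, 60e5, 20e5, 1.3)
# ~= 233 K, a drop of roughly 67 K from a 3:1 pressure ratio
```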
Turboexpanders are widely used as sources of refrigeration in industrial processes such as the extraction of ethane and natural-gas liquids (NGLs) from natural gas, the liquefaction of gases (such as oxygen, nitrogen, helium, argon and krypton) and other low-temperature processes.
Turboexpanders currently in operation range in size from about 750 W to about 7.5 MW (1 hp to about 10,000 hp).
Applications
Although turboexpanders are commonly used in low-temperature processes, they are used in many other applications. This section discusses one of the low-temperature processes, as well as some of the other applications.
Extracting hydrocarbon liquids from natural gas
Raw natural gas consists primarily of methane (CH4), the shortest and lightest hydrocarbon molecule, along with various amounts of heavier hydrocarbon gases such as ethane (C2H6), propane (C3H8), normal butane (n-C4H10), isobutane (i-C4H10), pentanes and even higher-molecular-mass hydrocarbons. The raw gas also contains various amounts of acid gases such as carbon dioxide (CO2), hydrogen sulfide (H2S) and mercaptans such as methanethiol (CH3SH) and ethanethiol (C2H5SH).
When processed into finished by-products (see Natural-gas processing), these heavier hydrocarbons are collectively referred to as NGL (natural-gas liquids). The extraction of the NGL often involves a turboexpander and a low-temperature d
|
https://en.wikipedia.org/wiki/List%20of%20people%20with%20breast%20cancer
|
This list of notable people with breast cancer includes people who made significant contributions to their chosen field and who were diagnosed with breast cancer at some point in their lives, as confirmed by public information. Diagnosis dates are listed where the information is known. Breast cancer is the second most common cancer in women after skin cancer. According to the United States National Cancer Institute, the rate of new cases of female breast cancer was 129.1 per 100,000 women per year. The death rate was 19.9 per 100,000 women per year. These rates are age-adjusted and based on 2014–2018 cases and 2015–2019 deaths. Approximately 12.9 percent of women will be diagnosed with female breast cancer at some point during their lifetime, based on 2016–2018 data. In 2018, there were an estimated 3,676,262 women living with female breast cancer in the United States.
Acting, directing, and filmmaking
Business
Miscellaneous
Music
Politics and government
Royalty
Science
Sports
Television and radio
Visual arts
Writing
See also
List of breast cancer patients by survival status
Notes
Breast cancer
Lists of people by medical condition
Medical lists
|
https://en.wikipedia.org/wiki/Color-tagged%20structure
|
A color-tagged structure is a structure which has been classified by a color to represent the severity of damage or the overall condition of the building. The exact definition for each color may be different in different countries and jurisdictions.
A "red-tagged" structure has been severely damaged to the degree that the structure is too dangerous to inhabit. Similarly, a structure is "yellow-tagged" if it has been moderately damaged to the degree that its habitability is limited (only during the day, for example). A "green-tagged" structure may mean the building is either undamaged or has suffered slight damage, although differences exist at local levels when to use a green tag.
Tagging is performed by government building officials, or, occasionally during disasters, by engineers deputized by the building official. Natural disasters such as earthquakes, floods and mudslides are among the most common causes of a building being red-, yellow- or green-tagged. Usually, after such incidents, the local government body responsible for enforcing the building safety code examines the affected structures and tags them as appropriate.
In some areas of the United States, buildings are marked with a rectangular sign that is red with a white border and a white "X". Such signs provide the same information as "red-tagging" a building. Tagging structures in these ways can warn firefighters and others about hazardous buildings before the buildings are entered.
References
Building engineering
Structural engineering
Disaster management tools
|
https://en.wikipedia.org/wiki/Stream%20capacity
|
The capacity of a stream or river is the total amount of sediment a stream is able to transport. This measurement usually corresponds to the stream power and to the width-integrated bed shear stress across a section of the stream profile. Note that capacity is greater than the load, which is the amount of sediment carried by the stream. Load is generally limited by the sediment available upstream.
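For reference, the stream power mentioned above is commonly written as

$$\Omega = \rho\, g\, Q\, S,$$

where $\rho$ is the density of water, $g$ the gravitational acceleration, $Q$ the discharge, and $S$ the channel slope; this standard form is given here only to anchor the terms above.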
Stream capacity is often mistaken for the stream competency, which is a measure of the maximum size of the particles that the stream can transport, or for the total load, which is the load that a stream carries.
The sediment transported by the stream depends upon the intensity of rainfall and land characteristics.
See also
Bed load
Sediment transport
Suspended load
Wash load
Hydrology
Sedimentology
|
https://en.wikipedia.org/wiki/Information%20Systems%20Research
|
Information Systems Research is a quarterly peer-reviewed academic journal that covers research in the areas of information systems and information technology, including cognitive psychology, economics, computer science, operations research, design science, organization theory and behavior, sociology, and strategic management. It is published by the Institute for Operations Research and the Management Sciences and was in 2007 ranked as one of the most prestigious journals in the information systems discipline. In 2008 it was selected as one of the top 20 professional/academic journals by Bloomberg Businessweek. The current editor-in-chief is Suprateek Sarker, who was preceded by Alok Gupta (University of Minnesota), Ritu Agarwal (2011-2016; University of Maryland, College Park), Vallabh Sambamurthy (2005-2010; Michigan State University), Chris F. Kemerer (2002-2004), Izak Benbasat (1999-2001), John Leslie King (1993-1998), and E. Burton Swanson (1990-1992). According to the Journal Citation Reports, the journal has a 2018 impact factor of 2.457. The journal is a member of the Senior Scholars' 'Basket of Eight'.
References
External links
Academic journals established in 1990
Quarterly journals
Information systems journals
English-language journals
INFORMS academic journals
|
https://en.wikipedia.org/wiki/Godunov%27s%20theorem
|
In numerical analysis and computational fluid dynamics, Godunov's theorem — also known as Godunov's order barrier theorem — is a mathematical theorem important in the development of the theory of high-resolution schemes for the numerical solution of partial differential equations.
The theorem states that:
Linear numerical schemes for solving partial differential equations (PDEs) that have the property of not generating new extrema (monotone schemes) can be at most first-order accurate.
Professor Sergei Godunov originally proved the theorem as a Ph.D. student at Moscow State University. It is his most influential work in the area of applied and numerical mathematics and has had a major impact on science and engineering, particularly in the development of methods used in computational fluid dynamics (CFD) and other computational fields. One of his major contributions was to prove the theorem (Godunov, 1954; Godunov, 1959) that bears his name.
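To see the barrier in action, the sketch below applies the first-order upwind scheme to linear advection: all of its coefficients are non-negative, so it is monotone, and, consistent with the theorem, it is only first-order accurate. The setup is illustrative.

```python
import numpy as np

# First-order upwind scheme for u_t + a u_x = 0 (a > 0):
#   u_j^{n+1} = (1 - c) u_j^n + c u_{j-1}^n,   c = a*dt/dx.
# Both coefficients are >= 0 for 0 <= c <= 1, so the scheme is
# monotone; by Godunov's theorem it can be at most first-order.
def upwind_step(u, c):
    return (1.0 - c) * u + c * np.roll(u, 1)   # periodic boundary

u = np.where(np.arange(100) < 50, 1.0, 0.0)    # monotone step profile
for _ in range(40):
    u = upwind_step(u, c=0.5)
# The advected front is smeared (first-order diffusion) but shows
# no new extrema or spurious oscillations.
```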
The theorem
We generally follow Wesseling (2001).
Aside
Assume a continuum problem described by a PDE is to be computed using a numerical scheme based upon a uniform computational grid and a one-step, constant step-size, M grid point, integration algorithm, either implicit or explicit. Then if $x_j = j\,\Delta x$ and $t^n = n\,\Delta t$, such a scheme can be described by

$$\sum_{m=1}^{M} \beta_m\, \varphi_{j+m}^{\,n+1} = \sum_{m=1}^{M} \alpha_m\, \varphi_{j+m}^{\,n}. \qquad (1)$$

In other words, the solution $\varphi_j^{\,n+1}$ at time $n+1$ and location $j$ is a linear function of the solution at the previous time step $n$. We assume that $\beta_m$ determines $\varphi_j^{\,n+1}$ uniquely. Now, since the above equation represents a linear relationship between $\varphi^{\,n+1}$ and $\varphi^{\,n}$ we can perform a linear transformation to obtain the following equivalent form,

$$\varphi_j^{\,n+1} = \sum_{m} \gamma_m\, \varphi_{j+m}^{\,n}. \qquad (2)$$
Theorem 1: Monotonicity preserving
The above scheme of equation (2) is monotonicity preserving if and only if

$$\gamma_m \ge 0 \quad \text{for all } m. \qquad (3)$$
Proof - Godunov (1959)
Case 1: (sufficient condition)
Assume (3) applies and that $\varphi_j^{\,n}$ is monotonically increasing with $j$.
Then, because $\varphi_j^{\,n} \le \varphi_{j+1}^{\,n} \le \cdots \le \varphi_{j+m}^{\,n}$, it follows that $\varphi_j^{\,n+1} \le \varphi_{j+1}^{\,n+1} \le \cdots \le \varphi_{j+m}^{\,n+1}$, because $\gamma_m \ge 0$.
This means that monotonicity is preserved for this case.
Case 2: (necessary condition)
We prove the ne
|
https://en.wikipedia.org/wiki/List%20of%20aging%20processes
|
Accumulation of lipofuscin
Aging brain
Calorie restriction
Cross-link
Crosslinking of DNA
Degenerative disease
DNA damage theory of aging
Exposure to ultraviolet light
Free-radical damage
Glycation
Life expectancy
Longevity
Maximum life span
Senescence
Stem cell theory of aging
See also
Index of topics related to life extension
Aging processes
|
https://en.wikipedia.org/wiki/Stopped-flow
|
Stopped-flow is an experimental technique for studying chemical reactions with a half time of the order of 1 ms, introduced by Britton Chance and extended by Quentin Gibson. (Other techniques, such as the temperature-jump method, are available for much faster processes.)
Description of the method
Summary
Stopped-flow spectrometry allows chemical kinetics of fast reactions (with half times of the order of milliseconds) to be studied in solution. It was first used primarily to study enzyme-catalyzed reactions. Then the stopped-flow rapidly found its place in almost all biochemistry, biophysics, and chemistry laboratories with a need to follow chemical reactions in the millisecond time scale.
In its simplest form, a stopped-flow mixes two solutions. Small volumes of solutions are rapidly and continuously driven into a high-efficiency mixer. This mixing process then initiates an extremely fast reaction. The newly mixed solution travels to the observation cell and pushes out the contents of the cell (the solution remaining from the previous experiment or from necessary washing steps). The time required for this solution to pass from the mixing point to the observation point is known as dead time. The minimum injection volume will depend on the volume of the mixing cell. Once enough solution has been injected to completely remove the previous solution, the instrument reaches a stationary state and the flow can be stopped. Depending on the syringe drive technology, the flow stop is achieved by using a stop valve called the hard-stop or by using a stop syringe. The stopped-flow also sends a ‘start signal’ to the detector called the trigger so the reaction can be observed. The timing of the trigger is usually software controlled so the user can trigger at the same time the flow stops or a few milliseconds before the stop to check the stationary state has been reached.
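As an illustration of how a stopped-flow trace is typically analyzed, the sketch below fits a single exponential to a simulated signal to extract an observed rate constant, the usual model for a pseudo-first-order reaction followed after mixing. The data, model, and numbers are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit a single exponential to a (simulated) stopped-flow trace to
# extract the observed rate constant k_obs.
def single_exponential(t, amplitude, k_obs, offset):
    return amplitude * np.exp(-k_obs * t) + offset

t = np.linspace(0.0, 0.05, 500)                    # 50 ms of data
rng = np.random.default_rng(0)
signal = single_exponential(t, 0.8, 120.0, 0.1)    # "true" trace
signal += rng.normal(scale=0.01, size=t.size)      # detector noise
params, _ = curve_fit(single_exponential, t, signal, p0=(1.0, 100.0, 0.0))
print(f"fitted k_obs = {params[1]:.1f} s^-1")      # ~120 s^-1
```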
Because each experiment consumes only small volumes of the reactant solutions, this is a very economical technique.
Reactant syringes
Two syringes are filled with solutions that
|
https://en.wikipedia.org/wiki/Block%20and%20bleed%20manifold
|
A block and bleed manifold is a hydraulic manifold that combines one or more block/isolate valves, usually ball valves, and one or more bleed/vent valves, usually ball or needle valves, into one component for interface with other components (pressure measurement transmitters, gauges, switches, etc.) of a hydraulic (fluid) system. The purpose of the block and bleed manifold is to isolate or block the flow of fluid in the system so the fluid from upstream of the manifold does not reach other components of the system that are downstream. The manifold then bleeds off or vents the remaining fluid from the system on the downstream side. For example, a block and bleed manifold would be used to stop the flow of fluids to some component, then vent the fluid from that component's side of the manifold, in order to effect some kind of work (maintenance/repair/replacement) on that component.
Types of valves
Block and Bleed
A block and bleed manifold with one block valve and one bleed valve is also known as an isolation valve or block and bleed valve; a block and bleed manifold with multiple valves is also known as an isolation manifold. This valve is used in combustible gas trains in many industrial applications. Block and bleed needle valves are used in hydraulic and pneumatic systems because the needle valve allows for precise flow regulation when there is low flow in a non-hazardous environment.
Double Block and Bleed (DBB Valves)
These valves replace existing traditional techniques employed by pipeline engineers to generate a double block and bleed configuration in the pipeline. Two block valves and a bleed valve are combined as a unit, or manifold, to be installed for positive isolation. Used for critical process service, DBB valves are for high pressure systems or toxic/hazardous fluid processes. Applications that use DBB valves include instrument drain, chemical injection connection, chemical seal isolation, and gauge isolation. DBB valves do the work of three separa
|
https://en.wikipedia.org/wiki/%C3%89cole%20nationale%20sup%C3%A9rieure%20d%27ing%C3%A9nieurs%20de%20constructions%20a%C3%A9ronautiques
|
The École nationale supérieure d'ingénieurs de constructions aéronautiques (ENSICA), meaning National Higher School of aeronautical constructions, was a French engineering school founded in 1945. It was located in Toulouse. In 2007, Ensica merged with Supaéro to form the Institut supérieur de l'aéronautique et de l'espace (ISAE).
Ensica recruited its students through the French "Concours des Grandes Écoles", a competitive examination that requires study in the "classes préparatoires", two-year courses in which students work intensively on mathematics and physics.
Studies at Ensica lasted for 3 years where students eventually got a Master in Aeronautics.
Areas of study cover all the fundamentals of aeronautics, including: aerodynamics, structures, fluid dynamics, thermal power, electronics, control theory, airframe systems, IT...
Students are also trained in management, manufacturing, certification, and foreign languages.
Main employers are Airbus, Thales, Dassault, Safran (Sagem, Snecma), Rolls-Royce, Astrium, Eurocopter.
History
The decree giving birth to the "Ecole Nationale des Travaux Aéronautiques" (ENTA) was signed in 1945. The text was then ratified by Charles de Gaulle, president of the provisional government, and by René Pleven, Finance Minister. There were 25 students in the first class and 24 of them joined the "Ingénieurs Militaires des Travaux de l'Air" (IMTA).
In 1957, the school changed its name to the "Ecole Nationale d'Ingénieurs des Constructions Aéronautiques" (ENICA). The course was extended to three years and the school embarked on its new civil vocation, welcoming a higher proportion of civil students.
In 1961, ENICA was transferred to Toulouse, the director at that time being Emile Blouin. It then took on a new dimension and established its identity. In 1969, the school joined the competitive entrance examination system organised by the Ecoles Nationales Supérieures d'Ingénieurs (ENSI). It thus increased its recrui
|
https://en.wikipedia.org/wiki/Utah%20Education%20Network
|
The Utah Education Network (UEN) is a broadband and digital broadcast network serving public education, higher education, applied technology campuses, libraries, and public charter schools throughout the state of Utah. The Network facilitates interactive video conferencing, provides instructional support services, and operates a public television station (KUEN) on behalf of the Utah State Board of Regents. UEN services benefit more than 60,000 faculty and staff, and more than 780,000 students from pre-schoolers in Head Start programs through grandparents in graduate school. UEN headquarters are in Salt Lake City at the Eccles Broadcast Center on the University of Utah campus.
History
The Utah State Legislature formally established UEN in 1989, but the statewide collaboration of public education and higher education started more than two decades earlier when KUED-Channel 7 signed on the air in 1958. The station built translator towers to beam its signal to remote communities, and eventually the station placed analog microwave equipment on some of those towers enabling two-way teleconferencing for education and government. The system was initially named SETOC (State Educational Technical Operations Center), renamed as EDNET and is now UEN IVC (Interactive Video Conferencing). In December 1986, KULC-Channel 9 (now UEN-TV) started broadcasting as Utah’s Learning Channel, and in 1994 UEN started UtahLINK, the Internet component of the Network. All of those services now operate as the Utah Education Network. UEN-TV and sister station KUED were broadcasting digitally by June 2012, but experiments in statewide digital broadcasting were underway as early as 2004.
Services
Three infrastructure services are integral to UEN’s mission of networking for education. Students, parents, educators, and local communities all benefit from these services. They include:
Networking Services, to extend and maintain UEN’s broadband and digital TV networks, including the UEN Wide Area Net
|
https://en.wikipedia.org/wiki/Instrument%20control
|
Instrument control consists of connecting a desktop instrument to a computer and taking measurements.
History
In the late 1960s the first bus used for communication was developed by Hewlett-Packard and was called HP-IB (Hewlett-Packard Interface Bus). Since HP-IB was originally designed to work only with HP instruments, the need arose for a standard, high-speed interface for communication between instruments and controllers from a variety of vendors. This need was addressed in 1975, when the Institute of Electrical and Electronics Engineers (IEEE) published ANSI/IEEE Standard 488-1975, IEEE Standard Digital Interface for Programmable Instrumentation, which contained the electrical, mechanical, and functional specifications of an interfacing system. The standard was updated in 1987 and again in 1992. This bus is known by three different names, General Purpose Interface Bus (GPIB), Hewlett-Packard Interface Bus (HP-IB), and IEEE-488 Bus, and is used worldwide.
Today, there are several other buses in addition to the GPIB that can be used for instrument control. These include: Ethernet, USB, Serial, PCI, and PXI.
Software
In addition to the hardware bus to control an instrument, software for the PC is also needed. Virtual Instrument Software Architecture, or VISA, was developed by the VME eXtensions for Instrumentation (VXI) plug and play Systems Alliance as a specification for I/O software. VISA was a step toward industry-wide software compatibility. The VISA specification defines a software standard for VXI, and for GPIB, serial, Ethernet and other interfaces. More than 35 of the largest instrumentation companies in the industry endorse VISA as the standard. The alliance created distinct frameworks by grouping the most popular operating systems, application development environments, and programming languages and defined in-depth specifications to guarantee interoperability of components within each framework.
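As a minimal illustration of VISA-based control, the sketch below uses the PyVISA library; it assumes a VISA backend is installed, and the GPIB address and SCPI measurement command are hypothetical and instrument-specific.

```python
import pyvisa

# Minimal VISA session: open an instrument resource, identify it,
# and run one (instrument-specific) measurement query.
rm = pyvisa.ResourceManager()
inst = rm.open_resource("GPIB0::12::INSTR")
print(inst.query("*IDN?"))          # IEEE-488.2 identification query
print(inst.query("MEAS:VOLT:DC?"))  # example SCPI measurement query
inst.close()
```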
Instruments can be programmed by sending and receiving text
|
https://en.wikipedia.org/wiki/Ichnotaxon
|
An ichnotaxon (plural ichnotaxa) is "a taxon based on the fossilized work of an organism", i.e. the non-human equivalent of an artifact. Ichnotaxa comes from the Greek ἴχνος, ichnos, meaning track, and τάξις, taxis, meaning ordering.
Ichnotaxa are names used to identify and distinguish morphologically distinctive ichnofossils, more commonly known as trace fossils. They are assigned genus and species ranks by ichnologists, much like organisms in Linnaean taxonomy. These are known as ichnogenera and ichnospecies, respectively. "Ichnogenus" and "ichnospecies" are commonly abbreviated as "igen." and "isp.". The binomial names of ichnospecies and their genera are to be written in italics.
Most researchers classify trace fossils only as far as the ichnogenus rank, based upon trace fossils that resemble each other in morphology but have subtle differences. Some authors have constructed detailed hierarchies up to ichnosuperclass, recognizing such fine detail as to identify ichnosuperorder and ichnoinfraclass, but such attempts are controversial.
Naming
Due to the chaotic nature of trace fossil classification, several ichnogenera hold names normally affiliated with animal body fossils or plant fossils. For example, many ichnogenera are named with the suffix -phycus due to misidentification as algae.
Edward Hitchcock was the first to use the now common -ichnus suffix in 1858, with Cochlichnus.
History
Due to trace fossils' history of being difficult to classify, there have been several attempts to enforce consistency in the naming of ichnotaxa.
In 1961, the International Commission on Zoological Nomenclature ruled that most trace fossil taxa named after 1930 would no longer be available.
See also
Bird ichnology
Trace fossil classification
Glossary of scientific naming
References
External links
Comments on the draft proposal to amend the Code with respect to trace fossils
Trace Fossils - Kansas University Catalogue of Ichnotaxa
Biological classification
Trace fossils
Zoolo
|
https://en.wikipedia.org/wiki/Integral%20windup
|
Integral windup, also known as integrator windup or reset windup, refers to the situation in a PID controller where a large change in setpoint occurs (say a positive change) and the integral term accumulates a significant error during the rise (windup), thus overshooting and continuing to increase as this accumulated error is unwound (offset by errors in the other direction). The specific problem is the excess overshooting.
Solutions
This problem can be addressed by
Initializing the controller integral to a desired value, for instance to the value before the problem
Increasing the setpoint in a suitable ramp
Conditional Integration: disabling the integral function until the to-be-controlled process variable (PV) has entered the controllable region
Preventing the integral term from accumulating above or below pre-determined bounds (clamping; see the sketch after this list)
Back-calculating the integral term to constrain the process output within feasible bounds.
Clegg Integrator: Zeroing the integral value every time the error is equal to, or crosses, zero. This avoids having the controller attempt to drive the system to have the same error integral in the opposite direction as was caused by a perturbation, but induces oscillation if a non-zero control value is required to maintain the process at setpoint.
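As an illustration of the clamping approach above, a minimal sketch with hypothetical gains and bounds (real values are plant-specific):

```python
# Minimal sketch of a PID step with integral clamping (anti-windup).
# Gains and bounds are hypothetical, chosen only for illustration.
class PID:
    def __init__(self, kp, ki, kd, integral_min, integral_max):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral_min, self.integral_max = integral_min, integral_max
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, pv, dt):
        error = setpoint - pv
        # Clamp the accumulated error so it cannot wind up while the
        # actuator is saturated.
        self.integral = min(self.integral_max,
                            max(self.integral_min, self.integral + error * dt))
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=2.0, ki=0.5, kd=0.1, integral_min=-10.0, integral_max=10.0)
print(pid.step(setpoint=100.0, pv=20.0, dt=0.1))
```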
Occurrence
Integral windup particularly occurs as a limitation of physical systems, compared with ideal systems, due to the ideal output being physically impossible (process saturation: the output of the process being limited at the top or bottom of its scale, making the error constant). For example, the position of a valve cannot be any more open than fully open and also cannot be closed any more than fully closed. In this case, anti-windup can actually involve the integrator being turned off for periods of time until the response falls back into an acceptable range.
This usually occurs when the controller's output can no longer affect the controlled variable, or if the controller is part of a selec
|
https://en.wikipedia.org/wiki/Flow-based%20programming
|
In computer programming, flow-based programming (FBP) is a programming paradigm that defines applications as networks of black box processes, which exchange data across predefined connections by message passing, where the connections are specified externally to the processes. These black box processes can be reconnected endlessly to form different applications without having to be changed internally. FBP is thus naturally component-oriented.
FBP is a particular form of dataflow programming based on bounded buffers, information packets with defined lifetimes, named ports, and separate definition of connections.
Introduction
Flow-based programming defines applications using the metaphor of a "data factory". It views an application not as a single, sequential process, which starts at a point in time, and then does one thing at a time until it is finished, but as a network of asynchronous processes communicating by means of streams of structured data chunks, called "information packets" (IPs). In this view, the focus is on the application data and the transformations applied to it to produce the desired outputs. The network is defined externally to the processes, as a list of connections which is interpreted by a piece of software, usually called the "scheduler".
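A rough sketch of this structure, with Python threads standing in for the asynchronous processes and a fixed-capacity queue standing in for a connection (the component names are invented for the example):

```python
# Sketch of a two-process FBP-style network: a generator component
# streams information packets (IPs) over a bounded connection to a
# printer component.
import queue
import threading

SENTINEL = object()                     # end-of-stream marker

def generate(out_port):
    for n in range(5):
        out_port.put(n)                 # blocks when the buffer is full
    out_port.put(SENTINEL)

def show(in_port):
    while True:
        ip = in_port.get()              # blocks until a packet arrives
        if ip is SENTINEL:
            break
        print("received IP:", ip)

# The "network definition": one bounded connection of capacity 2.
conn = queue.Queue(maxsize=2)
procs = [threading.Thread(target=generate, args=(conn,)),
         threading.Thread(target=show, args=(conn,))]
for p in procs:
    p.start()
for p in procs:
    p.join()
```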
The processes communicate by means of fixed-capacity connections. A connection is attached to a process by means of a port, which has a name agreed upon between the process code and the network definition. More than one process can execute the same piece of code. At any point in time, a given IP can only be "owned" by a single process, or be in transit between two processes. Ports may either be simple, or array-type, as used e.g. for the input port of the Collate component described below. It is the combination of ports with asynchronous processes that allows many long-running primitive functions of data processing, such as Sort, Merge, Summarize, etc., to be supported in the form of software black bo
|
https://en.wikipedia.org/wiki/Pseudorandom%20graph
|
In graph theory, a graph is said to be a pseudorandom graph if it obeys certain properties that random graphs obey with high probability. There is no concrete definition of graph pseudorandomness, but there are many reasonable characterizations of pseudorandomness one can consider.
Pseudorandom properties were first formally considered by Andrew Thomason in 1987. He defined a condition called "jumbledness": a graph $G$ is said to be $(p, \alpha)$-jumbled for real $p$ and $\alpha$ with $0 < p < 1 \le \alpha$ if

$$\left| e(U) - p \binom{|U|}{2} \right| \le \alpha |U|$$

for every subset $U$ of the vertex set $V(G)$, where $e(U)$ is the number of edges among $U$ (equivalently, the number of edges in the subgraph induced by the vertex set $U$). It can be shown that the Erdős–Rényi random graph $G(n, p)$ is almost surely $(p, O(\sqrt{np}))$-jumbled. However, graphs with less uniformly distributed edges, for example a graph on $2n$ vertices consisting of an $n$-vertex complete graph and $n$ completely independent vertices, are not $(p, \alpha)$-jumbled for any small $\alpha$, making jumbledness a reasonable quantifier for "random-like" properties of a graph's edge distribution.
Connection to local conditions
Thomason showed that the "jumbled" condition is implied by a simpler-to-check condition, only depending on the codegree of two vertices and not every subset of the vertex set of the graph. Letting $\operatorname{codeg}(u, v)$ be the number of common neighbors of two vertices $u$ and $v$, Thomason showed that, given a graph $G$ on $n$ vertices with minimum degree $np$, if $\operatorname{codeg}(u, v) \le np^2 + \ell$ for every pair of vertices $u$ and $v$, then $G$ is $\left(p, \sqrt{(p + \ell)n}\right)$-jumbled. This result shows how to check the jumbledness condition algorithmically in polynomial time in the number of vertices, and can be used to show pseudorandomness of specific graphs.
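The codegrees themselves are cheap to compute: the entries of the squared adjacency matrix count common neighbors. A minimal sketch, assuming NumPy and a dense 0/1 adjacency matrix (not from the article):

```python
# Sketch: computing all codegrees from the adjacency matrix.
# (A @ A)[u, v] counts walks of length 2 from u to v, i.e. the number
# of common neighbors of u and v when u != v.
import numpy as np

def max_codegree(adj):
    walks2 = adj @ adj
    np.fill_diagonal(walks2, 0)        # ignore u == v entries (degrees)
    return walks2.max()

# Toy example: a 4-cycle; opposite vertices share two neighbors.
cycle4 = np.array([[0, 1, 0, 1],
                   [1, 0, 1, 0],
                   [0, 1, 0, 1],
                   [1, 0, 1, 0]])
print(max_codegree(cycle4))            # prints 2
```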
Chung–Graham–Wilson theorem
In the spirit of the conditions considered by Thomason and their alternately global and local nature, several weaker conditions were considered by Chung, Graham, and Wilson in 1989: a graph $G$ on $n$ vertices with edge density $p$ and some $\varepsilon > 0$ can satisfy each of these conditions if
Discrepancy: for any subsets $X, Y$ of the vertex set $V = V(G)$, the number of edges between $X$ and $Y$ is within $\varepsilon n^2$ of $p |X| |Y|$.
Discrepa
|
https://en.wikipedia.org/wiki/Robin%20Hartshorne
|
Robin Cope Hartshorne (born March 15, 1938) is an American mathematician who is known for his work in algebraic geometry.
Career
Hartshorne was a Putnam Fellow in Fall 1958 while he was an undergraduate at Harvard University (under the name Robert C. Hartshorne). He received a Ph.D. in mathematics from Princeton University in 1963 after completing a doctoral dissertation titled Connectedness of the Hilbert scheme under the supervision of John Coleman Moore and Oscar Zariski. He then became a Junior Fellow at Harvard University, where he taught for several years. In 1972, he was appointed to the faculty at the University of California, Berkeley, where he is a Professor Emeritus as of 2020.
Hartshorne is the author of the text Algebraic Geometry.
Awards
In 1979, Hartshorne was awarded the Leroy P. Steele Prize for "his expository research article Equivalence relations on algebraic cycles and subvarieties of small codimension, Proceedings of Symposia in Pure Mathematics, volume 29, American Mathematical Society, 1975, pp. 129-164; and his book Algebraic geometry, Springer-Verlag, Berlin and New York, 1977." In 2012, Hartshorne became a fellow of the American Mathematical Society.
Personal life
Hartshorne attended high school at Phillips Exeter Academy, graduating in 1955. Hartshorne is married to Edie Churchill and has two sons and an adopted daughter. He is a mountain climber and amateur flute and shakuhachi player.
Selected publications
Foundations of Projective Geometry, New York: W. A. Benjamin, 1967;
Ample Subvarieties of Algebraic Varieties, New York: Springer-Verlag. 1970;
Algebraic Geometry, New York: Springer-Verlag, 1977; corrected 6th printing, 1993. GTM 52,
Families of Curves in P3 and Zeuthen's Problem. Vol. 617. American Mathematical Society, 1997.
Geometry: Euclid and Beyond, New York: Springer-Verlag, 2000; corrected 2nd printing, 2002; corrected 4th printing, 2005.
Local Cohomology: A Seminar Given by A. Grothendieck, Harvard University. Fa
|
https://en.wikipedia.org/wiki/Decentralized%20computing
|
Decentralized computing is the allocation of resources, both hardware and software, to each individual workstation or office location. In contrast, centralized computing exists when the majority of functions are carried out at, or obtained from, a remote centralized location. Decentralized computing is a trend in modern-day business environments, the opposite of the centralized computing that prevailed during the early days of computers.
A decentralized computer system has many benefits over a conventional centralized network. Desktop computers have advanced so rapidly that their potential performance far exceeds the requirements of most business applications. This results in most desktop computers remaining idle (in relation to their full potential). A decentralized system can use the potential of these systems to maximize efficiency. However, it is debatable whether these networks increase overall effectiveness.
All computers have to be updated individually with new software, unlike a centralized computer system. Decentralized systems still enable file sharing and all computers can share peripherals such as printers and scanners as well as modems, allowing all the computers in the network to connect to the internet.
A collection of decentralized computer systems forms the components of a larger computer network, held together by local stations of equal importance and capability. These systems are capable of running independently of each other.
Origins of decentralized computing
Decentralized computing originates in the work of David Chaum.
In 1979 he conceived the first concept of a decentralized computer system, known as Mix Network. It provided an anonymous email communications network, which decentralized the authentication of the messages in a protocol that would become the precursor to Onion Routing, the protocol underlying the Tor browser. Through this initial development of an anonymous communications network, David Chaum applied his Mi
|
https://en.wikipedia.org/wiki/Ubique%20%28company%29
|
Ubique was a software company based in Israel.
In 1994 the company launched the first social-networking software, which included instant messaging, voice over IP (commonly known as VoIP), chat rooms, web-based events, and collaborative browsing. It is best known for the Virtual Places software product and the technology used by
Lotus Sametime. It is now part of IBM Haifa Labs.
Technology
Virtual Places
Ubique's best-known product is Virtual Places, a presence-based chat program in which users explore web sites together. It is used by providers such as VPChat and Digital Space and eventually evolved into Lotus Sametime.
Virtual Places requires a server and client software. Users start Virtual Places along with a web
browser and sign into the Virtual Places server. Avatars are overlaid onto the web browser and
users are able to collaborate with each other while they all visit web sites in real time.
Some consumer-oriented Virtual Places communities remain active on the Web, still using the old version of the software.
Instant Messaging and Chat
With the technology developed for Virtual Places, Ubique created an instant messaging and
presence technology platform which evolved into Lotus Sametime.
History
1994 – Ubique Ltd was founded in Israel by Ehud Shapiro and a group of scientists from
the Weizmann Institute to develop real-time, distributed computing products. The
company developed a presence-based chat system known as Virtual Places along with real-time
instant messaging and presence technology software. These were the very early days of the web, which at the time had only static data. Ubique's mission was "to add people to the web".
1995 – America Online Inc. purchased Ubique with the intention to use Ubique's Virtual
Places technology to enhance and expand its existing live online interactive communication for both the AOL consumer online service and the new GNN brand service. Only the GNN-branded Virtual Places product was ever released.
1996 – GNN was disco
|
https://en.wikipedia.org/wiki/Grid%20method%20multiplication
|
The grid method (also known as the box method) of multiplication is an introductory approach to multi-digit multiplication calculations that involve numbers larger than ten. Because it is often taught in mathematics education at the level of primary school or elementary school, this algorithm is sometimes called the grammar school method.
Compared to traditional long multiplication, the grid method differs in clearly breaking the multiplication and addition into two steps, and in being less dependent on place value.
Whilst less efficient than the traditional method, grid multiplication is considered to be more reliable, in that children are less likely to make mistakes. Most pupils will go on to learn the traditional method, once they are comfortable with the grid method; but knowledge of the grid method remains a useful "fall back", in the event of confusion. It is also argued that since anyone doing a lot of multiplication would nowadays use a pocket calculator, efficiency for its own sake is less important; equally, since this means that most children will use the multiplication algorithm less often, it is useful for them to become familiar with a more explicit (and hence more memorable) method.
Use of the grid method has been standard in mathematics education in primary schools in England and Wales since the introduction of a National Numeracy Strategy with its "numeracy hour" in the 1990s. It can also be found included in various curricula elsewhere. Essentially the same calculation approach, but not with the explicit grid arrangement, is also known as the partial products algorithm or partial products method.
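The two steps can be written out directly: split each factor into its place-value parts, multiply every pair of parts, and add the partial products. A short illustrative sketch (not part of any curriculum material):

```python
# Sketch of the grid (box) method: split each factor into place-value
# parts, multiply every pair of parts, then sum the partial products.
def place_value_parts(n):
    """347 -> [300, 40, 7]"""
    digits = str(n)
    return [int(d) * 10 ** (len(digits) - i - 1)
            for i, d in enumerate(digits) if d != "0"]

def grid_multiply(a, b):
    partials = [x * y for x in place_value_parts(a)
                      for y in place_value_parts(b)]
    return partials, sum(partials)

partials, total = grid_multiply(34, 13)
print(partials)   # [300, 90, 40, 12]  (30*10, 30*3, 4*10, 4*3)
print(total)      # 442
```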
Calculations
Introductory motivation
The grid method can be introduced by thinking about how to add up the number of points in a regular array, for example the number of squares of chocolate in a chocolate bar. As the size of the calculation becomes larger, it becomes easier to start counting in tens; and to represent the calculation as a box whi
|
https://en.wikipedia.org/wiki/Colossal%20Typewriter
|
Colossal Typewriter by John McCarthy and Roland Silver was one of the earliest computer text editors. The program ran on the PDP-1 at Bolt, Beranek and Newman (BBN) by December 1960.
About this time, both authors were associated with the Massachusetts Institute of Technology, but it is unclear whether the editor ran on the TX-0 on loan to MIT from Lincoln Laboratory or on the PDP-1 donated to MIT in 1961 by Digital Equipment Corporation. A "Colossal Typewriter Program" is in the BBN Program Library, and, under the same name, in the DECUS Program Library as BBN-6 (CT).
See also
Expensive Typewriter
TECO
RUNOFF
TJ-2
Notes
1960 software
Text editors
History of software
|
https://en.wikipedia.org/wiki/Triton%20%28content%20delivery%29
|
Triton was a digital delivery and digital rights management service created by Digital Interactive Streams, which abruptly went out of business in early October 2006.
Triton was a new competitor in the rapidly growing market for electronic distribution of video games. It was used to serve budget-oriented games from such publishers as Strategy First and Global Star Software, and was best known for distributing Prey.
History
Triton was launched on November 10, 2004, under the name Game xStream. The service signed several smaller publishers shortly thereafter, and announced its first high-profile deal in May 2005, signing 3D Realms and its then in-development FPS Prey.
Game xStream was renamed Triton on May 8, 2006.
In early October, 2006, owners of Prey who had purchased it via Triton began to complain about problems purchasing the game, activating it, and reaching customer service. 3D Realms' webmaster Joe Siegler managed to find out that Triton and Digital Interactive Streams had gone out of business suddenly and apparently without warning.
A follow-up from Royal O'Brien of Triton said that Prey owners who used Triton would not lose their game: a patch was in development to remove the dependency on the live system and allow users to back up, copy, and play their games. In the meantime, customers who had purchased the game through Triton would receive a retail copy.
Prey was released on Valve's Steam service, which allows any existing Prey owners to register their game through Steam by entering the activation code, including those who bought Prey through Triton. The game is, however, no longer available for purchase through Steam.
Technology
Although similar to competing services, the primary selling point of Triton was its "dynamic streaming" technology, which allows for games to be played before they have been completely downloaded - new content is sent to the client as it is needed. All games on the service required the user to be online to be
|
https://en.wikipedia.org/wiki/Expensive%20Tape%20Recorder
|
Expensive Tape Recorder is a digital audio program written by David Gross while a student at the Massachusetts Institute of Technology. Gross developed the idea with Alan Kotok, a fellow member of the Tech Model Railroad Club. The recorder and playback system ran in the late 1950s or early 1960s on MIT's TX-0 computer on loan from Lincoln Laboratory.
The name
Gross referred to this project by this name casually in the context of Expensive Typewriter and other programs that took their names in the spirit of "Colossal Typewriter". It is unclear whether the programs were named for the US$3 million development cost of the TX-0, or for the retail price of the DEC PDP-1, a descendant of the TX-0 installed next door at MIT in 1961. The PDP-1 was one of the least expensive computers money could buy, at about US$120,000 in 1962. The program has been referred to as a hack, perhaps in the historical sense, in the MIT hack sense, or in the sense of Hackers: Heroes of the Computer Revolution, a book by Steven Levy.
The project
Gross recalled and very briefly described the project in a 1984 Computer Museum meeting. A person associated with the Tixo Web site spoke with Gross and Kotok, and posted the only other description known.
Influence
According to Kotok, the project was, "digital recording more than 20 years ahead of its time." In 1984, when Jack Dennis asked if they could recognize Beethoven, Computer Museum meeting minutes record the authors as saying, "It wasn't bad, considering." Digital audio pioneer Thomas Stockham worked with Dennis and like Kotok helped develop a contemporary debugger. Whether he was first influenced by Expensive Tape Recorder or more by the work of Kenneth N. Stevens is unknown.
See also
PDP-1
Digital recording
Expensive Typewriter
Expensive Desk Calculator
Expensive Planetarium
Harmony Compiler
Notes
References
Digital audio
History of software
|
https://en.wikipedia.org/wiki/Immunodermatology
|
Immunodermatology studies skin as an organ of immunity in health and disease. Several areas receive special attention, such as photo-immunology (effects of UV light on skin defense), inflammatory diseases such as Hidradenitis suppurativa, allergic contact dermatitis and atopic eczema, presumed autoimmune skin diseases such as vitiligo and psoriasis, and finally the immunology of microbial skin diseases such as retrovirus infections and leprosy. New therapies in development for the immunomodulation of common immunological skin diseases include biologicals aimed at neutralizing TNF-alpha and chemokine receptor inhibitors.
Testing sites
Multiple universities currently conduct immunodermatology testing:
University of Utah Health.
University of North Carolina.
See also
Dermatology
Immune response
References
Branches of immunology
Dermatology
|
https://en.wikipedia.org/wiki/Harmony%20Compiler
|
Harmony Compiler was written by Peter Samson at the Massachusetts Institute of Technology (MIT). The compiler was designed to encode music for the PDP-1 and built on an earlier program Samson wrote for the TX-0 computer.
Jack Dennis had noticed, and mentioned to Samson, that the on-or-off state of the TX-0's speaker could be enough to play music. They succeeded in building a WYSIWYG program for one voice by 1960.
For the PDP-1, which arrived at MIT in September 1961, Samson designed the Harmony Compiler, which synthesized four voices from input in a text-based notation. Although it created music in many genres, it was optimized for baroque music. PDP-1 music is merged from four channels and played back in stereo. Notes are on pitch and each has an undertone. The music does not stop for errors. Mistakes are greeted with a message from the typewriter's red ribbon, "To err is human, to forgive divine."
Samson joined the PDP-1 restoration project at the Computer History Museum in 2004 to recreate the music player.
References
Samson's description begins at 1:20.
Notes
Audio programming languages
History of software
|
https://en.wikipedia.org/wiki/318%20%28number%29
|
318 is the natural number following 317 and preceding 319.
In mathematics
318 is:
a sphenic number
a nontotient
the number of posets with 6 unlabeled elements
the sum of 12 consecutive primes, 7 + 11 + 13 + 17 + 19 + 23 + 29 + 31 + 37 + 41 + 43 + 47 (each of these claims is checked in the sketch below).
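Each of the properties above can be verified mechanically. A small sketch, assuming the SymPy library (not referenced by the article):

```python
# Quick checks of the arithmetic properties listed above (illustrative).
from sympy import factorint, primerange, totient

n = 318

# Sphenic: a product of exactly three distinct primes (318 = 2 * 3 * 53).
factors = factorint(n)
print(len(factors) == 3 and all(e == 1 for e in factors.values()))  # True

# Nontotient: no m satisfies totient(m) == 318. Since totient(m) >= sqrt(m)
# for m > 6, any preimage would have to lie below n**2.
print(all(totient(m) != n for m in range(1, n * n + 1)))            # True

# Sum of the 12 consecutive primes from 7 to 47.
primes = list(primerange(7, 48))
print(len(primes) == 12, sum(primes) == n)                          # True True
```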
In religion
In Genesis 14, Abraham takes 318 men to rescue his brother Lot.
References
Integers
|
https://en.wikipedia.org/wiki/UCT%20Mathematics%20Competition
|
The UCT Mathematics Competition is an annual mathematics competition for schools in the Western Cape province of South Africa, held at the University of Cape Town.
Around 7000 participants from Grade 8 to Grade 12 take part, writing a multiple-choice paper. Individual and pair entries are accepted, but all write the same paper for their grade.
The current holder of the School Trophy is Rondebosch Boys' High School, with Diocesan College achieving second place in the 2022 competition. These two schools have held the top positions in the competition for a number of years.
The competition was established in 1977 by Mona Leeuwenberg and Shirley Fitton, who were teachers at Diocesan College and Westerford High School, and since 1987 has been run by Professor John Webb of the University of Cape Town.
Awards
Mona Leeuwenburg Trophy
The Mona Leeuwenburg Trophy is awarded to the school with the best overall performance in the competition.
UCT Trophy
The UCT Trophy is awarded to the school with the best performance that has not participated in the competition more than twice before.
Diane Tucker Trophy
The Diane Tucker Trophy is awarded to the girl with the best performance in the competition. The trophy was first awarded in 2000.
Moolla Trophy
The Moolla Trophy was donated to the competition by the Moolla family. Saadiq, Haroon and Ashraf Moolla represented Rondebosch Boys' High School and achieved Gold Awards from 2003 to 2011. The trophy is awarded to a school from a disadvantaged community that shows a notable performance in the competition.
Lesley Reeler Trophy
The Lesley Reeler Trophy is awarded for the best individual performance over five years (grades 8 to 12).
References
External links
UCT Mathematics Competition
University of Cape Town
Mathematics competitions
|
https://en.wikipedia.org/wiki/Delta%20robot
|
A delta robot is a type of parallel robot that consists of three arms connected to universal joints at the base. The key design feature is the use of parallelograms in the arms, which maintains the orientation of the end effector. In contrast, a Stewart platform can change the orientation of its end effector.
Delta robots are popular for picking and packaging in factories because they can be quite fast, some executing up to 300 picks per minute.
History
The delta robot (a parallel arm robot) was invented in the early 1980s by a research team led by professor Reymond Clavel at the École Polytechnique Fédérale de Lausanne (EPFL, Switzerland). After a visit to a chocolate maker, a team member wanted to develop a robot to place pralines in their packages. The purpose of this new type of robot was to manipulate light and small objects at a very high speed, an industrial need at that time.
In 1987, the Swiss company Demaurex purchased a license for the delta robot and started the production of delta robots for the packaging industry. In 1991, Reymond Clavel presented his doctoral thesis 'Conception d'un robot parallèle rapide à 4 degrés de liberté', and received the golden robot award in 1999 for his work and development of the delta robot. Also in 1999, ABB Flexible Automation started selling its delta robot, the FlexPicker. By the end of 1999, delta robots were also sold by Sigpack Systems.
In 2017, researchers from Harvard's Microrobotics Lab miniaturized it with piezoelectric actuators to 0.43 grams and 15 mm × 15 mm × 20 mm, capable of moving a 1.3 g payload around a 7 cubic millimeter workspace with 5 micrometer precision, reaching 0.45 m/s speeds with 215 m/s² accelerations and repeating patterns at 75 Hz.
Design
The delta robot is a parallel robot, i.e. it consists of multiple kinematic chains connecting the base with the end-effector. The robot can also be seen as a spatial generalisation of a four-bar linkage.
The key concept of the delta robot
|
https://en.wikipedia.org/wiki/BBGKY%20hierarchy
|
In statistical physics, the BBGKY hierarchy (Bogoliubov–Born–Green–Kirkwood–Yvon hierarchy, sometimes called Bogoliubov hierarchy) is a set of equations describing the dynamics of a system of a large number of interacting particles. The equation for an s-particle distribution function (probability density function) in the BBGKY hierarchy includes the (s + 1)-particle distribution function, thus forming a coupled chain of equations. This formal theoretic result is named after Nikolay Bogolyubov, Max Born, Herbert S. Green, John Gamble Kirkwood, and Jacques Yvon.
Formulation
The evolution of an N-particle system in absence of quantum fluctuations is given by the Liouville equation for the probability density function $f_N = f_N(\mathbf{q}_1, \dots, \mathbf{q}_N, \mathbf{p}_1, \dots, \mathbf{p}_N, t)$ in 6N-dimensional phase space (3 space and 3 momentum coordinates per particle)

$$\frac{\partial f_N}{\partial t} + \sum_{i=1}^{N} \frac{\mathbf{p}_i}{m_i} \cdot \frac{\partial f_N}{\partial \mathbf{q}_i} + \sum_{i=1}^{N} \mathbf{F}_i \cdot \frac{\partial f_N}{\partial \mathbf{p}_i} = 0,$$

where $\mathbf{q}_i$ and $\mathbf{p}_i$ are the coordinates and momentum for the $i$-th particle with mass $m_i$, and the net force acting on the $i$-th particle is

$$\mathbf{F}_i = -\sum_{\substack{j=1 \\ j \neq i}}^{N} \frac{\partial \Phi_{ij}}{\partial \mathbf{q}_i} - \frac{\partial \Phi_i^{\mathrm{ext}}}{\partial \mathbf{q}_i},$$

where $\Phi_{ij}$ is the pair potential for interaction between particles, and $\Phi^{\mathrm{ext}}$ is the external-field potential. By integration over part of the variables, the Liouville equation can be transformed into a chain of equations where the first equation connects the evolution of the one-particle probability density function with the two-particle probability density function, the second equation connects the two-particle probability density function with the three-particle probability density function, and generally the s-th equation connects the s-particle probability density function

$$f_s(\mathbf{q}_1, \dots, \mathbf{q}_s, \mathbf{p}_1, \dots, \mathbf{p}_s, t) = \int f_N(\mathbf{q}_1, \dots, \mathbf{q}_N, \mathbf{p}_1, \dots, \mathbf{p}_N, t) \, d\mathbf{q}_{s+1} \cdots d\mathbf{q}_N \, d\mathbf{p}_{s+1} \cdots d\mathbf{p}_N$$

with the (s + 1)-particle probability density function:

$$\frac{\partial f_s}{\partial t} + \sum_{i=1}^{s} \frac{\mathbf{p}_i}{m_i} \cdot \frac{\partial f_s}{\partial \mathbf{q}_i} + \sum_{i=1}^{s} \mathbf{F}_i^{(s)} \cdot \frac{\partial f_s}{\partial \mathbf{p}_i} = (N - s) \sum_{i=1}^{s} \int \frac{\partial \Phi_{i,s+1}}{\partial \mathbf{q}_i} \cdot \frac{\partial f_{s+1}}{\partial \mathbf{p}_i} \, d\mathbf{q}_{s+1} \, d\mathbf{p}_{s+1},$$

where $\mathbf{F}_i^{(s)}$ collects the external force on particle $i$ together with the interactions among the first $s$ particles only. The equation above for the s-particle distribution function is obtained by integration of the Liouville equation over the variables $\mathbf{q}_{s+1}, \dots, \mathbf{q}_N, \mathbf{p}_{s+1}, \dots, \mathbf{p}_N$. The problem with the above equation is that it is not closed. To solve $f_s$, one has to know $f_{s+1}$, which in turn demands solving $f_{s+2}$ and all the way back to the full Liouville equation. However, one can solve $f_s$ if $f_{s+1}$ could be modeled. One such case is the Boltzmann equation for $f_1(\mathbf{q}_1, \mathbf{p}_1, t)$, where $f_2$ is modeled based on the molecu
|
https://en.wikipedia.org/wiki/Timing%20margin
|
Timing margin is an electronics term for the difference between the time at which a signal actually changes and the latest time at which it could change for an electronic circuit still to function correctly. It is used in the design of digital electronics.
Illustration
In this image, the lower signal is the clock and the upper signal is the data. Data is recognized by the circuit at the positive edge of the clock. There are two time intervals illustrated in this image. One is the setup time, and the other is the timing margin. The setup time is illustrated in red in this image; the timing margin is illustrated in green.
The edges of the signals can shift around in a real-world electronic system for various reasons. If the clock and the data signal are shifted relative to each other, this may increase or reduce the timing margin; as long as the data signal changes before the setup time is entered, the data will be interpreted correctly. If it is known from experience that the signals can shift relative to each other by as much as 2 microseconds, for instance, designing the system with at least 2 microseconds of timing margin will prevent incorrect interpretation of the data signal by the receiver.
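The check described here is simple arithmetic; a sketch with illustrative numbers:

```python
# Sketch: checking setup margin against the expected signal shift.
# All times in microseconds; the numbers are illustrative only.
def timing_margin(data_change_to_clock_edge, setup_time):
    """Margin between the data transition and the start of the setup window."""
    return data_change_to_clock_edge - setup_time

margin = timing_margin(data_change_to_clock_edge=5.0, setup_time=1.5)
worst_case_shift = 2.0   # known relative drift between the signals
print(margin)                         # 3.5
print(margin >= worst_case_shift)     # True: data still sampled correctly
```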
If the physical design of the circuit is changed, for example by adding more wire over which the clock signal is transmitted, the edge of the data signal will move closer to the positive edge of the clock signal, reducing the timing margin. If the signals have been designed with enough timing margin, only the correct data will be received.
See also
Static timing analysis
References
Electrical engineering
|
https://en.wikipedia.org/wiki/Trinucleotide%20repeat%20expansion
|
A trinucleotide repeat expansion, also known as a triplet repeat expansion, is the DNA mutation responsible for causing any type of disorder categorized as a trinucleotide repeat disorder. These are labelled in dynamical genetics as dynamic mutations. Triplet expansion is caused by slippage during DNA replication, also known as "copy choice" DNA replication. Due to the repetitive nature of the DNA sequence in these regions, 'loop out' structures may form during DNA replication while maintaining complementary base pairing between the parent strand and daughter strand being synthesized. If the loop out structure is formed from the sequence on the daughter strand this will result in an increase in the number of repeats. However, if the loop out structure is formed on the parent strand, a decrease in the number of repeats occurs. It appears that expansion of these repeats is more common than reduction. Generally, the larger the expansion the more likely they are to cause disease or increase the severity of disease. Other proposed mechanisms for expansion and reduction involve the interaction of RNA and DNA molecules.
In addition to occurring during DNA replication, trinucleotide repeat expansion can also occur during DNA repair. When a DNA trinucleotide repeat sequence is damaged, it may be repaired by processes such as homologous recombination, non-homologous end joining, mismatch repair or base excision repair. Each of these processes involves a DNA synthesis step in which strand slippage might occur leading to trinucleotide repeat expansion.
The number of trinucleotide repeats appears to predict the progression, severity, and age of onset of Huntington's disease and similar trinucleotide repeat disorders. Other human diseases in which triplet repeat expansion occurs are fragile X syndrome, several spinocerebellar ataxias, myotonic dystrophy and Friedreich's ataxia.
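Since repeat count tracks disease progression, a basic computational task is counting the longest uninterrupted run of a given triplet in a sequence. A small sketch (the function and the sample sequence are invented for illustration):

```python
# Sketch: count the longest uninterrupted run of a trinucleotide repeat
# (e.g. the CAG repeat tracked in Huntington's disease).
def longest_repeat_run(seq, triplet):
    best = run = 0
    i = 0
    while i + 3 <= len(seq):
        if seq[i:i + 3] == triplet:
            run += 1
            best = max(best, run)
            i += 3                      # stay in frame with the repeat
        else:
            run = 0
            i += 1                      # slide until a run starts
    return best

print(longest_repeat_run("ATGCAGCAGCAGCAGTTGCAG", "CAG"))  # 4
```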
History
The first documentation of anticipation in genetic disorders was in the 1800s. However, fro
|
https://en.wikipedia.org/wiki/%C3%89tale%20topology
|
In algebraic geometry, the étale topology is a Grothendieck topology on the category of schemes which has properties similar to the Euclidean topology, but unlike the Euclidean topology, it is also defined in positive characteristic. The étale topology was originally introduced by Alexander Grothendieck to define étale cohomology, and this is still the étale topology's most well-known use.
Definitions
For any scheme X, let Ét(X) be the category of all étale morphisms from a scheme to X. This is the analog of the category of open subsets of X (that is, the category whose objects are varieties and whose morphisms are open immersions). Its objects can be informally thought of as étale open subsets of X. The intersection of two objects corresponds to their fiber product over X. Ét(X) is a large category, meaning that its objects do not form a set.
An étale presheaf on X is a contravariant functor from Ét(X) to the category of sets. A presheaf F is called an étale sheaf if it satisfies the analog of the usual gluing condition for sheaves on topological spaces. That is, F is an étale sheaf if and only if the following condition is true. Suppose that is an object of Ét(X) and that is a jointly surjective family of étale morphisms over X. For each i, choose a section xi of F over Ui. The projection map , which is loosely speaking the inclusion of the intersection of Ui and Uj in Ui, induces a restriction map . If for all i and j the restrictions of xi and xj to are equal, then there must exist a unique section x of F over U which restricts to xi for all i.
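Concretely, the gluing condition can be packaged as an equalizer: F is an étale sheaf if and only if, for every such jointly surjective family, the diagram

$$F(U) \to \prod_{i} F(U_i) \rightrightarrows \prod_{i,j} F(U_i \times_U U_j)$$

is an equalizer, where the two parallel maps are induced by the two projections from $U_i \times_U U_j$.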
Suppose that X is a Noetherian scheme. An abelian étale sheaf F on X is called finite locally constant if it is a representable functor which can be represented by an étale cover of X. It is called constructible if X can be covered by a finite family of subschemes on each of which the restriction of F is finite locally constant. It is called torsion if F(U) is a torsion group for all étale covers U of X.
|
https://en.wikipedia.org/wiki/Omega%20Chi%20Epsilon
|
Omega Chi Epsilon (or , sometimes simplified to OXE) is an American honor society for chemical engineering students.
History
The first chapter of Omega Chi Epsilon was formed at the University of Illinois in 1931 by a group of chemical engineering students. These Founders were:
F. C. Howard
A. Garrell Deem
Ethan M. Stifle
John W. Bertetti
They were aided in their efforts by professors D.B. Keyes and Norman Krase. The second chapter was formed at the Iowa State University in 1932.
The Society grew slowly at first. Baird's Manual indicates there were six chapters by 1957, of which three were inactive. However, interest revived in the 1960s, which allowed a period of sustained growth that has continued to the present day. There are approximately 80 active chapters of the society as of 2021.
Omega Chi Epsilon amended its constitution to permit women to become members as of 1966.
The organization became a member of the Association of College Honor Societies in 1967.
Membership is limited to chemical engineering juniors, seniors and graduate students. Associate membership may be offered to professors or other members of the staff of institutions within the field.
Governance
The Society's annual meeting is held at the same time and place as the annual meeting of the American Institute of Chemical Engineers.
Governance is vested in a national president, vice-president, executive secretary and treasurer. With the immediate past president, these constitute the Executive Committee.
National officers
President - Dr. Christi Patton-Luks, Missouri Science and Technology
Vice-president - Dr. Troy Vogel, University of Notre Dame
Treasurer - Dr. G. Glenn Lipscomb, University of Toledo
Executive Secretary - Dr. Richard A. Davis, University of Minnesota at Duluth
Symbolism and traditions
The Society's badge is a black Maltese cross background, on which is superimposed a circular crest. The crest bears the letters ΩΧΕ on a white band passing across the horizo
|
https://en.wikipedia.org/wiki/Wall%20stud
|
A wall stud is a vertical repetitive framing member in a building's wall of smaller cross section than a post. It is a fundamental element in frame building.
Etymology
Stud is an ancient word related to similar words in Old English, Old Norse, Middle High German, and Old Teutonic generally meaning prop or support. Other historical words with similar meaning are quarter and scantling (one sense meaning a smaller timber, not necessarily the same use). Stick is a colloquial term for both framing lumber (timber) and a "timber tree" (a tree trunk good for using as lumber (timber)); thus, the names "stick and platform", "stick and frame", "stick and box", or simply stick framing. The stud height usually determines the ceiling height, thus sayings like: "...These rooms were usually high in stud..."
Purpose
Studs form walls and may carry vertical structural loads or be non load-bearing, such as in partition walls, which only separate spaces. They hold in place the windows, doors, interior finish, exterior sheathing or siding, insulation and utilities and help give shape to a building. Studs run from sill plate to wall plate. In modern construction, studs are anchored to the plates in a way, such as using fasteners, to prevent the building from being lifted off the foundation by severe wind or earthquake.
Properties
Studs are usually slender, so more studs are needed than in post and beam framing. Sometimes studs are long, as in balloon framing, where the studs extend two stories and carry a ledger which carries joists. Balloon framing has been made illegal in new construction in many jurisdictions for fire safety reasons because the open wall cavities allow fire to quickly spread, such as from a basement to an attic; the plates and platforms in platform framing provide a passive fire stop inside the walls, and so are deemed much safer by fire safety officials. Being thinner and lighter, studs are easier to cut and carry, and stick construction is speedier than the tim
|
https://en.wikipedia.org/wiki/Dowker%E2%80%93Thistlethwaite%20notation
|
In the mathematical field of knot theory, the Dowker–Thistlethwaite (DT) notation or code, for a knot is a sequence of even integers. The notation is named after Clifford Hugh Dowker and Morwen Thistlethwaite, who refined a notation originally due to Peter Guthrie Tait.
Definition
To generate the Dowker–Thistlethwaite notation, traverse the knot using an arbitrary starting point and direction. Label each of the n crossings with the numbers 1, ..., 2n in order of traversal (each crossing is visited and labelled twice), with the following modification: if the label is an even number and the strand followed crosses over at the crossing, then change the sign on the label to be a negative. When finished, each crossing will be labelled with a pair of integers, one even and one odd. The Dowker–Thistlethwaite notation is the sequence of even integer labels associated with the labels 1, 3, ..., 2n − 1 in turn.
Example
For example, a knot diagram may have crossings labelled with the pairs (1, 6) (3, −12) (5, 2) (7, 8) (9, −4) and (11, −10). The Dowker–Thistlethwaite notation for this labelling is the sequence: 6 −12 2 8 −4 −10.
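Extracting the sequence from labelled crossings is mechanical: pair each odd label with the even label at the same crossing, then read the even labels off in the order of the odd labels 1, 3, 5, .... A short sketch using the example above:

```python
# Sketch: derive the Dowker-Thistlethwaite sequence from labelled
# crossings, using the example pairs from the text.
def dt_sequence(crossing_pairs):
    # Map each odd label to the even label sharing its crossing.
    by_odd = {}
    for a, b in crossing_pairs:
        odd, even = (a, b) if a % 2 else (b, a)
        by_odd[odd] = even
    # Read off the even labels in the order 1, 3, 5, ...
    return [by_odd[odd] for odd in sorted(by_odd)]

pairs = [(1, 6), (3, -12), (5, 2), (7, 8), (9, -4), (11, -10)]
print(dt_sequence(pairs))   # [6, -12, 2, 8, -4, -10]
```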
Uniqueness and counting
Dowker and Thistlethwaite have proved that the notation specifies prime knots uniquely, up to reflection.
In the more general case, a knot can be recovered from a Dowker–Thistlethwaite sequence, but the recovered knot may differ from the original by either being a reflection or by having any connected sum component reflected in the line between its entry/exit points – the Dowker–Thistlethwaite notation is unchanged by these reflections. Knot tabulations typically consider only prime knots and disregard chirality, so this ambiguity does not affect the tabulation.
The ménage problem, posed by Tait, concerns counting the number of different number sequences possible in this notation.
See also
Alexander–Briggs notation
Conway notation
Gauss notation
References
Further reading
External links
DT Notation, Knotinfo
What are Ga
|
https://en.wikipedia.org/wiki/Parking%20sensor
|
Parking sensors are proximity sensors for road vehicles designed to alert the driver of obstacles while parking. These systems use either electromagnetic or ultrasonic sensors.
Ultrasonic systems
These systems feature ultrasonic proximity detectors to measure the distances to nearby objects via sensors located in the front and/or rear bumper fascias or visually minimized within adjacent grills or recesses.
The sensors emit acoustic pulses, with a control unit measuring the return interval of each reflected signal and calculating object distances. The system in turn warns the driver with acoustic tones, the frequency indicating object distance, with faster tones indicating closer proximity and a continuous tone indicating a minimal pre-defined distance. Systems may also include visual aids, such as LED or LCD readouts to indicate object distance. A vehicle may include a vehicle pictogram on the car's infotainment screen, with a representation of the nearby objects as coloured blocks.
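The distance calculation itself is simple time-of-flight arithmetic; a sketch assuming the speed of sound in dry air at room temperature:

```python
# Sketch: object distance from an ultrasonic echo's round-trip time.
SPEED_OF_SOUND_M_S = 343.0    # in dry air at about 20 degrees C

def echo_distance_m(round_trip_s):
    # The pulse travels to the object and back, hence the factor of 2.
    return SPEED_OF_SOUND_M_S * round_trip_s / 2.0

print(echo_distance_m(0.0058))   # ~0.99 m for a 5.8 ms echo
```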
Rear sensors may be activated when reverse gear is selected and deactivated as soon as any other gear is selected. Front sensors may be activated manually and deactivated automatically when the vehicle reaches a pre-determined speed to avoid subsequent nuisance warnings.
As an ultrasonic system relies on the reflection of sound waves, it may not detect flat objects or objects insufficiently large to reflect sound (e.g., a narrow pole or a longitudinal object pointed directly at the vehicle or near an object). Objects with flat surfaces angled from the vertical may deflect return sound waves away from the sensors, hindering detection. Soft objects with strong sound absorption, such as wool or moss, may also be harder to detect.
Electromagnetic systems
The electromagnetic parking sensor (EPS) was re-invented and patented in 1992 by Mauro Del Signore. Electromagnetic sensors rely on the vehicle moving slowly and smoothly towards the object to be avoided. Once an obstacle is
|
https://en.wikipedia.org/wiki/TRANZ%20330
|
The TRANZ 330 is a popular point-of-sale device manufactured by VeriFone in 1985. The most common application for these units is bank and credit card processing, however, as a general purpose computer, they can perform other novel functions. Other applications include gift/benefit card processing, prepaid phone cards, payroll and employee timekeeping, and even debit and ATM cards. They are programmed in a proprietary VeriFone TCL language (Terminal Control Language), which is unrelated to the Tool Command Language used in UNIX environments.
Point of sale companies
Embedded systems
Payment systems
Banking equipment
|
https://en.wikipedia.org/wiki/Cartan%E2%80%93Kuranishi%20prolongation%20theorem
|
Given an exterior differential system defined on a manifold M, the Cartan–Kuranishi prolongation theorem says that after a finite number of prolongations the system is either in involution (admits at least one 'large' integral manifold), or is impossible.
History
The theorem is named after Élie Cartan and Masatake Kuranishi.
Applications
This theorem is used in infinite-dimensional Lie theory.
See also
Cartan-Kähler theorem
References
M. Kuranishi, On É. Cartan's prolongation theorem of exterior differential systems, Amer. J. Math., vol. 79, 1957, p. 1–47
Partial differential equations
Theorems in analysis
|
https://en.wikipedia.org/wiki/Antique%20radio
|
An antique radio is a radio receiving set that is collectible because of its age and rarity.
Types of antique radio
Morse receivers
The first radio receivers used a coherer and sounding board, and were only able to receive continuous wave (CW) transmissions, encoded with Morse code (wireless telegraphy). Later, transmission and reception of speech (wireless telephony) became possible, although Morse code transmission continued in use until the 1990s.
All the following sections concern speech-capable radio, or wireless telephony.
Early home-made sets
The idea of radio as entertainment took off in 1920, with the opening of the first stations established specifically for broadcast to the public such as KDKA in Pittsburgh and WWJ in Detroit. More stations opened in cities across North America in the following years and radio ownership steadily gained in popularity. Radio sets from before 1920 are rarities, and are probably military artifacts. Sets made prior to approximately 1924 were usually made on wooden breadboards, in small cupboard style cabinets, or sometimes on an open sheet metal chassis. Homemade sets remained a strong sector of radio production until the early 1930s. Until then there were more homemade sets in use than commercial sets.
Early sets used any of the following technologies:
Crystal set
Crystal set with carbon or mechanical amplifier
Basic Tuned Radio Frequency (TRF) Sets
Reaction Sets
Super-Regenerative Receiver
Superheterodyne Receiver
Crystal sets
These basic radios used no battery, had no amplification and could operate only high-impedance headphones. They would receive only very strong signals from a local station. They were popular among the less wealthy due to their low build cost and zero run cost. Crystal sets had minimal ability to separate stations, and where more than one high power station was present, inability to receive one without the other was a common problem.
Some crystal set users added a carbon amplifier or a mecha
|
https://en.wikipedia.org/wiki/Comet%20assay
|
The single cell gel electrophoresis assay (SCGE, also known as comet assay) is an uncomplicated and sensitive technique for the detection of DNA damage at the level of the individual eukaryotic cell. It was first developed by Östling & Johansson in 1984 and later modified by Singh et al. in 1988. It has since increased in popularity as a standard technique for evaluation of DNA damage/repair, biomonitoring and genotoxicity testing. It involves the encapsulation of cells in a low-melting-point agarose suspension, lysis of the cells in neutral or alkaline (pH>13) conditions, and electrophoresis of the suspended lysed cells. The term "comet" refers to the pattern of DNA migration through the electrophoresis gel, which often resembles a comet.
The comet assay (single-cell gel electrophoresis) is a simple method for measuring deoxyribonucleic acid (DNA) strand breaks in eukaryotic cells. Cells embedded in agarose on a microscope slide are lysed with detergent and high salt to form nucleoids containing supercoiled loops of DNA linked to the nuclear matrix. Electrophoresis at high pH results in structures resembling comets, observed by fluorescence microscopy; the intensity of the comet tail relative to the head reflects the number of DNA breaks. The likely basis for this is that loops containing a break lose their supercoiling and become free to extend toward the anode. This is followed by visual analysis with staining of DNA and calculating fluorescence to determine the extent of DNA damage. This can be performed by manual scoring or automatically by imaging software.
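One widely reported metric from such image analysis is the percentage of total fluorescence found in the comet tail. A minimal sketch (the metric is standard; the intensity values are invented for illustration):

```python
# Sketch: the "% tail DNA" metric commonly reported by comet-assay
# scoring software, from head and tail fluorescence intensities.
def percent_tail_dna(head_intensity, tail_intensity):
    total = head_intensity + tail_intensity
    return 100.0 * tail_intensity / total

print(percent_tail_dna(head_intensity=8200, tail_intensity=1800))  # 18.0
```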
Procedure
Encapsulation
A sample of cells, either derived from an in vitro cell culture or from an in vivo test subject is dispersed into individual cells and suspended in molten low-melting-point agarose at 37 °C. This mono-suspension is cast on a microscope slide. A glass cover slip is held at an angle and the mono-suspension applied to the point of contact between the coverslip and the slide. As the c
|
https://en.wikipedia.org/wiki/Instruction%20step
|
An instruction step is a method of executing a computer program one step at a time to determine how it is functioning. This might be to determine if the correct program flow is being followed in the program during the execution or to see if variables are set to their correct values after a single step has completed.
Hardware instruction step
On earlier computers, a knob on the computer console may have enabled step-by-step execution mode to be selected and execution would then proceed by pressing a "single step" or "single cycle" button. Program status word / Memory or general purpose register read-out could then be accomplished by observing and noting the console lights.
Software instruction step
On later platforms with multiple users, this method was impractical and so single step execution had to be performed using software techniques.
Software techniques
Instrumentation - requiring code to be added during compile or assembly to achieve statement stepping. Code can be added manually to achieve similar results in interpretive languages such as JavaScript.
Instruction set simulation - requiring no code modifications for instruction or statement stepping
In some software products which facilitate debugging of High level languages, it is possible to execute an entire HLL statement at a time. This frequently involves many machine instructions and execution pauses after the last instruction in the sequence, ready for the next 'instruction' step. This requires integration with the compilation output to determine the scope of each statement.
Full instruction set simulators, however, could provide instruction stepping with or without any source, since they operate at machine code level, optionally providing full trace and debugging information to whatever higher level was available through such integration. In addition, they may also optionally allow stepping through each assembly (machine) instruction generated by a HLL statement.
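A toy sketch of the idea: a simulator that executes exactly one instruction per call and exposes machine state between steps (the three-instruction machine and the program are invented for illustration):

```python
# Toy instruction-set simulator with single stepping.
class Stepper:
    def __init__(self, program):
        self.program = program
        self.pc = 0                    # program counter
        self.acc = 0                   # accumulator

    def step(self):
        """Execute exactly one instruction, then pause."""
        op, arg = self.program[self.pc]
        if op == "LOAD":
            self.acc = arg
        elif op == "ADD":
            self.acc += arg
        elif op == "MUL":
            self.acc *= arg
        self.pc += 1
        return self.pc < len(self.program)   # False when program is done

sim = Stepper([("LOAD", 2), ("ADD", 3), ("MUL", 10)])
while True:
    running = sim.step()
    print(f"pc={sim.pc} acc={sim.acc}")      # inspect state after each step
    if not running:
        break
# pc=1 acc=2 / pc=2 acc=5 / pc=3 acc=50
```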
Programs composed of multiple 'modu
|
https://en.wikipedia.org/wiki/List%20of%20silicon%20producers
|
This is a list of silicon producers. The industry involves several very different stages of production. Production starts at silicon metal, which is the material used to obtain high purity silicon. High purity silicon in different grades of purity is used for growing silicon ingots, which are sliced into wafers in a process called wafering. Compositionally pure polycrystalline silicon wafers are useful for photovoltaics. Dislocation-free and extremely flat single-crystal silicon wafers are required in the manufacture of computer chips.
Polysilicon producers
Polysilicon producers:
Elkem
JFE Steel
Nitol Solar (Russia), bankrupt since 2019
SunEdison
SolarWorld
High-purity silicon
Producers of high-purity silicon, an intermediate in the manufacture of polysilicon
Hemlock Semiconductor Corporation
Renewable Energy Corporation (REC)
SunEdison
Tokuyama Corporation
Wacker Chemie AG
Silicon wafer manufacturers
A partial list of major producers of wafers (made of high purity silicon, mono- or polycrystalline) includes:
GlobalWafers
Okmetic
Renewable Energy Corporation
Shin-Etsu Handotai
Siltronic
SUMCO
WaferPro
Prolog Semicor
See also
List of photovoltaics companies
References
Silicon
|
https://en.wikipedia.org/wiki/6LoWPAN
|
6LoWPAN (acronym of "IPv6 over Low-Power Wireless Personal Area Networks") was a working group of the Internet Engineering Task Force (IETF).
It was created with the intention of applying the Internet Protocol (IP) even to the smallest devices, enabling low-power devices with limited processing capabilities to participate in the Internet of Things.
The 6LoWPAN group defined encapsulation, header compression, neighbor discovery and other mechanisms that allow IPv6 to operate over IEEE 802.15.4 based networks. Although IPv4 and IPv6 protocols do not generally care about the physical and MAC layers they operate over, the low power devices and small packet size defined by IEEE 802.15.4 make it desirable to adapt to these layers.
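The byte arithmetic behind this is short; the sketch below uses frame and overhead figures commonly cited from the base specification, treated here as assumptions:

```python
# Sketch: why header compression matters on IEEE 802.15.4.
# Byte-budget figures as commonly cited from the base specification;
# treated here as assumptions, not normative values.
FRAME = 127          # maximum physical-layer frame size, octets
MAC_OVERHEAD = 25    # maximum MAC header/footer
SECURITY = 21        # worst-case link-layer security overhead
IPV6_HEADER = 40
UDP_HEADER = 8

budget = FRAME - MAC_OVERHEAD - SECURITY         # 81 octets left
uncompressed_payload = budget - IPV6_HEADER - UDP_HEADER
print(budget, uncompressed_payload)              # 81 33
```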
The base specification developed by the 6LoWPAN IETF group is RFC 4944 (updated by RFC 6282 with header compression, RFC 6775 with neighbor discovery optimization, RFC 8931 with selective fragment recovery, and by smaller changes in other RFCs). The problem statement document is RFC 4919. IPv6 over Bluetooth Low Energy using 6LoWPAN techniques is described in RFC 7668.
Application areas
The targets for IPv6 networking over low-power radio are devices that need wireless connectivity to many other devices at low data rates and with very limited power consumption. One real-world example is Tado°'s individual room heating controllers. The header compression mechanisms in RFC 6282 are used to allow IPv6 packets to travel over such networks.
IPv6 is also in use on the smart grid, enabling smart meters and other devices to build a micro mesh network before sending the data back to the billing system using the IPv6 backbone. Some of these networks run over IEEE 802.15.4 radios, and therefore use the header compression and fragmentation as specified by RFC 6282.
Thread
Thread is a standard from a group of more than fifty companies for a protocol running over 6LoWPAN to enable home automation. The specification is available at no cost, but paid membership is required to implement the p
|
https://en.wikipedia.org/wiki/Maze%20runner
|
In electronic design automation, maze runner is a connection routing method that represents the entire routing space as a grid. Parts of this grid are blocked by components, specialised areas, or already present wiring. The grid size corresponds to the wiring pitch of the area. The goal is to find a chain of grid cells that goes from point A to point B.
A maze runner may use the Lee algorithm. It uses a wave propagation style (a wave is the set of all cells that can be reached in n steps) throughout the routing space. The wave stops when the target is reached, and the path is determined by backtracking through the cells.
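A minimal sketch of this wave-propagation-and-backtrack scheme on a toy grid (the grid and coordinates are invented; real routers also track layers, costs, and vias):

```python
# Sketch of Lee-style wave propagation: expand a BFS wave from A until
# B is reached, then backtrack through decreasing wave numbers.
from collections import deque

def lee_route(grid, start, goal):
    """grid[r][c] == 1 marks a blocked cell; returns a list of cells."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    frontier = deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            break
        r, c = cell
        for nr, nc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[cell] + 1
                frontier.append((nr, nc))
    if goal not in dist:
        return None                    # blocked: no route exists
    # Backtrack from the target through strictly decreasing wave numbers.
    path = [goal]
    while path[-1] != start:
        r, c = path[-1]
        for nr, nc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1)):
            if dist.get((nr, nc)) == dist[path[-1]] - 1:
                path.append((nr, nc))
                break
    return path[::-1]

grid = [[0, 0, 0],
        [1, 1, 0],     # 1 = cell blocked by a component or wiring
        [0, 0, 0]]
print(lee_route(grid, (0, 0), (2, 0)))
# [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```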
See also
Autorouter
References
Electronic engineering
Electronic design automation
Electronics optimization
|
https://en.wikipedia.org/wiki/Active%20appearance%20model
|
An active appearance model (AAM) is a computer vision algorithm for matching a statistical model of object shape and appearance to a new image. They are built during a training phase. A set of images, together with coordinates of landmarks that appear in all of the images, is provided to the training supervisor.
The model was first introduced by Edwards, Cootes and Taylor in the context of face analysis at the 3rd International Conference on Face and Gesture Recognition, 1998. Cootes, Edwards and Taylor further described the approach as a general method in computer vision at the European Conference on Computer Vision in the same year. The approach is widely used for matching and tracking faces and for medical image interpretation.
The algorithm uses the difference between the current estimate of appearance and the target image to drive an optimization process.
By taking advantage of least-squares techniques, it can match to new images very swiftly.
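A heavily simplified numerical sketch of that driving idea, with a synthetic linear appearance model in place of a real trained one (everything here is invented for illustration):

```python
# Sketch of an AAM-style update rule: learn a linear map from
# appearance residuals to parameter corrections offline, then iterate
# "p -= residual(p) @ R" to match a target. The model is synthetic.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 3))           # synthetic appearance basis
true_p = np.array([1.0, -2.0, 0.5])
target = A @ true_p                    # "image" to match

def residual(p):
    return A @ p - target

# Training: regress parameter perturbations against the residuals
# they produce.
dP = rng.normal(scale=0.1, size=(200, 3))
dI = dP @ A.T                          # residual caused by each perturbation
R, *_ = np.linalg.lstsq(dI, dP, rcond=None)

p = np.zeros(3)
for _ in range(10):
    p = p - residual(p) @ R            # one matching iteration
print(np.round(p, 3))                  # approaches [ 1.  -2.   0.5]
```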
It is related to the active shape model (ASM). One disadvantage of ASM is that it only uses shape constraints (together with some information about the image structure near the landmarks), and does not take advantage of all the available information – the texture across the target object. This can be modelled using an AAM.
References
Some reading
T. F. Cootes, C. J. Taylor, D. H. Cooper, and J. Graham. Training models of shape from sets of examples. In Proceedings of BMVC'92, pages 266–275, 1992
S. C. Mitchell, J. G. Bosch, B. P. F. Lelieveldt, R. J. van der Geest, J. H. C. Reiber, and M. Sonka. 3-d active appearance models: Segmentation of cardiac MR and ultrasound images. IEEE Trans. Med. Imaging, 21(9):1167–1178, 2002
T. F. Cootes, G. J. Edwards, and C. J. Taylor. Active appearance models. ECCV, 2:484–498, 1998
External links
Professor Tim Cootes AAM Code Free Tools for experimenting with AAMs from Manchester University (for research use only).
Professor Tim Cootes AAM Page Co-creator of AAM page f
|
https://en.wikipedia.org/wiki/Oleanane
|
Oleanane is a natural triterpenoid. It is commonly found in woody angiosperms and as a result is often used as an indicator of these plants in the fossil record. It is a member of the oleanoid series, which consists of pentacyclic triterpenoids (such as beta-amyrin and taraxerol) where all rings are six-membered.
Structure
Oleanane is a pentacyclic triterpenoid, a class of molecules made up of six connected isoprene units. The naming of both the ring structures and individual carbon atoms in oleanane is the same as in steroids. As such, it consists of an A, B, C, D, and E ring, all of which are six-membered rings.
The oleanane structure contains a number of methyl groups, and the orientation of its substituents varies between different oleananes. For example, 18α(H)-oleanane has a downward-facing hydrogen atom at carbon 18, while 18β(H)-oleanane has an upward-facing hydrogen at the same position.
The A and B rings of the oleanane structure are identical to those of hopane. As a result, both molecules produce a fragment at m/z 191. Because this fragment is often used to identify hopanes, oleanane can be misidentified in hopane analysis.
Synthesis
Like other triterpenoids, oleananes are formed from six combined isoprene units. These isoprene units can be joined via a number of different pathways; in eukaryotes (including plants), this is the mevalonate (MVA) pathway. For the formation of steroids and other triterpenoids, the isoprene units are first combined into a precursor known as squalene, which then undergoes enzymatic cyclization to produce the various triterpenoids, including oleanane.
Once the oleananes have been transported into rocks or sediments they will undergo further alteration before they are measured.
Measurement in rock samples
Oleananes can be identified in extracts from rock samples (or plants) using GC/MS, a gas chromatograph coupled to a mass spectrometer. The sample is first injected into the system, then run through the gas chromatograph, which separates the compounds before they are ionized and fragmented in the mass spectrometer.
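Identification is then typically done on the m/z 191 extracted-ion chromatogram, and a common derived quantity is the oleanane index (in one common definition, oleanane divided by oleanane plus C30 hopane peak areas). The Python sketch below illustrates both steps under simple assumptions: scans arrive as (retention time, m/z array, intensity array) tuples, and the peak areas have already been integrated; all names are illustrative.

def extract_ion_chromatogram(scans, target_mz=191.0, tol=0.5):
    """Build an extracted-ion chromatogram (XIC) from GC/MS scans.

    scans: iterable of (retention_time, mz_values, intensities) tuples.
    Returns (times, trace), where trace holds, per scan, the summed
    intensity of all peaks within tol of target_mz.
    """
    times, trace = [], []
    for rt, mzs, ints in scans:
        times.append(rt)
        trace.append(sum(i for m, i in zip(mzs, ints)
                         if abs(m - target_mz) <= tol))
    return times, trace

def oleanane_index(oleanane_area, c30_hopane_area):
    """Oleanane index from integrated m/z 191 peak areas.

    Returns oleanane / (oleanane + C30 hopane), used in petroleum
    geochemistry as an indicator of angiosperm input.
    """
    total = oleanane_area + c30_hopane_area
    if total == 0:
        raise ValueError("no peak area measured for either compound")
    return oleanane_area / total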
|
https://en.wikipedia.org/wiki/Academic%20Dictionary%20of%20Lithuanian
|
The Academic Dictionary of Lithuanian is a comprehensive thesaurus of the Lithuanian language and one of the most extensive lexicographical works in the world. Its 20 volumes, encompassing 22,000 pages, were published between 1941 and 2002 by the Institute of the Lithuanian Language. Online and CD versions were made available in 2005. It contains about 236,000 headwords, or 500,000 if counting sub-headwords, reflecting both modern and historical language, drawn from texts published between the first Lithuanian book in 1547 and 2001 as well as from the recorded vernacular. Definitions, usage notes, and examples are given for most words. The entry length varies from one sentence to almost a hundred pages; for example, 46 pages are devoted to the 298 different meanings of taisyti (to fix) and its derivatives.
History
Lithuanian philologist Kazimieras Būga started collecting material for a dictionary in 1902. When he returned from Russia to Lithuania in 1920, he started writing a dictionary that would contain all known Lithuanian words as well as hydronyms, toponyms, and surnames. However, he died in 1924, having published only two fascicules with a lengthy introduction and the dictionary up to the word anga. Būga attempted to write down everything that was known to science about each word, including its etymology and history. Critical of his own efforts, he realized that the dictionary was not comprehensive or consistent, and considered the publication to be only a "draft" of a better dictionary in the future.
Būga collected about 600,000 index cards with words, but Juozas Balčikonis, who was selected by the Ministry of Education to continue the work on the dictionary in 1930, realized that more data was needed and organized a campaign to collect additional words from literary works as well as the spoken language. The focus was on older texts, mostly ignoring contemporary literature and periodicals. Balčikonis asked the Lithuanian public (teachers, students, etc.) to record words fro
|
https://en.wikipedia.org/wiki/ORiNOCO
|
ORiNOCO was the brand name for a family of wireless networking technology by Proxim Wireless (previously Lucent). These integrated circuits (codenamed Hermes) provide wireless connectivity for 802.11-compliant Wireless LANs.
Variants
Lucent offered several variants of the PC Card, referred to by different color-based monikers:
White/Bronze: WaveLAN IEEE Standard 2 Mbit/s PC Cards with 802.11 support.
Silver: WaveLAN IEEE Turbo 11 Mbit/s PC Cards with 802.11b and 64-bit WEP support.
Gold: WaveLAN IEEE Turbo 11 Mbit/s PC Cards with 802.11b and 128-bit WEP support.
Later models dropped the 'Turbo' moniker due to 802.11b 11 Mbit/s becoming widespread.
Proxim, after taking over Lucent's wireless division, rebranded all of its wireless cards as ORiNOCO, even cards not based on Lucent/Agere's Hermes chipset. Proxim still offers ORiNOCO-based cards under the 'Classic' brand.
Rebranded products
The WaveLAN chipsets that power ORiNOCO-branded cards were commonly used in other wireless networking devices, and are compatible with a number of other access points, routers, and wireless cards. The following brands and models utilise the chipset, or are rebrands of an ORiNOCO product:
3Com AirConnect
Apple AirPort and AirMac cards (original only, not AirPort Extreme). Modified to remove the antenna stub.
AVAYA World Card
Cabletron RoamAbout 802.11 DS
Compaq WL100 11 Mbit/s Wireless Adapter
D-Link DWL-650
ELSA AirLancer MC-11
Enterasys RoamAbout
Ericsson WLAN Card C11
Farallon SkyLINE
Fujitsu RoomWave
HyperLink Wireless PC Card 11Mbit/s
Intel PRO/Wireless 2011
Lucent Technologies WaveLAN/IEEE Orinoco
Melco WLI-PCM-L11
Microsoft Wireless Notebook Adapter MN-520
NCR WaveLAN/IEEE Adapter
Proxim LAN PC CARD HARMONY 80211B
Samsung 11Mbit/s WLAN Card
Symbol LA4111 Spectrum24 Wireless LAN PC Card
Toshiba Wireless LAN Mini PCI Card
Preferred wireless chipset for wardriving
ORiNOCO cards (and their derivatives) are preferred by wardrivers, due to their high