source | text |
|---|---|
https://en.wikipedia.org/wiki/Time-scale%20calculus | In mathematics, time-scale calculus is a unification of the theory of difference equations with that of differential equations, unifying integral and differential calculus with the calculus of finite differences, offering a formalism for studying hybrid systems. It has applications in any field that requires simultaneous modelling of discrete and continuous data. It gives a new definition of a derivative such that if one differentiates a function defined on the real numbers then the definition is equivalent to standard differentiation, but if one uses a function defined on the integers then it is equivalent to the forward difference operator.
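For orientation, here is a minimal sketch of that unified derivative in standard time-scale notation (the symbols σ for the forward jump operator and μ(t) = σ(t) − t for the graininess follow the usual literature and are not defined in the excerpt above):

```latex
% Delta (Hilger) derivative of f at t on a time scale T
f^{\Delta}(t) =
\begin{cases}
\displaystyle \lim_{s \to t} \frac{f(t) - f(s)}{t - s}, & \mu(t) = 0 \quad (\text{e.g. } T = \mathbb{R}\text{, recovering } f'(t)), \\[2ex]
\dfrac{f(\sigma(t)) - f(t)}{\mu(t)}, & \mu(t) > 0 \quad (\text{e.g. } T = \mathbb{Z}\text{, recovering } \Delta f(t) = f(t+1) - f(t)).
\end{cases}
```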
History
Time-scale calculus was introduced in 1988 by the German mathematician Stefan Hilger. However, similar ideas have been used before and go back at least to the introduction of the Riemann–Stieltjes integral, which unifies sums and integrals.
Dynamic equations
Many results concerning differential equations carry over quite easily to corresponding results for difference equations, while other results seem to be completely different from their continuous counterparts. The study of dynamic equations on time scales reveals such discrepancies, and helps avoid proving results twice—once for differential equations and once again for difference equations. The general idea is to prove a result for a dynamic equation where the domain of the unknown function is a so-called time scale (also known as a time-set), which may be an arbitrary closed subset of the reals. In this way, results apply not only to the set of real numbers or set of integers but to more general time scales such as a Cantor set.
The three most popular examples of calculus on time scales are differential calculus, difference calculus, and quantum calculus. Dynamic equations on a time scale have a potential for applications such as in population dynamics. For example, they can model insect populations that evolve continuously while in season, die out in winter |
https://en.wikipedia.org/wiki/Skywave | In radio communication, skywave or skip refers to the propagation of radio waves reflected or refracted back toward Earth from the ionosphere, an electrically charged layer of the upper atmosphere. Since it is not limited by the curvature of the Earth, skywave propagation can be used to communicate beyond the horizon, at intercontinental distances. It is mostly used in the shortwave frequency bands.
As a result of skywave propagation, a signal from a distant AM broadcasting station, a shortwave station, or – during sporadic E propagation conditions (principally during the summer months in both hemispheres) – a distant VHF FM or TV station can sometimes be received as clearly as local stations. Most long-distance shortwave (high frequency) radio communication – between 3 and 30 MHz – is a result of skywave propagation. Since the early 1920s amateur radio operators (or "hams"), limited to lower transmitter power than broadcast stations, have taken advantage of skywave for long-distance (or "DX") communication.
Skywave propagation is distinct from line-of-sight propagation, in which radio waves travel in a straight line, and from non-line-of-sight propagation.
Local and distant skywave propagation
Skywave transmissions can be used for long-distance communications (DX) by waves directed at a low angle as well as relatively local communications via nearly vertically directed waves (near vertical incidence skywaves – NVIS).
Low-angle skywaves
The ionosphere is a region of the upper atmosphere, from about 80 km (50 miles) to 1000 km (600 miles) in altitude, where neutral air is ionized by solar photons, solar particles, and cosmic rays. When high-frequency signals enter the ionosphere at a low angle they are bent back towards the Earth by the ionized layer. If the peak ionization is strong enough for the chosen frequency, a wave will exit the bottom of the layer earthwards – as if obliquely reflected from a mirror. Earth's surface (ground or water) then reflects |
https://en.wikipedia.org/wiki/Ram%20air%20turbine | A ram air turbine (RAT) is a small wind turbine that is connected to a hydraulic pump, or electrical generator, installed in an aircraft and used as a power source. The RAT generates power from the airstream by ram pressure due to the speed of the aircraft. It may be called an air driven generator (ADG) on some aircraft.
Operation
Modern aircraft generally use RATs only in an emergency. In case of the loss of both primary and auxiliary power sources, the RAT will power vital systems (flight controls, linked hydraulics and also flight-critical instrumentation). Some RATs produce only hydraulic power, which is in turn used to power electrical generators.
In some early aircraft (including airships), small RATs were permanently mounted and operated a small electrical generator or fuel pump. Some constant-speed propellers, such as those of the Argus As 410 engines used in the Focke-Wulf Fw 189, used a propeller turbine on the spinner to power a self-contained pitch governor controlling this constant speed.
Modern aircraft generate power in the main engines or an additional fuel-burning turbine engine called an auxiliary power unit, which is often mounted in the rear of the fuselage or in the main-wheel well. The RAT generates power from the airstream due to the speed of the aircraft. If aircraft speeds are low, the RAT will produce less power. In normal conditions the RAT is retracted into the fuselage (or wing), and is deployed manually or automatically following complete loss of power. In the time between power loss and RAT deployment, batteries are used.
Military use
RATs are common in military aircraft, which must be capable of surviving sudden and complete loss of power.
They also power pod-fitted systems such as the M61A1 Vulcan cannon. Some free-fall nuclear weapons, such as the British Yellow Sun and Red Beard, used RATs to power radar altimeters and firing circuits; these were a more reliable alternative to batteries.
Wing mount
High-powered electronics |
https://en.wikipedia.org/wiki/Module%20%28mathematics%29 | In mathematics, a module is a generalization of the notion of vector space in which the field of scalars is replaced by a ring. The concept of module generalizes also the notion of abelian group, since the abelian groups are exactly the modules over the ring of integers.
Like a vector space, a module is an additive abelian group, and scalar multiplication is distributive over the operation of addition between elements of the ring or module and is compatible with the ring multiplication.
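For concreteness, the distributivity and compatibility conditions mentioned above can be written out; this is the standard axiom list for a left module M over a ring R with identity (the notation r, s ∈ R and x, y ∈ M is chosen here, not taken from the excerpt):

```latex
% Axioms for a left R-module M: for all r, s \in R and x, y \in M
r \cdot (x + y) = r \cdot x + r \cdot y, \qquad
(r + s) \cdot x = r \cdot x + s \cdot x, \qquad
(r s) \cdot x = r \cdot (s \cdot x), \qquad
1 \cdot x = x.
```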
Modules are very closely related to the representation theory of groups. They are also one of the central notions of commutative algebra and homological algebra, and are used widely in algebraic geometry and algebraic topology.
Introduction and definition
Motivation
In a vector space, the set of scalars is a field and acts on the vectors by scalar multiplication, subject to certain axioms such as the distributive law. In a module, the scalars need only be a ring, so the module concept represents a significant generalization. In commutative algebra, both ideals and quotient rings are modules, so that many arguments about ideals or quotient rings can be combined into a single argument about modules. In non-commutative algebra, the distinction between left ideals, ideals, and modules becomes more pronounced, though some ring-theoretic conditions can be expressed either about left ideals or left modules.
Much of the theory of modules consists of extending as many of the desirable properties of vector spaces as possible to the realm of modules over a "well-behaved" ring, such as a principal ideal domain. However, modules can be quite a bit more complicated than vector spaces; for instance, not all modules have a basis, and even those that do, free modules, need not have a unique rank if the underlying ring does not satisfy the invariant basis number condition, unlike vector spaces, which always have a (possibly infinite) basis whose cardinality is then unique. (These last two ass |
https://en.wikipedia.org/wiki/SAP | SAP SE is a German multinational software company based in Walldorf, Baden-Württemberg. It develops enterprise software to manage business operations and customer relations. The company is the world's leading enterprise resource planning (ERP) software vendor. SAP is the largest non-American software company by revenue and the world's third-largest publicly traded software company by revenue. Apart from ERP software, the company also sells database software and technology (particularly its own brands), cloud-engineered systems, and other ERP software products, such as human capital management (HCM) software, customer relationship management (CRM) software (also known as customer experience), enterprise performance management (EPM) software, product lifecycle management (PLM) software, supplier relationship management (SRM) software, supply chain management (SCM) software, business technology platform (BTP) software and programming environment SAP AppGyver for business.
Historical references include Systems, Applications, and Products in Data Processing, SAP AG and SAP SE.
Company overview
The company was founded in 1972 as a private partnership (SAP GbR), which in 1981 fully became SAP GmbH after a five-year transition period beginning in 1976. In 2005, it further restructured itself as SAP AG. Since 7 July 2014, its corporate structure has been that of a pan-European societas Europaea (SE); as such, its former German corporate identity is now a subsidiary, SAP Deutschland SE & Co. KG.
SAP is headquartered in Walldorf, Baden-Württemberg, Germany, with regional offices in 180 countries. The company has over 111,961 employees in over 180 countries and is a component of the Euro Stoxx 50 stock market index.
History
Formation
When Xerox exited the computer hardware manufacturing industry in 1971, it asked IBM to migrate its business systems to IBM technology. As part of IBM's compensation for the migration, IBM was given the rights to the Scientific Data Systems (SDS) |
https://en.wikipedia.org/wiki/Second%20audio%20program | Second audio program (SAP), also known as secondary audio programming, is an auxiliary audio channel for analog television that can be broadcast or transmitted both over-the-air and by cable television. Used mostly for audio description or other languages, SAP is part of the multichannel television sound (MTS) standard originally set by the National Television Systems Committee (NTSC) in 1984 in the United States. The NTSC video format and MTS are also used in Canada and Mexico.
Usage
SAP is often used to provide audio tracks in languages other than the native language included in the program. In the United States, this is sometimes used for Spanish-language audio (especially during sports telecasts), often leading to the function being referred to facetiously as the "Spanish audio program". Likewise, some Spanish-language programs may, in rare cases, offer English on SAP. Some stations may relay NOAA Weather Radio services, or, particularly in the case of PBS stations, a local National Public Radio (NPR) sister station, on the audio channel when SAP is not being used. In Canada, parliamentary and public affairs channel CPAC similarly uses SAP to carry both English and French-language audio.
SAP is also a means of distribution for audio description of programs for the visually impaired. Under the Twenty-First Century Communications and Video Accessibility Act of 2010, top U.S. television networks and cable networks have been gradually required to broadcast quotas of audio-described programming per quarter. Since May 26, 2015, broadcasters have been required under the Act to provide dictations on SAP of any "emergency information" displayed in a textual format outside of the Emergency Alert System and newscasts.
Frequencies
MTS features, including stereo and SAP, travel on subcarriers of the video carrier, much like color for television. It is not carried on the audio carrier in the manner of stereo sound for an FM radio broadcast, however, as it only has a freque |
https://en.wikipedia.org/wiki/Windward%20and%20leeward | In geography and seamanship, windward () and leeward () are directions relative to the wind. Windward is upwind from the point of reference, i.e., towards the direction from which the wind is coming; leeward is downwind from the point of reference, i.e., along the direction towards which the wind is going.
The side of a ship that is towards the leeward is its "lee side". If the vessel is heeling under the pressure of crosswind, the lee side will be the "lower side". During the Age of Sail, the term weather was used as a synonym for windward in some contexts, as in the weather gage.
Since it captures rainfall, the windward side of a mountain tends to be wetter than the leeward side it blocks. The drier leeward area is said to be in a rain shadow.
Origin
The term "lee" comes from the middle-low German word // meaning "where the sea is not exposed to the wind" or "mild". The terms Luv and Lee (engl. Windward and Leeward) have been in use since the 17th century.
Usage
Windward and leeward directions (and the points of sail they create) are important factors to consider in such wind-powered or wind-impacted activities as sailing, wind-surfing, gliding, hang-gliding, and parachuting. Other terms with broadly the same meaning are widely used, particularly upwind and downwind.
Nautical
Among sailing craft, the windward vessel is normally the more maneuverable. For this reason, rule 12 of the International Regulations for Preventing Collisions at Sea, applying to sailing vessels, stipulates that where two are sailing in similar directions in relation to the wind, the windward vessel gives way to the leeward vessel.
Naval warfare
In naval warfare during the Age of Sail, a vessel always sought to use the wind to its advantage, maneuvering if possible to attack from windward. This was particularly important for less maneuverable square-rigged warships, which had limited ability to sail upwind, and sought to "hold the weather gage" entering battle.
This was particula |
https://en.wikipedia.org/wiki/Extreme%20value%20theorem | In calculus, the extreme value theorem states that if a real-valued function f is continuous on the closed interval [a, b], then f must attain a maximum and a minimum, each at least once. That is, there exist numbers c and d in [a, b] such that: f(c) ≥ f(x) ≥ f(d) for all x in [a, b].
The extreme value theorem is more specific than the related boundedness theorem, which states merely that a continuous function f on the closed interval [a, b] is bounded on that interval; that is, there exist real numbers m and M such that: m ≤ f(x) ≤ M for all x in [a, b].
This does not say that M and m are necessarily the maximum and minimum values of f on the interval [a, b], which is what the extreme value theorem stipulates must also be the case.
The extreme value theorem is used to prove Rolle's theorem. In a formulation due to Karl Weierstrass, this theorem states that a continuous function from a non-empty compact space to a subset of the real numbers attains a maximum and a minimum.
History
The extreme value theorem was originally proven by Bernard Bolzano in the 1830s in a work Function Theory but the work remained unpublished until 1930. Bolzano's proof consisted of showing that a continuous function on a closed interval was bounded, and then showing that the function attained a maximum and a minimum value. Both proofs involved what is known today as the Bolzano–Weierstrass theorem.
Functions to which the theorem does not apply
The following examples show why the function domain must be closed and bounded in order for the theorem to apply. Each fails to attain a maximum on the given interval.
f(x) = x defined over [0, ∞) is not bounded from above.
f(x) = x/(1 + x) defined over [0, ∞) is bounded but does not attain its least upper bound 1.
f(x) = 1/x defined over (0, 1] is not bounded from above.
f(x) = 1 − x defined over (0, 1] is bounded but never attains its least upper bound 1.
Defining f(0) = 0 in the last two examples shows that both theorems require continuity on [a, b].
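As a quick numerical illustration (a sketch; the function and interval below are arbitrary choices, not from the article), densely sampling a continuous function on a closed, bounded interval exhibits points where the maximum and minimum guaranteed by the theorem are attained:

```python
import math

def grid_extrema(f, a, b, n=100_000):
    """Approximate the maximum and minimum of a continuous f on the closed interval [a, b]
    by evaluating it on a fine grid; the extreme value theorem guarantees both are attained."""
    xs = [a + (b - a) * k / n for k in range(n + 1)]
    values = [(f(x), x) for x in xs]
    (f_max, x_max), (f_min, x_min) = max(values), min(values)
    return (x_max, f_max), (x_min, f_min)

# Example: f(x) = x * sin(x) on the closed interval [0, 10]
(x_max, f_max), (x_min, f_min) = grid_extrema(lambda x: x * math.sin(x), 0.0, 10.0)
print(f"max ≈ {f_max:.4f} at x ≈ {x_max:.4f}; min ≈ {f_min:.4f} at x ≈ {x_min:.4f}")
```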
Generalization to metric and topological spaces
When moving from the real line to metric spaces and general topological spaces, the appropriate generalization of a close |
https://en.wikipedia.org/wiki/Uniqueness%20quantification | In mathematics and logic, the term "uniqueness" refers to the property of being the one and only object satisfying a certain condition. This sort of quantification is known as uniqueness quantification or unique existential quantification, and is often denoted with the symbols "∃!" or "∃=1". For example, the formal statement ∃!n ∈ ℕ (n − 2 = 4)
may be read as "there is exactly one natural number n such that n − 2 = 4".
Proving uniqueness
The most common technique to prove the unique existence of a certain object is to first prove the existence of the entity with the desired condition, and then to prove that any two such entities (say, a and b) must be equal to each other (i.e. a = b).
For example, to show that the equation x + 2 = 5 has exactly one solution, one would first start by establishing that at least one solution exists, namely 3; the proof of this part is simply the verification that the equation below holds: 3 + 2 = 5.
To establish the uniqueness of the solution, one would then proceed by assuming that there are two solutions, namely a and b, satisfying x + 2 = 5. That is, a + 2 = 5 and b + 2 = 5.
Then since equality is a transitive relation, a + 2 = b + 2.
Subtracting 2 from both sides then yields a = b,
which completes the proof that 3 is the unique solution of x + 2 = 5.
In general, both existence (there exists at least one object) and uniqueness (there exists at most one object) must be proven, in order to conclude that there exists exactly one object satisfying a said condition.
An alternative way to prove uniqueness is to prove that there exists an object a satisfying the condition, and then to prove that every object satisfying the condition must be equal to a.
Reduction to ordinary existential and universal quantification
Uniqueness quantification can be expressed in terms of the existential and universal quantifiers of predicate logic, by defining the formula ∃!x P(x) to mean ∃x (P(x) ∧ ¬∃y (P(y) ∧ y ≠ x)),
which is logically equivalent to ∃x (P(x) ∧ ∀y (P(y) → y = x)).
An equivalent definition that separates the notions of existence and uniqueness into two clauses, at the expense of brevity, is ∃x P(x) ∧ ∀y ∀z ((P(y) ∧ P(z)) → y = z).
Another equivalent defin |
https://en.wikipedia.org/wiki/Division%20sign | The division sign (÷) is a symbol consisting of a short horizontal line with a dot above and another dot below, used in Anglophone countries to indicate mathematical division. This usage, though widespread in some countries, is not universal and the symbol has a different meaning in other countries. Its use to denote division is not recommended in the ISO 80000-2 standard for mathematical notation.
In mathematics
The obelus, a historical glyph consisting of a horizontal line with (or without) one or more dots, was first used as a symbol for division in 1659, in the algebra book Teutsche Algebra by Johann Rahn, although previous writers had used the same symbol for subtraction. Some near-contemporaries believed that John Pell, who edited the book, may have been responsible for this use of the symbol. Other symbols for division include the slash or solidus (/), the colon (:), and the fraction bar (the horizontal bar in a vertical fraction). The ISO 80000-2 standard for mathematical notation recommends only the solidus or "fraction bar" for division, or the "colon" for ratios; it says that the sign ÷ "should not be used" for division.
In Italy, Poland and Russia, the sign was sometimes used to denote a range of values, and in Scandinavian countries it was used as a negation sign.
The same symbol has been used to represent subtraction in north-eastern Europe: the Unicode Consortium has allocated a separate code point uniquely for this usage; the exact form of the symbol displayed is typeface (font) dependent.
In computer systems
Encoding
The symbol was assigned to code point 0xF7 in ISO 8859-1, as the "division sign". This encoding was transferred to Unicode as U+00F7. In HTML, it can be encoded as the named entity &divide; (introduced at HTML level 3.2), or as the numeric character references &#247; or &#xF7;.
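A small sketch (Python is used here purely for illustration; it is not part of the article) showing the relationships just described between the character, its ISO 8859-1 / Unicode code point, and its HTML encodings:

```python
import html

sign = "\u00F7"                       # U+00F7 DIVISION SIGN
print(sign)                           # ÷
print(hex(ord(sign)))                 # 0xf7 – the same code point as in ISO 8859-1
print(sign.encode("latin-1"))         # b'\xf7' – the single byte 0xF7 in ISO 8859-1
print(sign.encode("utf-8"))           # b'\xc3\xb7' – two bytes in UTF-8
print(html.unescape("&divide; &#247; &#xF7;"))  # all three HTML encodings decode to ÷
```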
Keyboard entry
In Microsoft Windows, this division sign is produced with Alt+0247 (or 246 with no zero) on the number pad, or by pressing the corresponding key when an appropriate keyboard layout is in use. In classic Mac OS and macOS, it is produced with Option+/.
On UNIX-based |
https://en.wikipedia.org/wiki/Studio%20transmitter%20link | A studio transmitter link (or STL) sends a radio station's or television station's audio and video from the broadcast studio or origination facility to a radio transmitter, television transmitter or uplink facility in another location. This is accomplished through the use of terrestrial microwave links or by using fiber optic or other telecommunication connections to the transmitter site.
This is often necessary because the best locations for an antenna are on top of a mountain, where a much shorter radio tower is required, but where locating a studio may be impractical. Even in flat regions, the center of the station's allowed coverage area may not be near the studio location or may lie within a populated area where a transmitter would be frowned upon by the community, so the antenna must be placed at a distance from the studio.
Depending on the locations that must be connected, a station may choose either a point to point (PTP) link on another special radio frequency, or a newer all-digital wired link via a dedicated data transmission circuit. Radio links can also be digital, or the older analog type, or a hybrid of the two. Even on older all-analog systems, multiple audio and data channels can be sent using subcarriers.
Stations that employ an STL usually also have a transmitter/studio link (TSL) to return telemetry information. Both the STL and TSL are considered broadcast auxiliary services (BAS).
Transmitter/studio link
The transmitter/studio link (or TSL) of a radio station or television station is a return link which sends telemetry data from the remotely located radio transmitter or television transmitter back to the studio for monitoring purposes. The TSL may return the same way as the STL, or it can be embedded in the station's regular broadcast signal as a subcarrier (for analog stations) or a separate data channel (for digital stations).
Analog or digital data such as transmitter power, temperature, VSWR, voltage, modulation level, and other |
https://en.wikipedia.org/wiki/Multichannel%20Television%20Sound | Multichannel Television Sound, better known as MTS, is the method of encoding three additional audio channels into analog 4.5 MHz audio carriers on System M and System N. It was developed by the Broadcast Television Systems Committee, an industry group, and sometimes known as BTSC as a result.
MTS worked by adding additional audio signals in otherwise empty portions of the television signal. MTS allowed up to a total of four audio channels. Normally two were broadcast to produce the left and right stereo channels. An additional second audio program (SAP) could be used to broadcast other languages or entirely different audio like weather alerts that could be accessed by the user, typically through a button on their remote control. The fourth channel, PRO, was only used by the broadcasters.
History
Initial work on design and testing of a stereophonic audio system began in 1975 when Telesonics approached Chicago public television station WTTW. WTTW was producing a music show titled Soundstage at that time, and was simulcasting the stereo audio mix on local FM stations. Telesonics offered a way to send the same stereo audio over the existing television signals, thereby removing the need for the FM simulcast.
Telesonics and WTTW formed a working relationship and began developing the system which was similar to FM stereo modulation. Twelve WTTW studio and transmitter engineers added the needed broadcast experience to the relationship. The Telesonics system was tested and refined using the WTTW transmitter facilities on the Sears Tower.
In 1979, WTTW had installed a stereo Grass Valley master control switcher and had added a second audio channel to the microwave STL (Studio Transmitter Link). By that time, WTTW engineers had further developed stereo audio on videotape recorders in their plant, using split audio track heads manufactured to their specifications, outboard record electronics, and Dolby noise reduction that allowed Soundstage to be recorded and electronic |
https://en.wikipedia.org/wiki/Combat%20engineer | A combat engineer (also called pioneer or sapper) is a type of soldier who performs military engineering tasks in support of land forces combat operations. Combat engineers perform a variety of military engineering, tunnel and mine warfare tasks, as well as construction and demolition duties in and out of combat zones.
Combat engineers facilitate the mobility of friendly forces while impeding that of the enemy. They also work to assure the survivability of friendly forces, building fighting positions, fortifications, and roads. They conduct demolition missions and clear minefields manually or through use of specialized vehicles. Common combat engineer missions include construction and breaching of trenches, tank traps and other obstacles and fortifications; obstacle emplacement and bunker construction; route clearance and reconnaissance; bridge and road construction or destruction; emplacement and clearance of land mines; and combined arms breaching. Typically, combat engineers are also trained in infantry tactics and, when required, serve as provisional infantry.
Combat engineer organization
Combat engineers play a key role in all armed forces of the world. They are invariably found closely integrated into the force structure of divisions, combat brigades, and smaller fighting units.
Combat support formations
In many countries, combat engineers provide combat support as members of a broader military engineering corps or branch. Other nations have distinct combat engineering corps or branches; they are separate from other types of military engineers. The Danish military engineers' corps, for example, is almost entirely organized into one regiment of combat engineers, simply named Ingeniørregimentet ("The Engineering Regiment").
Combat arms formations
Combat engineer battalions are usually a part of a brigade combat team. During the War in Afghanistan and the 2003–2011 Iraq War, the U.S. Army tasked its combat engineers with route clearance missions designed to c |
https://en.wikipedia.org/wiki/Superrationality | In economics and game theory, a participant is considered to have superrationality (or renormalized rationality) if they have perfect rationality (and thus maximize their utility) but assume that all other players are superrational too and that a superrational individual will always come up with the same strategy as any other superrational thinker when facing the same problem. Applying this definition, a superrational player playing against a superrational opponent in a prisoner's dilemma will cooperate while a rationally self-interested player would defect.
This decision rule is not a mainstream model within game theory and was suggested by Douglas Hofstadter in his article, series, and book Metamagical Themas as an alternative type of rational decision making different from the widely accepted game-theoretic one. Superrationality is a form of Immanuel Kant's categorical imperative, and is closely related to the concept of Kantian equilibrium proposed by the economist and analytic Marxist John Roemer. Hofstadter provided this definition: "Superrational thinkers, by recursive definition, include in their calculations the fact that they are in a group of superrational thinkers." This is equivalent to reasoning as if everyone in the group obeys Kant's categorical imperative: "one should take those actions and only those actions that one would advocate all others take as well."
Unlike the supposed "reciprocating human", the superrational thinker will not always play the equilibrium that maximizes the total social utility and is thus not a philanthropist.
Prisoner's dilemma
The idea of superrationality is that two logical thinkers analyzing the same problem will think of the same correct answer. For example, if two people are both good at math and both have been given the same complicated problem to do, both will get the same right answer. In math, knowing that the two answers are going to be the same doesn't change the value of the problem, but in the game theo |
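The contrast between the two decision rules can be made concrete with a tiny payoff computation (a sketch with an illustrative payoff matrix chosen here, not taken from the article): a superrational player, assuming an identically reasoning opponent, compares only the symmetric outcomes, while ordinary best-response reasoning leads to defection.

```python
# Prisoner's dilemma payoffs for the row player (illustrative values with T > R > P > S)
PAYOFF = {
    ("C", "C"): 3,  # reward for mutual cooperation (R)
    ("C", "D"): 0,  # sucker's payoff (S)
    ("D", "C"): 5,  # temptation to defect (T)
    ("D", "D"): 1,  # punishment for mutual defection (P)
}

def superrational_choice():
    # A superrational player assumes the opponent, reasoning identically, makes the same
    # choice, so only the symmetric outcomes (C, C) and (D, D) are compared.
    return max("CD", key=lambda move: PAYOFF[(move, move)])

def best_response(opponent_move):
    # Ordinary game-theoretic reasoning: the best reply to a fixed opponent move.
    return max("CD", key=lambda move: PAYOFF[(move, opponent_move)])

print(superrational_choice())                   # C  (3 > 1)
print(best_response("C"), best_response("D"))   # D D – defection dominates for a rational player
```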
https://en.wikipedia.org/wiki/Mathematical%20notation | Mathematical notation consists of using symbols for representing operations, unspecified numbers, relations, and any other mathematical objects and assembling them into expressions and formulas. Mathematical notation is widely used in mathematics, science, and engineering for representing complex concepts and properties in a concise, unambiguous, and accurate way.
For example, Albert Einstein's equation E = mc² is the quantitative representation in mathematical notation of the mass–energy equivalence.
Mathematical notation was first introduced by François Viète at the end of the 16th century and largely expanded during the 17th and 18th centuries by René Descartes, Isaac Newton, Gottfried Wilhelm Leibniz, and overall Leonhard Euler.
Symbols
The use of many symbols is the basis of mathematical notation. They play a similar role as words in natural languages. They may play different roles in mathematical notation, similarly to how verbs, adjectives and nouns play different roles in a sentence.
Letters as symbols
Letters are typically used for naming—in mathematical jargon, one says representing—mathematical objects. It is typically the Latin and Greek alphabets that are used, but some letters of the Hebrew alphabet are sometimes used. Uppercase and lowercase letters are considered as different symbols. For the Latin alphabet, different typefaces also provide different symbols. For example, the same letter rendered in different cases and typefaces could theoretically appear in the same mathematical text with six different meanings. Normally, roman upright typeface is not used for symbols, except for symbols that are formed of several letters, such as the symbol "sin" of the sine function.
In order to have more symbols, and for allowing related mathematical objects to be represented by related symbols, diacritics, subscripts and superscripts are often used. For example, f̂′ may denote the Fourier transform of the derivative of a function called f.
Other symbols
Symbols are not only used for naming mathematical objects. They can be used fo |
https://en.wikipedia.org/wiki/Two-body%20problem | In classical mechanics, the two-body problem is to predict the motion of two massive objects which are abstractly viewed as point particles. The problem assumes that the two objects interact only with one another; the only force affecting each object arises from the other one, and all other objects are ignored.
The most prominent example of the classical two-body problem is the gravitational case (see also Kepler problem), arising in astronomy for predicting the orbits (or escapes from orbit) of objects such as satellites, planets, and stars. A two-point-particle model of such a system nearly always describes its behavior well enough to provide useful insights and predictions.
A simpler "one body" model, the "central-force problem", treats one object as the immobile source of a force acting on the other. One then seeks to predict the motion of the single remaining mobile object. Such an approximation can give useful results when one object is much more massive than the other (as with a light planet orbiting a heavy star, where the star can be treated as essentially stationary).
However, the one-body approximation is usually unnecessary except as a stepping stone. For many forces, including gravitational ones, the general version of the two-body problem can be reduced to a pair of one-body problems, allowing it to be solved completely, and giving a solution simple enough to be used effectively.
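A compact sketch of the reduction just described, in the usual notation (m₁, m₂ are the two masses and r₁, r₂ their position vectors; these symbols are standard rather than defined in the excerpt): the motion splits into a freely moving centre of mass plus an equivalent one-body problem for the separation vector.

```latex
% Two bodies interacting through a central force F acting along r = r_1 - r_2
\mathbf{R} = \frac{m_1 \mathbf{r}_1 + m_2 \mathbf{r}_2}{m_1 + m_2},
\qquad \ddot{\mathbf{R}} = 0 \quad \text{(the centre of mass moves uniformly)},
\\[1ex]
\mu \, \ddot{\mathbf{r}} = \mathbf{F}(\mathbf{r}),
\qquad \mu = \frac{m_1 m_2}{m_1 + m_2}
\quad \text{(equivalent one-body problem with the reduced mass } \mu\text{)}.
```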
By contrast, the three-body problem (and, more generally, the n-body problem for n ≥ 3) cannot be solved in terms of first integrals, except in special cases.
Results for prominent cases
Gravitation and other inverse-square examples
The two-body problem is interesting in astronomy because pairs of astronomical objects are often moving rapidly in arbitrary directions (so their motions become interesting), widely separated from one another (so they will not collide) and even more widely separated from other objects (so outside influences will be small enough to be ignore |
https://en.wikipedia.org/wiki/Strength%20of%20materials | The field of strength of materials (also called mechanics of materials) typically refers to various methods of calculating the stresses and strains in structural members, such as beams, columns, and shafts. The methods employed to predict the response of a structure under loading and its susceptibility to various failure modes take into account the properties of the materials, such as yield strength, ultimate strength, Young's modulus, and Poisson's ratio. In addition, the mechanical element's macroscopic properties (geometric properties) such as its length, width, thickness, boundary constraints and abrupt changes in geometry such as holes are considered.
The theory began with the consideration of the behavior of one- and two-dimensional members of structures, whose states of stress can be approximated as two-dimensional, and was then generalized to three dimensions to develop a more complete theory of the elastic and plastic behavior of materials. An important founding pioneer in mechanics of materials was Stephen Timoshenko.
Definition
In the mechanics of materials, the strength of a material is its ability to withstand an applied load without failure or plastic deformation. The field of strength of materials deals with forces and deformations that result from their acting on a material. A load applied to a mechanical member will induce internal forces within the member called stresses when those forces are expressed on a unit basis. The stresses acting on the material cause deformation of the material in various manners, including breaking it completely. Deformation of the material is called strain when it too is expressed on a unit basis.
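For reference, a minimal sketch of the "unit basis" quantities described above, in the usual one-dimensional notation (F, A, L₀, δ and E are the standard symbols for load, cross-sectional area, original length, elongation and Young's modulus; they are not defined in the excerpt):

```latex
% Engineering (nominal) stress and strain for a uniform bar
\sigma = \frac{F}{A} \quad \text{(stress: internal force per unit area)}, \qquad
\varepsilon = \frac{\delta}{L_0} \quad \text{(strain: elongation per unit original length)}, \qquad
\sigma = E \, \varepsilon \quad \text{(Hooke's law in the linear-elastic range)}.
```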
The stresses and strains that develop within a mechanical member must be calculated in order to assess the load capacity of that member. This requires a complete description of the geometry of the member, its constraints, the loads applied to the member and the properties of the material of which the memb |
https://en.wikipedia.org/wiki/Electron%20diffraction | Electron diffraction is a general term for phenomena associated with changes in the direction of electron beams due to elastic interactions with atoms. Close to the atoms the changes are described as Fresnel diffraction; far away they are called Fraunhofer diffraction. The resulting map of the directions of the electrons far from the sample (Fraunhofer diffraction) is called a diffraction pattern. These patterns are similar to x-ray and neutron diffraction patterns, and are used to study the atomic structure of gases, liquids, surfaces and bulk solids. Electron diffraction also plays a major role in the contrast of images in electron microscopes.
Electron diffraction occurs due to elastic scattering, when there is no change in the energy of the electrons during their interactions with atoms. The negatively charged electrons are scattered due to Coulomb forces when they interact with both the positively charged atomic core and the negatively charged electrons around the atoms; most of the interaction occurs quite close to the atoms, within about one angstrom. In comparison, x-rays are scattered after interactions with the electron density while neutrons are scattered by the atomic nuclei through the strong nuclear force.
Description
All matter can be thought of as matter waves, from small particles such as electrons up to macroscopic objects – although it is impossible to measure any of the "wave-like" behavior of macroscopic objects. Waves can move around objects and create interference patterns, and a classic example is Young's two-slit experiment, where a wave impinges upon two slits in the first of the two images (blue waves). After going through the slits there are directions where the wave is stronger, ones where it is weaker – the wave has been diffracted. If instead of two slits there are a number of small points then similar phenomena can occur as shown in the second image where the wave (red and blue) is coming in from th |
https://en.wikipedia.org/wiki/Trace%20metal | Trace metals are the metals subset of trace elements; that is, metals normally present in small but measurable amounts in animal and plant cells and tissues and that are a necessary part of nutrition and physiology. Some biometals are trace metals. Ingestion of, or exposure to, excessive quantities can be toxic. However, insufficient plasma or tissue levels of certain trace metals can cause pathology, as is the case with iron.
Trace metals within the human body include iron, lithium, zinc, copper, chromium, nickel, cobalt, vanadium, molybdenum, manganese and others.
Trace metals are metals needed by living organisms to function properly and are depleted through the expenditure of energy by various metabolic processes of living organisms. They are replenished in animals through diet as well as environmental exposure, and in plants through the uptake of nutrients from the soil in which the plant grows. Human vitamin pills and plant fertilizers can be a source of trace metals.
Trace metals are sometimes referred to as trace elements, although the latter includes minerals and is a broader category. See also Dietary mineral. Trace elements are required by the body for specific functions. Things such as vitamins, sports drinks, fresh fruits and vegetables are sources. Taken in excessive amounts, trace elements can cause problems. For example, fluorine is required for the formation of bones and enamel on teeth. However, when taken in excessive amounts it can cause a disease called "fluorosis", in which bone deformations and yellowing of teeth are seen. Fluorine can occur naturally in some areas in ground water.
Iron
Humans
Roughly 5 grams of iron are present in the human body, making it the most abundant trace metal. It is absorbed in the intestine as heme or non-heme iron depending on the food source. Heme iron is derived from the digestion of hemoproteins in meat. Non-heme iron is mainly derived from plants and exists as iron(II) or iron(III) ions.
Iron is essential for |
https://en.wikipedia.org/wiki/Tarantism | Tarantism is a form of hysteric behaviour originating in Southern Italy, popularly believed to result from the bite of the wolf spider Lycosa tarantula (distinct from the broad class of spiders also called tarantulas).
A better candidate cause is Latrodectus tredecimguttatus, commonly known as the Mediterranean black widow or steppe spider, although no link between such bites and the behaviour of tarantism has ever been demonstrated. However, the term historically is used to refer to a dancing mania – characteristic of Southern Italy – which likely had little to do with spider bites. The tarantella dance supposedly evolved from a therapy for tarantism.
History
It was originally described in the 11th century. The condition was common in Southern Italy, especially in the province of Taranto, during the 16th and 17th centuries. There were strong suggestions that there is no organic cause for the heightened excitability and restlessness that gripped the victims. The stated belief of the time was that victims needed to engage in frenzied dancing to prevent death from tarantism. Supposedly a particular kind of dance, called the tarantella, evolved from this therapy. A prime location for such outbursts was the church at Galatina, particularly at the time of the Feast of Saints Peter and Paul on 29 June. "The dancing is placed under the sign of Saint Paul, whose chapel serves as a "theatre" for the tarantulees' public meetings. The spider seems constantly interchangeable with Saint Paul; the female tarantulees dress as "brides of Saint Paul". As a climax, "the tarantulees, after having danced for a long time, meet together in the chapel of Saint Paul and communally attain the paroxysm of their trance, ... the general and desperate agitation was dominated by the stylised cry of the tarantulees, the 'crisis cry', an ahiii uttered with various modulations".
Francesco Cancellieri, in his exhaustive treatise on Tarantism, takes note of semi-scientific, literary, and popular ob |
https://en.wikipedia.org/wiki/Outline%20of%20neuroscience | The following outline is provided as an overview of and topical guide to neuroscience:
Neuroscience is the scientific study of the structure and function of the nervous system. It encompasses the branch of biology that deals with the anatomy, biochemistry, molecular biology, and physiology of neurons and neural circuits. It also encompasses cognition and human behavior. Neuroscience has multiple concepts that each relate to learning abilities and memory functions. Additionally, the brain is able to transmit signals that cause conscious or unconscious behaviors, expressed as verbal or non-verbal responses; this allows people to communicate with one another.
Branches of neuroscience
Neurophysiology
Neurophysiology is the study of the function (as opposed to structure) of the nervous system.
Brain mapping
Electrophysiology
Extracellular recording
Intracellular recording
Brain stimulation
Electroencephalography
Intermittent rhythmic delta activity
Category: Neurophysiology
Category: Neuroendocrinology
Neuroendocrinology
Neuroanatomy
Neuroanatomy is the study of the anatomy of nervous tissue and neural structures of the nervous system.
Immunostaining
Category: Neuroanatomy
Neuropharmacology
Neuropharmacology is the study of how drugs affect cellular function in the nervous system.
Drug
Psychoactive drug
Anaesthetic
Narcotic
Behavioral neuroscience
Behavioral neuroscience, also known as biological psychology, biopsychology, or psychobiology, is the application of the principles of biology to the study of mental processes and behavior in human and non-human animals.
Neuroethology
Developmental neuroscience
Developmental neuroscience aims to describe the cellular basis of brain development and to address the underlying mechanisms. The field draws on both neuroscience and developmental biology to provide insight into the cellular and molecular mechanisms by which complex nervous systems develop.
Aging and memory
Cognitive neuroscience
Cognitive ne |
https://en.wikipedia.org/wiki/Formula%20SAE | Formula SAE is a student design competition organized by SAE International (previously known as the Society of Automotive Engineers, SAE). The competition was started in 1980 by the SAE student branch at the University of Texas at Austin after a prior asphalt racing competition proved to be unsustainable.
Concept
The concept behind Formula SAE is that a fictional manufacturing company has contracted a student design team to develop a small Formula-style race car. The prototype race car is to be evaluated for its potential as a production item. The target marketing group for the race car is the non-professional weekend autocross racer. Each student team designs, builds and tests a prototype based on a series of rules, whose purpose is both ensuring on-track safety (the cars are driven by the students themselves) and promoting clever problem solving. There are combustion and electric divisions of the competition, primarily only differing in their rules for powertrain.
The prototype race car is judged in a number of different static and dynamic events, with a points schedule that is common to most Formula SAE events.
In addition to these events, various sponsors of the competition provide awards for superior design accomplishments. For example, best use of E-85 ethanol fuel, innovative use of electronics, recyclability, crash worthiness, analytical approach to design, and overall dynamic performance are some of the awards available. At the beginning of the competition, the vehicle is checked for rule compliance during the Technical Inspection. Its braking ability, rollover stability and noise levels are checked before the vehicle is allowed to compete in the dynamic events (Skidpad, Autocross, Acceleration, and Endurance).
Large companies, such as General Motors, Ford, and Chrysler, can have staff interact with more than 1000 student engineers. Working in teams of anywhere between two and 30, these students have proven themselves to be capable of producing a functioning prototype vehicle.
The v |
https://en.wikipedia.org/wiki/Moment%20%28physics%29 | In physics, a moment is a mathematical expression involving the product of a distance and physical quantity. Moments are usually defined with respect to a fixed reference point and refer to physical quantities located some distance from the reference point. In this way, the moment accounts for the quantity's location or arrangement. For example, the moment of force, often called torque, is the product of a force on an object and the distance from the reference point to the object. In principle, any physical quantity can be multiplied by a distance to produce a moment. Commonly used quantities include forces, masses, and electric charge distributions.
Elaboration
In its most basic form, a moment is the product of the distance to a point, raised to a power, and a physical quantity
(such as force or electrical charge) at that point: μ_n = r^n Q,
where Q is the physical quantity such as a force applied at a point, or a point charge, or a point mass, etc. If the quantity is not concentrated solely at a single point, the moment is the integral of that quantity's density over space: μ_n = ∫ r^n ρ(r) dr,
where ρ(r) is the distribution of the density of charge, mass, or whatever quantity is being considered.
More complex forms take into account the angular relationships between the distance and the physical quantity, but the above equations capture the essential feature of a moment, namely the existence of an underlying r^n ρ(r) or equivalent term. This implies that there are multiple moments (one for each value of n) and that the moment generally depends on the reference point from which the distance is measured, although for certain moments (technically, the lowest non-zero moment) this dependence vanishes and the moment becomes independent of the reference point.
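A small numerical sketch (illustrative values, not from the article) of these definitions for a discrete one-dimensional charge distribution, which also shows the reference-point behaviour just described: the 0th moment never depends on the reference point, and because it vanishes here, the 1st (dipole) moment is reference-independent as well, while the 2nd moment is not.

```python
# Point "charges" q at 1-D positions x (illustrative values; total charge is zero)
charges = [(+1.0, 0.0), (-1.0, 2.0)]

def moment(n, reference=0.0):
    """n-th moment of the distribution about the chosen reference point."""
    return sum(q * (x - reference) ** n for q, x in charges)

for ref in (0.0, 5.0):
    print(f"reference={ref}: "
          f"monopole={moment(0, ref):+.1f}, dipole={moment(1, ref):+.1f}, quadrupole={moment(2, ref):+.1f}")
# The monopole (0th) moment is 0 for both references, the dipole (1st) moment is -2.0 for both,
# and the quadrupole (2nd) moment changes with the reference point.
```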
Each value of n corresponds to a different moment: the 1st moment corresponds to n = 1; the 2nd moment to n = 2, etc. The 0th moment (n = 0) is sometimes called the monopole moment; the 1st moment (n = 1) is sometimes called the dipole moment, a |
https://en.wikipedia.org/wiki/Mathematics%20and%20architecture | Mathematics and architecture are related, since, as with other arts, architects use mathematics for several reasons. Apart from the mathematics needed when engineering buildings, architects use geometry: to define the spatial form of a building; from the Pythagoreans of the sixth century BC onwards, to create forms considered harmonious, and thus to lay out buildings and their surroundings according to mathematical, aesthetic and sometimes religious principles; to decorate buildings with mathematical objects such as tessellations; and to meet environmental goals, such as to minimise wind speeds around the bases of tall buildings.
In ancient Egypt, ancient Greece, India, and the Islamic world, buildings including pyramids, temples, mosques, palaces and mausoleums were laid out with specific proportions for religious reasons. In Islamic architecture, geometric shapes and geometric tiling patterns are used to decorate buildings, both inside and outside. Some Hindu temples have a fractal-like structure where parts resemble the whole, conveying a message about the infinite in Hindu cosmology. In Chinese architecture, the tulou of Fujian province are circular, communal defensive structures. In the twenty-first century, mathematical ornamentation is again being used to cover public buildings.
In Renaissance architecture, symmetry and proportion were deliberately emphasized by architects such as Leon Battista Alberti, Sebastiano Serlio and Andrea Palladio, influenced by Vitruvius's De architectura from ancient Rome and the arithmetic of the Pythagoreans from ancient Greece.
At the end of the nineteenth century, Vladimir Shukhov in Russia and Antoni Gaudí in Barcelona pioneered the use of hyperboloid structures; in the Sagrada Família, Gaudí also incorporated hyperbolic paraboloids, tessellations, catenary arches, catenoids, helicoids, and ruled surfaces. In the twentieth century, styles such as modern architecture and Deconstructivism explored different geometries to achi |
https://en.wikipedia.org/wiki/Uptime | Uptime is a measure of system reliability, expressed as the percentage of time a machine, typically a computer, has been working and available. Uptime is the opposite of downtime.
It is often used as a measure of computer operating system reliability or stability, in that this time represents the time a computer can be left unattended without crashing, or needing to be rebooted for administrative or maintenance purposes.
Conversely, long uptime may indicate negligence, because some critical updates can require reboots on some platforms.
Records
In 2005, Novell reported a server with a 6-year uptime. Although that might sound unusual, it is actually common when servers are maintained in an industrial context and host critical applications such as banking systems.
Netcraft maintains the uptime records for many thousands of web hosting computers.
A server running Novell NetWare has been reported to have been shut down after 16 years of uptime due to a failing hard disk.
A Cisco router has been reported to have been running continuously for 21 years as of 2018. As of April 11, 2023, the uptime had increased to 26 years, 25 weeks, 1 day, 1 hour, and 8 minutes.
Determining system uptime
Microsoft Windows
Windows Task Manager
Some versions of Microsoft Windows include an uptime field in Windows Task Manager, under the "Performance" tab. The format is D:HH:MM:SS (days, hours, minutes, seconds).
systeminfo
The output of the systeminfo command includes a "System Up Time" or "System Boot Time" field.
C:\>systeminfo | findstr "Time:"
System Up Time: 0 days, 8 hours, 7 minutes, 19 seconds
The exact text and format is dependent on the language and locale. The time given by systeminfo is not reliable. It does not take into account time spent in sleep or hibernation. Thus, the boot time will drift forward every time the computer sleeps or hibernates.
NET command
The NET command with its STATISTICS sub-command provides the date and time the computer |
https://en.wikipedia.org/wiki/Crash%20%28computing%29 | In computing, a crash, or system crash, occurs when a computer program such as a software application or an operating system stops functioning properly and exits. On some operating systems or individual applications, a crash reporting service will report the crash and any details relating to it (or give the user the option to do so), usually to the developer(s) of the application. If the program is a critical part of the operating system, the entire system may crash or hang, often resulting in a kernel panic or fatal system error.
Most crashes are the result of a software bug. Typical causes include accessing invalid memory addresses, incorrect address values in the program counter, buffer overflow, overwriting a portion of the affected program code due to an earlier bug, executing invalid machine instructions (an illegal or unauthorized opcode), or triggering an unhandled exception. The original software bug that started this chain of events is typically considered to be the cause of the crash, which is discovered through the process of debugging. The original bug can be far removed from the code that actually triggered the crash.
In early personal computers, attempting to write data to hardware addresses outside the system's main memory could cause hardware damage. Some crashes are exploitable and let a malicious program or hacker execute arbitrary code, allowing the replication of viruses or the acquisition of data which would normally be inaccessible.
Application crashes
An application typically crashes when it performs an operation that is not allowed by the operating system. The operating system then triggers an exception or signal in the application. Unix applications traditionally responded to the signal by dumping core. Most Windows and Unix GUI applications respond by displaying a dialogue box (such as the one shown to the right) with the option to attach a debugger if one is installed. Some applications attempt to recover from the error and continue r |
https://en.wikipedia.org/wiki/Bus%20error | In computing, a bus error is a fault raised by hardware, notifying an operating system (OS) that a process is trying to access memory that the CPU cannot physically address: an invalid address for the address bus, hence the name. In modern use on most architectures these are much rarer than segmentation faults, which occur primarily due to memory access violations: problems in the logical address or permissions.
On POSIX-compliant platforms, bus errors usually result in the SIGBUS signal being sent to the process that caused the error. SIGBUS can also be caused by any general device fault that the computer detects, though a bus error rarely means that the computer hardware is physically broken—it is normally caused by a bug in software. Bus errors may also be raised for certain other paging errors; see below.
Causes
There are at least three main causes of bus errors:
Non-existent address
Software instructs the CPU to read or write a specific physical memory address. Accordingly, the CPU sets this physical address on its address bus and requests all other hardware connected to the CPU to respond with the results, if they answer for this specific address. If no other hardware responds, the CPU raises an exception, stating that the requested physical address is unrecognized by the whole computer system. Note that this only covers physical memory addresses. Trying to access an undefined virtual memory address is generally considered to be a segmentation fault rather than a bus error, though if the MMU is separate, the processor cannot tell the difference.
Unaligned access
Most CPUs are byte-addressable, where each unique memory address refers to an 8-bit byte. Most CPUs can access individual bytes from each memory address, but they generally cannot access larger units (16 bits, 32 bits, 64 bits and so on) without these units being "aligned" to a specific boundary (the x86 platform being a notable exception).
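A tiny sketch of the alignment rule being described (Python is used only for illustration, and the addresses are made-up values): an N-byte access is naturally aligned when the address is a multiple of N, and an unaligned multi-byte access is what can raise a bus error on strict architectures.

```python
def is_aligned(address: int, access_size: int) -> bool:
    """True if an access of access_size bytes at address is naturally aligned."""
    return address % access_size == 0

for addr in (0x1000, 0x1001, 0x1002):
    print(f"address {addr:#06x}: "
          f"2-byte aligned={is_aligned(addr, 2)}, 4-byte aligned={is_aligned(addr, 4)}")
# 0x1000 is aligned for both sizes; 0x1001 for neither; 0x1002 only for 2-byte accesses.
```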
For example, if multi-byte accesses must be 16 bit-aligne |
https://en.wikipedia.org/wiki/Data%20Protection%20Directive | The Data Protection Directive, officially Directive 95/46/EC, enacted in October 1995, was a European Union directive which regulated the processing of personal data within the European Union (EU) and the free movement of such data. The Data Protection Directive was an important component of EU privacy and human rights law.
The principles set out in the Data Protection Directive were aimed at the protection of fundamental rights and freedoms in the processing of personal data. The General Data Protection Regulation, adopted in April 2016, superseded the Data Protection Directive and became enforceable on 25 May 2018.
Context
The right to privacy is a highly developed area of law in Europe. All the member states of the Council of Europe (CoE) are also signatories of the European Convention on Human Rights (ECHR). Article 8 of the ECHR provides a right to respect for one's "private and family life, his home and his correspondence", subject to certain restrictions. The European Court of Human Rights has given this article a very broad interpretation in its jurisprudence.
In 1973, American scholar Willis Ware published Records, Computers, and the Rights of Citizens, a report that was to be influential on the directions these laws would take.
In 1980, in an effort to create a comprehensive data protection system throughout Europe, the Organisation for Economic Co-operation and Development (OECD) issued its "Recommendations of the Council Concerning Guidelines Governing the Protection of Privacy and Trans-Border Flows of Personal Data". The seven principles governing the OECD's recommendations for protection of personal data were:
Notice—data subjects should be given notice when their data is being collected;
Purpose—data should only be used for the purpose stated and not for any other purposes;
Consent—data should not be disclosed without the data subject's consent;
Security—collected data should be kept secure from any potential abuses;
Disclosure—data subjects |
https://en.wikipedia.org/wiki/Truncated%20tetrahedron | In geometry, the truncated tetrahedron is an Archimedean solid. It has 4 regular hexagonal faces, 4 equilateral triangle faces, 12 vertices and 18 edges (of two types). It can be constructed by truncating all 4 vertices of a regular tetrahedron at one third of the original edge length.
A deeper truncation, removing a tetrahedron of half the original edge length from each vertex, is called rectification. The rectification of a tetrahedron produces an octahedron.
A truncated tetrahedron is the Goldberg polyhedron containing triangular and hexagonal faces.
A truncated tetrahedron can be called a cantic cube, with Coxeter diagram, , having half of the vertices of the cantellated cube (rhombicuboctahedron), . There are two dual positions of this construction, and combining them creates the uniform compound of two truncated tetrahedra.
Area and volume
The area A and the volume V of a truncated tetrahedron of edge length a are:
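The standard closed-form values are
A = 7\sqrt{3}\,a^2 \approx 12.12\,a^2, \qquad V = \frac{23\sqrt{2}}{12}\,a^3 \approx 2.71\,a^3.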
Densest packing
The densest packing of the Archimedean truncated tetrahedron is believed to be Φ = 207/208 ≈ 0.9952, as reported by two independent groups using Monte Carlo methods. Although no mathematical proof exists that this is the best possible packing for the truncated tetrahedron, the closeness of this value to unity and the independence of the two findings make it unlikely that an even denser packing will be found. In fact, if the truncation of the corners is slightly smaller than that of an Archimedean truncated tetrahedron, this new shape can be used to completely fill space.
Cartesian coordinates
Cartesian coordinates for the 12 vertices of a truncated tetrahedron centered at the origin, with edge length √8, are all permutations of (±1,±1,±3) with an even number of minus signs:
(+3,+1,+1), (+1,+3,+1), (+1,+1,+3)
(−3,−1,+1), (−1,−3,+1), (−1,−1,+3)
(−3,+1,−1), (−1,+3,−1), (−1,+1,−3)
(+3,−1,−1), (+1,−3,−1), (+1,−1,−3)
Another simple construction exists in 4-space as cells of the truncated 16-cell, with vertices as coordinate permutation of:
(0,0,1,2)
Orthogon |
https://en.wikipedia.org/wiki/Truncated%20octahedron | In geometry, the truncated octahedron is the Archimedean solid that arises from a regular octahedron by removing six pyramids, one at each of the octahedron's vertices. The truncated octahedron has 14 faces (8 regular hexagons and 6 squares), 36 edges, and 24 vertices. Since each of its faces has point symmetry the truncated octahedron is a 6-zonohedron. It is also the Goldberg polyhedron GIV(1,1), containing square and hexagonal faces. Like the cube, it can tessellate (or "pack") 3-dimensional space, as a permutohedron.
The truncated octahedron was called the "mecon" by Buckminster Fuller.
Its dual polyhedron is the tetrakis hexahedron. If the original truncated octahedron has unit edge length, its dual tetrakis hexahedron has edge lengths and .
Construction
A truncated octahedron is constructed from a regular octahedron with side length 3a by the removal of six right square pyramids, one from each point. These pyramids have both base side length (a) and lateral side length (e) of a, to form equilateral triangles. The base area is then a². Note that this shape is exactly similar to half an octahedron or Johnson solid J1.
From the properties of square pyramids, we can now find the slant height, s, and the height, h, of the pyramid:
The volume, V, of the pyramid is given by:
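Carrying out that computation for a pyramid whose base side and lateral edge both equal a gives
s = \frac{\sqrt{3}}{2}\,a, \qquad h = \frac{a}{\sqrt{2}} = \frac{\sqrt{2}}{2}\,a, \qquad V = \frac{1}{3}\,a^2 h = \frac{\sqrt{2}}{6}\,a^3.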
Because six pyramids are removed by truncation, there is a total lost volume of √2 a³.
Orthogonal projections
The truncated octahedron has five special orthogonal projections: centered on a vertex, on two types of edges, and on two types of faces (hexagonal and square). The last two correspond to the B2 and A2 Coxeter planes.
Spherical tiling
The truncated octahedron can also be represented as a spherical tiling, and projected onto the plane via a stereographic projection. This projection is conformal, preserving angles but not areas or lengths. Straight lines on the sphere are projected as circular arcs on the plane.
Coordinates
All permutations of (0, ±1, ±2) are Cartesian coordinates of |
https://en.wikipedia.org/wiki/Type%20inference | Type inference refers to the automatic detection of the type of an expression in a formal language. These include programming languages and mathematical type systems, but also natural languages in some branches of computer science and linguistics.
Nontechnical explanation
Types in a most general view can be associated to a designated use suggesting and restricting the activities possible for an object of that type. Many nouns in language specify such uses. For instance, the word leash indicates a different use than the word line. Calling something a table indicates another designation than calling it firewood, though it might be materially the same thing. While their material properties make things usable for some purposes, they are also subject of particular designations. This is especially the case in abstract fields, namely mathematics and computer science, where the material is finally only bits or formulas.
To exclude unwanted, but materially possible uses, the concept of types is defined and applied in many variations. In mathematics, Russell's paradox sparked early versions of type theory. In programming languages, typical examples are "type errors", e.g. ordering a computer to sum values that are not numbers. While materially possible, the result would no longer be meaningful and perhaps disastrous for the overall process.
In a typing, an expression is opposed to a type. For example, , , and are all separate terms with the type for natural numbers. Traditionally, the expression is followed by a colon and its type, such as . This means that the value is of type . This form is also used to declare new names, e.g. , much like introducing a new character to a scene by the words "detective Decker".
Contrary to a story, where the designations unfold slowly, the objects in formal languages often have to be defined with their type from the very beginning. Additionally, if the expressions are ambiguous, types may be needed to make the intended use explicit. For |
https://en.wikipedia.org/wiki/Type%20signature | In computer science, a type signature or type annotation defines the inputs and outputs for a function, subroutine or method. A type signature includes the number, types, and order of the arguments contained by a function. A type signature is typically used during overload resolution for choosing the correct definition of a function to be called among many overloaded forms.
Examples
C/C++
In C and C++, the type signature is declared by what is commonly known as a function prototype. In C/C++, a function declaration reflects its use; for example, a function pointer with the signature (int)(char, double) would be called as:
char c;
double d;
int retVal = (*fPtr)(c, d);
Erlang
In Erlang, type signatures may be optionally declared, as:
-spec function_name(type1(), type2(), ...) -> out_type().
For example:
-spec is_even(number()) -> boolean().
Haskell
A type signature in Haskell generally takes the following form:
functionName :: arg1Type -> arg2Type -> ... -> argNType
Notice that the type of the result can be regarded as everything past the first supplied argument. This is a consequence of currying, which is made possible by Haskell's support for first-class functions; this function requires two inputs where one argument is supplied and the function is "curried" to produce a function for the argument not supplied. Thus, calling , where , yields a new function that can be called to produce .
The actual type specifications can consist of an actual type, such as , or a general type variable that is used in parametric polymorphic functions, such as , or , or . So we can write something like:
Since Haskell supports higher-order functions, functions can be passed as arguments. This is written as:
This function takes in a function with type signature and returns data of type out.
Java
In the Java virtual machine, internal type signatures are used to identify methods and classes at the level of the virtual machine code.
Example: The method is represented in bytecode as .
The s |
https://en.wikipedia.org/wiki/Type%20variable | In type theory and programming languages, a type variable is a mathematical variable ranging over types. Even in programming languages that allow mutable variables, a type variable remains an abstraction, in the sense that it does not correspond to some memory locations.
Programming languages that support parametric polymorphism make use of universally quantified type variables. Languages that support existential types make use of existentially quantified type variables. For example, the following OCaml code defines a polymorphic identity function that has a universally quantified type, which is printed by the interpreter on the second line:
# let id x = x;;
val id : 'a -> 'a = <fun>
In mathematical notation, the type of the function id is ∀α. α → α, where α is a type variable.
See also
System F
Type theory
Functional programming
Dependently typed programming |
https://en.wikipedia.org/wiki/Cokernel | The cokernel of a linear mapping of vector spaces f : V → W is the quotient space W/im(f) of the codomain of f by the image of f. The dimension of the cokernel is called the corank of f.
Cokernels are dual to the kernels of category theory, hence the name: the kernel is a subobject of the domain (it maps to the domain), while the cokernel is a quotient object of the codomain (it maps from the codomain).
Intuitively, given an equation f(x) = y that one is seeking to solve, the cokernel measures the constraints that y must satisfy for this equation to have a solution – the obstructions to a solution – while the kernel measures the degrees of freedom in a solution, if one exists. This is elaborated in intuition, below.
More generally, the cokernel of a morphism f : X → Y in some category (e.g. a homomorphism between groups or a bounded linear operator between Hilbert spaces) is an object Q and a morphism q : Y → Q such that the composition q f is the zero morphism of the category, and furthermore q is universal with respect to this property. Often the map q is understood, and Q itself is called the cokernel of f.
In many situations in abstract algebra, such as for abelian groups, vector spaces or modules, the cokernel of the homomorphism f : A → B is the quotient of B by the image of f. In topological settings, such as with bounded linear operators between Hilbert spaces, one typically has to take the closure of the image before passing to the quotient.
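In symbols, for a linear map f : V → W with W finite-dimensional, this reads
\operatorname{coker}(f) = W/\operatorname{im}(f), \qquad \dim\operatorname{coker}(f) = \dim W - \operatorname{rank}(f),
which makes precise the earlier statement that the corank counts the independent constraints on the right-hand side of f(x) = y.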
Formal definition
One can define the cokernel in the general framework of category theory. In order for the definition to make sense the category in question must have zero morphisms. The cokernel of a morphism is defined as the coequalizer of and the zero morphism .
Explicitly, this means the following. The cokernel of is an object together with a morphism such that the diagram
commutes. Moreover, the morphism must be universal for this diagram, i.e. any other such can be obtained by composing with a unique morphism :
As with all universal constructions the cokernel |
https://en.wikipedia.org/wiki/Truncated%20cube | In geometry, the truncated cube, or truncated hexahedron, is an Archimedean solid. It has 14 regular faces (6 octagonal and 8 triangular), 36 edges, and 24 vertices.
If the truncated cube has unit edge length, its dual triakis octahedron has edges of lengths 2 and 2 + √2.
Area and volume
The area A and the volume V of a truncated cube of edge length a are:
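The standard closed-form values are
A = 2\left(6 + 6\sqrt{2} + \sqrt{3}\right) a^2 \approx 32.43\,a^2, \qquad V = \frac{21 + 14\sqrt{2}}{3}\,a^3 \approx 13.60\,a^3.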
Orthogonal projections
The truncated cube has five special orthogonal projections: centered on a vertex, on two types of edges, and on two types of faces (triangular and octagonal). The last two correspond to the B2 and A2 Coxeter planes.
Spherical tiling
The truncated cube can also be represented as a spherical tiling, and projected onto the plane via a stereographic projection. This projection is conformal, preserving angles but not areas or lengths. Straight lines on the sphere are projected as circular arcs on the plane.
Cartesian coordinates
Cartesian coordinates for the vertices of a truncated hexahedron centered at the origin with edge length 2ξ are all the permutations of
(±ξ, ±1, ±1),
where ξ = √2 − 1.
The parameter ξ can be varied between ±1. A value of 1 produces a cube, 0 produces a cuboctahedron, and negative values produce self-intersecting octagrammic faces.
If the self-intersected portions of the octagrams are removed, leaving squares, and truncating the triangles into hexagons, truncated octahedra are produced, and the sequence ends with the central squares being reduced to a point, and creating an octahedron.
Dissection
The truncated cube can be dissected into a central cube, with six square cupolae around each of the cube's faces, and 8 regular tetrahedra in the corners. This dissection can also be seen within the runcic cubic honeycomb, with cube, tetrahedron, and rhombicuboctahedron cells.
This dissection can be used to create a Stewart toroid with all regular faces by removing two square cupolae and the central cube. This excavated cube has 16 triangles, 12 squares, and 4 octagons.
Ve |
https://en.wikipedia.org/wiki/Numerical%20digit | A numerical digit (often shortened to just digit) is a single symbol used alone (such as "1") or in combinations (such as "15"), to represent numbers in a positional numeral system. The name "digit" comes from the fact that the ten digits (Latin digiti meaning fingers) of the hands correspond to the ten symbols of the common base 10 numeral system, i.e. the decimal (ancient Latin adjective decem meaning ten) digits.
For a given numeral system with an integer base, the number of different digits required is given by the absolute value of the base. For example, the decimal system (base 10) requires ten digits (0 through to 9), whereas the binary system (base 2) requires two digits (0 and 1).
Overview
In a basic digital system, a numeral is a sequence of digits, which may be of arbitrary length. Each position in the sequence has a place value, and each digit has a value. The value of the numeral is computed by multiplying each digit in the sequence by its place value, and summing the results.
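A small C sketch of this rule (the function name is illustrative):

#include <stdio.h>

/* Value of a numeral given as digit values, most significant first.
   Horner form of: sum over each digit times its place value.        */
static long numeral_value(const int *digits, int count, int base) {
    long value = 0;
    for (int i = 0; i < count; i++)
        value = value * base + digits[i];
    return value;
}

int main(void) {
    int decimal[] = {3, 1, 2};        /* "312" in base 10 */
    int binary[]  = {1, 0, 1, 1};     /* "1011" in base 2, i.e. eleven */
    printf("%ld %ld\n", numeral_value(decimal, 3, 10),
                        numeral_value(binary, 4, 2));   /* prints: 312 11 */
    return 0;
}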
Digital values
Each digit in a number system represents an integer. For example, in decimal the digit "1" represents the integer one, and in the hexadecimal system, the letter "A" represents the number ten. A positional number system has one unique digit for each integer from zero up to, but not including, the radix of the number system.
Thus in the positional decimal system, the numbers 0 to 9 can be expressed using their respective numerals "0" to "9" in the rightmost "units" position. The number 12 can be expressed with the numeral "2" in the units position, and with the numeral "1" in the "tens" position, to the left of the "2" while the number 312 can be expressed by three numerals: "3" in the "hundreds" position, "1" in the "tens" position, and "2" in the "units" position.
Computation of place values
The decimal numeral system uses a decimal separator, commonly a period in English, or a comma in other European languages, to denote the "ones place" or "units place", whic |
https://en.wikipedia.org/wiki/Hygrometer | A hygrometer is an instrument which measures the humidity of air or some other gas: that is, how much water vapor it contains. Humidity measurement instruments usually rely on measurements of some other quantities such as temperature, pressure, mass and mechanical or electrical changes in a substance as moisture is absorbed. By calibration and calculation, these measured quantities can lead to a measurement of humidity. Modern electronic devices use the temperature of condensation (called the dew point), or they sense changes in electrical capacitance or resistance to measure humidity differences. A crude hygrometer was invented by Leonardo da Vinci in 1480. Major advances came during the 1600s; Francesco Folli invented a more practical version of the device, while Robert Hooke improved a number of meteorological devices including the hygrometer. A more modern version was created by Swiss polymath Johann Heinrich Lambert in 1755. Later, in the year 1783, Swiss physicist and geologist Horace Bénédict de Saussure invented the first hygrometer using human hair to measure humidity.
The maximum amount of water vapor that can be held in a given volume of air (saturation) varies greatly by temperature; cold air can hold less mass of water per unit volume than hot air. Temperature can change humidity.
Classical hygrometer
Ancient hygrometers
Prototype hygrometers were devised and developed during the Shang dynasty in Ancient China to study weather. The Chinese used a bar of charcoal and a lump of earth: its dry weight was taken, then compared with its damp weight after being exposed in the air. The differences in weight were used to tally the humidity level.
Other techniques were applied using mass to measure humidity, such as when the air was dry, the bar of charcoal would be light, while when the air was humid, the bar of charcoal would be heavy. By hanging a lump of earth and a bar of charcoal on the two ends of a staff separately and adding a fixed lifting st |
https://en.wikipedia.org/wiki/Mass%20spectrometry | Mass spectrometry (MS) is an analytical technique that is used to measure the mass-to-charge ratio of ions. The results are presented as a mass spectrum, a plot of intensity as a function of the mass-to-charge ratio. Mass spectrometry is used in many different fields and is applied to pure samples as well as complex mixtures.
A mass spectrum is a type of plot of the ion signal as a function of the mass-to-charge ratio. These spectra are used to determine the elemental or isotopic signature of a sample, the masses of particles and of molecules, and to elucidate the chemical identity or structure of molecules and other chemical compounds.
In a typical MS procedure, a sample, which may be solid, liquid, or gaseous, is ionized, for example by bombarding it with a beam of electrons. This may cause some of the sample's molecules to break up into positively charged fragments or simply become positively charged without fragmenting. These ions (fragments) are then separated according to their mass-to-charge ratio, for example by accelerating them and subjecting them to an electric or magnetic field: ions of the same mass-to-charge ratio will undergo the same amount of deflection. The ions are detected by a mechanism capable of detecting charged particles, such as an electron multiplier. Results are displayed as spectra of the signal intensity of detected ions as a function of the mass-to-charge ratio. The atoms or molecules in the sample can be identified by correlating known masses (e.g. an entire molecule) to the identified masses or through a characteristic fragmentation pattern.
History of the mass spectrometer
In 1886, Eugen Goldstein observed rays in gas discharges under low pressure that traveled away from the anode and through channels in a perforated cathode, opposite to the direction of negatively charged cathode rays (which travel from cathode to anode). Goldstein called these positively charged anode rays "Kanalstrahlen"; the standard translation of this |
https://en.wikipedia.org/wiki/Algebraic%20data%20type | In computer programming, especially functional programming and type theory, an algebraic data type (ADT) is a kind of composite type, i.e., a type formed by combining other types.
Two common classes of algebraic types are product types (i.e., tuples and records) and sum types (i.e., tagged or disjoint unions, coproduct types or variant types).
The values of a product type typically contain several values, called fields. All values of that type have the same combination of field types. The set of all possible values of a product type is the set-theoretic product, i.e., the Cartesian product, of the sets of all possible values of its field types.
The values of a sum type are typically grouped into several classes, called variants. A value of a variant type is usually created with a quasi-functional entity called a constructor. Each variant has its own constructor, which takes a specified number of arguments with specified types. The set of all possible values of a sum type is the set-theoretic sum, i.e., the disjoint union, of the sets of all possible values of its variants. Enumerated types are a special case of sum types in which the constructors take no arguments, as exactly one value is defined for each constructor.
Values of algebraic types are analyzed with pattern matching, which identifies a value by its constructor or field names and extracts the data it contains.
Algebraic data types were introduced in Hope, a small functional programming language developed in the 1970s at the University of Edinburgh.
Examples
One of the most common examples of an algebraic data type is the singly linked list. A list type is a sum type with two variants, Nil for an empty list and Cons x xs for the combination of a new element x with a list xs to create a new list. Here is an example of how a singly linked list would be declared in Haskell:
data List a = Nil | Cons a (List a)
or
data [] a = [] | a : [a]
Cons is an abbreviation of construct. Many languages have spe |
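For comparison, here is a rough C sketch of how the same sum type can be encoded by hand as a tagged union; the names are illustrative and allocation checks are omitted:

#include <stdlib.h>

enum list_tag { NIL, CONS };            /* the two variants */

struct list {
    enum list_tag tag;                  /* which constructor built this value */
    int head;                           /* valid only when tag == CONS */
    struct list *tail;                  /* valid only when tag == CONS */
};

/* Constructors corresponding to Nil and Cons. */
static struct list *nil(void) {
    struct list *l = malloc(sizeof *l);
    l->tag = NIL;
    return l;
}

static struct list *cons(int x, struct list *xs) {
    struct list *l = malloc(sizeof *l);
    l->tag = CONS;
    l->head = x;
    l->tail = xs;
    return l;
}

/* Pattern matching becomes a switch on the tag. */
static int length(const struct list *l) {
    switch (l->tag) {
    case NIL:  return 0;
    case CONS: return 1 + length(l->tail);
    }
    return 0;
}

int main(void) {
    struct list *xs = cons(1, cons(2, nil()));
    return length(xs);                  /* 2 */
}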
https://en.wikipedia.org/wiki/CfV | A CfV (Call for Votes) is part of the Usenet decision making process. Usenet users are called upon to vote on a topical administrative issue, such as whether to create a particular newsgroup.
See also
Big-8 Management Board
Big 8 (Usenet)
Call for papers
Usenet cabal
References
Usenet |
https://en.wikipedia.org/wiki/Think%20aloud%20protocol | A think-aloud (or thinking aloud) protocol is a method used to gather data in usability testing in product design and development, in psychology and a range of social sciences (e.g., reading, writing, translation research, decision making, and process tracing).
Description
Think-aloud protocols involve participants thinking aloud as they are performing a set of specified tasks. Participants are asked to say whatever comes into their mind as they complete the task. This might include what they are looking at, thinking, doing, and feeling. This gives observers insight into the participant's cognitive processes (rather than only their final product), to make thought processes as explicit as possible during task performance. In a formal research protocol, all verbalizations are transcribed and then analyzed. In a usability testing context, observers are asked to take notes of what participants say and do, without attempting to interpret their actions and words, and especially noting places where they encounter difficulty. Test sessions may be completed on participants' own devices or in a more controlled setting. Sessions are often audio- and video-recorded so that developers can go back and refer to what participants did and how they reacted.
History
The think-aloud method was introduced in the usability field by Clayton Lewis while he was at IBM, and is explained in Task-Centered User Interface Design: A Practical Introduction by Lewis and John Rieman. The method was developed based on the techniques of protocol analysis by K. Ericsson and H. Simon. However, there are some significant differences between the way Ericsson and Simon propose that protocols be conducted and how they are actually conducted by usability practitioners, as noted by Ted Boren and Judith Ramey. These differences arise from the specific needs and context of usability testing; practitioners should be aware of these differences and adjust their method to meet their needs while still collecti |
https://en.wikipedia.org/wiki/Paper%20prototyping | In human–computer interaction, paper prototyping is a widely used method in the user-centered design process, a process that helps developers to create software that meets the user's expectations and needs – in this case, especially for designing and testing user interfaces. It is throwaway prototyping and involves creating rough, even hand-sketched, drawings of an interface to use as prototypes, or models, of a design. While paper prototyping seems simple, this method of usability testing can provide useful feedback to aid the design of easier-to-use products. This is supported by many usability professionals.
History
Paper prototyping started in the mid-1980s and then became popular in the mid-1990s, when companies such as IBM, Honeywell, Microsoft, and others, started using the technique in developing their products. Today, paper prototyping is used widely in user-centered design by usability professionals. More recently, digital paper prototyping has been advocated by companies like Pidoco due to advantages in terms of collaboration, flexibility, and cost.
Benefits
Paper prototyping saves time and money, since it enables developers to test product interfaces (from software and websites to cell phones and microwave ovens) before they write code or begin development. This also allows for easy and inexpensive modification to existing designs, which makes this method useful in the early phases of design. Using paper prototyping allows the entire creative team to be involved in the process, which eliminates the chance of someone with key information not being involved in the design process. Another benefit of paper prototyping is that users feel more comfortable being critical of the mock-up because it doesn't have a polished look.
There are different methods of paper prototyping, each of them showing several benefits regarding the communication within the development team, as well as the quality of the product being developed. In the development team, paper pro |
https://en.wikipedia.org/wiki/Read-copy-update | In computer science, read-copy-update (RCU) is a synchronization mechanism that avoids the use of lock primitives while multiple threads concurrently read and update elements that are linked through pointers and that belong to shared data structures (e.g., linked lists, trees, hash tables).
Whenever a thread is inserting or deleting elements of data structures in shared memory, all readers are guaranteed to see and traverse either the older or the new structure, therefore avoiding inconsistencies (e.g., dereferencing null pointers).
It is used when performance of reads is crucial and is an example of space–time tradeoff, enabling fast operations at the cost of more space. This makes all readers proceed as if there were no synchronization involved, hence they will be fast, but also making updates more difficult.
Name and overview
The name comes from the way that RCU is used to update a linked structure in place. A thread wishing to do this uses the following steps:
create a new structure,
copy the data from the old structure into the new one, and save a pointer to the old structure,
modify the new, copied, structure,
update the global pointer to refer to the new structure,
sleep until the operating system kernel determines that there are no readers left using the old structure, for example, in the Linux kernel, by using synchronize_rcu(),
once awakened by the kernel, deallocate the old structure.
So the structure is read concurrently with a thread copying in order to do an update, hence the name "read-copy update". The abbreviation "RCU" was one of many contributions by the Linux community. Other names for similar techniques include passive serialization and MP defer by VM/XA programmers and generations by K42 and Tornado programmers.
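A minimal sketch of those steps, assuming the Linux kernel's RCU primitives (rcu_read_lock(), rcu_dereference(), rcu_assign_pointer(), synchronize_rcu()) and a hypothetical RCU-protected global pointer gp; it is illustrative rather than a drop-in kernel patch:

#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct foo { int a; int b; };

static struct foo __rcu *gp;             /* hypothetical shared pointer, assumed initialized */
static DEFINE_SPINLOCK(update_lock);     /* serializes updaters only */

void reader(void)
{
    struct foo *p;

    rcu_read_lock();                     /* mark read-side critical section */
    p = rcu_dereference(gp);             /* fetch the current version safely */
    if (p)
        pr_info("a=%d b=%d\n", p->a, p->b);
    rcu_read_unlock();
}

void updater(int new_b)
{
    struct foo *old, *copy;

    copy = kmalloc(sizeof(*copy), GFP_KERNEL);   /* 1. create a new structure */
    if (!copy)
        return;

    spin_lock(&update_lock);
    old = rcu_dereference_protected(gp, lockdep_is_held(&update_lock));
    *copy = *old;                                /* 2. copy the old contents   */
    copy->b = new_b;                             /* 3. modify the copy         */
    rcu_assign_pointer(gp, copy);                /* 4. publish the new version */
    spin_unlock(&update_lock);

    synchronize_rcu();                           /* 5. wait out pre-existing readers */
    kfree(old);                                  /* 6. reclaim the old version */
}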
Detailed description
A key property of RCU is that readers can access a data structure even when it is in the process of being updated: RCU updaters cannot block readers or force them to retry their accesses. This overview starts by showing |
https://en.wikipedia.org/wiki/Bitboard | A bitboard is a specialized bit array data structure commonly used in computer systems that play board games, where each bit corresponds to a game board space or piece. This allows parallel bitwise operations to set or query the game state, or determine moves or plays in the game.
Bits in the same bitboard relate to each other by the rules of the game, often forming a game position when taken together. Other bitboards are commonly used as masks to transform or answer queries about positions. Bitboards are applicable to any game whose progress is represented by the state of, or presence of pieces on, discrete spaces of a gameboard, by mapping of the space states to bits in the data structure. Bitboards are a more efficient alternative board representation to the traditional mailbox representation, where each piece or space on the board is an array element.
Bitboards are especially effective when the associated bits of various related states on the board fit into a single word or double word of the CPU architecture, so that single bitwise operators like AND and OR can be used to build or query game states.
Among the computer game implementations that use bitboards are chess, checkers, othello and word games. The scheme was first employed in checkers programs in the 1950s, and since the mid-1970s has been the de facto standard for game board representation in computer automatons.
Description
A bitboard, a specialized bit field, is a format that packs multiple related boolean variables into the same machine word, typically representing a position on a board game, or state of a game. Each bit represents a space; when the bit is positive, a property of that space is true. Bitboards allow the computer to answer some questions about game state with one bitwise operation. For example, if a chess program wants to know if the white player has any pawns in the center of the board (center four squares) it can just compare a bitboard for the player's pawns with one for |
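A hedged C sketch of that test, assuming the common convention of a 64-bit word with one bit per square and a1 as bit 0 (the square numbering and names are illustrative):

#include <stdbool.h>
#include <stdint.h>

typedef uint64_t bitboard;                /* one bit per square, a1 = bit 0 */

#define SQ(file, rank) ((bitboard)1 << ((rank) * 8 + (file)))

/* Mask of the four central squares d4, e4, d5 and e5. */
static const bitboard CENTER = SQ(3, 3) | SQ(4, 3) | SQ(3, 4) | SQ(4, 4);

/* One bitwise AND answers "any pawns in the center?". */
static bool pawns_in_center(bitboard pawns) {
    return (pawns & CENTER) != 0;
}

int main(void) {
    bitboard white_pawns = SQ(4, 3) | SQ(3, 1);   /* pawns on e4 and d2 */
    return pawns_in_center(white_pawns) ? 0 : 1;  /* 0: at least one central pawn */
}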
https://en.wikipedia.org/wiki/IBM%20PCjr | The IBM PCjr (pronounced "PC junior") was a home computer produced and marketed by IBM from March 1984 to May 1985, intended as a lower-cost variant of the IBM PC with hardware capabilities better suited for video games, in order to compete more directly with other home computers such as the Apple II and Commodore 64.
It retained the IBM PC's 8088 CPU and BIOS interface, but provided enhanced graphics and sound, ROM cartridge slots, built-in joystick ports, and an infrared wireless keyboard. The PCjr supported expansion via "sidecar" modules, which could be attached to the side of the unit.
Despite widespread anticipation, the PCjr was ultimately unsuccessful in the market. It was only partially IBM compatible, limiting support for IBM's software library, its chiclet keyboard was widely criticized for its poor quality, expandability was limited, and it was initially offered with a maximum of 128 KB of RAM, insufficient for many PC programs.
Models
The PCjr came in two models:
4860-004 - 64 KB of memory, priced at US$669
4860-067 - 128 KB of memory and a 360 KB, 5.25-inch floppy disk drive, priced at US$1,269
The PCjr was manufactured for IBM in Lewisburg, Tennessee by Teledyne.
A related machine, the IBM JX, was sold in the Japanese, Australian and New Zealand markets.
Hardware
The PCjr chassis is made entirely of plastic, unlike the all-steel chassis of the IBM PC. A 5.25" front bay allows the installation of a 180/360K floppy disk drive. The internal floppy drive was a half-height Qume 5.25" unit; IBM also used these drives in the PC Portable, but the PCjr units were specially equipped with a small fan to prevent overheating since the computer did not have a case fan.
Cartridges
The front of the PCjr exposes a pair of cartridge slots in which the user can insert software on ROM cartridges, as was common with other home computers. When a ROM cartridge is inserted, the machine automatically restarts and boots off of the ROM, without requiring the user to man |
https://en.wikipedia.org/wiki/RF%20probe | An RF probe is a device which allows electronic test equipment to measure radio frequency (RF) signal in an electronic circuit.
History
In 1980 Reed Gleason and Eric Strid invented the first high frequency wafer probe while working at Tektronix. They later went on to found Cascade Microtech in 1983.
RF energy may be challenging to measure for one or more reasons, depending on the nature of the circuit to be measured and the measuring equipment at hand.
The first kind of difficulty arises when the RF energy to be measured is at a frequency too high for available test equipment, such as a low-bandwidth oscilloscope, to process directly. In that case, the RF has to be converted to a DC or near-DC signal.
In this situation, a simple probe type sometimes called an RF detector can be used to convert the RF signal to DC. Such a device works as an RF rectifier and gives a pulsed DC voltage.
The second kind of difficulty arises when RF energy has to be measured in a circuit which is sensitive to small changes in its electrical environment. For example, with some oscillator circuits, the presence of an ordinary wire within a few centimeters of the active components may change the amplitude or frequency of oscillations, or even prevent the circuit from oscillating at all. In that case, the signal has to be acquired by a measurement probe which extracts very little energy from the circuit. This can be achieved by employing very thin conductors, or tiny coils kept at some minimum separation from the active elements of the circuit.
In situations where circuit loading rather than high frequency is the real problem, a variety of small-geometry, high-impedance probes can be used, sometimes including an amplifier to boost the tiny amount of energy extracted from the circuit to a level that allows it to be measured by available high-frequency test equipment.
Coaxial structures with spring-loaded inner and outer conductors can serve as an RF probe for modern communication elec |
https://en.wikipedia.org/wiki/TI-99/4A | The TI-99/4 and TI-99/4A are home computers released by Texas Instruments in 1979 and 1981, respectively. The TI-99 series competed against major home computers such as the Apple II, TRS-80, and the later Atari 400/800 series and VIC-20.
Based on the Texas Instruments TMS9900 microprocessor originally used in minicomputers, the TI-99/4 was the first 16-bit home computer. The associated video display controller provides color graphics and sprite support which were only comparable with those of the Atari 400 and 800 released a month after the TI-99/4.
The calculator-style keyboard of the TI-99/4 was cited as a weak point, and TI's reliance on ROM cartridges and their practice of limiting developer information to select third parties resulted in a lack of software for the system. The TI-99/4A was released in June 1981 to address some of these issues with a simplified internal design, full-travel keyboard, improved graphics, and a unique expansion system. At half the price of the original model, sales picked up significantly and TI supported the 4A with peripherals, including a speech synthesizer and a "Peripheral Expansion System" box to contain hardware add-ons. TI released developer information and tools, but the insistence on remaining sole publisher continued to starve the platform of software.
The 1981 US launch of the TI-99/4A followed Commodore's VIC-20 by several months. Commodore CEO Jack Tramiel did not like TI's predatory pricing in the mid-1970s and retaliated with a price war by repeatedly lowering the price of the VIC-20 and forcing TI to do the same. By late 1982, TI was dominating the U.S. home computer market, shipping 5,000 computers a day from their factory in Lubbock, Texas. By 1983, the 99/4A was selling at a loss for under . Even with the increased user base created by the heavy discounts, Texas Instruments lost US$330 million in the third quarter of 1983 and announced the discontinuation of the TI-99/4A in October 1983. Production ended in Mar |
https://en.wikipedia.org/wiki/CSIRAC | CSIRAC (; Commonwealth Scientific and Industrial Research Automatic Computer), originally known as CSIR Mk 1, was Australia's first digital computer, and the fifth stored program computer in the world. It is the oldest surviving first-generation electronic computer
(the Zuse Z4 at the Deutsches Museum is older, but was electro-mechanical, not electronic), and was the first in the world to play digital music.
After being exhibited at Melbourne Museum for many years, it was relocated to Scienceworks in 2018 and is now on permanent display in the Think Ahead gallery.
A comprehensive source of information about the CSIRAC collection, its contributors and related topics is available from Museums Victoria on their Collections website.
History
The CSIRAC was constructed by a team led by Trevor Pearcey and Maston Beard, working in large part independently of similar efforts across Europe and the United States, and ran its first test program (multiplication of numbers) sometime in November 1949. It was in restricted operation from late 1950, and was publicly demonstrated and operational in 1951.
Design
The machine was fairly representative of first-generation valve-driven computer designs. It used mercury acoustic delay lines as its primary data storage, with a typical capacity of 768 20-bit words, supplemented by a parallel disk-type device with a total 4096-word capacity and an access time of 10 milliseconds. Its memory clock ran at 1000 Hz, and the control unit, synchronized to the clock, took two cycles to execute an instruction (later the speed was doubled to one cycle per instruction). The bus (termed the "digit trunk" in their design) is unusual compared to most computers in that it was serial—it transferred one bit at a time.
Most of CSIRAC's approximately 2000 valves were of the types 6SN7, 6V6, EA50 diodes and KT66. George Semkiw later redesigned the drum-read electronics to use germanium transistors.
Input to the machine was performed in the form of punched wide, 12-track p |
https://en.wikipedia.org/wiki/PIC16x84 | The PIC16C84, PIC16F84 and PIC16F84A are 8-bit microcontrollers of which the PIC16C84 was the first introduced in 1993 and hailed as the first PIC microcontroller to feature a serial programming algorithm and EEPROM memory. It is a member of the PIC family of controllers, produced by Microchip Technology. The memory architecture makes use of bank switching. Software tools for assembler, debug and programming were only available for the Microsoft Windows operating system.
Description
The PIC16x84 is a microcontroller in the PIC family of controllers produced by Microchip Technology (originally named "Arizona Microchip"). It was Microchip's first microcontroller that utilised "EEPROM" memory technology for the program memory.
"EEPROM" technology for program memory has since fallen out of use in favour of "FLASH" memory, which is considerably cheaper to manufacture, releases fewer toxins into the atmosphere and is much more reliable than "EEPROM". Both "EEPROM" and "FLASH" utilise similar forms of "floating gate" technologies to operate.
The device features one 8-bit timer, and 13 I/O pins. The PIC16x84 became popular in many hobbyist applications because it uses a serial programming algorithm that lends itself to very simple programmers. Additionally, the PIC16C84 uses EEPROM memory, so it is easy to erase and requires no special tools to do so.
The PIC16F84 and its updated version, the PIC16F84A, both utilised FLASH program memory. The PIC16C84, PIC16C84A, PIC16F84 and the PIC16F84A all contain an additional 64 bytes of EEPROM addressed from the "DATA" memory map.
This additional memory is intended for use as "user data", hence the reason it can only be addressed from the "DATA" memory mapping.
F-version
The PIC16F84/PIC16F84A is an improved version of the PIC16C84, and almost completely compatible, with better program security and using flash memory instead of EEPROM memory for program memory. The PIC16F84/PIC16F84A has 68 bytes of RAM whilst the PIC16C |
https://en.wikipedia.org/wiki/Audio/modem%20riser | The audio/modem riser (AMR) is a riser expansion slot found on the motherboards of some Pentium III, Pentium 4, Duron, and Athlon personal computers. It was designed by Intel to interface with chipsets and provide analog functionality, such as sound cards and modems, on an expansion card.
Technology
Physically, it has two rows of 23 pins, making 46 pins total. Three drawbacks of AMR are that it eliminates one PCI slot, it is not plug and play, and it does not allow for hardware accelerated cards (only software-based).
Technologically, it has been superseded by the Advanced Communications Riser (ACR) and Intel's own communications and networking riser (CNR). However, riser technologies in general never really took off. Modems generally remained as PCI cards while audio and network interfaces were integrated on to motherboards.
See also
Advanced Communications Riser (ACR)
GeoPort
Mobile Daughter Card
References
Further reading
Motherboard expansion slot |
https://en.wikipedia.org/wiki/Line%20of%20Control | The Line of Control (LoC) is a military control line between the Indian- and Pakistani-controlled parts of the former princely state of Jammu and Kashmir—a line which does not constitute a legally recognized international boundary, but serves as the de facto border. It was established as part of the Simla Agreement at the end of the Indo-Pakistani War of 1971. Both nations agreed to rename the ceasefire line as the "Line of Control" and pledged to respect it without prejudice to their respective positions. Apart from minor details, the line is roughly the same as the original 1949 cease-fire line.
The part of the former princely state under Indian control is divided into the union territories of Jammu and Kashmir and Ladakh. The Pakistani-controlled section is divided into Azad Kashmir and Gilgit–Baltistan. The northernmost point of the Line of Control is known as NJ9842, beyond which lies the Siachen Glacier, which became a bone of contention in 1984. To the south of the Line of Control, (Sangam, Chenab River, Akhnoor), lies the border between Pakistani Punjab and the Jammu province, which has an ambiguous status: India regards it as an "international boundary", and Pakistan calls it a "working border".
Another ceasefire line separates the Indian-controlled state of Jammu and Kashmir from the Chinese-controlled area known as Aksai Chin. Lying further to the east, it is known as the Line of Actual Control (LAC).
Background
After the partition of India, present-day India and Pakistan contested the princely state of Jammu and Kashmir – India because of the ruler's accession to the country, and Pakistan by virtue of the state's Muslim-majority population. The First Kashmir War in 1947 lasted more than a year until a ceasefire was arranged through UN mediation. Both sides agreed on a ceasefire line.
After another Kashmir War in 1965, and the Indo-Pakistani War of 1971 (which saw Bangladesh become independent), only minor modifications had been effected in the origi |
https://en.wikipedia.org/wiki/Cray%20X-MP | The Cray X-MP was a supercomputer designed, built and sold by Cray Research. It was announced in 1982 as the "cleaned up" successor to the 1975 Cray-1, and was the world's fastest computer from 1983 to 1985 with a quad-processor system performance of 800 MFLOPS. The principal designer was Steve Chen.
Description
The X-MP's main improvement over the Cray-1 was that it was a shared-memory parallel vector processor, the first such computer from Cray Research. It housed up to four CPUs in a mainframe that was nearly identical in outside appearance to the Cray-1.
The X-MP CPU had a faster 9.5 nanosecond clock cycle (105 MHz), compared to 12.5 ns for the Cray-1A. It was built from bipolar gate-array integrated circuits containing 16 emitter-coupled logic gates each. The CPU was very similar to the Cray-1 CPU in architecture, but had better memory bandwidth (with two read ports and one write port to the main memory instead of only one read/write port) and improved chaining support. Each CPU had a theoretical peak performance of 200 MFLOPS.
The X-MP initially supported 2 million 64-bit words (16 MB) of main memory in 16 banks. The main memory was built from 4 Kbit bipolar SRAM ICs. CMOS memory versions of the Cray-1M were renamed Cray X-MP/1s. This configuration was first used for Cray Research's UNIX port.
In 1984, improved models of the X-MP were announced, consisting of one, two, and four-processor systems with 4 and 8 million word configurations. The top-end system was the X-MP/48, which contained four CPUs with a theoretical peak system performance of over 800 MFLOPS and 8 million words of memory. The CPUs in these models introduced vector gather/scatter memory reference instructions to the product line. The amount of main memory supported was increased to a maximum of 16 million words, depending on the model. The main memory was built from bipolar or MOS SRAM ICs, depending on the model.
The system initially ran the proprietary Cray Operating System |
https://en.wikipedia.org/wiki/Dining%20philosophers%20problem | In computer science, the dining philosophers problem is an example problem often used in concurrent algorithm design to illustrate synchronization issues and techniques for resolving them.
It was originally formulated in 1965 by Edsger Dijkstra as a student exam exercise, presented in terms of computers competing for access to tape drive peripherals.
Soon after, Tony Hoare gave the problem its present form.
Problem statement
Five philosophers dine together at the same table. Each philosopher has his own plate at the table. There is a fork between each plate. The dish served is a kind of spaghetti which has to be eaten with two forks. Each philosopher can only alternately think and eat. Moreover, a philosopher can only eat his spaghetti when he has both a left and right fork. Thus two forks will only be available when his two nearest neighbors are thinking, not eating. After an individual philosopher finishes eating, he will put down both forks.
The problem is how to design a regimen (a concurrent algorithm) such that no philosopher will starve; i.e., each can forever continue to alternate between eating and thinking, assuming that no philosopher can know when others may want to eat or think (an issue of incomplete information).
Problems
The problem was designed to illustrate the challenges of avoiding deadlock, a system state in which no progress is possible. To see that a proper solution to this problem is not obvious, consider a proposal in which each philosopher is instructed to behave as follows:
think unless the left fork is available; when it is, pick it up;
think unless the right fork is available; when it is, pick it up;
when both forks are held, eat for a fixed amount of time;
put the left fork down;
put the right fork down;
repeat from the beginning.
However, each philosopher will think for an undetermined amount of time and may end up holding a left fork thinking, staring at the right side of the plate, unable to eat because there is no right |
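One well-known fix is Dijkstra's resource-hierarchy solution: number the forks and have every philosopher pick up the lower-numbered of their two forks first, so a cycle of waiting can never form. A minimal sketch with POSIX threads (the timings and output are illustrative only):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define N 5

static pthread_mutex_t fork_mutex[N];

static void *philosopher(void *arg)
{
    int id     = (int)(long)arg;
    int left   = id;
    int right  = (id + 1) % N;
    int first  = left < right ? left : right;    /* always lock the lower-numbered fork first */
    int second = left < right ? right : left;

    for (int round = 0; round < 3; round++) {
        /* think */
        pthread_mutex_lock(&fork_mutex[first]);
        pthread_mutex_lock(&fork_mutex[second]);
        printf("philosopher %d eats\n", id);
        usleep(1000);                            /* eat */
        pthread_mutex_unlock(&fork_mutex[second]);
        pthread_mutex_unlock(&fork_mutex[first]);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[N];
    for (int i = 0; i < N; i++)
        pthread_mutex_init(&fork_mutex[i], NULL);
    for (int i = 0; i < N; i++)
        pthread_create(&t[i], NULL, philosopher, (void *)(long)i);
    for (int i = 0; i < N; i++)
        pthread_join(t[i], NULL);
    return 0;
}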
https://en.wikipedia.org/wiki/Low-noise%20amplifier | A low-noise amplifier (LNA) is an electronic component that amplifies a very low-power signal without significantly degrading its signal-to-noise ratio (SNR). Any electronic amplifier will increase the power of both the signal and the noise present at its input, but the amplifier will also introduce some additional noise. LNAs are designed to minimize that additional noise, by choosing special components, operating points, and circuit topologies. Minimizing additional noise must balance with other design goals such as power gain and impedance matching.
LNAs are found in radio communications systems, medical instruments and electronic test equipment. A typical LNA may supply a power gain of 100 (20 decibels (dB)) while decreasing the SNR by less than a factor of two (a 3 dB noise figure (NF)). Although LNAs are primarily concerned with weak signals that are just above the noise floor, they must also consider the presence of larger signals that cause intermodulation distortion.
Communications
Antennas are a common source of weak signals. An outdoor antenna is often connected to its receiver by a transmission line called a feed line. Losses in the feed line lower the received signal-to-noise ratio: a feed line loss of degrades the receiver signal-to-noise ratio (SNR) by .
An example is a feed line made from of RG-174 coaxial cable and used with a global positioning system (GPS) receiver. The loss in that feed line is at ; approximately at the GPS frequency (). This feed line loss can be avoided by placing an LNA at the antenna, which supplies enough gain to offset the loss.
An LNA is a key component at the front-end of a radio receiver circuit to help reduce unwanted noise in particular. Friis' formulas for noise models the noise in a multi-stage signal collection circuit. In most receivers, the overall NF is dominated by the first few stages of the RF front end.
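Friis' formula makes this quantitative: for cascaded stages with noise factors F_1, F_2, F_3, … and available power gains G_1, G_2, …, the total noise factor is
F_\text{total} = F_1 + \frac{F_2 - 1}{G_1} + \frac{F_3 - 1}{G_1 G_2} + \cdots,
so a first stage with a low noise factor and high gain, i.e. an LNA, largely sets the noise figure of the whole receiver.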
By using an LNA close to the signal source, the effect of noise from subsequent stages of the re |
https://en.wikipedia.org/wiki/Baker%E2%80%93Campbell%E2%80%93Hausdorff%20formula | In mathematics, the Baker–Campbell–Hausdorff formula is the solution for Z to the equation e^X e^Y = e^Z
for possibly noncommutative X and Y in the Lie algebra of a Lie group. There are various ways of writing the formula, but all ultimately yield an expression for Z in Lie algebraic terms, that is, as a formal series (not necessarily convergent) in X and Y and iterated commutators thereof. The first few terms of this series are:
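With Z = log(e^X e^Y), the expansion begins
Z = X + Y + \frac{1}{2}[X,Y] + \frac{1}{12}\bigl[X,[X,Y]\bigr] - \frac{1}{12}\bigl[Y,[X,Y]\bigr] + \cdots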
where "" indicates terms involving higher commutators of and . If and are sufficiently small elements of the Lie algebra of a Lie group , the series is convergent. Meanwhile, every element sufficiently close to the identity in can be expressed as for a small in . Thus, we can say that near the identity the group multiplication in —written as —can be expressed in purely Lie algebraic terms. The Baker–Campbell–Hausdorff formula can be used to give comparatively simple proofs of deep results in the Lie group–Lie algebra correspondence.
If and are sufficiently small matrices, then can be computed as the logarithm of , where the exponentials and the logarithm can be computed as power series. The point of the Baker–Campbell–Hausdorff formula is then the highly nonobvious claim that can be expressed as a series in repeated commutators of and .
Modern expositions of the formula can be found in, among other places, the books of Rossmann and Hall.
History
The formula is named after Henry Frederick Baker, John Edward Campbell, and Felix Hausdorff who stated its qualitative form, i.e. that only commutators and commutators of commutators, ad infinitum, are needed to express the solution. An earlier statement of the form was adumbrated by Friedrich Schur in 1890 where a convergent power series is given, with terms recursively defined. This qualitative form is what is used in the most important applications, such as the relatively accessible proofs of the Lie correspondence and in quantum field theory. Following Schur, it was noted in print by Campbell ( |
https://en.wikipedia.org/wiki/VCard | vCard, also known as VCF (Virtual Contact File), is a file format standard for electronic business cards. vCards can be attached to e-mail messages, sent via Multimedia Messaging Service (MMS), on the World Wide Web, instant messaging, NFC or through QR code. They can contain name and address information, phone numbers, e-mail addresses, URLs, logos, photographs, and audio clips.
vCard is used as a data interchange format in smartphone contacts, personal digital assistants (PDAs), personal information managers (PIMs) and customer relationship management systems (CRMs). To accomplish these data interchange applications, other "vCard variants" have been used and proposed as "variant standards", each for its specific niche: XML representation, JSON representation, or web pages.
An unofficial vCard Plus format makes use of a URL to a customized landing page with all the basic information along with a profile photo, geographic location, and other fields. This can also be saved as a contact file on smartphones.
Overview
The standard Internet media type (MIME type) for a vCard has varied with each version of the specification.
vCard information is common in web pages: the "free text" content is human-readable but not machine-readable. As technologies evolved, the "free text" (HTML) was adapted to be machine-readable as well.
RDFa with the vCard Ontology can be used in HTML and various XML-family languages, e.g. SVG, MathML.
Related formats
jCard, "The JSON Format for vCard" is a standard proposal of 2014 in . This proposal has not yet become a widely used standard. The RFC 7095 does not use real JSON objects, but rather uses arrays of sequence-dependent tag-value pairs (like an XML file).
hCard is a microformat that allows a vCard to be embedded inside an HTML page. It makes use of CSS class names to identify each vCard property. Normal HTML markup and CSS styling can be used alongside the hCard class names without affecting the webpage's ability to be parsed by a h |
https://en.wikipedia.org/wiki/State%20%28computer%20science%29 | In information technology and computer science, a system is described as stateful if it is designed to remember preceding events or user interactions; the remembered information is called the state of the system.
The set of states a system can occupy is known as its state space. In a discrete system, the state space is countable and often finite. The system's internal behaviour or interaction with its environment consists of separately occurring individual actions or events, such as accepting input or producing output, that may or may not cause the system to change its state. Examples of such systems are digital logic circuits and components, automata and formal language, computer programs, and computers.
The output of a digital circuit or deterministic computer program at any time is completely determined by its current inputs and its state.
Digital logic circuit state
Digital logic circuits can be divided into two types: combinational logic, whose output signals are dependent only on its present input signals, and sequential logic, whose outputs are a function of both the current inputs and the past history of inputs. In sequential logic, information from past inputs is stored in electronic memory elements, such as flip-flops. The stored contents of these memory elements, at a given point in time, is collectively referred to as the circuit's state and contains all the information about the past to which the circuit has access.
Since each binary memory element, such as a flip-flop, has only two possible states, one or zero, and there is a finite number of memory elements, a digital circuit has only a certain finite number of possible states. If N is the number of binary memory elements in the circuit, the maximum number of states a circuit can have is 2^N.
Program state
Similarly, a computer program stores data in variables, which represent storage locations in the computer's memory. The contents of these memory locations, at any given point in the program's |
https://en.wikipedia.org/wiki/Last%20mile%20%28telecommunications%29 | The last mile or last kilometer is a phrase widely used in the telecommunications, cable television and internet industries to refer to the final leg of the telecommunications networks that deliver telecommunication services to retail end-users (customers). More specifically, the last mile describes the portion of the telecommunications network chain that physically reaches the end-user's premises. Examples are the copper wire subscriber lines connecting landline telephones to the local telephone exchange; coaxial cable service drops carrying cable television signals from utility poles to subscribers' homes, and cell towers linking local cell phones to the cellular network. The word "mile" is used metaphorically; the length of the last mile link may be more or less than a mile. Because the last mile of a network to the user is conversely the first mile from the user's premises to the outside world when the user is sending data, the term first mile is also alternatively used.
The last mile is typically the speed bottleneck in communication networks; its bandwidth effectively limits the amount of data that can be delivered to the customer. This is because retail telecommunication networks have the topology of "trees", with relatively few high capacity "trunk" communication channels branching out to feed many final mile "twigs". The final mile links, being the most numerous and thus the most expensive part of the system, as well as having to interface with a wide variety of user equipment, are the most difficult to upgrade to new technology. For example, telephone trunklines that carry phone calls between switching centers are made of modern optical fiber, but the last mile is typically twisted pair wires, a technology which has essentially remained unchanged for over a century since the original laying of copper phone cables.
In recent years, usage of the term "last mile" has expanded outside the communications industries, to include other distribution networks that |
https://en.wikipedia.org/wiki/Limit%20of%20a%20function | Although the function is not defined at zero, as becomes closer and closer to zero, becomes arbitrarily close to 1. In other words, the limit of as approaches zero, equals 1.
In mathematics, the limit of a function is a fundamental concept in calculus and analysis concerning the behavior of that function near a particular input.
Formal definitions, first devised in the early 19th century, are given below. Informally, a function assigns an output to every input . We say that the function has a limit at an input , if gets closer and closer to as moves closer and closer to . More specifically, when is applied to any input sufficiently close to , the output value is forced arbitrarily close to . On the other hand, if some inputs very close to are taken to outputs that stay a fixed distance apart, then we say the limit does not exist.
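For reference, the informal description above corresponds to the usual (ε, δ) formalization (the symbols f, c and L here are generic placeholders, not quoted from the article):

$$\lim_{x \to c} f(x) = L \iff \forall \varepsilon > 0 \; \exists \delta > 0 \; \forall x : \; 0 < |x - c| < \delta \implies |f(x) - L| < \varepsilon.$$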
The notion of a limit has many applications in modern calculus. In particular, the many definitions of continuity employ the concept of limit: roughly, a function is continuous if all of its limits agree with the values of the function. The concept of limit also appears in the definition of the derivative: in the calculus of one variable, this is the limiting value of the slope of secant lines to the graph of a function.
History
Although implicit in the development of calculus of the 17th and 18th centuries, the modern idea of the limit of a function goes back to Bolzano who, in 1817, introduced the basics of the epsilon-delta technique (see #(ε, δ)-definition of limit below) to define continuous functions. However, his work was not known during his lifetime.
In his 1821 book , Augustin-Louis Cauchy discussed variable quantities, infinitesimals and limits, and defined continuity of by saying that an infinitesimal change in necessarily produces an infinitesimal change in , while claims that he used a rigorous epsilon-delta definition in proofs. In 1861, Weierstrass first introduced the epsilon-delta definition of limit |
https://en.wikipedia.org/wiki/Limit%20of%20a%20sequence | As the positive integer becomes larger and larger, the value becomes arbitrarily close to . We say that "the limit of the sequence equals ."
In mathematics, the limit of a sequence is the value that the terms of a sequence "tend to", and is often denoted using the symbol (e.g., ). If such a limit exists, the sequence is called convergent. A sequence that does not converge is said to be divergent. The limit of a sequence is said to be the fundamental notion on which the whole of mathematical analysis ultimately rests.
Limits can be defined in any metric or topological space, but are usually first encountered in the real numbers.
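In the real numbers, the informal phrase "tend to" is usually made precise as follows (a standard statement, with a_n and L as generic placeholders); replacing the absolute value by a metric d gives the definition in an arbitrary metric space:

$$\lim_{n \to \infty} a_n = L \iff \forall \varepsilon > 0 \; \exists N \in \mathbb{N} \; \forall n \ge N : \; |a_n - L| < \varepsilon.$$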
History
The Greek philosopher Zeno of Elea is famous for formulating paradoxes that involve limiting processes.
Leucippus, Democritus, Antiphon, Eudoxus, and Archimedes developed the method of exhaustion, which uses an infinite sequence of approximations to determine an area or a volume. Archimedes succeeded in summing what is now called a geometric series.
Grégoire de Saint-Vincent gave the first definition of limit (terminus) of a geometric series in his work Opus Geometricum (1647): "The terminus of a progression is the end of the series, which none progression can reach, even not if she is continued in infinity, but which she can approach nearer than a given segment."
Pietro Mengoli anticipated the modern idea of limit of a sequence with his study of quasi-proportions in Geometriae speciosae elementa (1659). He used the term quasi-infinite for unbounded and quasi-null for vanishing.
Newton dealt with series in his works on Analysis with infinite series (written in 1669, circulated in manuscript, published in 1711), Method of fluxions and infinite series (written in 1671, published in English translation in 1736, Latin original published much later) and Tractatus de Quadratura Curvarum (written in 1693, published in 1704 as an Appendix to his Optiks). In the latter work, Newton considers the binomial expansion of , which he then |
https://en.wikipedia.org/wiki/Truncated%20cuboctahedron | In geometry, the truncated cuboctahedron or great rhombicuboctahedron is an Archimedean solid, named by Kepler as a truncation of a cuboctahedron. It has 12 square faces, 8 regular hexagonal faces, 6 regular octagonal faces, 48 vertices, and 72 edges. Since each of its faces has point symmetry (equivalently, 180° rotational symmetry), the truncated cuboctahedron is a 9-zonohedron. The truncated cuboctahedron can tessellate with the octagonal prism.
Names
There is a nonconvex uniform polyhedron with a similar name: the nonconvex great rhombicuboctahedron.
Cartesian coordinates
The Cartesian coordinates for the vertices of a truncated cuboctahedron having edge length 2 and centered at the origin are all the permutations of:
(±1, ±(1 + √2), ±(1 + 2√2)).
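These coordinates can be enumerated directly; the following sketch (an illustration, not part of the article) generates all sign choices and permutations and checks the vertex count and edge length quoted above.

```python
# Enumerate the vertices of a truncated cuboctahedron of edge length 2 as all
# permutations and sign choices of (1, 1 + sqrt(2), 1 + 2*sqrt(2)).
from itertools import permutations, product
from math import sqrt, dist, isclose

base = (1.0, 1.0 + sqrt(2), 1.0 + 2 * sqrt(2))
vertices = {
    tuple(s * c for s, c in zip(signs, perm))
    for perm in permutations(base)
    for signs in product((1, -1), repeat=3)
}
print(len(vertices))  # 48, matching the vertex count given earlier

# The shortest distance between distinct vertices is the edge length, 2.
shortest = min(dist(a, b) for a in vertices for b in vertices if a != b)
print(isclose(shortest, 2.0))  # True
```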
Area and volume
The area A and the volume V of the truncated cuboctahedron of edge length a are:
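The formulas themselves are not reproduced in this excerpt; the standard expressions for the truncated cuboctahedron (stated here for completeness, not quoted from the article) are:

$$A = 12\left(2 + \sqrt{2} + \sqrt{3}\right)a^2 \approx 61.755\,a^2, \qquad V = \left(22 + 14\sqrt{2}\right)a^3 \approx 41.799\,a^3.$$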
Dissection
The truncated cuboctahedron is the convex hull of a rhombicuboctahedron with cubes above its 12 squares on 2-fold symmetry axes. The rest of its space can be dissected into 6 square cupolas below the octagons, and 8 triangular cupolas below the hexagons.
A dissected truncated cuboctahedron can create a genus 5, 7, or 11 Stewart toroid by removing the central rhombicuboctahedron, and either the 6 square cupolas, the 8 triangular cupolas, or the 12 cubes respectively. Many other lower symmetry toroids can also be constructed by removing the central rhombicuboctahedron, and a subset of the other dissection components. For example, removing 4 of the triangular cupolas creates a genus 3 toroid; if these cupolas are appropriately chosen, then this toroid has tetrahedral symmetry.
Uniform colorings
There is only one uniform coloring of the faces of this polyhedron, one color for each face type.
A 2-uniform coloring, with tetrahedral symmetry, exists with alternately colored hexagons.
Orthogonal projections
The truncated cuboctahedron has two special orthogonal projections in the A2 and B2 Coxeter planes with [6] |
https://en.wikipedia.org/wiki/Fractal%20dimension | In mathematics, a fractal dimension is a term invoked in the science of geometry to provide a rational statistical index of the complexity of detail in a pattern, and of how that detail changes with the scale at which the pattern is measured.
It is also a measure of the space-filling capacity of a pattern, and it tells how a fractal scales differently, in a fractal (non-integer) dimension.
The main idea of "fractured" dimensions has a long history in mathematics, but the term itself was brought to the fore by Benoit Mandelbrot based on his 1967 paper on self-similarity in which he discussed fractional dimensions. In that paper, Mandelbrot cited previous work by Lewis Fry Richardson describing the counter-intuitive notion that a coastline's measured length changes with the length of the measuring stick used (see Fig. 1). In terms of that notion, the fractal dimension of a coastline quantifies how the number of scaled measuring sticks required to measure the coastline changes with the scale applied to the stick. There are several formal mathematical definitions of fractal dimension that build on this basic concept of change in detail with change in scale: see the section Examples.
Ultimately, the term fractal dimension became the phrase with which Mandelbrot himself became most comfortable with respect to encapsulating the meaning of the word fractal, a term he created. After several iterations over years, Mandelbrot settled on this use of the language: "...to use fractal without a pedantic definition, to use fractal dimension as a generic term applicable to all the variants."
One non-trivial example is the fractal dimension of a Koch snowflake. It has a topological dimension of 1, but it is by no means rectifiable: the length of the curve between any two points on the Koch snowflake is infinite. No small piece of it is line-like, but rather it is composed of an infinite number of segments joined at different angles. The fractal dimension of a curve can be explained intuitively by t |
https://en.wikipedia.org/wiki/Email%20art | Email art refers to artwork created for the medium of email. It includes computer graphics, animations, screensavers, digital scans of artwork in other media, and even ASCII art. When exhibited, Email art can be either displayed on a computer screen or similar type of display device, or the work can be printed out and displayed.
Email art is an evolution of the networking Mail Art movement and began during the early 1990s. Chuck Welch, also known as Cracker Jack Kid, connected with early online artists and created a net-worker telenetlink. The historical evolution of the term "Email art" is documented in Chuck Welch's Eternal Network: A Mail Art Anthology published and edited by University of Calgary Press.
By the end of the 1990s, many mailartists, aware of increasing postal rates and cheaper internet access, were beginning the gradual migration of collective art projects towards the web and new, inexpensive forms of digital communication. The Internet facilitated faster dissemination of Mail Art calls (invitations); Mail Art blogs and websites have become commonly used to display contributions and online documentation; and an increasing number of projects include an invitation to submit Email art digitally, either as the preferred channel or as an alternative to sending contributions by post.
In 2006, Ramzi Turki received an e-mail containing a scanned work by the Belgian artist Luc Fierens, so he sent the picture to about 7,000 artists' e-mail addresses, seeking their interaction; he received about 200 contributions and answers.
See also
Cyberculture
Digital art
Fax art
Internet art
Mail art
References
Computer networking
Email
Internet art
New media art
Digital art |
https://en.wikipedia.org/wiki/Unicellular%20organism | A unicellular organism, also known as a single-celled organism, is an organism that consists of a single cell, unlike a multicellular organism that consists of multiple cells. Organisms fall into two general categories: prokaryotic organisms and eukaryotic organisms. Most prokaryotes are unicellular and are classified into bacteria and archaea. Many eukaryotes are multicellular, but some are unicellular such as protozoa, unicellular algae, and unicellular fungi. Unicellular organisms are thought to be the oldest form of life, with early protocells possibly emerging 3.8–4.0 billion years ago.
Although some prokaryotes live in colonies, they are not specialised cells with differing functions. These organisms live together, and each cell must carry out all life processes to survive. In contrast, even the simplest multicellular organisms have cells that depend on each other to survive.
Most multicellular organisms have a unicellular life-cycle stage. Gametes, for example, are reproductive unicells for multicellular organisms. Additionally, multicellularity appears to have evolved independently many times in the history of life.
Some organisms are partially unicellular, like Dictyostelium discoideum. Additionally, unicellular organisms can be multinucleate, like Caulerpa, Plasmodium, and Myxogastria.
Evolutionary hypothesis
Primitive protocells were the precursors to today's unicellular organisms. Although the origin of life is largely still a mystery, in the currently prevailing theory, known as the RNA world hypothesis, early RNA molecules would have been the basis for catalyzing organic chemical reactions and self-replication.
Compartmentalization was necessary for chemical reactions to be more likely as well as to differentiate reactions with the external environment. For example, an early RNA replicator ribozyme may have replicated other replicator ribozymes of different RNA sequences if not kept separate. Such hypothetic cells with an RNA genome instead of |
https://en.wikipedia.org/wiki/Modular%20form | In mathematics, a modular form is a (complex) analytic function on the upper half-plane, , that satisfies:
a kind of functional equation with respect to the group action of the modular group,
and a growth condition.
The theory of modular forms therefore belongs to complex analysis. The main importance of the theory is its connections with number theory. Modular forms appear in other areas, such as algebraic topology, sphere packing, and string theory.
Modular form theory is a special case of the more general theory of automorphic forms, which are functions defined on Lie groups that transform nicely with respect to the action of certain discrete subgroups, generalizing the example of the modular group .
The term "modular form", as a systematic description, is usually attributed to Hecke.
Each modular form is attached to a Galois representation.
Definition
In general, given a subgroup of finite index, called an arithmetic group, a modular form of level and weight is a holomorphic function from the upper half-plane such that two conditions are satisfied:
Automorphy condition: For any there is the equality
Growth condition: For any the function is bounded for
where and the function is identified with the matrix The identification of such functions with such matrices causes composition of such functions to correspond to matrix multiplication. In addition, it is called a cusp form if it satisfies the following growth condition:
Cuspidal condition: For any the function as
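The formulas attached to these conditions are lost in this excerpt; for a holomorphic function f on the upper half-plane, of weight k and level Γ (notation assumed here, not quoted from the article), the standard statements read:

$$f\!\left(\frac{a\tau + b}{c\tau + d}\right) = (c\tau + d)^{k} f(\tau) \quad \text{for all } \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \Gamma \qquad \text{(automorphy)},$$

$$(c\tau + d)^{-k} f\!\left(\frac{a\tau + b}{c\tau + d}\right) \ \text{bounded as} \ \operatorname{Im}(\tau) \to \infty \qquad \text{(growth)},$$

with the cuspidal condition requiring the same expression to tend to 0 as Im(τ) → ∞.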
As sections of a line bundle
Modular forms can also be interpreted as sections of a specific line bundle on modular varieties. For a modular form of level and weight can be defined as an element ofwhere is a canonical line bundle on the modular curveThe dimensions of these spaces of modular forms can be computed using the Riemann–Roch theorem. The classical modular forms for are sections of a line bundle on the moduli stack of elliptic curves.
Modular function
A m |
https://en.wikipedia.org/wiki/Regional%20Internet%20registry | A regional Internet registry (RIR) is an organization that manages the allocation and registration of Internet number resources within a region of the world. Internet number resources include IP addresses and autonomous system (AS) numbers.
The regional Internet registry system evolved, eventually dividing the responsibility for management to a registry for each of five regions of the world. The regional Internet registries are informally liaised through the unincorporated Number Resource Organization (NRO), which is a coordinating body to act on matters of global importance.
Five regional registries
The African Network Information Centre (AFRINIC) serves Africa.
The American Registry for Internet Numbers (ARIN) serves Antarctica, Canada, parts of the Caribbean, and the United States.
The Asia Pacific Network Information Centre (APNIC) serves East Asia, Oceania, South Asia, and Southeast Asia.
The Latin America and Caribbean Network Information Centre (LACNIC) serves most of the Caribbean and all of Latin America.
The Réseaux IP Européens Network Coordination Centre (RIPE NCC) serves Europe, Central Asia, Russia, and West Asia.
Internet Assigned Numbers Authority
Regional Internet registries are components of the Internet Number Registry System, which is described in IETF RFC 7020, where IETF stands for the Internet Engineering Task Force. The Internet Assigned Numbers Authority (IANA) delegates Internet resources to the RIRs who, in turn, follow their regional policies to delegate resources to their customers, which include Internet service providers and end-user organizations. Collectively, the RIRs participate in the Number Resource Organization (NRO), formed as a body to represent their collective interests, undertake joint activities, and coordinate their activities globally. The NRO has entered into an agreement with ICANN for the establishment of the Address Supporting Organisation (ASO), which undertakes coordination of global IP addressing pol |
https://en.wikipedia.org/wiki/Absorption%20law | In algebra, the absorption law or absorption identity is an identity linking a pair of binary operations.
Two binary operations, ¤ and ⁂, are said to be connected by the absorption law if:
a ¤ (a ⁂ b) = a ⁂ (a ¤ b) = a.
A set equipped with two commutative and associative binary operations ∨ ("join") and ∧ ("meet") that are connected by the absorption law is called a lattice; in this case, both operations are necessarily idempotent (i.e. a ∨ a = a and a ∧ a = a).
Examples of lattices include Heyting algebras and Boolean algebras, in particular sets of sets with union (∪) and intersection (∩) operators, and ordered sets with min and max operations.
In classical logic, and in particular Boolean algebra, the operations OR and AND, which are also denoted by ∨ and ∧, satisfy the lattice axioms, including the absorption law. The same is true for intuitionistic logic.
The absorption law does not hold in many other algebraic structures, such as commutative rings, e.g. the field of real numbers, relevance logics, linear logics, and substructural logics. In the last case, there is no one-to-one correspondence between the free variables of the defining pair of identities.
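A quick computational check of these statements (an illustrative sketch, not from the article): the absorption law holds for the min/max lattice on numbers and for Boolean OR/AND, but fails for addition and multiplication in a ring.

```python
# Verify a ∨ (a ∧ b) = a ∧ (a ∨ b) = a over a finite set of test values.
from itertools import product

def absorbs(join, meet, values) -> bool:
    return all(
        join(a, meet(a, b)) == a and meet(a, join(a, b)) == a
        for a, b in product(values, repeat=2)
    )

print(absorbs(max, min, range(-5, 6)))                        # True: min/max lattice
print(absorbs(lambda a, b: a or b,
              lambda a, b: a and b, (False, True)))           # True: Boolean OR/AND
print(absorbs(lambda a, b: a + b,
              lambda a, b: a * b, range(-5, 6)))              # False: + and * in a ring
```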
See also
Absorption (logic)
References
Abstract algebra
Boolean algebra
Theorems in propositional logic
Lattice theory |
https://en.wikipedia.org/wiki/Torticollis | Torticollis, also known as wry neck, is an extremely painful, dystonic condition defined by an abnormal, asymmetrical head or neck position, which may be due to a variety of causes. The term torticollis is derived from the Latin words tortus, meaning "twisted", and collum, meaning "neck".
The most common case has no obvious cause, and the pain and difficulty with turning the head usually go away after a few days, even without treatment in adults.
Signs and symptoms
Torticollis is a fixed or dynamic tilt, rotation, with flexion or extension of the head and/or neck.
The type of torticollis can be described depending on the positions of the head and neck.
laterocollis: the head is tipped toward the shoulder
rotational torticollis: the head rotates along the longitudinal axis towards the shoulder
anterocollis: forward flexion of the head and neck, bringing the chin towards the chest
retrocollis: hyperextension of the head and neck backward, bringing the back of the head towards the back
A combination of these movements may often be observed. Torticollis can be a disorder in itself as well as a symptom in other conditions.
Other signs and symptoms include:
Neck pain
Occasional formation of a mass
Thickened or tight sternocleidomastoid muscle
Tenderness on the cervical spine
Tremor in head
Unequal shoulder heights
Decreased neck movement
Causes
A multitude of conditions may lead to the development of torticollis including: muscular fibrosis, congenital spine abnormalities, or toxic or traumatic brain injury.
A rough categorization discerns between congenital torticollis and acquired torticollis.
Other categories include:
Osseous
Traumatic
CNS/PNS
Ocular
Non-muscular soft tissue
Spasmodic
Drug induced
Oral ties (lip and tongue ties)
Congenital muscular torticollis
Congenital muscular torticollis is the most common torticollis that is present at birth. Congenital muscular torticollis is the third most common congenital musculoskeletal deformity in chi |
https://en.wikipedia.org/wiki/255%20%28number%29 | 255 (two hundred [and] fifty-five) is the natural number following 254 and preceding 256.
In mathematics
Its factorization makes it a sphenic number. Since 255 = 2^8 − 1, it is a Mersenne number (though not a pernicious one), and the fourth such number not to be a prime number. It is a perfect totient number, the smallest such number to be neither a power of three nor thrice a prime.
Since 255 is the product of the first three Fermat primes, the regular 255-gon is constructible.
In base 10, it is a self number.
255 is a repdigit in base 2 (11111111), in base 4 (3333), and in base 16 (FF).
In computing
255 is a special number in some tasks having to do with computing. This is the maximum value representable by an eight-digit binary number, and therefore the maximum representable by an unsigned 8-bit byte (the most common size of byte, also called an octet), the smallest common variable size used in high-level programming languages (the bit being smaller, but rarely used for value storage). The range is 0 to 255, which is 256 total values.
For example, 255 is the maximum value of
components in the 24-bit RGB color model, since each color channel is allotted eight bits;
any dotted quad in an IPv4 address; and
the alpha blending scale in Delphi (255 being 100% visible and 0 being fully transparent).
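A small illustration of these points (standard behaviour, not taken from the article; the counter variable is just an example):

```python
# 255 as the unsigned 8-bit maximum, wrap-around, and signed reinterpretation.
import struct

print(0b11111111, 0xFF, 2 ** 8 - 1)   # 255 255 255: the same value in three notations

counter = 255
counter = (counter + 1) % 256          # 8-bit wrap-around: the value returns to 0
print(counter)                         # 0

# Reinterpreting the byte 0xFF as a signed 8-bit integer gives -1 (two's complement).
print(struct.unpack("b", bytes([255]))[0])   # -1
```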
The use of eight bits for storage in older video games has had the consequence of it appearing as a hard limit in many video games. For example, in the original The Legend of Zelda game, Link can carry a maximum of 255 rupees. It was often used for numbers where casual gameplay would not cause anyone to exceed the number. However, in most situations it is reachable given enough time. This can cause many other peculiarities to appear when the number wraps back to 0, such as the infamous "kill screen" seen after clearing level 255 of Pac-Man.
This number could be interpreted by a computer as −1 if a programmer is not careful about which 8-bit values are signed and unsi |
https://en.wikipedia.org/wiki/Baghdad%20Battery | The Baghdad Battery is the name given to a set of three artifacts which were found together: a ceramic pot, a tube of copper, and a rod of iron. It was discovered in present-day Khujut Rabu, Iraq in 1936, close to the metropolis of Ctesiphon, the capital of the Parthian (150 BC – 223 AD) and Sasanian (224–650 AD) empires, and it is believed to date from either of these periods. Similar artifacts have been found at nearby sites.
Its origin and purpose remain unclear. It was hypothesized by Wilhelm König, at the time director of the National Museum of Iraq, that the object functioned as a galvanic cell, possibly used for electroplating, or some kind of electrotherapy, but there is no electroplated object known from this period, and the claims are near-universally rejected by archaeologists. An alternative explanation is that it functioned as a storage vessel for sacred scrolls.
The artifact disappeared in 2003 during the US-led invasion of Iraq.
Physical description and dating
The artifacts consist of a terracotta pot approximately tall, with a mouth, containing a cylinder made of a rolled copper sheet, which houses a single iron rod. At the top, the iron rod is isolated from the copper by bitumen, with plugs or stoppers, and both rod and cylinder fit snugly inside the opening of the jar. The copper cylinder is not watertight, so if the jar were filled with a liquid, this would surround the iron rod as well. The artifact had been exposed to the weather and had suffered corrosion.
Austrian archeologist Wilhelm König thought the objects might date to the Parthian period, between 250 BC and AD 224. However, according to St John Simpson of the Near Eastern department of the British Museum, their original excavation and context were not well-recorded, and evidence for this date range is very weak. Furthermore, the style of the pottery is Sassanid (224–640).
Albert Al-Haik noted original reports from the 1936 dig at Khuyut Rabbou'a giving the location as an area nor |
https://en.wikipedia.org/wiki/Truncated%20icosidodecahedron | In geometry, a truncated icosidodecahedron, rhombitruncated icosidodecahedron, great rhombicosidodecahedron, omnitruncated dodecahedron or omnitruncated icosahedron is an Archimedean solid, one of thirteen convex, isogonal, non-prismatic solids constructed by two or more types of regular polygon faces.
It has 62 faces: 30 squares, 20 regular hexagons, and 12 regular decagons. It has the most edges and vertices of all Platonic and Archimedean solids, though the snub dodecahedron has more faces. Of all vertex-transitive polyhedra, it occupies the largest percentage (89.80%) of the volume of a sphere in which it is inscribed, very narrowly beating the snub dodecahedron (89.63%) and small rhombicosidodecahedron (89.23%), and less narrowly beating the truncated icosahedron (86.74%); it also has by far the greatest volume (206.8 cubic units) when its edge length equals 1. Of all vertex-transitive polyhedra that are not prisms or antiprisms, it has the largest sum of angles (90 + 120 + 144 = 354 degrees) at each vertex; only a prism or antiprism with more than 60 sides would have a larger sum. Since each of its faces has point symmetry (equivalently, 180° rotational symmetry), the truncated icosidodecahedron is a 15-zonohedron.
Names
The name great rhombicosidodecahedron refers to the relationship with the (small) rhombicosidodecahedron (compare section Dissection).
There is a nonconvex uniform polyhedron with a similar name, the nonconvex great rhombicosidodecahedron.
Area and volume
The surface area A and the volume V of the truncated icosidodecahedron of edge length a are:
If a set of all 13 Archimedean solids were constructed with all edge lengths equal, the truncated icosidodecahedron would be the largest.
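The formulas referred to above are not reproduced in this excerpt; the standard expressions for the truncated icosidodecahedron (stated for completeness, not quoted from the article, and consistent with the 206.8 figure given earlier) are:

$$A = 30\left(1 + \sqrt{3} + \sqrt{5 + 2\sqrt{5}}\right)a^2 \approx 174.292\,a^2, \qquad V = \left(95 + 50\sqrt{5}\right)a^3 \approx 206.803\,a^3.$$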
Cartesian coordinates
Cartesian coordinates for the vertices of a truncated icosidodecahedron with edge length 2φ − 2, centered at the origin, are all the even permutations of:
(±1/φ, ±1/φ, ±(3 + φ)),
(±2/φ, ±φ, ±(1 + 2φ)),
(±1/φ, ±φ², ±(−1 + 3φ)),
(±(2φ − 1), ±2, ±(2 |
https://en.wikipedia.org/wiki/Safety-critical%20system | A safety-critical system or life-critical system is a system whose failure or malfunction may result in one (or more) of the following outcomes:
death or serious injury to people
loss or severe damage to equipment/property
environmental harm
A safety-related system (or sometimes safety-involved system) comprises everything (hardware, software, and human aspects) needed to perform one or more safety functions, in which failure would cause a significant increase in the safety risk for the people or environment involved. Safety-related systems are those that do not have full responsibility for controlling hazards such as loss of life, severe injury or severe environmental damage. The malfunction of a safety-involved system would only be that hazardous in conjunction with the failure of other systems or human error. Some safety organizations provide guidance on safety-related systems, for example the Health and Safety Executive in the United Kingdom.
Risks of this sort are usually managed with the methods and tools of safety engineering. A safety-critical system is designed to lose less than one life per billion (10^9) hours of operation. Typical design methods include probabilistic risk assessment, a method that combines failure mode and effects analysis (FMEA) with fault tree analysis. Safety-critical systems are increasingly computer-based.
Safety-critical systems are a concept often used together with the Swiss cheese model to represent (usually in a bow-tie diagram) how a threat can escalate to a major accident through the failure of multiple critical barriers. This use has become common especially in the domain of process safety, in particular when applied to oil and gas drilling and production both for illustrative purposes and to support other processes, such as asset integrity management and incident investigation.
Reliability regimes
Several reliability regimes for safety-critical systems exist:
Fail-operational systems continue to operate when their |
https://en.wikipedia.org/wiki/Elite%20%28video%20game%29 | Elite is a space trading video game. It was written and developed by David Braben and Ian Bell and originally published by Acornsoft for the BBC Micro and Acorn Electron computers in September 1984. Elite's open-ended game model and revolutionary 3D graphics led to it being ported to virtually every contemporary home computer system and earned it a place as a classic and a genre maker in gaming history. The game's title derives from one of the player's goals of raising their combat rating to the exalted heights of "Elite".
Elite was one of the first home computer games to use wire-frame 3D graphics with hidden-line removal. It added graphics and twitch gameplay aspects to the genre established by the 1974 game Star Trader. Another novelty was the inclusion of The Dark Wheel, a novella by Robert Holdstock which gave players insight into the moral and legal codes to which they might aspire.
The Elite series is among the longest-running video game franchises. The first game was followed by the sequels Frontier: Elite II in 1993, and Frontier: First Encounters in 1995, which introduced Newtonian physics, realistic star systems and seamless freeform planetary landings. A third sequel, Elite Dangerous, began crowdfunding in 2012 and was launched on 16 December 2014, following a period of semi-open testing; it received a paid-for expansion season, Horizons, on 15 December 2015.
Elite proved hugely influential, serving as a model for other games including Wing Commander: Privateer, Grand Theft Auto, EVE Online, Freelancer, the X series and No Man's Sky.
Non-Acorn versions were first published by Firebird and Imagineer. Subsequently, Frontier Developments has claimed the game as a "Game by Frontier", part of its own back catalogue, and all the rights to the game have been owned by David Braben.
Gameplay
The player initially controls the character "Commander Jameson", though the name can be changed each time the game is saved. The player starts at Lave Stat |
https://en.wikipedia.org/wiki/Self-organization | Self-organization, also called spontaneous order in the social sciences, is a process where some form of overall order arises from local interactions between parts of an initially disordered system. The process can be spontaneous when sufficient energy is available, not needing control by any external agent. It is often triggered by seemingly random fluctuations, amplified by positive feedback. The resulting organization is wholly decentralized, distributed over all the components of the system. As such, the organization is typically robust and able to survive or self-repair substantial perturbation. Chaos theory discusses self-organization in terms of islands of predictability in a sea of chaotic unpredictability.
Self-organization occurs in many physical, chemical, biological, robotic, and cognitive systems. Examples of self-organization include crystallization, thermal convection of fluids, chemical oscillation, animal swarming, neural circuits, and black markets.
Overview
Self-organization is realized in the physics of non-equilibrium processes, and in chemical reactions, where it is often characterized as self-assembly. The concept has proven useful in biology, from the molecular to the ecosystem level. Cited examples of self-organizing behaviour also appear in the literature of many other disciplines, both in the natural sciences and in the social sciences (such as economics or anthropology). Self-organization has also been observed in mathematical systems such as cellular automata. Self-organization is an example of the related concept of emergence.
Self-organization relies on four basic ingredients:
strong dynamical non-linearity, often (though not necessarily) involving positive and negative feedback
balance of exploitation and exploration
multiple interactions among components
availability of energy (to overcome the natural tendency toward entropy, or loss of free energy)
Principles
The cybernetician William Ross Ashby formulated the original p |
https://en.wikipedia.org/wiki/Kaizen | Kaizen is a concept referring to business activities that continuously improve all functions and involve all employees, from the CEO to the assembly line workers. Kaizen also applies to processes, such as purchasing and logistics, that cross organizational boundaries into the supply chain. It has been applied in healthcare, psychotherapy, life coaching, government, manufacturing, and banking.
By improving standardized programs and processes, kaizen aims to eliminate waste and redundancies (lean manufacturing). Kaizen was first practiced in Japanese businesses after World War II, influenced in part by American business and quality-management teachers, and most notably as part of The Toyota Way. It has since spread throughout the world and has been applied to environments outside of business and productivity.
Overview
The Japanese word means 'change for better' (from 改 kai - change, revision; and 善 zen - virtue, goodness) with the inherent meaning of either 'continuous' or 'philosophy' in Japanese dictionaries and in everyday use. The word refers to any improvement, one-time or continuous, large or small, in the same sense as the English word improvement. However, given the common practice in Japan of labeling industrial or business improvement techniques with the word kaizen, particularly the practices spearheaded by Toyota, the word kaizen in English is typically applied to measures for implementing continuous improvement, especially those with a "Japanese philosophy". The discussion below focuses on such interpretations of the word, as frequently used in the context of modern management discussions. Two kaizen approaches have been distinguished:
Point kaizen
Point kaizen is one of the most commonly implemented types of kaizen. It happens very quickly and usually without much planning. As soon as something is found broken or incorrect, quick and immediate measures are taken to correct the issues. These measures are generally small, isolated and easy to implement; howe
https://en.wikipedia.org/wiki/Snub%20dodecahedron | In geometry, the snub dodecahedron, or snub icosidodecahedron, is an Archimedean solid, one of thirteen convex isogonal nonprismatic solids constructed by two or more types of regular polygon faces.
The snub dodecahedron has 92 faces (the most of the 13 Archimedean solids): 12 are pentagons and the other 80 are equilateral triangles. It also has 150 edges, and 60 vertices.
It has two distinct forms, which are mirror images (or "enantiomorphs") of each other. The union of both forms is a compound of two snub dodecahedra, and the convex hull of both forms is a truncated icosidodecahedron.
Kepler first named it in Latin as dodecahedron simum in 1619 in his Harmonices Mundi. H. S. M. Coxeter, noting it could be derived equally from either the dodecahedron or the icosahedron, called it snub icosidodecahedron, with a vertical extended Schläfli symbol and flat Schläfli symbol sr{5,3}.
Cartesian coordinates
Let ξ ≈ be the real zero of the cubic polynomial , where φ is the golden ratio. Let the point p be given by
.
Let the rotation matrices M1 and M2 be given by
and
M1 represents the rotation around the axis (0,1,φ) through an angle of counterclockwise, while M2 being a cyclic shift of (x,y,z) represents the rotation around the axis (1,1,1) through an angle of . Then the 60 vertices of the snub dodecahedron are the 60 images of point p under repeated multiplication by M1 and/or M2, iterated to convergence. (The matrices M1 and M2 generate the 60 rotation matrices corresponding to the 60 rotational symmetries of a regular icosahedron.) The coordinates of the vertices are integral linear combinations of 1, φ, ξ, φξ, ξ2 and φξ2. The edge length equals
Negating all coordinates gives the mirror image of this snub dodecahedron.
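The orbit construction described above can be sketched generically (the matrices and seed point below are placeholders, since the article's specific values are not reproduced in this excerpt; M2 is the cyclic coordinate shift mentioned in the text):

```python
# Generate the orbit of a seed point under repeated multiplication by rotation
# matrices, stopping when no new points appear (up to a numerical tolerance).
import numpy as np

M2 = np.array([[0, 0, 1], [1, 0, 0], [0, 1, 0]], dtype=float)  # (x, y, z) -> (z, x, y)
M1 = M2                          # placeholder; replace with the rotation about (0, 1, phi)
p = np.array([1.0, 2.0, 3.0])    # placeholder seed point

def orbit(seed, matrices, tol=1e-9):
    points = [seed]
    frontier = [seed]
    while frontier:
        q = frontier.pop()
        for M in matrices:
            r = M @ q
            if not any(np.linalg.norm(r - s) < tol for s in points):
                points.append(r)
                frontier.append(r)
    return points

print(len(orbit(p, [M1, M2])))  # 60 with the article's actual M1, M2 and seed point p
```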
As a volume, the snub dodecahedron consists of 80 triangular and 12 pentagonal pyramids.
The volume V3 of one triangular pyramid is given by:
and the volume V5 of one pentagonal pyramid by:
The total volume is
The circumradius equals
The mi |
https://en.wikipedia.org/wiki/Number%20line | In elementary mathematics, a number line is a picture of a graduated straight line that serves as visual representation of the real numbers. Every point of a number line is assumed to correspond to a real number, and every real number to a point.
The integers are often shown as specially-marked points evenly spaced on the line. Although the image only shows the integers from –3 to 3, the line includes all real numbers, continuing forever in each direction, and also numbers that are between the integers. It is often used as an aid in teaching simple addition and subtraction, especially involving negative numbers.
In advanced mathematics, the number line can be called the real line or real number line, formally defined as the set of all real numbers. It is viewed as a geometric space, namely the real coordinate space of dimension one, or the Euclidean space of dimension one – the Euclidean line. It can also be thought of as a vector space (or affine space), a metric space, a topological space, a measure space, or a linear continuum.
Just like the set of real numbers, the real line is usually denoted by the symbol (or alternatively, , the letter "R" in blackboard bold). However, it is sometimes denoted or in order to emphasize its role as the first real space or first Euclidean space.
History
The first mention of the number line used for operation purposes is found in John Wallis's Treatise of algebra. In his treatise, Wallis describes addition and subtraction on a number line in terms of moving forward and backward, under the metaphor of a person walking.
An earlier depiction, though without mention of operations, is found in John Napier's A description of the admirable table of logarithmes, which shows values 1 through 12 lined up from left to right.
Contrary to popular belief, René Descartes's original La Géométrie does not feature a number line, defined as we use it today, though it does use a coordinate system. In particular, Descartes's work does not
https://en.wikipedia.org/wiki/Inverse%20function%20theorem | In mathematics, specifically differential calculus, the inverse function theorem gives a sufficient condition for a function to be invertible in a neighborhood of a point in its domain: namely, that its derivative is continuous and non-zero at the point. The theorem also gives a formula for the derivative of the inverse function.
In multivariable calculus, this theorem can be generalized to any continuously differentiable, vector-valued function whose Jacobian determinant is nonzero at a point in its domain, giving a formula for the Jacobian matrix of the inverse. There are also versions of the inverse function theorem for complex holomorphic functions, for differentiable maps between manifolds, for differentiable functions between Banach spaces, and so forth.
The theorem was first established by Picard and Goursat using an iterative scheme: the basic idea is to prove a fixed point theorem using the contraction mapping theorem.
Statements
For functions of a single variable, the theorem states that if is a continuously differentiable function with nonzero derivative at the point ; then is injective (or bijective onto the image) in a neighborhood of , the inverse is continuously differentiable near , and the derivative of the inverse function at is the reciprocal of the derivative of at :
It can happen that a function may be injective near a point while . An example is . In fact, for such a function, the inverse cannot be differentiable at , since if were differentiable at , then, by the chain rule, , which implies . (The situation is different for holomorphic functions; see #Holomorphic inverse function theorem below.)
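The reciprocal formula mentioned above is elided in this excerpt; it is the standard one: for continuously differentiable f with f'(a) ≠ 0 and b = f(a),

$$\left(f^{-1}\right)'(b) = \frac{1}{f'(a)} = \frac{1}{f'\!\left(f^{-1}(b)\right)}.$$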
For functions of more than one variable, the theorem states that if is a continuously differentiable function from an open subset of into , and the derivative is invertible at a point (that is, the determinant of the Jacobian matrix of at is non-zero), then there exist neighborhoods of in and of such that and is bijective. Writ |
https://en.wikipedia.org/wiki/Sarcoidosis | Sarcoidosis (also known as Besnier–Boeck–Schaumann disease) is a disease involving abnormal collections of inflammatory cells that form lumps known as granulomata. The disease usually begins in the lungs, skin, or lymph nodes. Less commonly affected are the eyes, liver, heart, and brain, though any organ can be affected. The signs and symptoms depend on the organ involved. Often, no, or only mild, symptoms are seen. When it affects the lungs, wheezing, coughing, shortness of breath, or chest pain may occur. Some may have Löfgren syndrome with fever, large lymph nodes, arthritis, and a rash known as erythema nodosum.
The cause of sarcoidosis is unknown. Some believe it may be due to an immune reaction to a trigger such as an infection or chemicals in those who are genetically predisposed. Those with affected family members are at greater risk. Diagnosis is partly based on signs and symptoms, which may be supported by biopsy. Findings that make it likely include large lymph nodes at the root of the lung on both sides, high blood calcium with a normal parathyroid hormone level, or elevated levels of angiotensin-converting enzyme in the blood. The diagnosis should only be made after excluding other possible causes of similar symptoms such as tuberculosis.
Sarcoidosis may resolve without any treatment within a few years. However, some people may have long-term or severe disease. Some symptoms may be improved with the use of anti-inflammatory drugs such as ibuprofen. In cases where the condition causes significant health problems, steroids such as prednisone are indicated. Medications such as methotrexate, chloroquine, or azathioprine may occasionally be used in an effort to decrease the side effects of steroids. The risk of death is 1–7%. The chance of the disease returning in someone who has had it previously is less than 5%.
In 2015, pulmonary sarcoidosis and interstitial lung disease affected 1.9 million people globally and they resulted in 122,000 deaths. It is m |
https://en.wikipedia.org/wiki/Transcendental%20extension | In mathematics, a transcendental extension is a field extension such that there exists an element in the field that is transcendental over the field ; that is, an element that is not a root of any univariate polynomial with coefficients in . In other words, a transcendental extension is a field extension that is not algebraic. For example, are both transcendental extensions of
A transcendence basis of a field extension (or a transcendence basis of over ) is a maximal algebraically independent subset of over Transcendence bases share many properties with bases of vector spaces. In particular, all transcendence bases of a field extension have the same cardinality, called the transcendence degree of the extension. Thus, a field extension is a transcendental extension if and only if its transcendence degree is positive.
Transcendental extensions are widely used in algebraic geometry. For example, the dimension of an algebraic variety is the transcendence degree of its function field. Also, global function fields are transcendental extensions of degree one of a finite field, and play in number theory in positive characteristic a role that is very similar to the role of algebraic number fields in characteristic zero.
Transcendence basis
Zorn's lemma shows there exists a maximal linearly independent subset of a vector space (i.e., a basis). A similar argument with Zorn's lemma shows that, given a field extension L / K, there exists a maximal algebraically independent subset of L over K. It is then called a transcendence basis. By maximality, an algebraically independent subset S of L over K is a transcendence basis if and only if L is an algebraic extension of K(S), the field obtained by adjoining the elements of S to K.
The exchange lemma (a version for algebraically independent sets) implies that if S, S' are transcendence bases, then S and S' have the same cardinality. Then the common cardinality of transcendence bases is called the transcendence degree of |
https://en.wikipedia.org/wiki/Hot%20swapping | Hot swapping is the replacement or addition of components to a computer system without stopping, shutting down, or rebooting the system; hot plugging describes the addition of components only. Components which have such functionality are said to be hot-swappable or hot-pluggable; likewise, components which do not are cold-swappable or cold-pluggable.
Most desktop computer hardware, such as CPUs and memory, is only cold-pluggable. However, it is common for mid to high-end servers and mainframes to feature hot-swappable capability for hardware components, such as CPU, memory, PCIe, SATA and SAS drives.
An example of hot swapping is the express ability to pull a Universal Serial Bus (USB) peripheral device, such as a thumb drive, external hard disk drive (HDD), mouse, keyboard, or printer out of a computer's USB slot or peripheral hub without ejecting it first.
Most smartphones and tablets with tray-loading holders can interchange SIM cards without powering down the system.
Dedicated digital cameras and camcorders usually have readily accessible memory card and battery compartments for quick changing with only minimal interruption of operation. Batteries can be cycled through by recharging reserve batteries externally while unused. Many cameras and camcorders feature an internal memory to allow capturing when no memory card is inserted.
Rationale
Hot swapping is used whenever it is desirable to change the configuration or repair a working system without interrupting its operation. It may simply be for convenience of avoiding the delay and nuisance of shutting down and then restarting complex equipment or because it is essential for equipment, such as a server, to be continuously active.
Hot swapping may be used to add or remove peripherals or components, to allow a device to synchronize data with a computer, and to replace faulty modules without interrupting equipment operation. A machine may have dual power supplies, each adequate to power the machine; a fault |
https://en.wikipedia.org/wiki/Minitel | The Minitel was a videotex online service accessible through telephone lines, and was the world's most successful online service prior to the World Wide Web. It was invented in Cesson-Sévigné, near Rennes, Brittany, France.
The service was rolled out experimentally on 15 July 1980 in Saint-Malo, France, and from autumn 1980 in other areas, and introduced commercially throughout France in 1982 by the PTT (Postes, Télégraphes et Téléphones; divided since 1991 between France Télécom and La Poste). From its early days, users could make online purchases, make train reservations, check stock prices, search the telephone directory, have a mail box, and chat in a similar way to what is now made possible by the World Wide Web.
In February 2009, France Télécom indicated the Minitel network still had 10 million monthly connections. France Télécom retired the service on 30 June 2012.
Name
Officially TELETEL, the name Minitel is abbreviated from the French title of Médium interactif par numérisation d'information téléphonique (Interactive medium for digitized information by telephone).
Business model
In 1978, Postes, Télégraphes et Téléphones, the French PTT organisation, began designing the Minitel network. By distributing terminals that could access a nationwide electronic directory of telephone and address information, it hoped to increase use of the country's 23 million phone lines, and reduce the costs of printing phone books and employing directory assistance personnel. Millions of terminals were given for free (officially loans, and property of the PTT) to telephone subscribers.
The telephone company emphasized ease of use; one observer wrote that "the Minitel terminal requires slightly more training than a toaster to operate". By offering a popular service on simple, free equipment, Minitel achieved high market penetration and avoided the chicken and the egg problem that prevented widespread adoption of such a system in the United States. In exchange for the termi |
https://en.wikipedia.org/wiki/Algebraic%20independence | In abstract algebra, a subset of a field is algebraically independent over a subfield if the elements of do not satisfy any non-trivial polynomial equation with coefficients in .
In particular, a one element set is algebraically independent over if and only if is transcendental over . In general, all the elements of an algebraically independent set over are by necessity transcendental over , and over all of the field extensions over generated by the remaining elements of .
Example
The two real numbers and are each transcendental numbers: they are not the roots of any nontrivial polynomial whose coefficients are rational numbers. Thus, each of the two singleton sets and is algebraically independent over the field of rational numbers.
However, the set is not algebraically independent over the rational numbers, because the nontrivial polynomial
is zero when and .
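The specific numbers of this example are elided in the excerpt; as a generic illustration of the same phenomenon (not necessarily the article's original example), π and √π are each transcendental, yet the pair is algebraically dependent, since the nontrivial polynomial

$$P(x, y) = x^2 - y \quad \text{satisfies} \quad P(\sqrt{\pi}, \pi) = 0.$$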
Algebraic independence of known constants
Although both π and e are known to be transcendental, it is not known whether the set of both of them is algebraically independent over the rational numbers. In fact, it is not even known if π + e is irrational.
Nesterenko proved in 1996 that:
the numbers π, e^π, and Γ(1/4), where Γ is the gamma function, are algebraically independent over the rational numbers.
the numbers and are algebraically independent over .
for all positive integers , the number is algebraically independent over .
Lindemann–Weierstrass theorem
The Lindemann–Weierstrass theorem can often be used to prove that some sets are algebraically independent over . It states that whenever are algebraic numbers that are linearly independent over , then are also algebraically independent over .
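The symbols in this statement are elided above; written out in full (standard form, not quoted from the article), the theorem says: if $\alpha_1, \dots, \alpha_n$ are algebraic numbers that are linearly independent over $\mathbb{Q}$, then $e^{\alpha_1}, \dots, e^{\alpha_n}$ are algebraically independent over $\mathbb{Q}$.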
Algebraic matroids
Given a field extension which is not algebraic, Zorn's lemma can be used to show that there always exists a maximal algebraically independent subset of over . Further, all the maximal algebraically independent subsets have the same cardinality, known as the transcendence degree of the extension.
For every set of element |
https://en.wikipedia.org/wiki/Upper%20half-plane | In mathematics, the upper half-plane is the set of points in the Cartesian plane with positive y-coordinate. The lower half-plane is defined similarly, by requiring that the y-coordinate be negative instead. Each is an example of a two-dimensional half-space.
Affine geometry
The affine transformations of the upper half-plane include
shifts , , and
dilations , .
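The formulas for these maps are elided above; in the usual presentation (stated here for reference, not quoted from the article) they are

$$\text{shifts } (x, y) \mapsto (x + c,\; y), \; c \in \mathbb{R}, \qquad \text{dilations } (x, y) \mapsto (\lambda x,\; \lambda y), \; \lambda > 0,$$

both of which carry the upper half-plane to itself.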
Proposition: Let and be semicircles in the upper half-plane with centers on the boundary. Then there is an affine mapping that takes
to .
Proof: First shift the center of to . Then take
and dilate. Then shift the center of .
Definition: .
can be recognized as the circle of radius centered at , and as the polar plot of
.
Proposition: , , and are collinear points.
In fact, is the reflection of the line in the unit circle. Indeed, the diagonal from
to has squared length , so that
is the reciprocal of that length.
Metric geometry
The distance between any two points and in the upper half-plane can be consistently defined as follows: The perpendicular bisector of the segment from to either intersects the boundary or is parallel to it. In the latter case and lie on a ray perpendicular to the boundary and logarithmic measure can be used to define a distance that is invariant under dilation. In the former case and lie on a circle centered at the intersection of their perpendicular bisector and the boundary. By the above proposition this circle can be moved by affine motion to
. Distances on can be defined using the correspondence with points on and logarithmic measure on this ray. In consequence, the upper half-plane becomes a metric space. The generic name of this metric space is the hyperbolic plane. In terms of the models of hyperbolic geometry, this model is frequently designated the Poincaré half-plane model.
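For reference (a standard statement, not quoted from the article): in the Poincaré half-plane model the metric, and the "logarithmic measure" along a vertical ray, are

$$ds^2 = \frac{dx^2 + dy^2}{y^2}, \qquad d\big((x, y_1), (x, y_2)\big) = \left|\ln \frac{y_2}{y_1}\right|.$$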
Complex plane
Mathematicians sometimes identify the Cartesian plane with the complex plane, and then the upper half-plane corresponds to the set of complex numbers with positive imaginary part:
The |
https://en.wikipedia.org/wiki/I-mode | NTT DoCoMo's i-mode is a mobile internet (distinct from wireless internet) service popular in Japan. Unlike Wireless Application Protocols, i-mode encompasses a wider variety of internet standards, including web access, e-mail, and the packet-switched network that delivers the data. i-mode users also have access to various other services, such as sports results, weather forecasts, games, financial services, and ticket booking. Content is provided by specialised services, typically from the mobile carrier, which allows them to have tighter control over billing.
Like WAP, i-mode delivers only those services that are specifically converted for the service, or are converted through gateways.
Description
In contrast with the Wireless Application Protocol (WAP) standard, which used Wireless Markup Language (WML) on top of a protocol stack for wireless handheld devices, i-mode borrows from DoCoMo proprietary protocols ALP (HTTP) and TLP (TCP, UDP), as well as fixed Internet data formats such as C-HTML, a subset of the HTML language designed by DoCoMo. C-HTML was designed for small devices (e.g. cellular phones) with hardware restrictions such as lower memory, low-power CPUs with limited or no storage capabilities, small monochrome display screens, single-character fonts and limited input methods. As a simpler form of HTML, C-HTML does not support tables, image maps, multiple fonts and styling of fonts, background colors and images, frames, or style sheets, and is limited to a monochromatic display.
i-mode phones have a special i-mode button for the user to access the start menu. There are more than 12,000 official sites and around 100,000 or more unofficial i-mode sites, which are not linked to DoCoMo's i-mode portal page and DoCoMo's billing services. NTT DoCoMo supervises the content and operations of all official i-mode sites, most of which are commercial. These official sites are accessed through DoCoMo's i-mode menu but in many cases official sites can also be acc |
https://en.wikipedia.org/wiki/Rog-O-Matic | Rog-O-Matic is a bot developed in 1981 to play and win the video game Rogue, by four graduate students in the Computer Science Department at Carnegie-Mellon University in Pittsburgh: Andrew Appel, Leonard Hamey, Guy Jacobson and Michael Loren Mauldin.
Described as a "belligerent expert system", Rog-O-Matic performs well when tested against expert Rogue players, even winning the game.
Because all information in Rogue is communicated to the player via ASCII text, Rog-O-Matic has automatic access to the same information a human player has. The program is still the subject of some scholarly interest; a 2005 paper said:
Notes
References
External links
Game artificial intelligence
Expert systems |
https://en.wikipedia.org/wiki/Honeypot%20%28computing%29 | In computer terminology, a honeypot is a computer security mechanism set to detect, deflect, or, in some manner, counteract attempts at unauthorized use of information systems. Generally, a honeypot consists of data (for example, in a network site) that appears to be a legitimate part of the site which contains information or resources of value to attackers. It is actually isolated, monitored, and capable of blocking or analyzing the attackers. This is similar to police sting operations, colloquially known as "baiting" a suspect.
The main use for this network decoy is to distract potential attackers from more important information and machines on the real network, learn about the forms of attacks they can suffer, and examine such attacks during and after the exploitation of a honeypot.
It provides a way to observe vulnerabilities in a specific network system and to help prevent their exploitation. A honeypot is a decoy used to protect a network from present or future attacks.
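A deliberately minimal sketch of the decoy idea (illustrative only; the port, banner and log path below are made up, and a real honeypot would be far more carefully isolated): listen on an otherwise unused service port, record every connection attempt and whatever the client sends, and expose no real data.

```python
# Minimal TCP honeypot sketch: log each connection attempt and the first bytes sent.
import socket
import datetime

HOST, PORT = "0.0.0.0", 2222          # hypothetical decoy port
LOGFILE = "honeypot.log"              # hypothetical log location

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen()
    while True:
        conn, addr = srv.accept()
        with conn, open(LOGFILE, "a") as log:
            stamp = datetime.datetime.utcnow().isoformat()
            conn.sendall(b"SSH-2.0-OpenSSH_8.0\r\n")  # fake banner to look plausible
            data = conn.recv(1024)                    # capture the attacker's first bytes
            log.write(f"{stamp} {addr[0]}:{addr[1]} {data!r}\n")
```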
Types
Honeypots can be differentiated based on if they are physical or virtual:
Physical honeypots: a real machine with its own IP address that simulates behaviors modeled by the system. This modality is used less often, owing to the high price of acquiring new machines, the cost of their maintenance, and the complication of configuring specialized hardware
Virtual honeypots: the use of this type of honeypot allows one to install and simulate hosts on the network with different operating systems, but in order to do so it is necessary to simulate the TCP/IP stack of the target operating system. This modality is more frequent.
Honeypots can be classified based on their deployment (use/action) and based on their level of involvement. Based on deployment, honeypots may be classified as:
production honeypots
research honeypots
Production honeypots are easy to use, capture only limited information, and are used primarily by corporations. Production honeypots are placed inside the production network with other production |
https://en.wikipedia.org/wiki/Gibbs%20paradox | In statistical mechanics, a semi-classical derivation of entropy that does not take into account the indistinguishability of particles yields an expression for entropy which is not extensive (is not proportional to the amount of substance in question). This leads to a paradox known as the Gibbs paradox, after Josiah Willard Gibbs, who proposed this thought experiment in 1874‒1875. The paradox allows for the entropy of closed systems to decrease, violating the second law of thermodynamics. A related paradox is the "mixing paradox". If one takes the perspective that the definition of entropy must be changed so as to ignore particle permutation, in the thermodynamic limit, the paradox is averted.
Illustration of the problem
Gibbs himself considered the following problem that arises if the ideal gas entropy is not extensive. Two identical containers of an ideal gas sit side-by-side. The gas in container #1 is identical in every respect to the gas in container #2 (i.e. in volume, mass, temperature, pressure, etc). There is a certain entropy S associated with each container which depends on the volume of each container. Now a door in the container wall is opened to allow the gas particles to mix between the containers. No macroscopic changes occur, as the system is in equilibrium. The entropy of the gas in the two-container system can be easily calculated, but if the equation is not extensive, the entropy would not be 2S. In fact, the non-extensive entropy quantity defined and studied by Gibbs would predict additional entropy. Closing the door then reduces the entropy again to S per box, in supposed violation of the Second Law of Thermodynamics.
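The arithmetic behind this illustration can be sketched with the standard semi-classical ideal-gas entropy; here k is Boltzmann's constant and λ the thermal de Broglie wavelength, and the formulas are the usual textbook expressions rather than quotations from Gibbs.

```latex
% Entropy of N particles in volume V without the 1/N! factor (not extensive):
S(N,V) = N k \left[ \ln\frac{V}{\lambda^{3}} + \frac{3}{2} \right]

% Two identical boxes, each (N, V), before and after the door is opened:
S_{\mathrm{before}} = 2 N k \left[ \ln\frac{V}{\lambda^{3}} + \frac{3}{2} \right],
\qquad
S_{\mathrm{after}}  = 2 N k \left[ \ln\frac{2V}{\lambda^{3}} + \frac{3}{2} \right]
                    = S_{\mathrm{before}} + 2 N k \ln 2

% Dividing the partition function by N! gives the extensive Sackur–Tetrode form,
% for which the spurious mixing term vanishes:
S(N,V) = N k \left[ \ln\frac{V}{N\lambda^{3}} + \frac{5}{2} \right],
\qquad
S_{\mathrm{after}} - S_{\mathrm{before}} = 0
```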
As understood by Gibbs, and reemphasized more recently, this is a misapplication of Gibbs' non-extensive entropy quantity. If the gas particles are distinguishable, closing the doors will not return the system to its original state – many of the particles will have switched containers. There is a freedom in what is defi |
https://en.wikipedia.org/wiki/List%20of%20forms%20of%20alternative%20medicine | This is a list of articles covering alternative medicine topics.
A
Activated charcoal cleanse
Acupressure
Acupuncture
Affirmative prayer
Alexander technique
Alternative cancer treatments
Animal-assisted therapy
Anthroposophical medicine
Apitherapy
Applied kinesiology
Aquatherapy
Aromatherapy
Art therapy
Asahi Health
Astrology
Attachment therapy
Auriculotherapy
Autogenic training
Autosuggestion
Ayurveda
B
Bach flower therapy
Balneotherapy
Bates method
Bibliotherapy
Biodanza
Bioresonance therapy
Blood irradiation therapies
Body-based manipulative therapies
Body work (alternative medicine) or Massage therapy
C
Chelation therapy
Chinese food therapy
Chinese herbology
Chinese martial arts
Chinese medicine
Chinese pulse diagnosis
Chakra
Chiropractic
Chromotherapy (color therapy, colorpuncture)
Cinema therapy
Coding (therapy)
Coin rubbing
Colloidal silver therapy
Colon cleansing
Colon hydrotherapy (Enema)
Craniosacral therapy
Creative visualization
Crystal healing
Cupping
D
Dance therapy
Detoxification
Detoxification foot baths
Dietary supplements
Dowsing
E
Ear candling
Eclectic medicine
Electromagnetic therapy
Electrohomeopathy
Equine-assisted therapy
Energy medicine
Earthing
Magnet therapy
Reiki
Qigong
Shiatsu
Therapeutic touch
Energy psychology
F
Faith healing
Fasting
Feldenkrais Method
Feng shui
Five elements
Flower essence therapy
Functional medicine
G
German New Medicine
Grahamism
Grinberg Method
Gua sha
Graphology
H
Hair analysis (alternative medicine)
Hatha yoga
Havening
Hawaiian massage
Herbalism
Herbal therapy
Herbology
Hijama
Holistic living
Holistic medicine
Homeopathy
Home remedies
Hydrotherapy
Hypnosis
Hypnotherapy
I
Introspection rundown
Iridology
Isolation tank
Isopathy
J
Jilly Juice
L
Laughter therapy
Light therapy
M
Macrobiotic lifestyle
Magnetic healing
Manipulative therapy
Manual lymphatic drainage
Martial arts
Massage therapy
M |
https://en.wikipedia.org/wiki/Component%20video | Component video is an analog video signal that has been split into two or more component channels. In popular use, it refers to a type of component analog video (CAV) information that is transmitted or stored as three separate signals. Component video can be contrasted with composite video in which all the video information is combined into a single signal that is used in analog television. Like composite, component cables do not carry audio and are often paired with audio cables.
When used without any other qualifications, the term component video usually refers to analog component video with sync on luma (Y) found on analog high-definition televisions and associated equipment from the 1990s through the 2000s when they were largely replaced with HDMI and other all-digital standards. Component video cables and their RCA jack connectors on equipment are normally color-coded red, green and blue, although the signal is not in RGB. YPbPr component video can be losslessly converted to the RGB signal that internally drives the monitor; the encoding is useful as the Y signal will also work on black and white monitors.
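As a minimal sketch of the lossless YPbPr/RGB relationship mentioned above, the Python functions below use the Rec. 601 luma coefficients common in standard-definition analog video. Values are normalized to the range 0 to 1, and the scaling, offsets and gamma handling of real equipment are omitted.

```python
# Rec. 601 luma weights; Pb and Pr are scaled blue- and red-difference signals.
KR, KB = 0.299, 0.114
KG = 1.0 - KR - KB

def rgb_to_ypbpr(r: float, g: float, b: float) -> tuple[float, float, float]:
    y = KR * r + KG * g + KB * b          # luma carries the black-and-white picture
    pb = (b - y) / (2 * (1 - KB))         # colour-difference components
    pr = (r - y) / (2 * (1 - KR))
    return y, pb, pr

def ypbpr_to_rgb(y: float, pb: float, pr: float) -> tuple[float, float, float]:
    r = y + 2 * (1 - KR) * pr
    b = y + 2 * (1 - KB) * pb
    g = (y - KR * r - KB * b) / KG        # recover green from luma and the others
    return r, g, b

# Round trip: white comes back as white, illustrating why the conversion is
# lossless in principle (real converters add quantization and filtering).
assert all(abs(v - 1.0) < 1e-9 for v in ypbpr_to_rgb(*rgb_to_ypbpr(1.0, 1.0, 1.0)))
```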
Analog component video
Reproducing a video signal on a display device (for example, a cathode-ray tube, CRT) is a straightforward process complicated by the multitude of signal sources. DVD, VHS, computers and video game consoles all store, process and transmit video signals using different methods, and often each will provide more than one signal option. One way of maintaining signal clarity is by separating the components of a video signal so that they do not interfere with each other. A signal separated in this way is called "component video". S-Video, RGB and YPbPr signals comprise two or more separate signals, and thus are all component-video signals. For most consumer-level video applications, the common three-cable system using BNC or RCA connectors for analog component video was used. Typical formats are 480i (480 lines visible, 525 full for NTSC) and 576i |
https://en.wikipedia.org/wiki/Gibbard%E2%80%93Satterthwaite%20theorem | In social choice theory, the Gibbard–Satterthwaite theorem is a result published independently by philosopher Allan Gibbard in 1973 and economist Mark Satterthwaite in 1975. It deals with deterministic ordinal electoral systems that choose a single winner. It states that for every voting rule, one of the following three things must hold:
The rule is dictatorial, i.e. there exists a distinguished voter who can choose the winner; or
The rule limits the possible outcomes to two alternatives only; or
The rule is susceptible to tactical voting: in certain conditions, a voter's sincere ballot may not best defend their opinion.
While the scope of this theorem is limited to ordinal voting, Gibbard's theorem is more general, in that it deals with processes of collective decision that may not be ordinal: for example, voting systems where voters assign grades to candidates. Gibbard's 1978 theorem and Hylland's theorem are even more general and extend these results to non-deterministic processes, i.e. where the outcome may not only depend on the voters' actions but may also involve a part of chance.
Informal description
Consider three voters named Alice, Bob and Carol, who wish to select a winner among four candidates named a, b, c and d. Assume that they use the Borda count: each voter communicates his or her preference order over the candidates. For each ballot, 3 points are assigned to the top candidate, 2 points to the second candidate, 1 point to the third one and 0 points to the last one. Once all ballots have been counted, the candidate with the most points is declared the winner.
Assume that their preferences are as follows.
If the voters cast sincere ballots, then the scores are: . Hence, candidate will be elected, with 7 points.
But Alice can vote strategically and change the result. Assume that she modifies her ballot, in order to produce the following situation.
Alice has strategically upgraded candidate and downgraded candidate . Now, the scores are: . Hen |
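The mechanism can be made concrete with a short Python sketch of Borda counting and of a strategic ballot change. The preference orders below are assumed purely for illustration; they are not taken from the article's example.

```python
# Borda count: with n candidates, a ballot gives n-1 points to its top choice,
# n-2 to the next, and so on down to 0 for the last place.
from collections import defaultdict

def borda(ballots: list[list[str]]) -> dict[str, int]:
    scores: dict[str, int] = defaultdict(int)
    for ballot in ballots:
        n = len(ballot)
        for rank, candidate in enumerate(ballot):
            scores[candidate] += n - 1 - rank
    return dict(scores)

sincere = [
    ["a", "b", "c", "d"],   # Alice's sincere ranking (assumed)
    ["c", "b", "d", "a"],   # Bob (assumed)
    ["c", "b", "d", "a"],   # Carol (assumed)
]
print(borda(sincere))       # {'a': 3, 'b': 6, 'c': 7, 'd': 2} -> c wins

# Alice insincerely up-ranks b and buries c; the winner changes to b,
# whom Alice prefers to c, so her misrepresentation pays off.
strategic = [["b", "a", "d", "c"]] + sincere[1:]
print(borda(strategic))     # {'b': 7, 'a': 2, 'd': 3, 'c': 6} -> b wins
```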
https://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller | Model–view–controller (MVC) is a software design pattern commonly used for developing user interfaces that divides the related program logic into three interconnected elements. These elements are the internal representations of information (the Model), the interface (the View) that presents information to and accepts it from the user, and the Controller software linking the two.
Traditionally used for desktop graphical user interfaces (GUIs), this pattern became popular for designing web applications. Popular programming languages have MVC frameworks that facilitate the implementation of the pattern.
History
One of the seminal insights in the early development of graphical user interfaces, MVC became one of the first approaches to describe and implement software constructs in terms of their responsibilities.
Trygve Reenskaug created MVC while working on Smalltalk-79 as a visiting scientist at the Xerox Palo Alto Research Center (PARC) in the late 1970s. He wanted a pattern that could be used to structure any program where users interact with a large, convoluted data set. His design initially had four parts: Model, View, Thing, and Editor. After discussing it with the other Smalltalk developers, he and the rest of the group settled on Model, View, and Controller instead.
In their final design, a Model represents some part of the program purely and intuitively. A View is a visual representation of a Model, retrieving data from the Model to display to the user and passing requests back and forth between the user and the Model. A Controller is an organizational part of the user interface that lays out and coordinates multiple Views on the screen, and which receives user input and sends the appropriate messages to its underlying Views. This design also includes an Editor as a specialized kind of Controller used to modify a particular View, and which is created through that View.
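A minimal sketch of this division of responsibilities, written in Python with a plain console standing in for the display; the class and method names are illustrative and do not follow any particular MVC framework.

```python
# Model: holds the data and notifies attached views when it changes.
class Model:
    def __init__(self) -> None:
        self.items: list[str] = []
        self._views: list["View"] = []

    def attach(self, view: "View") -> None:
        self._views.append(view)

    def add_item(self, item: str) -> None:
        self.items.append(item)
        for view in self._views:
            view.refresh(self)      # views pull fresh data from the model

# View: a visual representation of the model (here, just console output).
class View:
    def refresh(self, model: Model) -> None:
        print("Items:", ", ".join(model.items))

# Controller: turns user input into operations on the model.
class Controller:
    def __init__(self, model: Model) -> None:
        self.model = model

    def handle_input(self, text: str) -> None:
        if text.strip():
            self.model.add_item(text.strip())

# Wiring: the view observes the model; the controller drives it.
model, view = Model(), View()
model.attach(view)
Controller(model).handle_input("first entry")   # prints "Items: first entry"
```

The point of the split is the direction of the dependencies: the model knows nothing about how it is displayed, the view only reads from the model, and the controller is the only part that interprets user input.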
Smalltalk-80 supports a version of MVC that evolved from this one. It provides abstrac |
https://en.wikipedia.org/wiki/Sodipodi | Sodipodi was an open-source vector graphics editor, discontinued in 2004. It is the predecessor to Inkscape.
Development
Sodipodi started as a fork of Gill, a vector-graphics program written by Raph Levien. The main author was Lauris Kaplinski, and several other people contributed to the project. The project is no longer under active development, having been succeeded by Inkscape, a 2003 fork of Sodipodi. Sodipodi means "mish mash" or "hodgepodge" in Estonian child-speak.
The primary design goal of Sodipodi was to produce a usable vector graphics editor, and a drawing tool for artists. Although it used SVG as its native file format (including some extensions to hold metadata), it was not intended to be a full implementation of the SVG standard. Sodipodi imports and exports plain SVG data, and can also export raster graphics in PNG format. The user interface of Sodipodi is a Controlled Single Document Interface (CSDI) similar to GIMP.
Sodipodi was developed for Linux and Microsoft Windows. The last version was 0.34, released on 11 February 2004. Released under the GNU General Public License, Sodipodi is free software.
Derivatives
Sodipodi started a collection of SVG clip art containing symbols and flags from around the world. This work helped inspire the Open Clip Art Library.
Inkscape started as a fork of Sodipodi, founded in 2003 by a group of Sodipodi developers with different goals, including redesigning the interface and closer compliance with the SVG standard.
See also
Comparison of vector graphics editors
References
External links
Interview with Lauris Kaplinski
Working version for Windows 7 and Windows 10
Free vector graphics editors
Vector graphics editors for Linux
Free software programmed in C
Graphics software that uses GTK
Scalable Vector Graphics
Software forks
Estonian brands
Estonian inventions |
https://en.wikipedia.org/wiki/Uniformization%20theorem | In mathematics, the uniformization theorem says that every simply connected Riemann surface is conformally equivalent to one of three Riemann surfaces: the open unit disk, the complex plane, or the Riemann sphere. The theorem is a generalization of the Riemann mapping theorem from simply connected open subsets of the plane to arbitrary simply connected Riemann surfaces.
Since every Riemann surface has a universal cover which is a simply connected Riemann surface, the uniformization theorem leads to a classification of Riemann surfaces into three types: those that have the Riemann sphere as universal cover ("elliptic"), those with the plane as universal cover ("parabolic") and those with the unit disk as universal cover ("hyperbolic"). It further follows that every Riemann surface admits a Riemannian metric of constant curvature, where the curvature can be taken to be 1 in the elliptic, 0 in the parabolic and -1 in the hyperbolic case.
The uniformization theorem also yields a similar classification of closed orientable Riemannian 2-manifolds into elliptic/parabolic/hyperbolic cases. Each such manifold has a conformally equivalent Riemannian metric with constant curvature, where the curvature can be taken to be 1 in the elliptic, 0 in the parabolic and -1 in the hyperbolic case.
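In standard notation, the trichotomy just described can be summarized as follows; the symbols are the usual ones (unit disk, complex plane, Riemann sphere) and nothing here goes beyond what the two paragraphs above state.

```latex
% \widetilde{X}: universal cover of the Riemann surface X;  K: constant curvature.
\text{elliptic:}   \quad \widetilde{X} \cong \widehat{\mathbb{C}} = \mathbb{C}\cup\{\infty\}, \qquad K = +1 \\
\text{parabolic:}  \quad \widetilde{X} \cong \mathbb{C},                                       \qquad K = 0  \\
\text{hyperbolic:} \quad \widetilde{X} \cong \mathbb{D} = \{\, z \in \mathbb{C} : |z| < 1 \,\}, \qquad K = -1
```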
History
Felix Klein and Henri Poincaré conjectured the uniformization theorem for (the Riemann surfaces of) algebraic curves. Poincaré extended this to arbitrary multivalued analytic functions and gave informal arguments in its favor. The first rigorous proofs of the general uniformization theorem were given by Poincaré and Koebe in 1907. Paul Koebe later gave several more proofs and generalizations. A complete account of the history of uniformization up to the 1907 papers of Koebe and Poincaré, with detailed proofs, is given by Henri Paul de Saint-Gervais (the Bourbaki-type pseudonym of the group of fifteen mathematicians who jointly produced this publication).
Classification of connected Riemann surfaces
Every Riemann surface is the quot |
https://en.wikipedia.org/wiki/Usability | Usability can be described as the capacity of a system to provide a condition for its users to perform the tasks safely, effectively, and efficiently while enjoying the experience. In software engineering, usability is the degree to which software can be used by specified users to achieve quantified objectives with effectiveness, efficiency, and satisfaction in a quantified context of use.
The object of use can be a software application, website, book, tool, machine, process, vehicle, or anything a human interacts with. A usability study may be conducted as a primary job function by a usability analyst or as a secondary job function by designers, technical writers, marketing personnel, and others. It is widely used in consumer electronics, communication, and knowledge transfer objects (such as a cookbook, a document or online help) and mechanical objects such as a door handle or a hammer.
Usability includes methods of measuring usability, such as needs analysis and the study of the principles behind an object's perceived efficiency or elegance. In human-computer interaction and computer science, usability studies the elegance and clarity with which the interaction with a computer program or a web site (web usability) is designed. Usability considers user satisfaction and utility as quality components, and aims to improve user experience through iterative design.
Introduction
The primary notion of usability is that an object designed with its users' general psychology and physiology in mind is, for example:
More efficient to use—takes less time to accomplish a particular task
Easier to learn—operation can be learned by observing the object
More satisfying to use
Complex computer systems find their way into everyday life, and at the same time the market is saturated with competing brands. This has made usability more popular and widely recognized in recent years, as companies see the benefits of researching and developing their products with user-orient |
https://en.wikipedia.org/wiki/Replica%20plating | Replica plating is a microbiological technique in which one or more secondary Petri plates containing different solid (agar-based) selective growth media (lacking nutrients or containing chemical growth inhibitors such as antibiotics) are inoculated with the same colonies of microorganisms from a primary plate (or master dish), reproducing the original spatial pattern of colonies. The technique involves pressing a velveteen-covered disk onto the colonies of the primary plate and then imprinting the secondary plates with the cells the material picks up. Generally, large numbers of colonies (roughly 30-300) are replica plated due to the difficulty in streaking each out individually onto a separate plate.
The purpose of replica plating is to be able to compare the master plate and any secondary plates, typically to screen for a desired phenotype. For example, when a colony that was present on the primary plate (or master dish), fails to appear on a secondary plate, it shows that the colony was sensitive to a substance on that particular secondary plate. Common screenable phenotypes include auxotrophy and antibiotic resistance.
Replica plating is especially useful for "negative selection". However, it is more correct to refer to "negative screening" instead of using the term 'selection'. For example, if one wanted to select colonies that were sensitive to ampicillin, the primary plate could be replica plated onto a secondary Amp+ agar plate. The sensitive colonies would fail to grow on the secondary plate, but their positions could still be deduced by comparison with the primary plate, since the ampicillin-resistant colonies grow in the same spatial pattern on both. The sensitive colonies could then be picked off the primary plate. Frequently the last plate will be non-selective: a nonselective plate replica plated after the Amp+ plate confirms that the absence of growth on the selective plate is due to the selection itself and not a problem with transferring cells. |
https://en.wikipedia.org/wiki/Web%20application | A web application (or web app) is application software that is accessed using a web browser. Web applications are delivered on the World Wide Web to users with an active network connection.
History
In earlier computing models like client-server, the processing load for the application was shared between code on the server and code installed on each client locally. In other words, an application had its own pre-compiled client program which served as its user interface and had to be separately installed on each user's personal computer. An upgrade to the server-side code of the application would typically also require an upgrade to the client-side code installed on each user workstation, adding to the support cost and decreasing productivity. In addition, both the client and server components of the application were usually tightly bound to a particular computer architecture and operating system and porting them to others was often prohibitively expensive for all but the largest applications (Nowadays, native apps for mobile devices are also hobbled by some or all of the foregoing issues).
In 1995, Netscape introduced a client-side scripting language called JavaScript, allowing programmers to add some dynamic elements to the user interface that ran on the client side. So instead of sending data to the server in order to generate an entire web page, the embedded scripts of the downloaded page can perform various tasks such as input validation or showing/hiding parts of the page.
In 1999, the "web application" concept was introduced in the Java language in the Servlet Specification version 2.2. [2.1?]. At that time both JavaScript and XML had already been developed, but Ajax had still not yet been coined and the XMLHttpRequest object had only been recently introduced on Internet Explorer 5 as an ActiveX object.
Applications like Gmail started to make their client sides more and more interactive since early 2000s. A web page script is able to contact the server fo |
https://en.wikipedia.org/wiki/Firebird%20%28database%20server%29 | Firebird is an open-source SQL relational database management system that supports Linux, Microsoft Windows, macOS and other Unix platforms. The database forked from Borland's open source edition of InterBase in 2000 but the code has been largely rewritten since Firebird 1.5.
History
Within a week of the InterBase 6.0 source being released by Borland on 25 July 2000, the Firebird project was created on SourceForge. Firebird 1.0 was released for Linux, Microsoft Windows and Mac OS X on 11 March 2002, with ports to Solaris, FreeBSD 4 and HP-UX following over the next two months.
Work on porting the codebase from C to C++ began in 2000. On 23 February 2004, Firebird 1.5 was released, which was the first stable release of the new codebase. Version 1.5 featured an improved query optimizer, SQL-92 conditional expressions, SQL:1999 savepoints and support for explicit locking. Firebird 2.0 was released on 12 November 2006, adding support for 64-bit architectures, tables nested in FROM clauses, and programmable lock timeouts in blocking transactions.
The previous stable release was version 2.1.6, which added new features including procedural triggers, recursive queries, and support for SQL:2003 MERGE statements.
Firebird 2.5 introduced new features like improved multithreading, regular expression syntax and the ability to query remote databases.
The most recent stable version is Firebird 3.0, released 19 April 2016, with a focus on performance and security. A major re-architecture of the code enabled full support for SMP machines when using the SuperServer version.
Through Google Summer of Code 2013, work began on integrating Firebird as a replacement for HSQLDB in LibreOffice Base.
Mozilla Firefox name conflict
In April 2003, the Mozilla Organization announced a rename of its web browser from Phoenix to Firebird after a trademark dispute with Phoenix Technologies.
This decision caused concern within the Firebird database project due to the assumption that users and Internet |
https://en.wikipedia.org/wiki/Webby%20Awards | The Webby Awards (colloquially referred to as the Webbys) are awards for excellence on the Internet presented annually by the International Academy of Digital Arts and Sciences, a judging body composed of over three thousand industry experts and technology innovators. Categories include websites, advertising and media, online film and video, mobile sites and apps, and social.
Two winners are selected in each category, one by members of The International Academy of Digital Arts and Sciences, and one by the public who cast their votes during Webby People's Voice voting. Each winner presents a five-word acceptance speech, a trademark of the annual awards show.
Hailed as the "Internet’s highest honor," the award is one of the oldest Internet-oriented awards, and is associated with the phrase "The Oscars of the Internet."
History
In its early years, the organization was one of several vying to be the premier internet awards show, most notably alongside the Cool Site of the Year Awards. Both shows compared themselves to the Oscars, as did media outlets from The New York Times to Canada's Globe & Mail.
The winners of the First Annual Webby Awards in 1995 were presented by John Brancato and Michael Ferris, writers for Columbia Pictures. It was held at the Hollywood Roosevelt Hotel. The televised Webby Awards were sponsored by the Academy of Web Design and Cool Site of the Day. The first Webby Awards were produced by Kay Dangaard at the Hollywood Roosevelt Hotel as a nod to the first site of the Academy of Motion Picture Arts and Sciences (Oscars). That first year, they were called "Webbie" Awards. The first "Site of the Year" winner was the pioneer webisodic serial The Spot.
The modern Webby Awards were co-founded by Tiffany Shlain, a filmmaker, when she was hired by The Web Magazine to re-establish them, and were first held in San Francisco in 1997. They quickly became known for its requirement that winners give their acceptance speeches in five words. After this, |
https://en.wikipedia.org/wiki/Gum%20arabic | Gum arabic (gum acacia, gum sudani, Senegal gum and by other names) is a natural gum originally consisting of the hardened sap of two species of the Acacia tree, Senegalia senegal and Vachellia seyal. However, the term "gum arabic" does not actually indicate a particular botanical source. The gum is harvested commercially from wild trees, mostly in Sudan (about 70% of the global supply) and throughout the Sahel, from Senegal to Somalia. The name "gum Arabic" (al-samgh al-'arabi) was used in the Middle East at least as early as the 9th century. Gum arabic first found its way to Europe via Arabic ports, and so retained its name.
Gum arabic is a complex mixture of glycoproteins and polysaccharides, predominantly polymers of arabinose and galactose. It is soluble in water, edible, and used primarily in the food industry and soft-drink industry as a stabilizer, with E number E414 (I414 in the US). Gum arabic is a key ingredient in traditional lithography and is used in printing, paints, glues, cosmetics, and various industrial applications, including viscosity control in inks and in textile industries, though less expensive materials compete with it for many of these roles.
Definition
Gum arabic was defined by the 31st Codex Committee for Food Additives, held at The Hague from 19 to 23 March 1999, as the dried exudate from the trunks and branches of Acacia senegal or Vachellia (Acacia) seyal in the family Fabaceae (Leguminosae). A 2017 safety re-evaluation by the Panel on Food Additives and Nutrient Sources of the European Food Safety Authority (EFSA) said that although the above definition holds true for most internationally traded samples, the term "gum arabic" does not indicate a particular botanical source; in a few cases, so‐called "gum arabic" may not even have been collected from Acacia (in the broad sense) species, instead coming from e.g. Combretum or Albizia.
Health benefits
Gum arabic is a rich source of dietary fibers and in addition to its widespread us |