source | text |
|---|---|
https://en.wikipedia.org/wiki/Biological%20life%20cycle | In biology, a biological life cycle (or just life cycle when the biological context is clear) is the series of stages an organism passes through during its life: it begins as a zygote, often in an egg, and concludes as an adult that reproduces, producing offspring in the form of a new zygote, which then itself goes through the same series of stages, the process repeating in a cyclic fashion.
"The concept is closely related to those of the life history, development and ontogeny, but differs from them in stressing renewal." Transitions of form may involve growth, asexual reproduction, or sexual reproduction.
In some organisms, different "generations" of the species succeed each other during the life cycle. For plants and many algae, there are two multicellular stages, and the life cycle is referred to as alternation of generations. The term life history is often used, particularly for organisms such as the red algae which have three multicellular stages (or more), rather than two.
Life cycles that include sexual reproduction involve alternating haploid (n) and diploid (2n) stages, i.e., a change of ploidy is involved. To return from a diploid stage to a haploid stage, meiosis must occur. In regard to changes of ploidy, there are three types of cycles:
haplontic life cycle — the haploid stage is multicellular and the diploid stage is a single cell, meiosis is "zygotic".
diplontic life cycle — the diploid stage is multicellular and haploid gametes are formed, meiosis is "gametic".
haplodiplontic life cycle (also referred to as diplohaplontic, diplobiontic, or dibiontic life cycle) — multicellular diploid and haploid stages occur, meiosis is "sporic".
The cycles differ in when mitosis (growth) occurs. Zygotic meiosis and gametic meiosis have one mitotic stage: mitosis occurs during the n phase in zygotic meiosis and during the 2n phase in gametic meiosis. Therefore, zygotic and gametic meiosis are collectively termed "haplobiontic" (single mitotic phase, not to be confused with ha |
https://en.wikipedia.org/wiki/Voltage%20spike | In electrical engineering, spikes are fast, short duration electrical transients in voltage (voltage spikes), current (current spikes), or transferred energy (energy spikes) in an electrical circuit.
Fast, short duration electrical transients (overvoltages) in the electric potential of a circuit are typically caused by
Lightning strikes
Power outages
Tripped circuit breakers
Short circuits
Power transitions in other large equipment on the same power line
Malfunctions caused by the power company
Electromagnetic pulses (EMP), whose electromagnetic energy is typically distributed up to the 100 kHz to 1 MHz frequency range.
Inductive spikes
In the design of critical infrastructure and military hardware, one concern is of pulses produced by nuclear explosions, whose nuclear electromagnetic pulses distribute large energies in frequencies from 1 kHz into the gigahertz range through the atmosphere.
The effect of a voltage spike is to produce a corresponding increase in current (a current spike). However, some voltage spikes may be created by current sources: the voltage increases as necessary so that a constant current keeps flowing. Current from a discharging inductor is one example.
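As a worked illustration of the inductor case (standard physics; the component values below are illustrative, not taken from the excerpt), the voltage across an inductance L is proportional to the rate of change of its current, so interrupting a current quickly produces a very large spike:

```latex
v(t) = L\,\frac{di}{dt}
\qquad\Rightarrow\qquad
v \approx 10\,\mathrm{mH}\times\frac{1\,\mathrm{A}}{1\,\mu\mathrm{s}} = 10\,\mathrm{kV}.
```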
For sensitive electronics, excessive current can flow if this voltage spike exceeds a material's breakdown voltage, or if it causes avalanche breakdown. In semiconductor junctions, excessive electric current may destroy or severely weaken that device. An avalanche diode, transient voltage suppression diode, varistor, overvoltage crowbar, or a range of other overvoltage protective devices can divert (shunt) this transient current thereby minimizing voltage.
Voltage spikes, also known as surges, may be created by a rapid buildup or decay of a magnetic field, which may induce energy into the associated circuit. However voltage spikes can also have more mundane causes such as a fault in a transformer or higher-voltage (primary circuit) power wires falling onto lower-voltage (secondary circuit |
https://en.wikipedia.org/wiki/Well-founded%20relation | In mathematics, a binary relation $R$ is called well-founded (or wellfounded or foundational) on a class $X$ if every non-empty subset $S \subseteq X$ has a minimal element with respect to $R$, that is, an element $m \in S$ not related by $s \mathrel{R} m$ (for instance, "$s$ is not smaller than $m$") for any $s \in S$. In other words, a relation is well-founded if $(\forall S \subseteq X)\,[S \neq \varnothing \rightarrow (\exists m \in S)(\forall s \in S)\,\lnot(s \mathrel{R} m)]$.
Some authors include an extra condition that $R$ is set-like, i.e., that the elements less than any given element form a set.
Equivalently, assuming the axiom of dependent choice, a relation is well-founded when it contains no infinite descending chains, i.e., when there is no infinite sequence $x_0, x_1, x_2, \ldots$ of elements of $X$ such that $x_{n+1} \mathrel{R} x_n$ for every natural number $n$.
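As a concrete illustration (a minimal sketch, not from the article): on a finite set, well-foundedness amounts to being able to repeatedly peel off minimal elements, which the following check exploits; `related(s, m)` is a hypothetical predicate meaning $s \mathrel{R} m$.

```python
# Hedged sketch: for a *finite* relation, well-foundedness is equivalent
# to the absence of descending cycles, so we can test it by repeatedly
# removing minimal elements.
def is_well_founded(elements, related):
    # related(s, m) == True means s R m ("s is below m")
    remaining = set(elements)
    while remaining:
        # find a minimal element: nothing in `remaining` is below it
        minimal = next((m for m in remaining
                        if not any(related(s, m) for s in remaining)), None)
        if minimal is None:
            return False  # every element has an R-predecessor: a cycle exists
        remaining.remove(minimal)  # peel it off and repeat
    return True

print(is_well_founded(range(4), lambda s, m: s < m))                     # True
print(is_well_founded([0, 1], lambda s, m: (s, m) in {(0, 1), (1, 0)}))  # False
```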
In order theory, a partial order is called well-founded if the corresponding strict order is a well-founded relation. If the order is a total order then it is called a well-order.
In set theory, a set $x$ is called a well-founded set if the set membership relation is well-founded on the transitive closure of $x$. The axiom of regularity, which is one of the axioms of Zermelo–Fraenkel set theory, asserts that all sets are well-founded.
A relation $R$ is converse well-founded, upwards well-founded or Noetherian on $X$, if the converse relation $R^{-1}$ is well-founded on $X$. In this case $R$ is also said to satisfy the ascending chain condition. In the context of rewriting systems, a Noetherian relation is also called terminating.
Induction and recursion
An important reason that well-founded relations are interesting is because a version of transfinite induction can be used on them: if $(X, R)$ is a well-founded relation, $P(x)$ is some property of elements of $X$, and we want to show that
$P(x)$ holds for all elements $x$ of $X$,
it suffices to show that:
If $x$ is an element of $X$ and $P(y)$ is true for all $y$ such that $y \mathrel{R} x$, then $P(x)$ must also be true.
That is, $(\forall x \in X)\,[(\forall y \in X)\,[y \mathrel{R} x \rightarrow P(y)] \rightarrow P(x)]$ implies $(\forall x \in X)\,P(x)$.
Well-founded induction is sometimes called Noetherian induction, after Emmy Noether.
On par with induction, well-founded relations also support construction of objects by transfinite recursion. Let $R$ be a set-like |
https://en.wikipedia.org/wiki/Blame | Blame is the act of censuring, holding responsible, or making negative statements about an individual or group, asserting that their actions or inaction are socially or morally irresponsible; it is the opposite of praise. When someone is morally responsible for doing something wrong, their action is blameworthy. By contrast, when someone is morally responsible for doing something right, it may be said that their action is praiseworthy. There are other senses of praise and blame that are not ethically relevant. One may praise someone's good dress sense, or blame one's own sense of style for one's poor dress sense.
Neurology
Blaming appears to involve brain activity in the temporoparietal junction (TPJ). The amygdala has been found to contribute when we blame others, but not when we respond to their positive actions.
Sociology and psychology
Humans—consciously and unconsciously—constantly make judgments about other people. The psychological criteria for judging others may be partly ingrained, negative, and rigid, indicating some degree of grandiosity.
Blaming provides a way of devaluing others, with the end result that the blamer feels superior, seeing others as less worthwhile and/or making the blamer "perfect". Off-loading blame means putting the other person down by emphasizing their flaws.
Victims of manipulation and abuse frequently feel responsible for causing negative feelings in the manipulator/abuser towards them and the resultant anxiety in themselves. This self-blame often becomes a major feature of victim status.
The victim gets trapped into a self-image of victimization. The psychological profile of victimization includes a pervasive sense of helplessness, passivity, loss of control, pessimism, negative thinking, strong feelings of guilt, shame, remorse, self-blame, and depression. This way of thinking can lead to hopelessness and despair.
Self-blame
Two main types of self-blame exist:
behavioral self-blame – undeserved blame based on actions. Victims w |
https://en.wikipedia.org/wiki/Thaumatin | Thaumatin (also known as talin) is a low-calorie sweetener and flavor modifier. The protein is used primarily for its flavor-modifying properties rather than exclusively as a sweetener.
The thaumatins were first found as a mixture of proteins isolated from the katemfe fruit (Thaumatococcus daniellii) (Marantaceae) of West Africa. Although very sweet, thaumatin's taste is markedly different from sugar's. The sweetness of thaumatin builds very slowly. Perception lasts a long time, leaving a liquorice-like aftertaste at high concentrations. Thaumatin is highly water soluble, stable to heating, and stable under acidic conditions.
Biological role
Thaumatin production is induced in katemfe in response to an attack upon the plant by viroid pathogens. Several members of the thaumatin protein family display significant in vitro inhibition of hyphal growth and sporulation by various fungi. The thaumatin protein is considered a prototype for a pathogen-response protein domain. This thaumatin domain has been found in species as diverse as rice and Caenorhabditis elegans.
Thaumatins are pathogenesis-related (PR) proteins, which are induced by various agents ranging from ethylene to pathogens themselves, and are structurally diverse and ubiquitous in plants: They include thaumatin, osmotin, tobacco major and minor PR proteins, alpha-amylase/trypsin inhibitor, and P21 and PWIR2 soybean and wheat leaf proteins. The proteins are involved in systemically acquired stress resistance and stress responses in plants, although their precise role is unknown. Thaumatin is an intensely sweet-tasting protein (on a molar basis about 100,000 times as sweet as sucrose) found in the fruit of the West African plant Thaumatococcus daniellii: it is induced by attack by viroids, which are single-stranded unencapsulated RNA molecules that do not code for protein. Thaumatin I consists of a single polypeptide chain of 207 residues.
Like other PR proteins, thaumatin is predicted to h |
https://en.wikipedia.org/wiki/Index%20notation | In mathematics and computer programming, index notation is used to specify the elements of an array of numbers. The formalism of how indices are used varies according to the subject. In particular, there are different methods for referring to the elements of a list, a vector, or a matrix, depending on whether one is writing a formal mathematical paper for publication, or when one is writing a computer program.
In mathematics
It is frequently helpful in mathematics to refer to the elements of an array using subscripts. The subscripts can be integers or variables. The array takes the form of tensors in general, since these can be treated as multi-dimensional arrays. Special (and more familiar) cases are vectors (1d arrays) and matrices (2d arrays).
The following is only an introduction to the concept: index notation is used in more detail in mathematics (particularly in the representation and manipulation of tensor operations). See the main article for further details.
One-dimensional arrays (vectors)
A vector $\mathbf{a}$ can be treated as an array of numbers by writing it as a row vector or column vector (whichever is used depends on convenience or context):
Index notation allows indication of the elements of the array by simply writing $a_i$, where the index $i$ is known to run from 1 to $n$, because of the $n$ dimensions.
For example, given the vector $\mathbf{a} = (a_1, a_2, \ldots, a_n)$, some entries are $a_1$, $a_2$, and so on.
The notation can be applied to vectors in mathematics and physics. The following vector equation
$\mathbf{a} = \mathbf{b}$
can also be written in terms of the elements of the vectors (also known as components), that is,
$a_i = b_i$,
where the indices take a given range of values. This expression represents a set of equations, one for each index. If the vectors each have $n$ elements, meaning $i = 1, 2, \ldots, n$, then the equations are explicitly
$a_1 = b_1,\quad a_2 = b_2,\quad \ldots,\quad a_n = b_n.$
Hence, index notation serves as an efficient shorthand for
representing the general structure of an equation,
while applicable to individual components.
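To connect the mathematical convention to programming practice, a minimal sketch (note that NumPy indices run from 0 to n−1, while the text's convention runs from 1 to n):

```python
# Hedged sketch: index notation a_i maps directly onto array indexing.
import numpy as np

a = np.array([2.0, 3.0, 5.0])
b = np.array([2.0, 3.0, 5.0])

# The single vector equation a = b stands for n component equations
# a_i = b_i; checking them componentwise:
for i in range(a.shape[0]):
    assert a[i] == b[i]

# Two indices describe a matrix: A[i, j] is row i, column j.
A = np.arange(6).reshape(2, 3)
print(A[1, 2])  # the entry a_{2,3} in 1-based mathematical notation
```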
Two-dimensional arrays
More than one index is used to describe arrays of number |
https://en.wikipedia.org/wiki/Habitat%20conservation | Habitat conservation is a management practice that seeks to conserve, protect and restore habitats and prevent species extinction, fragmentation or reduction in range. It is a priority of many groups that cannot be easily characterized in terms of any one ideology.
History of the conservation movement
For much of human history, nature was seen as a resource that could be controlled by the government and used for personal and economic gain. The idea was that plants only existed to feed animals and animals only existed to feed humans. The value of land was limited only to the resources it provided such as fertile soil, timber, and minerals.
Throughout the 18th and 19th centuries, social views started to change and conservation principles were first practically applied to the forests of British India. The conservation ethic that began to evolve included three core principles: 1) human activities damage the environment, 2) there was a civic duty to maintain the environment for future generations, and 3) scientific, empirically-based methods should be applied to ensure this duty was carried out. Sir James Ranald Martin was prominent in promoting this ideology, publishing numerous medico-topographical reports that demonstrated the damage from large-scale deforestation and desiccation, and lobbying extensively for the institutionalization of forest conservation activities in British India through the establishment of Forest Departments.
The Madras Board of Revenue started local conservation efforts in 1842, headed by Alexander Gibson, a professional botanist who systematically adopted a forest conservation program based on scientific principles. This was the first case of state conservation management of forests in the world. Governor-General Lord Dalhousie introduced the first permanent and large-scale forest conservation program in 1855, a model that soon spread to other colonies, as well to the United States, where Yellowstone National Park was opened in 1872 as th |
https://en.wikipedia.org/wiki/Programmed%20input%E2%80%93output | Programmed input–output (also programmable input/output, programmed input/output, programmed I/O, PIO) is a method of data transmission, via input/output (I/O), between a central processing unit (CPU) and a peripheral device, such as a Parallel ATA storage device. Each data item transfer is initiated by an instruction in the program, involving the CPU for every transaction. In contrast, in direct memory access (DMA) operations, the CPU is uninvolved in the data transfer.
The term can refer to either memory-mapped I/O (MMIO) or port-mapped I/O (PMIO). PMIO refers to transfers using a special address space outside of normal memory, usually accessed with dedicated instructions, such as IN and OUT in x86 architectures. MMIO refers to transfers to I/O devices that are mapped into the normal address space available to the program. PMIO was very useful for early microprocessors with small address spaces, since the valuable resource was not consumed by the I/O devices.
The best known example of a PC device that uses programmed I/O is the Parallel AT Attachment (PATA) interface; however, the AT Attachment interface can also be operated in any of several DMA modes. Many older devices in a PC also use PIO, including legacy serial ports, legacy parallel ports when not in ECP mode, keyboard and mouse PS/2 ports, legacy MIDI and joystick ports, the interval timer, and older network interfaces.
PIO mode in the ATA interface
The PIO interface is grouped into different modes that correspond to different transfer rates. The electrical signaling among the different modes is similar — only the cycle time between transactions is reduced in order to achieve a higher transfer rate. All ATA devices support the slowest mode — Mode 0. By accessing the information registers (using Mode 0) on an ATA drive, the CPU is able to determine the maximum transfer rate for the device and configure the ATA controller for optimal performance.
The PIO modes require a great deal of CPU overhead to con |
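A minimal sketch of the programmed-I/O pattern described above, in Python for illustration; the register addresses, flag values, and the simulated device are hypothetical stand-ins, not a real ATA driver:

```python
# Hedged sketch of programmed I/O: the CPU itself polls a status register
# and moves every data word, with no DMA engine involved.
STATUS_REG, DATA_REG = 0x1F7, 0x1F0   # illustrative addresses only
STATUS_DRQ = 0x08                     # "data request": a word is ready

regs = {STATUS_REG: STATUS_DRQ, DATA_REG: 0}   # toy simulated device

def read_reg(addr):
    if addr == DATA_REG:
        regs[DATA_REG] += 1           # device hands out successive words
    return regs[addr]

def pio_read(words=4):
    buf = []
    while len(buf) < words:
        # The CPU is tied up on every single word -- exactly the per-word
        # overhead that DMA transfers were designed to eliminate.
        if read_reg(STATUS_REG) & STATUS_DRQ:
            buf.append(read_reg(DATA_REG))
    return buf

print(pio_read())   # [1, 2, 3, 4]
```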
https://en.wikipedia.org/wiki/Instrumentation%20amplifier | An instrumentation amplifier (sometimes shorthanded as in-amp or InAmp) is a type of differential amplifier that has been outfitted with input buffer amplifiers, which eliminate the need for input impedance matching and thus make the amplifier particularly suitable for use in measurement and test equipment. Additional characteristics include very low DC offset, low drift, low noise, very high open-loop gain, very high common-mode rejection ratio, and very high input impedances. Instrumentation amplifiers are used where great accuracy and stability of the circuit both short- and long-term are required.
Although the instrumentation amplifier is usually shown schematically identical to a standard operational amplifier (op-amp), the electronic instrumentation amplifier is almost always internally composed of 3 op-amps. These are arranged so that there is one op-amp to buffer each input (+, −), and one to produce the desired output with adequate impedance matching for the function.
The most commonly used instrumentation amplifier circuit is shown in the figure. The gain of the circuit is
$\frac{V_\mathrm{out}}{V_2 - V_1} = \left(1 + \frac{2 R_1}{R_\mathrm{gain}}\right)\frac{R_3}{R_2}.$
The rightmost amplifier, along with the resistors labelled $R_2$ and $R_3$, is just the standard differential-amplifier circuit, with gain $R_3 / R_2$ and differential input resistance $2 R_2$. The two amplifiers on the left are the buffers. With $R_\mathrm{gain}$ removed (open-circuited), they are simple unity-gain buffers; the circuit will work in that state, with gain simply equal to $R_3 / R_2$ and high input impedance because of the buffers. The buffer gain could be increased by putting resistors between the buffer inverting inputs and ground to shunt away some of the negative feedback; however, the single resistor $R_\mathrm{gain}$ between the two inverting inputs is a much more elegant method: it increases the differential-mode gain of the buffer pair while leaving the common-mode gain equal to 1. This increases the common-mode rejection ratio (CMRR) of the circuit and also enables the buffers to handle much larger common-mode signals without clip |
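A quick numeric check of the gain formula above, with illustrative resistor values (a minimal sketch, not from any particular datasheet):

```python
# Gain of the classic 3-op-amp instrumentation amplifier:
# (1 + 2*R1/R_gain) * (R3/R2)
def inamp_gain(r1, r2, r3, r_gain):
    return (1 + 2 * r1 / r_gain) * (r3 / r2)

# 10k/10k/10k with a 1k gain-setting resistor -> overall gain of 21
print(inamp_gain(r1=10e3, r2=10e3, r3=10e3, r_gain=1e3))  # 21.0
```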
https://en.wikipedia.org/wiki/North%20American%20Network%20Operators%27%20Group | The North American Network Operators' Group (NANOG) is an educational and operational forum for the coordination and dissemination of technical information related to backbone/enterprise networking technologies and operational practices. It runs meetings, talks, surveys, and an influential mailing list for Internet service providers. The main method of communication is the NANOG mailing list (known informally as nanog-l), a free mailing list to which anyone may subscribe or post.
Meetings
NANOG meetings are held three times each year, and include presentations, tutorials, and BOFs (Birds of a Feather meetings). There are also 'lightning talks', where speakers can submit brief presentations (no longer than 10 minutes) at very short notice. The meetings are informal, and membership is open. Conference participants typically include senior engineering staff from tier 1 and tier 2 ISPs. Participating researchers present short summaries of their work for operator feedback. In addition to the conferences, NANOG On the Road events offer single-day professional development and networking events touching on current NANOG discussion topics.
Organization
NANOG meetings are organized by NewNOG, Inc., a Delaware non-profit organization, which took over responsibility for NANOG from the Merit Network in February 2011. Meetings are hosted by NewNOG and other organizations from the U.S. and Canada. Overall leadership is provided by the NANOG Steering Committee, established in 2005, and a Program Committee.
History
NANOG evolved from the NSFNET "Regional-Techs" meetings, where technical staff from the regional networks met to discuss operational issues of common concern with each other and with the Merit engineering staff. At the February 1994 regional techs meeting in San Diego, the group revised its charter to include a broader base of network service providers, and subsequently adopted NANOG as its new name.
NANOG was organized by Merit Network, a non-profit Michigan org |
https://en.wikipedia.org/wiki/How%20to%20Solve%20It | How to Solve It (1945) is a small volume by mathematician George Pólya describing methods of problem solving.
Four principles
How to Solve It suggests the following steps when solving a mathematical problem:
First, you have to understand the problem.
After understanding, make a plan.
Carry out the plan.
Look back on your work. How could it be better?
If this technique fails, Pólya advises: "If you cannot solve the proposed problem, try to solve first some related problem. Could you imagine a more accessible related problem?"
First principle: Understand the problem
"Understand the problem" is often neglected as being obvious and is not even mentioned in many mathematics classes. Yet students are often stymied in their efforts to solve it, simply because they don't understand it fully, or even in part. In order to remedy this oversight, Pólya taught teachers how to prompt each student with appropriate questions, depending on the situation, such as:
What are you asked to find or show?
Can you restate the problem in your own words?
Can you think of a picture or a diagram that might help you understand the problem?
Is there enough information to enable you to find a solution?
Do you understand all the words used in stating the problem?
Do you need to ask a question to get the answer?
The teacher is to select the question with the appropriate level of difficulty for each student to ascertain if each student understands at their own level, moving up or down the list to prompt each student, until each one can respond with something constructive.
Second principle: Devise a plan
Pólya mentions that there are many reasonable ways to solve problems. The skill at choosing an appropriate strategy is best learned by solving many problems. You will find choosing a strategy increasingly easy. A partial list of strategies is included:
Guess and check
Make an orderly list
Eliminate possibilities
Use symmetry
Consider special cases
Use direct reasoning
Sol |
https://en.wikipedia.org/wiki/Needle%20and%20syringe%20programmes | A needle and syringe programme (NSP), also known as needle exchange program (NEP), is a social service that allows injecting drug users (IDUs) to obtain clean and unused hypodermic needles and associated paraphernalia at little or no cost. It is based on the philosophy of harm reduction that attempts to reduce the risk factors for blood-borne diseases such as HIV/AIDS and hepatitis.
History
Needle-exchange programmes can be traced back to informal activities undertaken during the 1970s. The idea is likely to have been rediscovered in multiple locations. The first government-approved initiative (Netherlands) was undertaken in the early to mid-1980s, followed closely by initiatives in the United Kingdom and Australia by 1986. While the initial programme was motivated by an outbreak of hepatitis B, the AIDS pandemic motivated the rapid adoption of these programmes around the world.
Operation
Needle and syringe programs operate differently in different parts of the world; the first NSPs in Europe and Australia gave out sterile equipment to drug users, having begun in the context of the early AIDS epidemic. The United States took a far more reluctant approach, typically requiring IDUs to bring in used needles to exchange for sterile ones - under this "one-for-one" system, the same number of syringes must be returned as are handed out.
According to Santa Cruz County, California, exchange staff interviewed by Santa Cruz Local in 2019, it is a common practice not to count the number of exchanged needles exactly, but rather to estimate the number based on a container's volume. Holyoke, Massachusetts, also uses the volume system. The United Nations Office on Drugs and Crime for South Asia suggests visual estimation or asking the client how many they brought back. The volume-based method left potential for gaming the system, and one exchange agency in Vancouver devoted significant effort to doing exactly that.
Some, such as the Columbus Public Health in Ohio weigh the returned sharps rathe |
https://en.wikipedia.org/wiki/XLR%20connector | The XLR connector is a type of electrical connector primarily used in professional audio, video, and stage lighting equipment. XLR connectors are cylindrical in design, with three to seven connector pins, and are often employed for analog balanced audio interconnections, AES3 digital audio, portable intercom, DMX512 lighting control, and for low-voltage power supply. XLR connectors are covered by the international standard for dimensions, IEC 61076-2-103. The XLR connector is superficially similar to the smaller DIN connector, with which it is physically incompatible.
History and manufacturers
The XLR connector (also Cannon plug and Cannon connector) was invented by James H. Cannon, founder of the Cannon Electric company, Los Angeles, California. The XLR connector originated from the Cannon X series of connectors; by 1950, a latching mechanism was added to the connector, which produced the Cannon XL model of connector, and by 1955, the female connector featured synthetic-rubber insulation polychloroprene (neoprene), identified with the part-number prefix XLR. There was also the XLP series of connectors with hard plastic insulation, but the XLR model name is commonly used for all of the variants.
Originally, the ITT Cannon company manufactured XLR connectors in two locations: Kanagawa, Japan, and Melbourne, Australia. The Australian factory was sold to Alcatel Components in 1992 and then acquired by Amphenol in 1998. Later, the Switchcraft corporation manufactured compatible connectors, followed by the Neutrik company, which made improvements to the connector, and produced a second-generation design (the X-series) that had only four parts for the cable connector, and eliminated the small screws used in the models of XLR connectors made by Cannon and Switchcraft.
Design
XLR connectors are available in male and female versions in both cable and chassis mounting designs, a total of four styles. This is slightly unusual as many other connector designs omit one of |
https://en.wikipedia.org/wiki/Affine%20variety | In algebraic geometry, an affine algebraic set is the set of the common zeros over an algebraically closed field $k$ of some family of polynomials in the polynomial ring $k[x_1, \ldots, x_n]$. An affine variety, or affine algebraic variety, is an affine algebraic set such that the ideal generated by the defining polynomials is prime.
Some texts call any algebraic set a variety, and an algebraic set whose defining ideal is prime an irreducible variety (an affine variety in the above sense).
In some contexts (see, for example, Hilbert's Nullstellensatz), it is useful to distinguish the field $k$ in which the coefficients are considered from the algebraically closed field $K$ (containing $k$) over which the common zeros are considered (that is, the points of the affine algebraic set are in $K^n$). In this case, the variety is said to be defined over $k$, and the points of the variety that belong to $k^n$ are said to be $k$-rational or rational over $k$. In the common case where $k$ is the field of real numbers, a $k$-rational point is called a real point. When the field $k$ is not specified, a rational point is a point that is rational over the rational numbers. For example, Fermat's Last Theorem asserts that the affine algebraic variety (it is a curve) defined by $x^n + y^n - 1 = 0$ has no rational points for any integer $n$ greater than two.
Introduction
An affine algebraic set is the set of solutions in an algebraically closed field $k$ of a system of polynomial equations with coefficients in $k$. More precisely, if $f_1, \ldots, f_m$ are polynomials with coefficients in $k$, they define an affine algebraic set
$V(f_1, \ldots, f_m) = \{(a_1, \ldots, a_n) \in k^n \mid f_1(a_1, \ldots, a_n) = \cdots = f_m(a_1, \ldots, a_n) = 0\}.$
An affine (algebraic) variety is an affine algebraic set which is not the union of two proper affine algebraic subsets. Such an affine algebraic set is often said to be irreducible.
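A standard textbook illustration (an added example, not from the excerpt): the parabola is an affine variety because its defining ideal is prime.

```latex
% The plane parabola V(y - x^2) \subset \mathbb{A}^2 is an affine variety:
% its coordinate ring k[x,y]/(y - x^2) \cong k[x] is an integral domain,
% hence the ideal (y - x^2) is prime.
V(y - x^2) = \{(a, a^2) : a \in k\},
\qquad
k[x, y]/(y - x^2) \;\cong\; k[x].
```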
If $X$ is an affine algebraic set, and $I$ is the ideal of all polynomials that are zero on $X$, then the quotient ring $R = k[x_1, \ldots, x_n]/I$ is called the coordinate ring of $X$. If $X$ is an affine variety, then $I$ is prime, so the coordinate ring is an integral domain. The elements of the coordinate ring $R$ are also called the regular functions or |
https://en.wikipedia.org/wiki/Projective%20variety | In algebraic geometry, a projective variety over an algebraically closed field $k$ is a subset of some projective $n$-space $\mathbb{P}^n$ over $k$ that is the zero-locus of some finite family of homogeneous polynomials of $n + 1$ variables with coefficients in $k$, that generate a prime ideal, the defining ideal of the variety. Equivalently, an algebraic variety is projective if it can be embedded as a Zariski closed subvariety of $\mathbb{P}^n$.
A projective variety is a projective curve if its dimension is one; it is a projective surface if its dimension is two; it is a projective hypersurface if its dimension is one less than the dimension of the containing projective space; in this case it is the set of zeros of a single homogeneous polynomial.
If $X$ is a projective variety defined by a homogeneous prime ideal $I$, then the quotient ring
$k[x_0, x_1, \ldots, x_n]/I$
is called the homogeneous coordinate ring of $X$. Basic invariants of $X$ such as the degree and the dimension can be read off the Hilbert polynomial of this graded ring.
Projective varieties arise in many ways. They are complete, which roughly can be expressed by saying that there are no points "missing". The converse is not true in general, but Chow's lemma describes the close relation of these two notions. Showing that a variety is projective is done by studying line bundles or divisors on X.
A salient feature of projective varieties is the finiteness constraints on sheaf cohomology. For smooth projective varieties, Serre duality can be viewed as an analog of Poincaré duality. It also leads to the Riemann–Roch theorem for projective curves, i.e., projective varieties of dimension 1. The theory of projective curves is particularly rich, including a classification by the genus of the curve. The classification program for higher-dimensional projective varieties naturally leads to the construction of moduli of projective varieties. Hilbert schemes parametrize closed subschemes of $\mathbb{P}^n$ with prescribed Hilbert polynomial. Hilbert schemes, of which Grassmannians are special |
https://en.wikipedia.org/wiki/Computer-mediated%20communication | Computer-mediated communication (CMC) is defined as any human communication that occurs through the use of two or more electronic devices. While the term has traditionally referred to those communications that occur via computer-mediated formats (e.g., instant messaging, email, chat rooms, online forums, social network services), it has also been applied to other forms of text-based interaction such as text messaging. Research on CMC focuses largely on the social effects of different computer-supported communication technologies. Many recent studies involve Internet-based social networking supported by social software.
Forms
Computer-mediated communication can be broken down into two forms: synchronous and asynchronous. Synchronous computer-mediated communication refers to communication that occurs in real time: all parties are engaged in the communication simultaneously, though not necessarily all in the same location. Examples of synchronous communication are video chats and FaceTime audio calls. In contrast, asynchronous computer-mediated communication refers to communication that takes place when the parties engaged are not communicating in unison. In other words, the sender does not receive an immediate response from the receiver. Most forms of computer-mediated technology are asynchronous. Examples of asynchronous communication are text messages and emails.
Scope
Scholars from a variety of fields study phenomena that can be described under the umbrella term of computer-mediated communication (CMC) (see also Internet studies). For example, many take a sociopsychological approach to CMC by examining how humans use "computers" (or digital media) to manage interpersonal interaction, form impressions and maintain relationships. These studies have often focused on the differences between online and offline interactions, though contemporary research is moving towards the view that CMC should be studied as embedded in everyday life. Another branc |
https://en.wikipedia.org/wiki/Tarski%27s%20theorem%20about%20choice | In mathematics, Tarski's theorem, proved by Alfred Tarski (1924), states that in ZF the theorem "For every infinite set $A$, there is a bijective map between the sets $A$ and $A \times A$" implies the axiom of choice. The opposite direction was already known, thus the theorem and axiom of choice are equivalent.
Tarski told Jan Mycielski that when he tried to publish the theorem in Comptes Rendus de l'Académie des Sciences de Paris, Fréchet and Lebesgue refused to present it. Fréchet wrote that an implication between two well-known propositions is not a new result. Lebesgue wrote that an implication between two false propositions is of no interest.
Proof
The goal is to prove that the axiom of choice is implied by the statement "for every infinite set $A$: $|A| = |A \times A|$".
It is known that the well-ordering theorem is equivalent to the axiom of choice; thus it is enough to show that the statement implies that for every set $B$ there exists a well-order.
Since the collection of all ordinals such that there exists a surjective function from $B$ to the ordinal is a set, there exists an infinite ordinal $\beta$ such that there is no surjective function from $B$ to $\beta$.
We assume without loss of generality that the sets $B$ and $\beta$ are disjoint.
By the initial assumption, $|B \cup \beta| = |(B \cup \beta) \times (B \cup \beta)|$, thus there exists a bijection $f : B \cup \beta \to (B \cup \beta) \times (B \cup \beta)$.
For every $x \in B$, it is impossible that $\beta \times \{x\} \subseteq f[B]$, because otherwise we could define a surjective function from $B$ to $\beta$.
Therefore, there exists at least one ordinal $\gamma \in \beta$ such that $f(\gamma) \in \beta \times \{x\}$, so the set $S_x = \{\gamma \in \beta : f(\gamma) \in \beta \times \{x\}\}$ is not empty.
We can define a new function: $g(x) = \min S_x$.
This function is well defined since $S_x$ is a non-empty set of ordinals, and so has a minimum.
For every $x, y \in B$ with $x \neq y$, the sets $\beta \times \{x\}$ and $\beta \times \{y\}$ are disjoint, so $g$ is injective.
Therefore, we can define a well order on $B$: for every $x, y \in B$ we define $x \leq y \iff g(x) \leq g(y)$, since the image of $g$, that is, $g[B]$, is a set of ordinals and therefore well ordered. |
https://en.wikipedia.org/wiki/Impedance%20matching | In electronics, impedance matching is the practice of designing or adjusting the input impedance or output impedance of an electrical device for a desired value. Often, the desired value is selected to maximize power transfer or minimize signal reflection. For example, impedance matching typically is used to improve power transfer from a radio transmitter via the interconnecting transmission line to the antenna. Signals on a transmission line will be transmitted without reflections if the transmission line is terminated with a matching impedance.
Techniques of impedance matching include transformers, adjustable networks of lumped resistance, capacitance and inductance, or properly proportioned transmission lines. Practical impedance-matching devices will generally provide best results over a specified frequency band.
The concept of impedance matching is widespread in electrical engineering, but is relevant in other applications in which a form of energy, not necessarily electrical, is transferred between a source and a load, such as in acoustics or optics.
Theory
Impedance is the opposition by a system to the flow of energy from a source. For constant signals, this impedance can also be constant. For varying signals, it usually changes with frequency. The energy involved can be electrical, mechanical, acoustic, magnetic, optical, or thermal. The concept of electrical impedance is perhaps the most commonly known. Electrical impedance, like electrical resistance, is measured in ohms. In general, impedance (symbol: Z) has a complex value; this means that loads generally have a resistance component (symbol: R) which forms the real part and a reactance component (symbol: X) which forms the imaginary part.
In simple cases (such as low-frequency or direct current power transmission) the reactance may be negligible or zero; the impedance can be considered a pure resistance, expressed as a real number. In the following summary we will consider the general case when r |
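To make the matching criterion concrete, a minimal sketch using the standard transmission-line reflection coefficient $\Gamma = (Z_L - Z_0)/(Z_L + Z_0)$; the impedance values below are illustrative:

```python
# Reflection coefficient of a load Z_L on a line of characteristic
# impedance Z_0: Gamma = (Z_L - Z_0) / (Z_L + Z_0).  A matched load
# (Z_L == Z_0) gives Gamma = 0, i.e. no reflected signal.
def reflection_coefficient(z_load, z_line):
    return (z_load - z_line) / (z_load + z_line)

print(reflection_coefficient(50, 50))         # 0.0 -> perfectly matched
print(reflection_coefficient(75, 50))         # 0.2 -> partial reflection
print(reflection_coefficient(100 + 50j, 50))  # complex load, complex Gamma
```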
https://en.wikipedia.org/wiki/Cantor%20function | In mathematics, the Cantor function is an example of a function that is continuous, but not absolutely continuous. It is a notorious counterexample in analysis, because it challenges naive intuitions about continuity, derivative, and measure. Though it is continuous everywhere and has zero derivative almost everywhere, its value still goes from 0 to 1 as its argument goes from 0 to 1. Thus, in one sense the function seems very much like a constant one which cannot grow, and in another, it does indeed monotonically grow.
It is also called the Cantor ternary function, the Lebesgue function, Lebesgue's singular function, the Cantor–Vitali function, the Devil's staircase, the Cantor staircase function, and the Cantor–Lebesgue function. Georg Cantor (1884) introduced the Cantor function and mentioned that Scheeffer pointed out that it was a counterexample to an extension of the fundamental theorem of calculus claimed by Harnack. The Cantor function was discussed and popularized by Scheeffer (1884), Lebesgue (1904) and Vitali (1905).
Definition
To define the Cantor function $c : [0,1] \to [0,1]$, let $x$ be any number in $[0,1]$ and obtain $c(x)$ by the following steps:
Express $x$ in base 3.
If the base-3 representation of $x$ contains a 1, replace every digit strictly after the first 1 by 0.
Replace any remaining 2s with 1s.
Interpret the result as a binary number. The result is $c(x)$.
For example:
$1/4$ has the ternary representation 0.02020202... There are no 1s so the next stage is still 0.02020202... This is rewritten as 0.01010101... This is the binary representation of $1/3$, so $c(1/4) = 1/3$.
$1/5$ has the ternary representation 0.01210121... The digits after the first 1 are replaced by 0s to produce 0.01000000... This is not rewritten since it has no 2s. This is the binary representation of $1/4$, so $c(1/5) = 1/4$.
$200/243$ has the ternary representation 0.21102 (or 0.211012222...). The digits after the first 1 are replaced by 0s to produce 0.21. This is rewritten as 0.11. This is the binary representation of $3/4$, so $c(200/243) = 3/4$.
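The steps above are directly algorithmic; a minimal Python sketch (using a fixed number of ternary digits, so the result is an approximation for inputs without a short expansion):

```python
# Sketch of the Cantor function via the base-3 -> base-2 digit rules above.
def cantor(x, digits=40):
    # Step 1: expand x in base 3.
    ternary = []
    for _ in range(digits):
        x *= 3
        d = int(x)
        ternary.append(d)
        x -= d
    # Step 2: zero out every digit strictly after the first 1.
    if 1 in ternary:
        first_one = ternary.index(1)
        ternary[first_one + 1:] = [0] * (digits - first_one - 1)
    # Steps 3-4: map remaining 2s to 1s and read the digits in base 2.
    return sum((1 if d else 0) / 2 ** (i + 1) for i, d in enumerate(ternary))

print(cantor(1 / 4))      # ~0.3333  (= 1/3)
print(cantor(1 / 5))      # 0.25     (= 1/4)
print(cantor(200 / 243))  # 0.75     (= 3/4)
```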
Equivalently, if $\mathcal{C}$ is the Cantor set on $[0,1]$, then the Cantor function $c(x)$ can be defined as
This f |
https://en.wikipedia.org/wiki/Constant%20function | In mathematics, a constant function is a function whose (output) value is the same for every input value. For example, the function $y(x) = 4$ is a constant function because the value of $y(x)$ is 4 regardless of the input value $x$ (see image).
Basic properties
As a real-valued function of a real-valued argument, a constant function has the general form $y(x) = c$ or just $y = c$.
Example: The function $y(x) = 2$ or just $y = 2$ is the specific constant function where the output value is $c = 2$. The domain of this function is the set of all real numbers R. The codomain of this function is just $\{2\}$. The independent variable $x$ does not appear on the right side of the function expression and so its value is "vacuously substituted". Namely $y(0) = 2$, $y(-2.7) = 2$, $y(\pi) = 2$, and so on. No matter what value of $x$ is input, the output is "2".
Real-world example: A store where every item is sold for the price of 1 dollar.
The graph of the constant function $y = c$ is a horizontal line in the plane that passes through the point $(0, c)$.
In the context of a polynomial in one variable $x$, the non-zero constant function is a polynomial of degree 0 and its general form is $f(x) = c$, where $c$ is nonzero. This function has no intersection point with the $x$-axis, that is, it has no root (zero). On the other hand, the polynomial $f(x) = 0$ is the identically zero function. It is the (trivial) constant function and every $x$ is a root. Its graph is the $x$-axis in the plane.
A constant function is an even function, i.e. the graph of a constant function is symmetric with respect to the y-axis.
In the context where it is defined, the derivative of a function is a measure of the rate of change of function values with respect to change in input values. Because a constant function does not change, its derivative is 0. This is often written: $(c)' = 0$. The converse is also true. Namely, if $y'(x) = 0$ for all real numbers $x$, then $y$ is a constant function.
Example: Given the constant function $y(x) = -\sqrt{2}$, the derivative of $y$ is the identically zero function $y'(x) = 0$.
Other properties
For functions between preordered sets, constant functions are both order-p |
https://en.wikipedia.org/wiki/Symmedian | In geometry, symmedians are three particular lines associated with every triangle. They are constructed by taking a median of the triangle (a line connecting a vertex with the midpoint of the opposite side), and reflecting the line over the corresponding angle bisector (the line through the same vertex that divides the angle there in half). The angle formed by the symmedian and the angle bisector has the same measure as the angle between the median and the angle bisector, but it is on the other side of the angle bisector.
The three symmedians meet at a triangle center called the Lemoine point. Ross Honsberger has called its existence "one of the crown jewels of modern geometry".
Isogonality
Many times in geometry, if we take three special lines through the vertices of a triangle, or cevians, then their reflections about the corresponding angle bisectors, called isogonal lines, will also have interesting properties. For instance, if three cevians of a triangle intersect at a point , then their isogonal lines also intersect at a point, called the isogonal conjugate of .
The symmedians illustrate this fact.
In the diagram, the medians (in black) intersect at the centroid $G$.
Because the symmedians (in red) are isogonal to the medians, the symmedians also intersect at a single point, $L$.
This point is called the triangle's symmedian point, or alternatively the Lemoine point or Grebe point.
The dotted lines are the angle bisectors; the symmedians and medians are symmetric about the angle bisectors (hence the name "symmedian").
Construction of the symmedian
Let $ABC$ be a triangle. Construct a point $D$ by intersecting the tangents from $B$ and $C$ to the circumcircle. Then $AD$ is the symmedian of triangle $ABC$.
first proof. Let the reflection of $AD$ across the angle bisector of $\angle BAC$ meet $BC$ at $M'$. Then:
second proof. Define $Z$ as the isogonal conjugate of $D$. It is easy to see that the reflection of $CD$ about the bisector of $\angle C$ is the line through $C$ parallel to $AB$. The same is true for $BD$, and so, $ABZC$ is a parallelogram. |
https://en.wikipedia.org/wiki/Microdot | A microdot is text or an image substantially reduced in size to prevent detection by unintended recipients. Microdots are normally circular and around 1 millimetre in diameter but can be made into different shapes and sizes and made from various materials such as polyester or metal. The name comes from the fact that the microdots have often been about the size and shape of a typographical dot, such as a period or the tittle of a lowercase i or j. Microdots are, fundamentally, a steganographic approach to message protection.
History
In 1870 during the Franco-Prussian War, Paris was under siege and messages were sent by carrier pigeon. Parisian photographer René Dagron used microfilm to permit each pigeon to carry a high volume of messages, as pigeons can carry little weight.
Improvement in technology since then has made even more miniaturization possible.
At the International Congress of Photography in Paris in 1925, Emanuel Goldberg presented a method of producing extreme reduction microdots using a two-stage process. First, an initial reduced negative was made; then the image of the negative was projected from the eyepiece of a modified microscope onto a collodion emulsion where the microscope specimen slide would be. The reduction was such that a page of text would be legibly reproduced in a surface of 0.01 mm2. This density is comparable to the entire text of the Bible fifty times over in one square inch. Goldberg's "Mikrat" (microdot) was prominently reported at the time in English, French and German publications.
A technique comparable to modern microdots for steganographic purposes was first used in Germany between World War I and World War II. It was also later used by many countries to pass messages through insecure postal channels. Later microdot techniques used film with aniline dye, rather than silver halide layers, as this was even harder for counter-espionage agents to find.
A popular article on espionage by J. Edgar Hoover in the Reader's Digest in 1946 attri |
https://en.wikipedia.org/wiki/Neohesperidin%20dihydrochalcone | Neohesperidin dihydrochalcone, sometimes abbreviated to neohesperidin DC or simply NHDC, is an artificial sweetener derived from citrus.
It is particularly effective in masking the bitter tastes of other compounds found in citrus, including limonin and naringin. Industrially, it is produced by extracting neohesperidin from the bitter orange, and then hydrogenating this to make NHDC.
Discovery
NHDC was discovered during the 1960s as part of a United States Department of Agriculture research program to find methods for minimizing the taste of bitter flavorants in citrus juices. Neohesperidin is one such bitter compound. When treated with potassium hydroxide or another strong base, and then catalytically hydrogenated, it becomes NHDC.
Profile
NHDC in pure form is found as a white substance not unlike powdered sugar. It has an intense sweet taste because it stimulates the sweet receptor TAS1R2+TAS1R3 in humans, although this is species-dependent, as the equivalent receptor in rats does not respond to the molecule.
It is roughly 1500–1800 times sweeter than sugar at threshold concentrations, and around 340 times sweeter than sugar weight-for-weight. Its potency is naturally affected by such factors as the application in which it is used and the pH of the product.
Like other highly sweet glycosides, such as glycyrrhizin and those found in stevia, NHDC's sweet taste has a slower onset than sugar's and lingers in the mouth for some time.
Unlike aspartame, NHDC is stable at elevated temperatures and under acidic or basic conditions, and so can be used in applications that require a long shelf life. NHDC itself can remain food-safe for up to five years when stored in optimal conditions.
The product is well known for having a strong synergistic effect when used in conjunction with other artificial sweeteners such as aspartame, saccharin, acesulfame potassium, and cyclamate, as well as sugar alcohols such as xylitol. NHDC usage boosts the effects of these sweeteners at lower concentrations th |
https://en.wikipedia.org/wiki/Model%20checking | In computer science, model checking or property checking is a method for checking whether a finite-state model of a system meets a given specification (also known as correctness). This is typically associated with hardware or software systems, where the specification contains liveness requirements (such as avoidance of livelock) as well as safety requirements (such as avoidance of states representing a system crash).
In order to solve such a problem algorithmically, both the model of the system and its specification are formulated in some precise mathematical language. To this end, the problem is formulated as a task in logic, namely to check whether a structure satisfies a given logical formula. This general concept applies to many kinds of logic and many kinds of structures. A simple model-checking problem consists of verifying whether a formula in the propositional logic is satisfied by a given structure.
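To make the idea concrete, here is a minimal sketch (not any particular tool's algorithm) of the simplest kind of model checking: explicit-state reachability analysis of a safety property on a toy transition system:

```python
# Hedged sketch: check "no bad state is reachable" by breadth-first search
# over the state graph.  Real model checkers layer temporal logic, symbolic
# representations, and abstraction on top of this core idea.
from collections import deque

def check_safety(initial, transitions, is_bad):
    """Return a counterexample path to a bad state, or None if safe."""
    queue = deque([(initial, [initial])])
    visited = {initial}
    while queue:
        state, path = queue.popleft()
        if is_bad(state):
            return path                      # property violated
        for nxt in transitions(state):
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None                              # all reachable states are safe

# Toy model: a counter mod 8 that must never reach 5.
trace = check_safety(0, lambda s: [(s + 1) % 8], lambda s: s == 5)
print(trace)   # [0, 1, 2, 3, 4, 5] -- a counterexample trace
```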
Overview
Property checking is used for verification when two descriptions are not equivalent. During refinement, the specification is complemented with details that are unnecessary in the higher-level specification, so the newly introduced properties cannot (and need not) be verified against the original specification. Therefore, the strict bi-directional equivalence check is relaxed to a one-way property check. The implementation or design is regarded as a model of the system, whereas the specifications are properties that the model must satisfy.
An important class of model-checking methods has been developed for checking models of hardware and software designs where the specification is given by a temporal logic formula. Pioneering work in temporal logic specification was done by Amir Pnueli, who received the 1996 Turing award for "seminal work introducing temporal logic into computing science". Model checking began with the pioneering work of E. M. Clarke, E. A. Emerson, by J. P. Queille, and J. Sifakis. Clarke, Emerson, and Sifakis shar |
https://en.wikipedia.org/wiki/Energy%20flow%20%28ecology%29 | Energy flow is the flow of energy through living things within an ecosystem. All living organisms can be organized into producers and consumers, and those producers and consumers can further be organized into a food chain. Each of the levels within the food chain is a trophic level. In order to more efficiently show the quantity of organisms at each trophic level, these food chains are then organized into trophic pyramids. The arrows in the food chain show that the energy flow is unidirectional, with the head of an arrow indicating the direction of energy flow; energy is lost as heat at each step along the way.
The unidirectional flow of energy and the successive loss of energy as it travels up the food web are patterns in energy flow that are governed by thermodynamics, which is the theory of energy exchange between systems. Trophic dynamics relates to thermodynamics because it deals with the transfer and transformation of energy (originating externally from the sun via solar radiation) to and among organisms.
Energetics and the carbon cycle
The first step in energetics is photosynthesis, wherein water and carbon dioxide from the air are taken in with energy from the sun, and are converted into oxygen and glucose. Cellular respiration is the reverse reaction, wherein oxygen and sugar are taken in and release energy as they are converted back into carbon dioxide and water. The carbon dioxide and water produced by respiration can be recycled back into plants.
Energy loss can be measured either by efficiency (how much energy makes it to the next level), or by biomass (how much living material exists at those levels at one point in time, measured by standing crop). Of all the net primary productivity at the producer trophic level, in general only 10% goes to the next level, the primary consumers, then only 10% of that 10% goes on to the next trophic level, and so on up the food pyramid. Ecological efficiency may be anywhere from 5% to 20% depending on how efficient |
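A worked version of the 10% generalization above (the starting value is illustrative; as the text notes, real ecological efficiencies vary between roughly 5% and 20%):

```python
# Energy remaining at each trophic level under a fixed transfer efficiency.
def energy_at_levels(producer_energy, efficiency=0.10, levels=4):
    energies = [producer_energy]
    for _ in range(levels - 1):
        energies.append(energies[-1] * efficiency)
    return energies

# e.g. 10,000 kcal of net primary productivity at the producer level:
print(energy_at_levels(10_000))   # [10000, 1000.0, 100.0, 10.0]
```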
https://en.wikipedia.org/wiki/Perfect%20information | In economics, perfect information (sometimes referred to as "no hidden information") is a feature of perfect competition. With perfect information in a market, all consumers and producers have complete and instantaneous knowledge of all market prices, their own utility functions, and their own cost functions.
In game theory, a sequential game has perfect information if each player, when making any decision, is perfectly informed of all the events that have previously occurred, including the "initialization event" of the game (e.g. the starting hands of each player in a card game).
Perfect information is importantly different from complete information, which implies common knowledge of each player's utility functions, payoffs, strategies and "types". A game with perfect information may or may not have complete information.
Games where some aspect of play is hidden from opponents – such as the cards in poker and bridge – are examples of games with imperfect information.
Examples
Chess is an example of a game with perfect information, as each player can see all the pieces on the board at all times. Other games with perfect information include tic-tac-toe, Reversi, checkers, and Go.
The academic literature has not produced consensus on a standard definition of perfect information that settles whether games with chance (but no secret information) and games with simultaneous moves are games of perfect information.
Games which are sequential (players alternate in moving) and which have chance events (with known probabilities to all players) but no secret information, are sometimes considered games of perfect information. This includes games such as backgammon and Monopoly. But there are some academic papers which do not regard such games as games of perfect information because the results of chance themselves are unknown prior to them occurring.
Games with simultaneous moves are generally not considered games of perfect information. This is because each player holds information which |
https://en.wikipedia.org/wiki/Supervised%20injection%20site | Supervised injection sites (SIS) are medically supervised facilities designed to provide a hygienic environment in which people are able to consume illicit recreational drugs intravenously and prevent deaths due to drug overdoses. Proponents say they save lives and connect users to services, while opponents believe they promote drug use and attract crime to the community around the site. The legality of such a facility depends on location and political jurisdiction. Supervised injection sites are part of a harm reduction approach towards drug problems. The facilities provide sterile injection equipment, information about drugs and basic health care, treatment referrals, access to medical staff, and, at some facilities, counseling. Most programs prohibit the sale or purchase of recreational drugs at the facility.
Terminology
Supervised injection sites are also known as overdose prevention centers (OPC), supervised injection facilities, safe consumption rooms, safe injection sites, safe injection rooms, fix rooms, fixing rooms, safer injection facilities (SIF), drug consumption facilities (DCF), drug consumption rooms (DCRs), and harm reduction centers.
Facilities
Australia
"Shooting galleries" (the term "shooting" is slang for injecting drugs) have existed for a long time; there were illicit for-profit facilities in Sydney, Australia during the 1990s. Authors differentiated the legally sanctioned sites in Australia from those examples in the care they provide. While the operators of the shooting galleries exemplified in Sydney had little regard for the health of their clients, modern supervised injection facilities are a professionally staffed health and welfare service. The same journal describes the same facility in Australian context as "in general" may be defined as "legally sanctioned and supervised facilities designed to reduce the health and public order problems associated with illegal injection drug use"
The legality of supervised injection is h |
https://en.wikipedia.org/wiki/Term%20logic | In logic and formal semantics, term logic, also known as traditional logic, syllogistic logic or Aristotelian logic, is a loose name for an approach to formal logic that began with Aristotle and was developed further in ancient history mostly by his followers, the Peripatetics. It was revived after the third century CE by Porphyry's Isagoge.
Term logic revived in medieval times, first in Islamic logic by Alpharabius in the tenth century, and later in Christian Europe in the twelfth century with the advent of new logic, remaining dominant until the advent of predicate logic in the late nineteenth century.
However, even if eclipsed by newer logical systems, term logic still plays a significant role in the study of logic. Rather than radically breaking with term logic, modern logics typically expand it.
Aristotle's system
Aristotle's logical work is collected in the six texts that are collectively known as the Organon. Two of these texts in particular, namely the Prior Analytics and De Interpretatione, contain the heart of Aristotle's treatment of judgements and formal inference, and it is principally this part of Aristotle's works that is about term logic. Modern work on Aristotle's logic builds on the tradition started in 1951 with the establishment by Jan Łukasiewicz of a revolutionary paradigm. Łukasiewicz's approach was reinvigorated in the early 1970s by John Corcoran and Timothy Smiley – which informs modern translations of Prior Analytics by Robin Smith in 1989 and Gisela Striker in 2009.
The Prior Analytics represents the first formal study of logic, where logic is understood as the study of arguments. An argument is a series of true or false statements which lead to a true or false conclusion. In the Prior Analytics, Aristotle identifies valid and invalid forms of arguments called syllogisms. A syllogism is an argument that consists of at least three sentences: at least two premises and a conclusion. Although Aristotle does not call them "categorical |
https://en.wikipedia.org/wiki/Tessellation | A tessellation or tiling is the covering of a surface, often a plane, using one or more geometric shapes, called tiles, with no overlaps and no gaps. In mathematics, tessellation can be generalized to higher dimensions and a variety of geometries.
A periodic tiling has a repeating pattern. Some special kinds include regular tilings with regular polygonal tiles all of the same shape, and semiregular tilings with regular tiles of more than one shape and with every corner identically arranged. The patterns formed by periodic tilings can be categorized into 17 wallpaper groups. A tiling that lacks a repeating pattern is called "non-periodic". An aperiodic tiling uses a small set of tile shapes that cannot form a repeating pattern (an aperiodic set of prototiles). A tessellation of space, also known as a space filling or honeycomb, can be defined in the geometry of higher dimensions.
A real physical tessellation is a tiling made of materials such as cemented ceramic squares or hexagons. Such tilings may be decorative patterns, or may have functions such as providing durable and water-resistant pavement, floor, or wall coverings. Historically, tessellations were used in Ancient Rome and in Islamic art such as in the Moroccan architecture and decorative geometric tiling of the Alhambra palace. In the twentieth century, the work of M. C. Escher often made use of tessellations, both in ordinary Euclidean geometry and in hyperbolic geometry, for artistic effect. Tessellations are sometimes employed for decorative effect in quilting. Tessellations form a class of patterns in nature, for example in the arrays of hexagonal cells found in honeycombs.
History
Tessellations were used by the Sumerians (about 4000 BC) in building wall decorations formed by patterns of clay tiles.
Decorative mosaic tilings made of small squared blocks called tesserae were widely employed in classical antiquity, sometimes displaying geometric patterns.
In 1619, Johannes Kepler made an early docu |
https://en.wikipedia.org/wiki/UTF-7 | UTF-7 (7-bit Unicode Transformation Format) is an obsolete variable-length character encoding for representing Unicode text using a stream of ASCII characters. It was originally intended to provide a means of encoding Unicode text for use in Internet E-mail messages that was more efficient than the combination of UTF-8 with quoted-printable.
UTF-7 (according to its RFC) is not a "Unicode Transformation Format" in the strict sense, as its definition can only encode code points in the BMP (the first 65536 Unicode code points, which do not include emojis and many other characters). However, if a UTF-7 translator converts to/from UTF-16, it can (and probably does) encode each surrogate half as though it were a 16-bit code point, and thus can encode all code points. It is unclear whether other UTF-7 software (such as translators to UTF-32 or UTF-8) supports this.
UTF-7 has never been an official standard of the Unicode Consortium. It is known to have security issues, which is why software has been changed to disable its use. It is prohibited in HTML 5.
Motivation
MIME, the modern standard of E-mail format, forbids encoding of headers using byte values above the ASCII range. Although MIME allows encoding the message body in various character sets (broader than ASCII), the underlying transmission infrastructure (SMTP, the main E-mail transfer standard) is still not guaranteed to be 8-bit clean. Therefore, a non-trivial content transfer encoding has to be applied in case of doubt. Unfortunately base64 has a disadvantage of making even US-ASCII characters unreadable in non-MIME clients. On the other hand, UTF-8 combined with quoted-printable produces a very size-inefficient format requiring 6–9 bytes for non-ASCII characters from the BMP and 12 bytes for characters outside the BMP.
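The size difference is easy to see with Python's built-in utf-7 codec (a minimal sketch; the sample string is our own illustration):

text = "caf\u00e9"                   # 'café': three ASCII characters plus one non-ASCII BMP character
print(text.encode("utf-7"))          # b'caf+AOk-' : é is carried in a modified-base64 block
print(text.encode("utf-8"))          # b'caf\xc3\xa9'
print(b"caf+AOk-".decode("utf-7"))   # round-trips back to 'café'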
Provided certain rules are followed during encoding, UTF-7 can be sent in e-mail without using an underlying MIME transfer encoding, but still must be explicitly identified as the text character set. In addition, if use |
https://en.wikipedia.org/wiki/Multiply%20perfect%20number | In mathematics, a multiply perfect number (also called multiperfect number or pluperfect number) is a generalization of a perfect number.
For a given natural number k, a number n is called k-perfect (or k-fold perfect) if the sum of all positive divisors of n (the divisor function, σ(n)) is equal to kn; a number is thus perfect if and only if it is 2-perfect. A number that is k-perfect for a certain k is called a multiply perfect number. As of 2014, k-perfect numbers are known for each value of k up to 11.
It is unknown whether there are any odd multiply perfect numbers other than 1. The first few multiply perfect numbers are:
1, 6, 28, 120, 496, 672, 8128, 30240, 32760, 523776, 2178540, 23569920, 33550336, 45532800, 142990848, 459818240, ... .
Example
The sum of the divisors of 120 is
1 + 2 + 3 + 4 + 5 + 6 + 8 + 10 + 12 + 15 + 20 + 24 + 30 + 40 + 60 + 120 = 360
which is 3 × 120. Therefore 120 is a 3-perfect number.
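This check is easy to reproduce; a minimal Python sketch with a naive divisor sum (the helper names are our own):

def sigma(n):
    # Sum of all positive divisors of n, by trial division (fine for small n).
    return sum(d for d in range(1, n + 1) if n % d == 0)

def perfection(n):
    # Returns k if n is k-perfect, i.e. sigma(n) == k * n; otherwise None.
    s = sigma(n)
    return s // n if s % n == 0 else None

print(sigma(120), perfection(120))                   # 360 3
print([n for n in range(1, 1000) if perfection(n)])  # [1, 6, 28, 120, 496, 672]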
Smallest known k-perfect numbers
The following table gives an overview of the smallest known k-perfect numbers for k ≤ 11:
Properties
It can be proven that:
For a given prime number p, if n is p-perfect and p does not divide n, then pn is (p + 1)-perfect. This implies that an integer n is a 3-perfect number divisible by 2 but not by 4, if and only if n/2 is an odd perfect number, of which none are known.
If 3n is 4k-perfect and 3 does not divide n, then n is 3k-perfect.
Odd multiply perfect numbers
It is unknown whether there are any odd multiply perfect numbers other than 1. However, if an odd k-perfect number n exists where k > 2, then it must satisfy the following conditions:
The largest prime factor is ≥ 100129
The second largest prime factor is ≥ 1009
The third largest prime factor is ≥ 101
Bounds
In little-o notation, the number of multiply perfect numbers less than x is o(x^ε) for all ε > 0.
The number of k-perfect numbers n for n ≤ x is less than cx^(c′·log log log x / log log x), where c and c′ are constants independent of k.
Under the assumption of the Riemann hypothesis, the following inequality is true for all k-perfect numbers n, where k > 3
where γ is Euler's gamma constant. This can |
https://en.wikipedia.org/wiki/Abundant%20number | In number theory, an abundant number or excessive number is a positive integer for which the sum of its proper divisors is greater than the number. The integer 12 is the first abundant number. Its proper divisors are 1, 2, 3, 4 and 6 for a total of 16. The amount by which the sum exceeds the number is the abundance. The number 12 has an abundance of 4, for example.
Definition
A number n for which the sum of divisors σ(n) > 2n, or, equivalently, the sum of proper divisors (or aliquot sum) s(n) > n.
Abundance is the value σ(n) − 2n (or s(n) − n).
Examples
The first 28 abundant numbers are:
12, 18, 20, 24, 30, 36, 40, 42, 48, 54, 56, 60, 66, 70, 72, 78, 80, 84, 88, 90, 96, 100, 102, 104, 108, 112, 114, 120, ... .
For example, the proper divisors of 24 are 1, 2, 3, 4, 6, 8, and 12, whose sum is 36. Because 36 is greater than 24, the number 24 is abundant. Its abundance is 36 − 24 = 12.
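Both the sequence and the abundance computation can be reproduced in a few lines of Python (a naive sketch; the helper name s matches the aliquot-sum notation used above):

def s(n):
    # Aliquot sum: the sum of the proper divisors of n.
    return sum(d for d in range(1, n) if n % d == 0)

print([n for n in range(1, 121) if s(n) > n])  # the 28 abundant numbers up to 120
print(s(24) - 24)                              # 12, the abundance of 24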
Properties
The smallest odd abundant number is 945.
The smallest abundant number not divisible by 2 or by 3 is 5391411025, whose distinct prime factors are 5, 7, 11, 13, 17, 19, 23, and 29. An algorithm given by Iannucci in 2005 shows how to find the smallest abundant number not divisible by the first k primes. If A(k) represents the smallest abundant number not divisible by the first k primes, then for all ε > 0 we have (1 − ε)(k ln k)^(2−ε) < ln A(k) < (1 + ε)(k ln k)^(2+ε) for sufficiently large k.
Every multiple of a perfect number (except the perfect number itself) is abundant. For example, every multiple n of 6 greater than 6 is abundant, because its proper divisors include n/2, n/3, n/6 and 1, which already sum to n + 1.
Every multiple of an abundant number is abundant. For example, every multiple n of 20 (including 20 itself) is abundant, because its divisors include n/2, n/4, n/5, n/10 and n/20, which sum to 11n/10 > n.
Consequently, infinitely many even and odd abundant numbers exist.
Furthermore, the set of abundant numbers has a non-zero natural density. Marc Deléglise showed in 1998 that the natural density of the set of abundant and perfect numbers lies between 0.2474 and 0.2480.
An abundant number which is not the multiple of an abundant number or perfect number (i.e. all |
https://en.wikipedia.org/wiki/Deficient%20number | In number theory, a deficient number or defective number is a positive integer n for which the sum of divisors of n is less than 2n. Equivalently, it is a number for which the sum of proper divisors (or aliquot sum) s(n) is less than n. For example, the proper divisors of 8 are 1, 2 and 4, and their sum is 7, which is less than 8, so 8 is deficient.
Denoting by σ(n) the sum of divisors, the value 2n − σ(n) is called the number's deficiency. In terms of the aliquot sum s(n), the deficiency is n − s(n).
Examples
The first few deficient numbers are
1, 2, 3, 4, 5, 7, 8, 9, 10, 11, 13, 14, 15, 16, 17, 19, 21, 22, 23, 25, 26, 27, 29, 31, 32, 33, 34, 35, 37, 38, 39, 41, 43, 44, 45, 46, 47, 49, 50, ...
As an example, consider the number 21. Its divisors are 1, 3, 7 and 21, and their sum is 32. Because 32 is less than 42, the number 21 is deficient. Its deficiency is 2 × 21 − 32 = 10.
Properties
Since the aliquot sums of prime numbers equal 1, all prime numbers are deficient. More generally, all odd numbers with one or two distinct prime factors are deficient. It follows that there are infinitely many odd deficient numbers. There are also infinitely many even deficient numbers, as all powers of two are deficient: the aliquot sum of 2^k is 2^k − 1, one less than the number itself.
More generally, all prime powers p^k are deficient, because their only proper divisors are 1, p, p^2, ..., p^(k−1), which sum to (p^k − 1)/(p − 1), which is at most p^k − 1.
All proper divisors of deficient numbers are deficient. Moreover, all proper divisors of perfect numbers are deficient.
There exists at least one deficient number in the interval [n, n + (log n)^2] for all sufficiently large n.
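The power-of-two and prime-power observations above are easy to confirm numerically; a minimal Python sketch (naive trial division; the function name is our own):

def deficiency(n):
    # 2n minus the sum of all divisors of n; positive exactly when n is deficient.
    sigma = sum(d for d in range(1, n + 1) if n % d == 0)
    return 2 * n - sigma

print([deficiency(2**k) for k in range(1, 6)])  # [1, 1, 1, 1, 1] : powers of two are almost perfect
print([deficiency(3**k) for k in range(1, 4)])  # [2, 5, 14]      : odd prime powers are deficient too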
Related concepts
Closely related to deficient numbers are perfect numbers with σ(n) = 2n, and abundant numbers with σ(n) > 2n.
The natural numbers were first classified as either deficient, perfect or abundant by Nicomachus in his Introductio Arithmetica (circa 100 CE).
See also
Almost perfect number
Amicable number
Sociable number
Superabundant number
References
External links
The Prime Glossary: Deficient number
Arithmetic dynamics
Divisor function
Int |
https://en.wikipedia.org/wiki/Coding%20theory | Coding theory is the study of the properties of codes and their respective fitness for specific applications. Codes are used for data compression, cryptography, error detection and correction, data transmission and data storage. Codes are studied by various scientific disciplines—such as information theory, electrical engineering, mathematics, linguistics, and computer science—for the purpose of designing efficient and reliable data transmission methods. This typically involves the removal of redundancy and the correction or detection of errors in the transmitted data.
There are four types of coding:
Data compression (or source coding)
Error control (or channel coding)
Cryptographic coding
Line coding
Data compression attempts to remove unwanted redundancy from the data from a source in order to transmit it more efficiently. For example, ZIP data compression makes data files smaller, for purposes such as to reduce Internet traffic. Data compression and error correction may be studied in combination.
Error correction adds useful redundancy to the data from a source to make the transmission more robust to disturbances present on the transmission channel. The ordinary user may not be aware of many applications using error correction. A typical music compact disc (CD) uses the Reed–Solomon code to correct for scratches and dust. In this application the transmission channel is the CD itself. Cell phones also use coding techniques to correct for the fading and noise of high frequency radio transmission. Data modems, telephone transmissions, and the NASA Deep Space Network all employ channel coding techniques to get the bits through, for example the turbo code and LDPC codes.
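As a toy illustration of channel coding (much simpler than the Reed–Solomon and turbo codes mentioned above), the following Python sketch implements a (3,1) repetition code, which corrects any single bit error per block of three:

def encode(bits):
    # Add redundancy: repeat each source bit three times.
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(received):
    # Majority vote over each block of three received bits.
    return [1 if sum(received[i:i + 3]) >= 2 else 0
            for i in range(0, len(received), 3)]

codeword = encode([1, 0, 1])  # [1, 1, 1, 0, 0, 0, 1, 1, 1]
codeword[4] = 1               # simulate one bit flipped by channel noise
print(decode(codeword))       # [1, 0, 1] : the flipped bit is corrected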
History of coding theory
In 1948, Claude Shannon published "A Mathematical Theory of Communication", an article in two parts in the July and October issues of the Bell System Technical Journal. This work focuses on the problem of how best to encode the information a sender wants to transmit. |
https://en.wikipedia.org/wiki/Cullen%20number | In mathematics, a Cullen number is a member of the integer sequence C_n = n·2^n + 1 (where n is a natural number). Cullen numbers were first studied by James Cullen in 1905. The numbers are special cases of Proth numbers.
Properties
In 1976 Christopher Hooley showed that the natural density of positive integers n ≤ x for which Cn is a prime is of the order o(x) as x → ∞. In that sense, almost all Cullen numbers are composite. Hooley's proof was reworked by Hiromi Suyama to show that it works for any sequence of numbers n·2^(n + a) + b where a and b are integers, and in particular also for Woodall numbers. The only known Cullen primes are those for n equal to:
1, 141, 4713, 5795, 6611, 18496, 32292, 32469, 59656, 90825, 262419, 361275, 481899, 1354828, 6328548, 6679881 .
Still, it is conjectured that there are infinitely many Cullen primes.
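The first two entries of that list can be confirmed by direct search; a Python sketch, assuming the sympy library for primality testing:

from sympy import isprime

def cullen(n):
    # C_n = n * 2**n + 1
    return n * 2**n + 1

print([n for n in range(1, 500) if isprime(cullen(n))])  # [1, 141]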
A Cullen number C_n is divisible by p = 2n − 1 if p is a prime number of the form 8k − 3; furthermore, it follows from Fermat's little theorem that if p is an odd prime, then p divides C_(m(k)) for each m(k) = (2^k − k)(p − 1) − k (for k > 0). It has also been shown that the prime number p divides C_((p + 1)/2) when the Jacobi symbol (2 | p) is −1, and that p divides C_((3p − 1)/2) when the Jacobi symbol (2 | p) is +1.
It is unknown whether there exists a prime number p such that Cp is also prime.
Cp follows the recurrence relation
.
Generalizations
Sometimes, a generalized Cullen number base b is defined to be a number of the form n·bn + 1, where n + 2 > b; if a prime can be written in this form, it is then called a generalized Cullen prime. Woodall numbers are sometimes called Cullen numbers of the second kind.
As of October 2021, the largest known generalized Cullen prime is 2525532·73^2525532 + 1. It has 4,705,888 digits and was discovered by Tom Greer, a PrimeGrid participant.
According to Fermat's little theorem, if there is a prime p such that n is divisible by p − 1 and n + 1 is divisible by p (especially, when n = p − 1) and p does not divide b |
https://en.wikipedia.org/wiki/Quasiperfect%20number | In mathematics, a quasiperfect number is a natural number n for which the sum of all its divisors (the divisor function σ(n)) is equal to 2n + 1. Equivalently, n is the sum of its non-trivial divisors (that is, its divisors excluding 1 and n). No quasiperfect numbers have been found so far.
The quasiperfect numbers are the abundant numbers of minimal abundance (which is 1).
Theorems
If a quasiperfect number exists, it must be an odd square number greater than 10^35 and have at least seven distinct prime factors.
Related
Numbers do exist where the sum of all the divisors σ(n) is equal to 2n + 2: 20, 104, 464, 650, 1952, 130304, 522752 ... .
Many of these numbers are of the form 2^(n−1)(2^n − 3) where 2^n − 3 is prime (instead of 2^n − 1 as with perfect numbers). In addition, numbers exist where the sum of all the divisors σ(n) is equal to 2n − 1, such as the powers of 2.
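The σ(n) = 2n + 2 solutions listed above are easy to recover by brute force; a minimal Python sketch (naive divisor sum; the helper name is our own):

def sigma(n):
    # Sum of all positive divisors of n.
    return sum(d for d in range(1, n + 1) if n % d == 0)

# No n with sigma(n) == 2n + 1 (quasiperfect) is known, but 2n + 2 has solutions:
print([n for n in range(2, 2000) if sigma(n) == 2 * n + 2])  # [20, 104, 464, 650, 1952]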
Betrothed numbers relate to quasiperfect numbers like amicable numbers relate to perfect numbers.
Notes
References
Arithmetic dynamics
Divisor function
Integer sequences
Unsolved problems in mathematics |
https://en.wikipedia.org/wiki/Semiperfect%20number | In number theory, a semiperfect number or pseudoperfect number is a natural number n that is equal to the sum of all or some of its proper divisors. A semiperfect number that is equal to the sum of all its proper divisors is a perfect number.
The first few semiperfect numbers are: 6, 12, 18, 20, 24, 28, 30, 36, 40, ...
Properties
Every multiple of a semiperfect number is semiperfect. A semiperfect number that is not divisible by any smaller semiperfect number is called primitive.
Every number of the form 2^m p for a natural number m and an odd prime number p such that p < 2^(m+1) is also semiperfect.
In particular, every number of the form 2^m(2^(m+1) − 1) is semiperfect, and indeed perfect if 2^(m+1) − 1 is a Mersenne prime.
The smallest odd semiperfect number is 945 (see, e.g., Friedman 1993).
A semiperfect number is necessarily either perfect or abundant. An abundant number that is not semiperfect is called a weird number.
With the exception of 2, all primary pseudoperfect numbers are semiperfect.
Every practical number that is not a power of two is semiperfect.
The natural density of the set of semiperfect numbers exists.
Primitive semiperfect numbers
A primitive semiperfect number (also called a primitive pseudoperfect number, irreducible semiperfect number or irreducible pseudoperfect number) is a semiperfect number that has no semiperfect proper divisor.
The first few primitive semiperfect numbers are 6, 20, 28, 88, 104, 272, 304, 350, ...
There are infinitely many such numbers. All numbers of the form 2^m p, with p a prime between 2^m and 2^(m+1), are primitive semiperfect, but this is not the only form: for example, 770. There are infinitely many odd primitive semiperfect numbers, the smallest being 945, a result of Paul Erdős; there are also infinitely many primitive semiperfect numbers that are not harmonic divisor numbers.
Every semiperfect number is a multiple of a primitive semiperfect number.
See also
Hemiperfect number
Erdős–Nicolas number
No |
https://en.wikipedia.org/wiki/Weird%20number | In number theory, a weird number is a natural number that is abundant but not semiperfect. In other words, the sum of the proper divisors (divisors including 1 but not itself) of the number is greater than the number, but no subset of those divisors sums to the number itself.
Examples
The smallest weird number is 70. Its proper divisors are 1, 2, 5, 7, 10, 14, and 35; these sum to 74, but no subset of these sums to 70. The number 12, for example, is abundant but not weird, because the proper divisors of 12 are 1, 2, 3, 4, and 6, which sum to 16; but 2 + 4 + 6 = 12.
The first few weird numbers are
70, 836, 4030, 5830, 7192, 7912, 9272, 10430, 10570, 10792, 10990, 11410, 11690, 12110, 12530, 12670, 13370, 13510, 13790, 13930, 14770, ... .
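The defining subset-sum test can be run directly for small cases; a brute-force Python sketch (exponential in the number of divisors, so only practical for small n; helper names are our own):

from itertools import combinations

def proper_divisors(n):
    return [d for d in range(1, n) if n % d == 0]

def is_weird(n):
    divs = proper_divisors(n)
    if sum(divs) <= n:
        return False  # not abundant, hence not weird
    # Weird also requires that no subset of the proper divisors sums to n.
    return not any(sum(c) == n
                   for r in range(1, len(divs) + 1)
                   for c in combinations(divs, r))

print(is_weird(70))   # True  : divisor sum 74 > 70, and no subset reaches 70
print(is_weird(12))   # False : 2 + 4 + 6 = 12, so 12 is semiperfect
print(is_weird(836))  # True  : the second weird number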
Properties
Infinitely many weird numbers exist. For example, 70p is weird for all primes p ≥ 149. In fact, the set of weird numbers has positive asymptotic density.
It is not known if any odd weird numbers exist. If so, they must be greater than 10^21.
Sidney Kravitz has shown that for k a positive integer, Q a prime exceeding 2^k, and R = (2^k Q − (Q + 1)) / ((Q + 1) − 2^k) also prime and greater than 2^k, the number 2^(k−1) Q R is a weird number.
With this formula, he found the large weird number 2^56 · (2^61 − 1) · 153722867280912929 ≈ 2 × 10^52.
Primitive weird numbers
A property of weird numbers is that if n is weird, and p is a prime greater than the sum of divisors σ(n), then pn is also weird. This leads to the definition of primitive weird numbers: weird numbers that are not a multiple of other weird numbers. Among the 1765 weird numbers less than one million, there are 24 primitive weird numbers. The construction of Kravitz yields primitive weird numbers, since all weird numbers of the form 2^(k−1) Q R are primitive, but the existence of infinitely many k and Q which yield a prime R is not guaranteed. It is conjectured that there exist infinitely many primitive weird numbers, and Melfi has shown that the infiniteness of primitive weird numbers is a consequence of Cramér's conjecture.
Primitive weird numbers with as many as 16 prim |
https://en.wikipedia.org/wiki/Woodall%20number | In number theory, a Woodall number (W_n) is any natural number of the form W_n = n · 2^n − 1
for some natural number n. The first few Woodall numbers are:
1, 7, 23, 63, 159, 383, 895, … .
History
Woodall numbers were first studied by Allan J. C. Cunningham and H. J. Woodall in 1917, inspired by James Cullen's earlier study of the similarly defined Cullen numbers.
Woodall primes
Woodall numbers that are also prime numbers are called Woodall primes; the first few exponents n for which the corresponding Woodall numbers Wn are prime are 2, 3, 6, 30, 75, 81, 115, 123, 249, 362, 384, ... ; the Woodall primes themselves begin with 7, 23, 383, 32212254719, ... .
In 1976 Christopher Hooley showed that almost all Cullen numbers are composite. In October 1995, Wilfred Keller published a paper discussing several new Cullen primes and the efforts made to factorise other Cullen and Woodall numbers. Included in that paper is a personal communication to Keller from Hiromi Suyama, asserting that Hooley's method can be reformulated to show that it works for any sequence of numbers n·2^(n + a) + b, where a and b are integers, and in particular, that almost all Woodall numbers are composite. It is an open problem whether there are infinitely many Woodall primes. As of March 2018, the largest known Woodall prime is 17016602 × 2^17016602 − 1. It has 5,122,515 digits and was found by Diego Bertolotti in March 2018 in the distributed computing project PrimeGrid.
Restrictions
Starting with W_4 = 63 and W_5 = 159, every sixth Woodall number is divisible by 3; thus, in order for W_n to be prime, the index n cannot be congruent to 4 or 5 (modulo 6). Also, for a positive integer m, the Woodall number W_(2^m) may be prime only if 2^m + m is prime. As of January 2019, the only known primes that are both Woodall primes and Mersenne primes are W_2 = M_3 = 7, and W_512 = M_521.
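Both restrictions are straightforward to check empirically; a short Python sketch, assuming sympy for primality testing:

from sympy import isprime

def woodall(n):
    # W_n = n * 2**n - 1
    return n * 2**n - 1

# Every sixth Woodall number, starting with W_4 and W_5, is divisible by 3:
print(all(woodall(n) % 3 == 0 for n in range(4, 200) if n % 6 in (4, 5)))  # True

# The first few Woodall prime indices, matching the list above:
print([n for n in range(1, 130) if isprime(woodall(n))])  # [2, 3, 6, 30, 75, 81, 115, 123]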
Divisibility properties
Like Cullen numbers, Woodall numbers have many divisibility properties. For example, if p is a prime number, then p divides
W_((p + 1)/2) if the Jacobi sy |
https://en.wikipedia.org/wiki/Perpetuity | In finance, a perpetuity is an annuity that has no end, or a stream of cash payments that continues forever. There are few actual perpetuities in existence. For example, the United Kingdom (UK) government issued them in the past; these were known as consols and were all finally redeemed in 2015.
Real estate and preferred stock are among some types of investments that affect the results of a perpetuity, and prices can be established using techniques for valuing a perpetuity. Perpetuities are but one of the time value of money methods for valuing financial assets.
Perpetuities can be structured as a perpetual bond and are a form of ordinary annuities. The concept is closely linked to terminal value and terminal growth rate in valuation.
Detailed description
A perpetuity is an annuity in which the periodic payments begin on a fixed date and continue indefinitely. It is sometimes referred to as a perpetual annuity. Fixed coupon payments on permanently invested (irredeemable) sums of money are prime examples of perpetuities. Scholarships paid perpetually from an endowment fit the definition of perpetuity.
The value of the perpetuity is finite because receipts that are anticipated far in the future have extremely low present value (present value of the future cash flows). Unlike a typical bond, because the principal is never repaid, there is no present value for the principal. Assuming that payments begin at the end of the current period, the price of a perpetuity is simply the coupon amount over the appropriate discount rate or yield; that is, PV = A / r,
where PV = present value of the perpetuity, A = the amount of the periodic payment, and r = yield, discount rate or interest rate.
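A worked sketch of the formula in Python, using the war-loan figures from the example that follows:

def perpetuity_pv(A, r):
    # Present value of a level perpetuity: coupon amount divided by yield.
    return A / r

print(perpetuity_pv(3, 0.06))  # 50.0  : a £3 annual coupon at a 6% yield
print(perpetuity_pv(3, 0.03))  # 100.0 : at a 3% yield the same loan trades at par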
To give a numerical example, a 3% UK government war loan will trade at 50 pence per pound in a yield environment of 6%, while at 3% yield it is trading at par. That is, if the face value of the loan is £100 and the annual payment £3, the value of the loan is £50 when market interest rates are |
https://en.wikipedia.org/wiki/Almost%20perfect%20number | In mathematics, an almost perfect number (sometimes also called slightly defective or least deficient number) is a natural number n such that the sum of all divisors of n (the sum-of-divisors function σ(n)) is equal to 2n − 1, the sum of all proper divisors of n, s(n) = σ(n) − n, then being equal to n − 1. The only known almost perfect numbers are powers of 2 with non-negative exponents. Therefore the only known odd almost perfect number is 2^0 = 1, and the only known even almost perfect numbers are those of the form 2^k for some positive integer k; however, it has not been shown that all almost perfect numbers are of this form. It is known that an odd almost perfect number greater than 1 would have at least six prime factors.
If m is an odd almost perfect number then m(2m − 1) is a Descartes number. Moreover if a and b are positive odd integers such that and such that and are both primes, then would be an odd weird number.
See also
Perfect number
Quasiperfect number
References
Further reading
External links
Arithmetic dynamics
Divisor function
Integer sequences |
https://en.wikipedia.org/wiki/Chord%20%28peer-to-peer%29 | In computing, Chord is a protocol and algorithm for a peer-to-peer distributed hash table. A distributed hash table stores key-value pairs by assigning keys to different computers (known as "nodes"); a node will store the values for all the keys for which it is responsible. Chord specifies how keys are assigned to nodes, and how a node can discover the value for a given key by first locating the node responsible for that key.
Chord is one of the four original distributed hash table protocols, along with CAN, Tapestry, and Pastry. It was introduced in 2001 by Ion Stoica, Robert Morris, David Karger, Frans Kaashoek, and Hari Balakrishnan, and was developed at MIT. The 2001 Chord paper won an ACM SIGCOMM Test of Time award in 2011.
Subsequent research by Pamela Zave has shown that the original Chord algorithm (as specified in the 2001 SIGCOMM paper, the 2001 Technical report, the 2002 PODC paper, and the 2003 TON paper) can mis-order the ring, produce several rings, and break the ring.
Overview
Nodes and keys are assigned an m-bit identifier using consistent hashing. The SHA-1 algorithm is the base hashing function for consistent hashing. Consistent hashing is integral to the robustness and performance of Chord because both keys and nodes (in fact, their IP addresses) are uniformly distributed in the same identifier space with a negligible possibility of collision. Thus, it also allows nodes to join and leave the network without disruption. In the protocol, the term node is used to refer to both a node itself and its identifier (ID) without ambiguity. So is the term key.
Using the Chord lookup protocol, nodes and keys are arranged in an identifier circle that has at most 2^m nodes, ranging from 0 to 2^m − 1. (m should be large enough to avoid collision.) Some of these nodes will map to machines or keys while others (most) will be empty.
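A minimal sketch of the identifier circle in Python (the tiny m, the node addresses, and the successor helper, made precise in the next paragraph, are our own illustrative assumptions; hashlib's SHA-1 is the standard library implementation):

import hashlib

m = 6  # identifier space of 2**m positions, kept tiny for illustration

def chord_id(name):
    # Hash a key or node address with SHA-1, truncated to m bits.
    digest = hashlib.sha1(name.encode()).digest()
    return int.from_bytes(digest, "big") % (2 ** m)

def successor(node_ids, k):
    # The first node identifier clockwise from position k, wrapping around.
    candidates = [n for n in sorted(node_ids) if n >= k]
    return candidates[0] if candidates else min(node_ids)

nodes = {chord_id(addr) for addr in ("10.0.0.1", "10.0.0.2", "10.0.0.3")}
key = chord_id("my-file.txt")
print(key, "->", successor(nodes, key))  # the node responsible for this key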
Each node has a successor and a predecessor. The successor to a node is the next node in the identifier circle in a clockwise direction. Th |
https://en.wikipedia.org/wiki/Hyperperfect%20number | In number theory, a k-hyperperfect number is a natural number n for which the equality n = 1 + k(σ(n) − n − 1) holds, where σ(n) is the divisor function (i.e., the sum of all positive divisors of n). A hyperperfect number is a k-hyperperfect number for some integer k. Hyperperfect numbers generalize perfect numbers, which are 1-hyperperfect.
The first few numbers in the sequence of k-hyperperfect numbers are 6, 21, 28, 301, 325, 496, 697, ..., with the corresponding values of k being 1, 2, 1, 6, 3, 1, 12, .... The first few k-hyperperfect numbers that are not perfect are 21, 301, 325, 697, 1333, ....
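These opening values can be recovered by brute force; a Python sketch that solves n = 1 + k(σ(n) − n − 1) for an integer k (naive divisor sum; helper names are our own):

def sigma(n):
    # Sum of all positive divisors of n.
    return sum(d for d in range(1, n + 1) if n % d == 0)

def hyper_k(n):
    # Returns k if n is k-hyperperfect, i.e. n == 1 + k * (sigma(n) - n - 1).
    rest = sigma(n) - n - 1
    if rest > 0 and (n - 1) % rest == 0:
        return (n - 1) // rest
    return None

print([(n, hyper_k(n)) for n in range(2, 500) if hyper_k(n)])
# [(6, 1), (21, 2), (28, 1), (301, 6), (325, 3), (496, 1)]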
List of hyperperfect numbers
The following table lists the first few k-hyperperfect numbers for some values of k, together with the sequence number in the On-Line Encyclopedia of Integer Sequences (OEIS) of the sequence of k-hyperperfect numbers:
It can be shown that if k > 1 is an odd integer and p = (3k + 1)/2 and q = 3k + 4 are prime numbers, then p^2 q is k-hyperperfect; Judson S. McCranie conjectured in 2000 that all k-hyperperfect numbers for odd k > 1 are of this form, but the hypothesis has not been proven so far. Furthermore, it can be proven that if p ≠ q are odd primes and k is an integer such that k(p + q) = pq − 1, then pq is k-hyperperfect.
It is also possible to show that if k > 0 and p = k + 1 is prime, then for all i > 1 such that q = p^i − p + 1 is prime, n = p^(i−1) q is k-hyperperfect. The following table lists known values of k and corresponding values of i for which n is k-hyperperfect:
Hyperdeficiency
The newly introduced mathematical concept of hyperdeficiency is related to the hyperperfect numbers.
Definition (Minoli 2010): For any integer n and for any integer k > 0, define the k-hyperdeficiency (or simply the hyperdeficiency) for the number n as δ_k(n) = n(k + 1) + (k − 1) − kσ(n).
A number n is said to be k-hyperdeficient if δ_k(n) > 0.
Note that for k = 1 one gets δ_1(n) = 2n − σ(n), which is the standard traditional definition of deficiency.
Lemma: A number n is k-hyperperfect (including k = 1) if and only if the k-hyperdeficiency of n is zero: δ_k(n) = 0.
Lemma: A number is -hyperperfect (including ) if and only if for some , for at least one .
References
Further reading
Articles
Books
Daniel Minoli, Voice over MPLS, McGraw-Hill, New |
https://en.wikipedia.org/wiki/Ecological%20succession | Ecological succession is the process of change in the species that make up an ecological community over time.
The process of succession occurs either after the initial colonization of a newly created habitat, or after a disturbance substantially alters a pre-existing habitat. Succession that begins in new habitats, uninfluenced by pre-existing communities, is called primary succession, whereas succession that follows disruption of a pre-existing community is called secondary succession. Primary succession may happen after a lava flow or the emergence of a new island from the ocean. Surtsey, a volcanic island off the southern coast of Iceland, is an important example of a place where primary succession has been observed. On the other hand, secondary succession happens after disturbance of a community, such as from a fire, severe windthrow, or logging.
Succession was among the first theories advanced in ecology. Ecological succession was first documented in the Indiana Dunes of Northwest Indiana and remains an important ecological topic of study. Over time, the understanding of succession has changed from a linear progression to a stable climax state, to a more complex, cyclical model that de-emphasizes the idea of organisms having fixed roles or relationships.
History
Precursors of the idea of ecological succession go back to the beginning of the 19th century. As early as 1742 French naturalist Buffon noted that poplars precede oaks and beeches in the natural evolution of a forest. Buffon was later forced by the theological committee at the University of Paris to recant many of his ideas because they contradicted the biblical narrative of Creation.
Swiss geologist Jean-André Deluc and the later French naturalist Adolphe Dureau de la Malle were the first to make use of the word succession concerning the vegetation development after forest clear-cutting. In 1859 Henry David Thoreau wrote an address called "The Succession of Forest Trees" in which he described succ |
https://en.wikipedia.org/wiki/Project%20Orion%20%28nuclear%20propulsion%29 | Project Orion was a study conducted in the 1950s and 1960s by the United States Air Force, DARPA, and NASA into the viability of a nuclear pulse spaceship that would be directly propelled by a series of atomic explosions behind the craft. Early versions of the vehicle were proposed to take off from the ground; later versions were presented for use only in space. The design effort took place at General Atomics in San Diego, and supporters included Wernher von Braun, who issued a white paper advocating the idea. Non-nuclear tests were conducted with models, but the project was eventually abandoned for several reasons, including the 1963 Partial Test Ban Treaty, which banned nuclear explosions in space, and concerns over nuclear fallout.
Physicist Stanislaw Ulam proposed the general idea of nuclear pulse propulsion in 1946, and preliminary calculations were made by Frederick Reines and Ulam in a Los Alamos memorandum dated 1947. In August 1955, Ulam co-authored a classified paper proposing the use of nuclear fission bombs, "ejected and detonated at a considerable distance," for propelling a vehicle in outer space. The project was led by Ted Taylor at General Atomics and physicist Freeman Dyson who, at Taylor's request, took a year away from the Institute for Advanced Study in Princeton to work on the project. In July 1958, DARPA agreed to sponsor Orion at an initial level of $1 million per year, at which point the project received its name and formally began. The agency granted a study of the concept to the General Dynamics Corporation, but decided to withdraw support in late 1959. The U.S. Air Force agreed to support Orion if a military use was found for the project, and the NASA Office of Manned Spaceflight also contributed funding. The concept investigated by the government used a blast shield and shock absorber to protect the crew and convert the detonations into a continuous propulsion force. The most successful model test, in November 1959, reached roughly 100 m |
https://en.wikipedia.org/wiki/Ectasia | Ectasia (), also called ectasis (), is dilation or distention of a tubular structure, either normal or pathophysiologic but usually the latter (except in atelectasis, where absence of ectasis is the problem).
Specific conditions
Bronchiectasis, chronic dilatation of the bronchi
Duct ectasia of breast, a dilated milk duct. Duct ectasia syndrome is a synonym for nonpuerperal (unrelated to pregnancy and breastfeeding) mastitis.
Dural ectasia, dilation of the dural sac surrounding the spinal cord, usually in the very low back.
Pyelectasis, dilation of a part of the kidney, most frequently seen in prenatal ultrasounds. It usually resolves on its own.
Rete tubular ectasia, dilation of tubular structures in the testicles. It is usually found in older men.
Acral arteriolar ectasia
Corneal ectasia (secondary keratoconus), a bulging of the cornea.
Vascular ectasias
Most broadly, any abnormal dilatation of a blood vessel, including aneurysms
Annuloaortic ectasia, dilation of the aorta. It can be associated with Marfan syndrome.
Dolichoectasias, weakening of arteries, usually caused by high blood pressure.
Intracranial dolichoectasias, dilation of arteries inside the head.
Gastric antral vascular ectasia, dilation of small blood vessels in the last part of the stomach.
Telangiectasias are small dilated blood vessels found anywhere on the body, but commonly seen on the face around the nose, cheeks, and chin.
Venous ectasia, dilation of veins or venules, such as:
Chronic venous insufficiency, often in the leg
Jugular vein ectasia, in the jugular veins returning blood from the head
See also
References
Anatomy
Pathophysiology |
https://en.wikipedia.org/wiki/Acesulfame%20potassium | Acesulfame potassium, also known as acesulfame K (K is the symbol for potassium) or Ace K, is a synthetic calorie-free sugar substitute (artificial sweetener) often marketed under the trade names Sunett and Sweet One. In the European Union, it is known under the E number (additive code) E950. It was discovered accidentally in 1967 by German chemist Karl Clauss at Hoechst AG (now Nutrinova). In chemical structure, acesulfame potassium is the potassium salt of 6-methyl-1,2,3-oxathiazine-4(3H)-one 2,2-dioxide. It is a white crystalline powder with molecular formula C4H4KNO4S and a molecular weight of 201.24 g/mol.
Properties
Acesulfame K is 200 times sweeter than sucrose (common sugar), as sweet as aspartame, about two-thirds as sweet as saccharin, and one-third as sweet as sucralose. Like saccharin, it has a slightly bitter aftertaste, especially at high concentrations. Kraft Foods patented the use of sodium ferulate to mask acesulfame's aftertaste. Acesulfame K is often blended with other sweeteners (usually sucralose or aspartame). These blends are reputed to give a more sucrose-like taste whereby each sweetener masks the other's aftertaste, or exhibits a synergistic effect by which the blend is sweeter than its components. Acesulfame potassium has a smaller particle size than sucrose, allowing for its mixtures with other sweeteners to be more uniform.
Unlike aspartame, acesulfame K is stable under heat, even under moderately acidic or basic conditions, allowing it to be used as a food additive in baking, or in products that require a long shelf life. Although acesulfame potassium has a stable shelf life, it can eventually degrade to acetoacetamide, which is toxic in high doses. In carbonated drinks, it is almost always used in conjunction with another sweetener, such as aspartame or sucralose. It is also used as a sweetener in protein shakes and pharmaceutical products, especially chewable and liquid medications, where it can make the active ingredients more pa |
https://en.wikipedia.org/wiki/Public%20works | Public works are a broad category of infrastructure projects, financed and procured by a government body for recreational, employment, and health and safety uses in the greater community. They include public buildings (municipal buildings, schools, and hospitals), transport infrastructure (roads, railroads, bridges, pipelines, canals, ports, and airports), public spaces (public squares, parks, and beaches), public services (water supply and treatment, sewage treatment, electrical grid, and dams), and other, usually long-term, physical assets and facilities. Though often interchangeable with public infrastructure and public capital, public works does not necessarily carry an economic component, thereby being a broader term. Construction may be undertaken either by directly employed labour or by a private operator.
Public works has been encouraged since antiquity. The Roman emperor Nero encouraged the construction of various infrastructure projects during widespread deflation.
Overview
Public works is a multi-dimensional concept in economics and politics, touching on multiple arenas including: recreation (parks, beaches, trails), aesthetics (trees, green space), economy (goods and people movement, energy), law (police and courts), and neighborhood (community centers, social services buildings). It represents any constructed object that augments a nation's physical infrastructure.
Municipal infrastructure, urban infrastructure, and rural development usually represent the same concept but imply either large cities or developing nations' concerns respectively. The terms public infrastructure or critical infrastructure are at times used interchangeably. However, critical infrastructure includes public works (dams, waste water systems, bridges, etc.) as well as facilities like hospitals, banks, and telecommunications systems and views them from a national security viewpoint and the impact on the community that the loss of such facilities would entail.
Furthermore, th |
https://en.wikipedia.org/wiki/Io%20%28programming%20language%29 | Io is a pure object-oriented programming language inspired by Smalltalk, Self, Lua, Lisp, Act1, and NewtonScript. Io has a prototype-based object model similar to the ones in Self and NewtonScript, eliminating the distinction between instance and class. Like Smalltalk, everything is an object and it uses dynamic typing. Like Lisp, programs are just data trees. Io uses actors for concurrency.
Notable features of Io are its minimal size and openness to using external code resources. Io is executed by a small, portable virtual machine.
History
The language was created by Steve Dekorte in 2002, after trying to help a friend, Dru Nelson, with his language, Cel. He found out that he really didn't know much about how languages worked, and set out to write a tiny language to understand the problems better.
Philosophy
Io's goal is to explore conceptual unification and dynamic languages, so the tradeoffs tend to favor simplicity and flexibility over performance.
Features
Pure object-oriented based on prototypes
Code-as-data / homoiconic
Lazy evaluation of function parameters
Higher-order functions
Introspection, reflection and metaprogramming
Actor-based concurrency
Coroutines
Exception handling
Incremental garbage collecting supporting weak links
Highly portable
DLL/shared library dynamic loading on most platforms
Small virtual machine
Syntax
In its simplest form, it is composed of a single identifier:
doStuff
Assuming the above doStuff is a method, it is being called with zero arguments and as a result, explicit parentheses are not required.
If doStuff had arguments, it would look like this:
doStuff(42)
Io is a message passing language, and since everything in Io is a message (excluding comments), each message is sent to a receiver. The above example demonstrates this well, but not fully. To describe this point better, let's look at the next example:
System version
The above example demonstrates message passing in Io; the "version" messa |
https://en.wikipedia.org/wiki/Theoretical%20computer%20science | Theoretical computer science (TCS) is a subset of general computer science and mathematics that focuses on mathematical aspects of computer science such as the theory of computation, formal language theory, the lambda calculus and type theory.
It is difficult to circumscribe the theoretical areas precisely. The ACM's Special Interest Group on Algorithms and Computation Theory (SIGACT) provides the following description:
History
While logical inference and mathematical proof had existed previously, in 1931 Kurt Gödel proved with his incompleteness theorem that there are fundamental limitations on what statements could be proved or disproved.
Information theory was added to the field with a 1948 mathematical theory of communication by Claude Shannon. In the same decade, Donald Hebb introduced a mathematical model of learning in the brain. With mounting biological data supporting this hypothesis with some modification, the fields of neural networks and parallel distributed processing were established. In 1971, Stephen Cook and, working independently, Leonid Levin, proved that there exist practically relevant problems that are NP-complete – a landmark result in computational complexity theory.
With the development of quantum mechanics in the beginning of the 20th century came the concept that mathematical operations could be performed on an entire particle wavefunction. In other words, one could compute functions on multiple states simultaneously. This led to the concept of a quantum computer in the latter half of the 20th century that took off in the 1990s when Peter Shor showed that such methods could be used to factor large numbers in polynomial time, which, if implemented, would render some modern public key cryptography algorithms like RSA insecure.
Modern theoretical computer science research is based on these basic developments, but includes many other mathematical and interdisciplinary problems that have been posed, as shown below:
Topics
Algorithms
An |
https://en.wikipedia.org/wiki/Wilson%20prime | In number theory, a Wilson prime is a prime number p such that p^2 divides (p − 1)! + 1, where "!" denotes the factorial function; compare this with Wilson's theorem, which states that every prime p divides (p − 1)! + 1. Both are named for 18th-century English mathematician John Wilson; in 1770, Edward Waring credited the theorem to Wilson, although it had been stated centuries earlier by Ibn al-Haytham.
The only known Wilson primes are 5, 13, and 563. Costa et al. write that "the case p = 5 is trivial", and credit the observation that 13 is a Wilson prime to Mathews (1892). Early work on these numbers included searches by N. G. W. H. Beeger and Emma Lehmer, but 563 was not discovered until the early 1950s, when computer searches could be applied to the problem. If any others exist, they must be greater than 2 × 10^13. It has been conjectured that infinitely many Wilson primes exist, and that the number of Wilson primes in an interval [x, y] is about log(log(y)/log(x)).
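The definition translates into a one-line test; a Python sketch, assuming sympy for iterating over primes (math.factorial is in the standard library):

from math import factorial
from sympy import primerange

# A prime p is a Wilson prime when p**2 divides (p - 1)! + 1.
print([p for p in primerange(2, 600)
       if (factorial(p - 1) + 1) % (p * p) == 0])  # [5, 13, 563]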
Several computer searches have been done in the hope of finding new Wilson primes.
The Ibercivis distributed computing project includes a search for Wilson primes. Another search was coordinated at the Great Internet Mersenne Prime Search forum.
Generalizations
Wilson primes of order n
Wilson's theorem can be expressed in general as (n − 1)!(p − n)! ≡ (−1)^n (mod p) for every integer n ≥ 1 and prime p ≥ n. Generalized Wilson primes of order n are the primes p such that p^2 divides (n − 1)!(p − n)! − (−1)^n.
It was conjectured that for every natural number n, there are infinitely many Wilson primes of order n.
The smallest generalized Wilson primes of order n are:
Near-Wilson primes
A prime p satisfying the congruence (p − 1)! ≡ −1 + Bp (mod p^2) with small |B| can be called a near-Wilson prime. Near-Wilson primes with B = 0 are bona fide Wilson primes. The table on the right lists all such primes with small |B| found over the searched range.
Wilson numbers
A Wilson number is a natural number n such that W(n) ≡ 0 (mod n^2), where W(n) = ±1 + ∏ k, the product taken over all 1 ≤ k ≤ n with gcd(k, n) = 1, and where the term ±1 is positive if and only if n has a primitive root and negative otherwise. For every natural number n, W(n) is divisible by n, and the quotients (called generalized Wilson quotien |
https://en.wikipedia.org/wiki/Newman%E2%80%93Shanks%E2%80%93Williams%20prime | In mathematics, a Newman–Shanks–Williams prime (NSW prime) is a prime number p which can be written in the form S_(2m+1) = ((1 + √2)^(2m+1) + (1 − √2)^(2m+1)) / 2.
NSW primes were first described by Morris Newman, Daniel Shanks and Hugh C. Williams in 1981 during the study of finite simple groups with square order.
The first few NSW primes are 7, 41, 239, 9369319, 63018038201, … , corresponding to the indices 3, 5, 7, 19, 29, … .
The sequence S alluded to in the formula can be described by the following recurrence relation: S_0 = 1, S_1 = 1, and S_m = 2S_(m−1) + S_(m−2) for all m ≥ 2.
The first few terms of the sequence are 1, 1, 3, 7, 17, 41, 99, … . Each term in this sequence is half the corresponding term in the sequence of companion Pell numbers. These numbers also appear in the continued fraction convergents to √2.
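The recurrence makes the list easy to reproduce; a Python sketch, assuming sympy for primality testing:

from sympy import isprime

S = [1, 1]                       # S_0 = S_1 = 1
for _ in range(30):
    S.append(2 * S[-1] + S[-2])  # S_m = 2 * S_(m-1) + S_(m-2)

idx = [m for m in range(1, len(S), 2) if isprime(S[m])]
print(idx)                  # [3, 5, 7, 19, 29]
print([S[m] for m in idx])  # [7, 41, 239, 9369319, 63018038201]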
Further reading
External links
The Prime Glossary: NSW number
Classes of prime numbers
Unsolved problems in mathematics |
https://en.wikipedia.org/wiki/MediaWiki | MediaWiki is free and open-source wiki software originally developed by Magnus Manske for use on Wikipedia on January 25, 2002 and further improved by Lee Daniel Crocker, after which it has been coordinated by the Wikimedia Foundation. It powers most websites hosted by the Foundation including Wikipedia, Wiktionary, Wikimedia Commons, Wikiquote, Meta-Wiki and Wikidata, which define a large part of the set requirements for the software.
MediaWiki is written in the PHP programming language and stores all text content into a database. The software is optimized to efficiently handle large projects, which can have terabytes of content and hundreds of thousands of views per second. Because Wikipedia is one of the world's largest and most visited websites, achieving scalability through multiple layers of caching and database replication has been a major concern for developers. Another major aspect of MediaWiki is its internationalization; its interface is available in more than 400 languages. The software has more than 1,000 configuration settings and more than 1,800 extensions available for enabling various features to be added or changed.
Besides its usage on Wikimedia sites, MediaWiki has been used as a knowledge management and content management system on websites such as Fandom, wikiHow and major internal installations like Intellipedia and Diplopedia.
License
MediaWiki is free and open-source and is distributed under the terms of the GNU General Public License version 2 or any later version. Its documentation, located at its official website at www.mediawiki.org, is released under the Creative Commons BY-SA 4.0 license and partly in the public domain. Specifically, the manuals and other content at MediaWiki.org are Creative Commons-licensed, while the set of help pages intended to be freely copied into fresh wiki installations and/or distributed with MediaWiki software is public domain. This was done to eliminate legal issues arising from the help pages being impo |
https://en.wikipedia.org/wiki/Cultural%20ecology | Cultural ecology is the study of human adaptations to social and physical environments. Human adaptation refers to both biological and cultural processes that enable a population to survive and reproduce within a given or changing environment. This may be carried out diachronically (examining entities that existed in different epochs), or synchronically (examining a present system and its components). The central argument is that the natural environment, in small scale or subsistence societies dependent in part upon it, is a major contributor to social organization and other human institutions. In the academic realm, when combined with study of political economy, the study of economies as polities, it becomes political ecology, another academic subfield. It also helps interrogate historical events like the Easter Island Syndrome.
History
Anthropologist Julian Steward (1902-1972) coined the term, envisioning cultural ecology as a methodology for understanding how humans adapt to such a wide variety of environments. In his Theory of Culture Change: The Methodology of Multilinear Evolution (1955), cultural ecology represents the "ways in which culture change is induced by adaptation to the environment." A key point is that any particular human adaptation is in part historically inherited and involves the technologies, practices, and knowledge that allow people to live in an environment. This means that while the environment influences the character of human adaptation, it does not determine it. In this way, Steward wisely separated the vagaries of the environment from the inner workings of a culture that occupied a given environment. Viewed over the long term, this means that environment and culture are on more or less separate evolutionary tracks and that the ability of one to influence the other is dependent on how each is structured. It is this assertion - that the physical and biological environment affects culture - that has proved controversial, because it impl |
https://en.wikipedia.org/wiki/Lsh | lsh is a free software implementation of the Secure Shell (SSH) protocol version 2, by the GNU Project, including both server and client programs. It features the Secure Remote Password protocol (SRP) as specified in secsh-srp, besides public-key authentication. Kerberos is somewhat supported as well, though currently only for password verification, not as a single sign-on (SSO) method.
lsh was started from scratch and predates OpenSSH.
Karim Yaghmour concluded in 2003 that lsh was "not fit for use" in production embedded Linux systems, because of its dependencies upon other software packages that have a multiplicity of further dependencies. The lsh package requires the GNU MP library, zlib, and liboop, the latter of which in turn requires GLib, which then requires pkg-config. Yaghmour further notes that lsh suffers from cross-compilation problems that it inherits from glib. "If […] your target isn't the same architecture as your host," he states, "LSH isn't a practical choice at this time."
Debian provides packages of lsh as lsh-server, lsh-utils, lsh-doc and lsh-client.
See also
Comparison of SSH servers
Comparison of SSH clients
TCP Wrappers
GnuTLS
References
External links
lsh homepage
GNU Project software
Unix network-related software
Free security software
Free network-related software
Secure Shell |
https://en.wikipedia.org/wiki/Right%20to%20privacy | The right to privacy is an element of various legal traditions that intends to restrain governmental and private actions that threaten the privacy of individuals. Over 150 national constitutions mention the right to privacy. On 10 December 1948, the United Nations General Assembly adopted the Universal Declaration of Human Rights (UDHR), originally written to guarantee individual rights of everyone everywhere; while right to privacy does not appear in the document, many interpret this through Article 12, which states: "No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks."
Since the global surveillance disclosures of 2013, initiated by ex-NSA employee Edward Snowden, the right to privacy has been a subject of international debate. Government agencies, such as the NSA, FBI, CIA, R&AW and GCHQ, have engaged in mass, global surveillance. Some current debates around the right to privacy include whether privacy can co-exist with the current capabilities of intelligence agencies to access and analyze many details of an individual's life; whether or not the right to privacy is forfeited as part of the social contract to bolster defense against supposed terrorist threats; and whether threats of terrorism are a valid excuse to spy on the general population. Private sector actors can also threaten the right to privacy, particularly technology companies such as Amazon, Apple, Meta, Google, Microsoft, and Yahoo that use and collect personal data. These concerns have been strengthened by scandals, including the Facebook–Cambridge Analytica data scandal, which focused on psychographic company Cambridge Analytica which used personal data from Facebook to influence large groups of people.
History
The concept of a human "right to privacy" begins when the Latin word ius expanded from meaning "what is |
https://en.wikipedia.org/wiki/Financial%20engineering | Financial engineering is a multidisciplinary field involving financial theory, methods of engineering, tools of mathematics and the practice of programming. It has also been defined as the application of technical methods, especially from mathematical finance and computational finance, in the practice of finance.
Financial engineering plays a key role in a bank's customer-driven derivatives business, delivering bespoke OTC-contracts and "exotics" and implementing various structured products, which encompasses quantitative modelling, quantitative programming and risk managing financial products in compliance with the regulations and Basel capital/liquidity requirements.
An older use of the term "financial engineering" that is less common today is aggressive restructuring of corporate balance sheets.
Mathematical finance is the application of mathematics to finance. Computational finance and mathematical finance are both subfields of financial engineering. Computational finance is a field in computer science and deals with the data and algorithms that arise in financial modeling.
Discipline
Financial engineering draws on tools from applied mathematics, computer science, statistics and economic theory.
In the broadest sense, anyone who uses technical tools in finance could be called a financial engineer, for example any computer programmer in a bank or any statistician in a government economic bureau. However, most practitioners restrict the term to someone educated in the full range of tools of modern finance and whose work is informed by financial theory. It is sometimes restricted even further, to cover only those originating new financial products and strategies.
Despite its name, financial engineering does not belong to any of the fields in traditional professional engineering even though many financial engineers have studied engineering beforehand and many universities offering a postgraduate degree in this field require applicants to have a background |
https://en.wikipedia.org/wiki/Atiyah%E2%80%93Singer%20index%20theorem | In differential geometry, the Atiyah–Singer index theorem, proved by Michael Atiyah and Isadore Singer (1963), states that for an elliptic differential operator on a compact manifold, the analytical index (related to the dimension of the space of solutions) is equal to the topological index (defined in terms of some topological data). It includes many other theorems, such as the Chern–Gauss–Bonnet theorem and Riemann–Roch theorem, as special cases, and has applications to theoretical physics.
History
The index problem for elliptic differential operators was posed by Israel Gel'fand. He noticed the homotopy invariance of the index, and asked for a formula for it by means of topological invariants. Some of the motivating examples included the Riemann–Roch theorem and its generalization the Hirzebruch–Riemann–Roch theorem, and the Hirzebruch signature theorem. Friedrich Hirzebruch and Armand Borel had proved the integrality of the Â genus of a spin manifold, and Atiyah suggested that this integrality could be explained if it were the index of the Dirac operator (which was rediscovered by Atiyah and Singer in 1961).
The Atiyah–Singer theorem was announced in 1963. The proof sketched in this announcement was never published by them, though it appears in Palais's book. It appears also in the "Séminaire Cartan-Schwartz 1963/64" that was held in Paris simultaneously with the seminar led by Richard Palais at Princeton University. The last talk in Paris was by Atiyah on manifolds with boundary. Their first published proof replaced the cobordism theory of the first proof with K-theory, and they used this to give proofs of various generalizations in another sequence of papers.
1965: Sergey P. Novikov published his results on the topological invariance of the rational Pontryagin classes on smooth manifolds.
Robion Kirby and Laurent C. Siebenmann's results, combined with René Thom's paper proved the existence of rational Pontryagin classes on topological manifolds. The rat |
https://en.wikipedia.org/wiki/Bimonster%20group | In mathematics, the bimonster is a group that is the wreath product of the monster group M with Z2: the bimonster is M ≀ Z2, that is, the semidirect product (M × M) ⋊ Z2.
The Bimonster is also a quotient of the Coxeter group corresponding to the Dynkin diagram Y555, a Y-shaped graph with 16 nodes: one central node from which three branches of five nodes each extend.
John H. Conway conjectured that a presentation of the bimonster could be given by adding a certain extra relation to the presentation defined by the Y555 diagram; this was proved in 1990 by A. A. Ivanov (the mathematician, not the painter of the same name) and Simon P. Norton.
See also
Triality - simple Lie group D4, diagram Y111
Affine E6 - diagram Y222
References
External links
Group theory |
https://en.wikipedia.org/wiki/Ethnomathematics | In mathematics education, ethnomathematics is the study of the relationship between mathematics and culture. Often associated with "cultures without written expression", it may also be defined as "the mathematics which is practised among identifiable cultural groups". It refers to a broad cluster of ideas ranging from distinct numerical and mathematical systems to multicultural mathematics education. The goal of ethnomathematics is to contribute both to the understanding of culture and the understanding of mathematics, and mainly to lead to an appreciation of the connections between the two.
Development and meaning
The term "ethnomathematics" was introduced by the Brazilian educator and mathematician Ubiratan D'Ambrosio in 1977 during a presentation for the American Association for the Advancement of Science. Since D'Ambrosio put forth the term, people - D'Ambrosio included - have struggled with its meaning ("An etymological abuse leads me to use the words, respectively, ethno and mathema for their categories of analysis and tics from (from techne)".).
The following is a sampling of some of the definitions of ethnomathematics proposed between 1985 and 2006:
"The mathematics which is practiced among identifiable cultural groups such as national-tribe societies, labour groups, children of certain age brackets and professional classes".
"The mathematics implicit in each practice".
"The study of mathematical ideas of a non-literate culture".
"The codification which allows a cultural group to describe, manage and understand reality".
"Mathematics…is conceived as a cultural product which has developed as a result of various activities".
"The study and presentation of mathematical ideas of traditional peoples".
"Any form of cultural knowledge or social activity characteristic of a social group and/or cultural group that can be recognized by other groups such as Western anthropologists, but not necessarily by the group of origin, as mathematical knowledge or mathematica |
https://en.wikipedia.org/wiki/Rational%20ignorance | Rational ignorance is refraining from acquiring knowledge when the supposed cost of educating oneself on an issue exceeds the expected potential benefit that the knowledge would provide.
Ignorance about an issue is said to be "rational" when the cost of educating oneself about the issue sufficiently to make an informed decision can outweigh any potential benefit one could reasonably expect to gain from that decision, and so it would be irrational to waste time doing so. This has consequences for the quality of decisions made by large numbers of people, such as in general elections, where the probability of any one vote changing the outcome is very small.
The term is most often found in economics, particularly public choice theory, but also used in other disciplines which study rationality and choice, including philosophy (epistemology) and game theory.
The term was coined by Anthony Downs in An Economic Theory of Democracy.
Example
Consider an employer attempting to choose between two candidates offering to complete a task at the cost of $10/hour. The length of time needed to complete the task may be longer or shorter depending on the skill of the person performing the task, so it is in the employer's best interests to find the fastest worker possible. Assume that the cost of another day of interviewing the candidates is $100. If the employer had deduced from the interviews so far that both candidates would complete the task in somewhere between 195 and 205 hours, it would be in the employer's best interests to choose one or the other by some easily applied metric (for example, flipping a coin) rather than spend the $100 on determining the better candidate, saving at most $100 in labor. In many cases, the decision may be made on the basis of heuristics; a simple decision model which may not be completely accurate. For example, in deciding which brand of prepared food is most nutritious, a shopper might simply choose the one with (for example) the lowest amount |
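The arithmetic of the employer example can be made concrete. The sketch below is a minimal illustration in Python, with the figures taken from the example above; the variable names are ours, not from any source.

```python
# Minimal sketch of the employer example: $10/hour wage, a believed range of
# 195-205 hours for the task, and $100 for a further day of interviews.
WAGE = 10             # dollars per hour
LOW, HIGH = 195, 205  # believed range of hours the task needs

max_labor_saving = (HIGH - LOW) * WAGE  # at most $100 separates the candidates
interview_cost = 100                    # price of resolving the uncertainty

# Even perfect information is worth no more than the full spread, so paying
# $100 to save at most $100 is not worthwhile: ignorance is rational here.
if interview_cost >= max_labor_saving:
    print("flip a coin")
else:
    print("keep interviewing")
```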
https://en.wikipedia.org/wiki/Radiological%20warfare | Radiological warfare is any form of warfare involving deliberate radiation poisoning or contamination of an area with radiological sources.
Radiological weapons are normally classified as weapons of mass destruction (WMDs), although radiological weapons can also be specific in whom they target, such as the radiation poisoning of Alexander Litvinenko by the Russian FSB, using radioactive polonium-210.
Numerous countries have expressed an interest in radiological weapons programs, several have actively pursued them, and three have performed radiological weapons tests.
Salted nuclear weapons
A salted bomb is a nuclear weapon that is equipped with a large quantity of radiologically inert salting material. The radiological warfare agents are produced through neutron capture by the salting materials of the neutron radiation emitted by the nuclear weapon. This avoids the problems of having to stockpile the highly radioactive material, as it is produced when the bomb explodes. The result is a more intense fallout than from regular nuclear weapons and can render an area uninhabitable for a long period.
The cobalt bomb is an example of a radiological warfare weapon, where cobalt-59 is converted to cobalt-60 by neutron capture. Initially, the gamma radiation of the nuclear fission products from an equivalently sized "clean" fission-fusion-fission bomb (assuming the amounts of radioactive dust particles generated are equal) is much more intense than that of cobalt-60: 15,000 times more intense at 1 hour; 35 times more intense at 1 week; 5 times more intense at 1 month; and about equal at 6 months. Thereafter fission drops off rapidly so that cobalt-60 fallout is 8 times more intense than fission at 1 year and 150 times more intense at 5 years. The very long-lived isotopes produced by fission would overtake the cobalt-60 again after about 75 years.
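The early part of this crossover can be sketched numerically. The snippet below is a rough illustration only: it assumes the standard 5.27-year half-life for cobalt-60 and the textbook Way-Wigner t^-1.2 approximation for mixed fission products, both assumptions of this sketch rather than figures from the article, and the power law understates how fast real fission mixtures decay after the first year.

```python
CO60_HALF_LIFE_HOURS = 5.27 * 8766   # 5.27 years, a standard figure

def co60_activity(t_hours, a0=1.0):
    # Simple exponential decay of a single isotope.
    return a0 * 0.5 ** (t_hours / CO60_HALF_LIFE_HOURS)

def fission_activity(t_hours, a0=15000.0):
    # Way-Wigner t^-1.2 rule of thumb for mixed fission products,
    # normalized (as in the text) to 15,000x the cobalt-60 level at 1 hour.
    return a0 * t_hours ** -1.2

for label, t in [("1 hour", 1), ("1 week", 168), ("1 month", 730),
                 ("6 months", 4383)]:
    ratio = fission_activity(t) / co60_activity(t)
    print(f"{label:>8}: fission / Co-60 ratio ~ {ratio:,.1f}")
```

Run as written, this reproduces the shape of the early figures quoted above (roughly 32 at one week, 5.5 at one month, near parity at six months); the late-time numbers in the text require a more realistic decay model.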
Other salted bomb variants that do not use cobalt have also been theorized. For example, salting with sodium-23, which transmutes to sodium-24
https://en.wikipedia.org/wiki/Modular%20group | In mathematics, the modular group is the projective special linear group PSL(2, Z) of 2 × 2 matrices with integer coefficients and determinant 1. The matrices A and −A are identified. The modular group acts on the upper-half of the complex plane by fractional linear transformations, and the name "modular group" comes from the relation to moduli spaces and not from modular arithmetic.
Definition
The modular group is the group of linear fractional transformations of the upper half of the complex plane, which have the form
z ↦ (az + b) / (cz + d),
where a, b, c, d are integers, and ad − bc = 1. The group operation is function composition.
This group of transformations is isomorphic to the projective special linear group PSL(2, Z), which is the quotient of the 2-dimensional special linear group SL(2, Z) over the integers by its center {I, −I}. In other words, PSL(2, Z) consists of all matrices
(a b; c d)
where a, b, c, d are integers, ad − bc = 1, and pairs of matrices A and −A are considered to be identical. The group operation is the usual multiplication of matrices.
Some authors define the modular group to be PSL(2, Z), and still others define the modular group to be the larger group SL(2, Z).
Some mathematical relations require the consideration of the group GL(2, Z) of matrices with determinant plus or minus one. (SL(2, Z) is a subgroup of this group.) Similarly, PGL(2, Z) is the quotient group GL(2, Z)/{I, −I}. A 2 × 2 matrix with unit determinant is a symplectic matrix, and thus SL(2, Z) = Sp(2, Z), the symplectic group of 2 × 2 matrices.
Finding elements
To find an explicit matrix in SL(2, Z), begin with two coprime integers a, c, and solve the determinant equation ad − bc = 1. (Notice the determinant equation forces a and c to be coprime, since otherwise there would be a factor f such that fa′ = a and fc′ = c, hence
f(a′d − bc′) = 1
would have no integer solutions.) For example, if a = 7 and c = 6 then the determinant equation reads 7d − 6b = 1; taking d = 1 and b = 1 gives 7 · 1 − 1 · 6 = 1, hence
(7 1; 6 1)
is a matrix in SL(2, Z). Then, using the projection, these matrices define elements in PSL(2, Z).
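A short sketch of this procedure: given coprime a and c, the extended Euclidean algorithm produces b and d directly. This is an illustration we supply (the function name and the worked values a = 7, c = 6 are ours, matching the example above), not code from any source.

```python
from math import gcd

def sl2z_from_coprime(a: int, c: int):
    """Solve a*d - b*c = 1 for integers b, d via the extended Euclidean
    algorithm and return the matrix [[a, b], [c, d]] in SL(2, Z)."""
    if gcd(a, c) != 1:
        raise ValueError("a and c must be coprime")
    old_r, r = a, c          # remainders
    old_x, x = 1, 0          # running coefficients of a
    old_y, y = 0, 1          # running coefficients of c
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    d, b = old_x, -old_y     # a*d - b*c = a*old_x + c*old_y = 1
    return [[a, b], [c, d]]

(a, b), (c, d) = sl2z_from_coprime(7, 6)
assert a * d - b * c == 1    # [[7, 1], [6, 1]]: 7*1 - 1*6 = 1
```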
Number-theoretic properties
The unit determinant of
(a b; c d)
implies that the fractions a/b, a/c, c/d, b/d are all irreducible, that is, having no common factors (provided the denominators are non-zero, of course).
https://en.wikipedia.org/wiki/Domain%20theory | Domain theory is a branch of mathematics that studies special kinds of partially ordered sets (posets) commonly called domains. Consequently, domain theory can be considered as a branch of order theory. The field has major applications in computer science, where it is used to specify denotational semantics, especially for functional programming languages. Domain theory formalizes the intuitive ideas of approximation and convergence in a very general way and is closely related to topology.
Motivation and intuition
The primary motivation for the study of domains, which was initiated by Dana Scott in the late 1960s, was the search for a denotational semantics of the lambda calculus. In this formalism, one considers "functions" specified by certain terms in the language. In a purely syntactic way, one can go from simple functions to functions that take other functions as their input arguments. Using again just the syntactic transformations available in this formalism, one can obtain so-called fixed-point combinators (the best-known of which is the Y combinator); these, by definition, have the property that f(Y(f)) = Y(f) for all functions f.
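The fixed-point property can be demonstrated directly in a modern language. The sketch below is ours, not from any source; it uses the Z combinator, the call-by-value variant of the Y combinator mentioned above, since in an eagerly evaluated language the pure Y combinator would recurse forever.

```python
# Z combinator: a call-by-value fixed-point combinator. For a functional f,
# Z(f) satisfies f(Z(f)) = Z(f) on every argument where both sides terminate.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# A non-recursive "step" functional whose fixed point is the factorial.
fact_step = lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1)

fact = Z(fact_step)
print(fact(5))   # 120
```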
To formulate such a denotational semantics, one might first try to construct a model for the lambda calculus, in which a genuine (total) function is associated with each lambda term. Such a model would formalize a link between the lambda calculus as a purely syntactic system and the lambda calculus as a notational system for manipulating concrete mathematical functions. The combinator calculus is such a model. However, the elements of the combinator calculus are functions from functions to functions; in order for the elements of a model of the lambda calculus to be of arbitrary domain and range, they could not be true functions, only partial functions.
Scott got around this difficulty by formalizing a notion of "partial" or "incomplete" information to represent computations that have not yet returned a result. This was modeled b |
https://en.wikipedia.org/wiki/Mezzanine | A mezzanine (; or in Italian, a mezzanino) is an intermediate floor in a building which is partly open to the double-height ceilinged floor below, or which does not extend over the whole floorspace of the building, a loft with non-sloped walls. However, the term is often used loosely for the floor above the ground floor, especially where a very high-ceilinged original ground floor has been split horizontally into two floors.
Mezzanines may serve a wide variety of functions. Industrial mezzanines, such as those used in warehouses, may be temporary or semi-permanent structures.
In Royal Italian architecture, mezzanino also means a chamber created by partitioning that does not go up all the way to the arch vaulting or ceiling; these were historically common in Italy and France, for example in the palaces for the nobility at the Quirinal Palace.
Definition
A mezzanine is an intermediate floor (or floors) in a building which is open to the floor below. It is placed halfway (mezzo means 'half' in Italian) up the wall on a floor which has a ceiling at least twice as high as a floor with minimum height. A mezzanine does not count as one of the floors in a building, and generally does not count in determining maximum floorspace. The International Building Code permits a mezzanine to have as much as one-third of the floor space of the floor below. Local building codes may vary somewhat from this standard. A space may have more than one mezzanine, as long as the sum total of floor space of all the mezzanines is not greater than one-third the floor space of the complete floor below.
Mezzanines help to make a high-ceilinged space feel more personal and less vast, and can create additional floor space. Mezzanines, however, may have lower-than-normal ceilings due to their location. The term "mezzanine" does not imply any particular function; mezzanines can be used for a wide array of purposes.
Mezzanines are commonly used in modern architecture, which places a heavy emphasis |
https://en.wikipedia.org/wiki/Figurate%20number | The term figurate number is used by different writers for members of different sets of numbers, generalizing from triangular numbers to different shapes (polygonal numbers) and different dimensions (polyhedral numbers). The term can mean
polygonal number
a number represented as a discrete r-dimensional regular geometric pattern of r-dimensional balls, such as a polygonal number (for r = 2) or a polyhedral number (for r = 3).
a member of the subset of the sets above containing only triangular numbers, pyramidal numbers, and their analogs in other dimensions.
Terminology
Some kinds of figurate number were discussed in the 16th and 17th centuries under the name "figural number".
In historical works about Greek mathematics the preferred term used to be figured number.
In a use going back to Jacob Bernoulli's Ars Conjectandi, the term figurate number is used for triangular numbers made up of successive integers, tetrahedral numbers made up of successive triangular numbers, etc. These turn out to be the binomial coefficients. In this usage the square numbers (4, 9, 16, 25, ...) would not be considered figurate numbers when viewed as arranged in a square.
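In Bernoulli's sense these numbers are simply binomial coefficients, which makes them easy to generate. A minimal sketch (ours; the function name is illustrative):

```python
from math import comb

def figurate(r: int, n: int) -> int:
    """n-th figurate number of dimension r in Bernoulli's sense:
    r = 2 gives triangular numbers, r = 3 tetrahedral numbers, and so on.
    These are the binomial coefficients C(n + r - 1, r)."""
    return comb(n + r - 1, r)

print([figurate(2, n) for n in range(1, 7)])  # [1, 3, 6, 10, 15, 21]
print([figurate(3, n) for n in range(1, 7)])  # [1, 4, 10, 20, 35, 56]
```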
A number of other sources use the term figurate number as synonymous for the polygonal numbers, either just the usual kind or both those and the centered polygonal numbers.
History
The mathematical study of figurate numbers is said to have originated with Pythagoras, possibly based on Babylonian or Egyptian precursors. Generating whichever class of figurate numbers the Pythagoreans studied using gnomons is also attributed to Pythagoras. Unfortunately, there is no trustworthy source for these claims, because all surviving writings about the Pythagoreans are from centuries later. Speusippus is the earliest source to expose the view that ten, as the fourth triangular number, was in fact the tetractys, supposed to be of great importance for Pythagoreanism. Figurate numbers were a concern of the Pythagorean worldview. It was w |
https://en.wikipedia.org/wiki/Servomechanism | In mechanical engineering and control engineering, a servomechanism (also called a servo (to be differentiated from a servomotor, which may also be called "servo") or a servo system) is a control system for the position and its time derivatives (for example, velocity) of a mechanical system, using closed-loop control to reduce steady-state error and improve dynamic response. In closed-loop control, error-sensing negative feedback is used to correct the action of the mechanism. In displacement-controlled applications, it usually includes a built-in encoder or other position feedback mechanism to ensure the output is achieving the desired effect. Following a specified motion trajectory is called servoing, where "servo" is used as a verb. The servo prefix originates from the Latin word servus, meaning slave.
The term correctly applies only to systems where the feedback or error-correction signals help control mechanical position, speed, attitude or any other measurable variables. For example, an automotive power window control is not a servomechanism, as there is no automatic feedback that controls position—the operator does this by observation. By contrast a car's cruise control uses closed-loop feedback, which classifies it as a servomechanism.
Applications
Position control
A common type of servo provides position control. Commonly, servos are electric, hydraulic, or pneumatic. They operate on the principle of negative feedback, where the control input is compared to the actual position of the mechanical system as measured by some type of transducer at the output. Any difference between the actual and wanted values (an "error signal") is amplified (and converted) and used to drive the system in the direction necessary to reduce or eliminate the error. This procedure is one widely used application of control theory. Typical servos can give a rotary (angular) or linear output.
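The error-amplify-correct loop described above can be sketched in a few lines. This is a toy proportional controller of our own construction (the gain and time step are arbitrary), not a model of any particular servo:

```python
# Toy position servo: amplify the error signal and drive the output toward
# the wanted value, i.e. the negative-feedback principle described above.
def servo_step(position: float, setpoint: float, gain: float, dt: float) -> float:
    error = setpoint - position       # the "error signal"
    velocity = gain * error           # amplified error drives the actuator
    return position + velocity * dt   # move so as to reduce the error

position, setpoint = 0.0, 1.0
for _ in range(50):
    position = servo_step(position, setpoint, gain=4.0, dt=0.05)
print(round(position, 5))             # ~1.0: the error has been driven out
```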
Speed control
Speed control via a governor is another type of servomechanism. The steam |
https://en.wikipedia.org/wiki/Life%20on%20Earth%20%28TV%20series%29 | Life on Earth: A Natural History by David Attenborough is a British television natural history series made by the BBC in association with Warner Bros. Television and Reiner Moritz Productions. It was transmitted in the UK from 16 January 1979.
During the course of the series presenter David Attenborough, following the format established by Kenneth Clark's Civilisation and Jacob Bronowski's The Ascent of Man (both series that he commissioned as controller of BBC2), travels the globe in order to trace the story of the evolution of life on the planet. Like the earlier series, it was divided into 13 programmes (each of around 55 minutes' duration). The executive producer was Christopher Parsons and the music was composed by Edward Williams.
Highly acclaimed, it is the first in Attenborough's Life series of programmes and was followed by The Living Planet (1984). It established Attenborough as not only the foremost television naturalist, but also an iconic figure in British cultural life.
Filming techniques
Several special filming techniques were devised to obtain some of the footage of rare and elusive animals. One cameraman spent hundreds of hours waiting for the fleeting moment when a Darwin's frog, which incubates its young in its mouth, finally spat them out. Another built a replica of a mole rat burrow in a horizontally mounted wheel, so that as the mole rat ran along the tunnel, the wheel could be spun to keep the animal adjacent to the camera. To illustrate the motion of bats' wings in flight, a slow-motion sequence was filmed in a wind tunnel. The series was also the first to include footage of a live (although dying) coelacanth.
The cameramen took advantage of improved film stock to produce some of the sharpest and most colourful wildlife footage that had been seen to date.
The programmes also pioneered a style of presentation whereby David Attenborough would begin describing a certain species' behaviour in one location, before cutting to another t |
https://en.wikipedia.org/wiki/Successor%20function | In mathematics, the successor function or successor operation sends a natural number to the next one. The successor function is denoted by S, so S(n) = n + 1. For example, S(1) = 2 and S(2) = 3. The successor function is one of the basic components used to build a primitive recursive function.
Successor operations are also known as zeration in the context of a zeroth hyperoperation: H0(a, b) = 1 + b. In this context, the extension of zeration is addition, which is defined as repeated succession.
Overview
The successor function is part of the formal language used to state the Peano axioms, which formalise the structure of the natural numbers. In this formalisation, the successor function is a primitive operation on the natural numbers, in terms of which the standard natural numbers and addition are defined. For example, 1 is defined to be S(0), and addition on natural numbers is defined recursively by:
m + 0 = m,
m + S(n) = S(m + n).
This can be used to compute the addition of any two natural numbers. For example, 5 + 2 = 5 + S(1) = S(5 + 1) = S(5 + S(0)) = S(S(5 + 0)) = S(S(5)) = S(6) = 7.
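The recursion is short enough to transcribe directly. A minimal sketch (ours), with natural numbers represented as Python integers:

```python
def S(n: int) -> int:
    """Successor: S(n) = n + 1."""
    return n + 1

def add(m: int, n: int) -> int:
    """Peano addition: m + 0 = m and m + S(n) = S(m + n)."""
    return m if n == 0 else S(add(m, n - 1))

assert add(5, 2) == 7   # unfolds exactly as the worked example above
```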
Several constructions of the natural numbers within set theory have been proposed. For example, John von Neumann constructs the number 0 as the empty set {}, and the successor of n, S(n), as the set n ∪ {n}. The axiom of infinity then guarantees the existence of a set that contains 0 and is closed with respect to S. The smallest such set is denoted by N, and its members are called natural numbers.
The successor function is the level-0 foundation of the infinite Grzegorczyk hierarchy of hyperoperations, used to build addition, multiplication, exponentiation, tetration, etc. It was studied in 1986 in an investigation involving generalization of the pattern for hyperoperations.
It is also one of the primitive functions used in the characterization of computability by recursive functions.
See also
Successor ordinal
Successor cardinal
Increment and decrement |
https://en.wikipedia.org/wiki/Hopf%20algebra | In mathematics, a Hopf algebra, named after Heinz Hopf, is a structure that is simultaneously an (unital associative) algebra and a (counital coassociative) coalgebra, with these structures' compatibility making it a bialgebra, and that moreover is equipped with an antiautomorphism satisfying a certain property. The representation theory of a Hopf algebra is particularly nice, since the existence of compatible comultiplication, counit, and antipode allows for the construction of tensor products of representations, trivial representations, and dual representations.
Hopf algebras occur naturally in algebraic topology, where they originated and are related to the H-space concept, in group scheme theory, in group theory (via the concept of a group ring), and in numerous other places, making them probably the most familiar type of bialgebra. Hopf algebras are also studied in their own right, with much work on specific classes of examples on the one hand and classification problems on the other. They have diverse applications ranging from condensed-matter physics and quantum field theory to string theory and LHC phenomenology.
Formal definition
Formally, a Hopf algebra is an (associative and coassociative) bialgebra H over a field K together with a K-linear map S: H → H (called the antipode) such that the following diagram commutes, i.e. ∇ ∘ (S ⊗ id) ∘ Δ = η ∘ ε = ∇ ∘ (id ⊗ S) ∘ Δ:
Here Δ is the comultiplication of the bialgebra, ∇ its multiplication, η its unit and ε its counit. In the sumless Sweedler notation, this property can also be expressed as
S(c(1)) c(2) = ε(c) 1 = c(1) S(c(2)) for all c in H.
As for algebras, one can replace the underlying field K with a commutative ring R in the above definition.
The definition of Hopf algebra is self-dual (as reflected in the symmetry of the above diagram), so if one can define a dual of H (which is always possible if H is finite-dimensional), then it is automatically a Hopf algebra.
Structure constants
Fixing a basis for the underlying vector space, one may define the algebra in terms of structure constants for multip |
https://en.wikipedia.org/wiki/Social%20network%20analysis | Social network analysis (SNA) is the process of investigating social structures through the use of networks and graph theory. It characterizes networked structures in terms of nodes (individual actors, people, or things within the network) and the ties, edges, or links (relationships or interactions) that connect them. Examples of social structures commonly visualized through social network analysis include social media networks, meme spread, information circulation, friendship and acquaintance networks, peer learner networks, business networks, knowledge networks, difficult working relationships, collaboration graphs, kinship, disease transmission, and sexual relationships. These networks are often visualized through sociograms in which nodes are represented as points and ties are represented as lines. These visualizations provide a means of qualitatively assessing networks by varying the visual representation of their nodes and edges to reflect attributes of interest.
Social network analysis has emerged as a key technique in modern sociology. It has also gained significant popularity in the following fields: anthropology, biology, demography, communication studies, economics, geography, history, information science, organizational studies, political science, public health, social psychology, development studies, sociolinguistics, and computer science, education and distance education research, and is now commonly available as a consumer tool (see the list of SNA software).
The advantages of SNA are twofold. Firstly, it can process a large amount of relational data and describe the overall relational network structure. Secondly, through system and parameter selection it can identify the influential nodes in the network, using measures such as in-degree and out-degree centrality; researchers can consider the SNA context and choose which parameters define the "center" according to the characteristics of the network. Through analyzing nodes, clusters and relations, the communication structure and position of individuals can be clearly
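In-degree and out-degree centrality, mentioned above, are the simplest such measures. A self-contained sketch (ours; the names and ties are invented for illustration):

```python
from collections import defaultdict

# Directed ties as (source, target) pairs; a toy network for illustration.
edges = [("ann", "bob"), ("ann", "carol"), ("bob", "carol"), ("carol", "ann")]

in_degree, out_degree = defaultdict(int), defaultdict(int)
for src, dst in edges:
    out_degree[src] += 1   # ties the actor sends
    in_degree[dst] += 1    # ties the actor receives

# One possible "center": the node receiving the most ties.
print(max(in_degree, key=in_degree.get))   # carol
```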
https://en.wikipedia.org/wiki/Phenomenalism | In metaphysics, phenomenalism is the view that physical objects cannot justifiably be said to exist in themselves, but only as perceptual phenomena or sensory stimuli (e.g. redness, hardness, softness, sweetness, etc.) situated in time and in space. In particular, some forms of phenomenalism reduce all talk about physical objects in the external world to talk about bundles of sense data.
History
Phenomenalism is a radical form of empiricism. Its roots as an ontological view of the nature of existence can be traced back to George Berkeley and his subjective idealism, upon which David Hume further elaborated. John Stuart Mill had a theory of perception which is commonly referred to as classical phenomenalism. This differs from Berkeley's idealism in its account of how objects continue to exist when no one is perceiving them. Berkeley claimed that an omniscient God perceived all objects and that this was what kept them in existence, whereas Mill claimed that permanent possibilities of experience were sufficient for an object's existence. These permanent possibilities could be analysed into counterfactual conditionals, such as "if I were to have y-type sensations, then I would also have x-type sensations".
As an epistemological theory about the possibility of knowledge of objects in the external world, however, the most accessible formulation of phenomenalism is perhaps to be found in the transcendental idealism of Immanuel Kant. According to Kant, space and time, which are the a priori forms and preconditions of all sensory experience, "refer to objects only to the extent that these are considered as phenomena, but do not represent the things in themselves". While Kant insisted that knowledge is limited to phenomena, he never denied or excluded the existence of objects which were not knowable by way of experience, the things-in-themselves or noumena, though he never proved them.
Kant's "epistemological phenomenalism", as it has been called, is therefore quite distin |
https://en.wikipedia.org/wiki/Architectural%20state | Architectural state is the collection of information in a computer system that defines the state of a program during execution. Architectural state includes main memory, architectural registers, and the program counter. Architectural state is defined by the instruction set architecture and can be manipulated by the programmer using instructions. A core dump is a file recording the architectural state of a computer program at some point in time, such as when it has crashed.
Examples of architectural state include:
Main Memory (Primary storage)
Control registers
Instruction flag registers (such as EFLAGS in x86)
Interrupt mask registers
Memory management unit registers
Status registers
General purpose registers (such as AX, BX, CX, DX, etc. in x86)
Address registers
Counter registers
Index registers
Stack registers
String registers
Architectural state is not microarchitectural state. Microarchitectural state is hidden machine state used for implementing the microarchitecture. Examples of microarchitectural state include pipeline registers, cache tags, and branch predictor state. While microarchitectural state can change to suit the needs of each processor implementation in a processor family, binary compatibility among processors in a processor family requires a common architectural state.
Architectural state naturally does not include state-less elements of a computer such as busses and computation units (e.g., the ALU).
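As a rough illustration of the distinction, the sketch below models only architectural state, exactly the fields a core dump would capture. The register names and sizes are invented for the example, not taken from any real ISA; note the deliberate absence of microarchitectural state such as pipeline registers or cache tags.

```python
from dataclasses import dataclass, field

@dataclass
class ArchState:
    """Architectural state of a toy machine: what the ISA exposes."""
    pc: int = 0                                            # program counter
    regs: dict = field(default_factory=lambda: {"r0": 0, "r1": 0})
    flags: int = 0                                         # status register
    memory: bytearray = field(default_factory=lambda: bytearray(64))

    def core_dump(self) -> dict:
        # A core dump records the architectural state at one point in time.
        return {"pc": self.pc, "regs": dict(self.regs),
                "flags": self.flags, "memory": bytes(self.memory)}

s = ArchState()
s.regs["r0"] = 42
print(s.core_dump()["regs"])   # {'r0': 42, 'r1': 0}
```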
References
Central processing unit |
https://en.wikipedia.org/wiki/Glossary%20of%20graph%20theory | This is a glossary of graph theory. Graph theory is the study of graphs, systems of nodes or vertices connected in pairs by lines or edges.
Symbols
A
B
C
D
E
F
G
H
I
K
L
M
N
O
P
Q
R
S
T
U
V
W
See also
List of graph theory topics
Gallery of named graphs
Graph algorithms
Glossary of areas of mathematics
References
Graph theory
Glossaries of mathematics
Wikipedia glossaries using description lists
https://en.wikipedia.org/wiki/Graph%20%28discrete%20mathematics%29 | In discrete mathematics, and more specifically in graph theory, a graph is a structure amounting to a set of objects in which some pairs of the objects are in some sense "related". The objects correspond to mathematical abstractions called vertices (also called nodes or points) and each of the related pairs of vertices is called an edge (also called link or line). Typically, a graph is depicted in diagrammatic form as a set of dots or circles for the vertices, joined by lines or curves for the edges. Graphs are one of the objects of study in discrete mathematics.
The edges may be directed or undirected. For example, if the vertices represent people at a party, and there is an edge between two people if they shake hands, then this graph is undirected because any person A can shake hands with a person B only if B also shakes hands with A. In contrast, if an edge from a person A to a person B means that A owes money to B, then this graph is directed, because owing money is not necessarily reciprocated.
Graphs are the basic subject studied by graph theory. The word "graph" was first used in this sense by J. J. Sylvester in 1878 due to a direct relation between mathematics and chemical structure (what he called a chemico-graphical image).
Definitions
Definitions in graph theory vary. The following are some of the more basic ways of defining graphs and related mathematical structures.
Graph
A graph (sometimes called an undirected graph to distinguish it from a directed graph, or a simple graph to distinguish it from a multigraph) is a pair G = (V, E), where V is a set whose elements are called vertices (singular: vertex), and E is a set of paired vertices, whose elements are called edges (sometimes links or lines).
The vertices x and y of an edge {x, y} are called the endpoints of the edge. The edge is said to join x and y and to be incident on x and y. A vertex may belong to no edge, in which case it is not joined to any other vertex.
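The definition translates directly into code. A minimal sketch (ours) representing a graph as a pair of sets, with each edge a 2-element frozenset:

```python
# G = (V, E): a vertex set and a set of unordered vertex pairs.
V = {1, 2, 3, 4}
E = {frozenset({1, 2}), frozenset({2, 3}), frozenset({1, 3})}

def incident_edges(v, E):
    """Edges incident on vertex v."""
    return [set(e) for e in E if v in e]

print(incident_edges(2, E))   # the two edges joining 2 with 1 and 3
print(incident_edges(4, E))   # []: vertex 4 belongs to no edge
```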
A multigraph is a generalization that allows multip |
https://en.wikipedia.org/wiki/List%20of%20assets%20owned%20by%20General%20Electric | List of assets owned by General Electric:
Primary business units
GE Aerospace
GE Power
GE Renewable Energy
Other business units
GE Additive
GE Capital
GE Energy Financial Services
GE Digital
GE Research
GE Licensing
See also
Lists of corporate assets
References
Sources
http://www.ge.com
General Electric
Assets |
https://en.wikipedia.org/wiki/Pollen%20tube | A pollen tube is a tubular structure produced by the male gametophyte of seed plants when it germinates. Pollen tube elongation is an integral stage in the plant life cycle. The pollen tube acts as a conduit to transport the male gamete cells from the pollen grain—either from the stigma (in flowering plants) to the ovules at the base of the pistil or directly through ovule tissue in some gymnosperms. In maize, this single cell can grow long enough to traverse the length of the pistil.
Pollen tubes were first discovered by Giovanni Battista Amici in the 19th century.
They are used as a model for understanding plant cell behavior. Research is ongoing to comprehend how the pollen tube responds to extracellular guidance signals to achieve fertilization.
Description
Pollen tubes are produced by the male gametophytes of seed plants. Pollen tubes act as conduits to transport the male gamete cells from the pollen grain—either from the stigma (in flowering plants) to the ovules at the base of the pistil or directly through ovule tissue in some gymnosperms. Pollen tubes are unique to seed plants and their structures have evolved over their history since the Carboniferous period. Pollen tube formation is complex and the mechanism is not fully understood, but is of great interest to scientists because pollen tubes transport the male gametes produced by pollen grains to the female gametophyte. Once a pollen grain has implanted on a compatible stigma, its germination is initiated. During this process, the pollen grain begins to bulge outwards to form a tube-like structure, known as the pollen tube. The pollen tube structure rapidly descends down the length of the style via tip-directed growth, reaching rates of 1 cm/h, whilst carrying two non-motile sperm cells. Upon reaching the ovule the pollen tube ruptures, thereby delivering the sperm cells to the female gametophyte. In flowering plants a double fertilization event occurs. The first fertilization event produces a diplo |
https://en.wikipedia.org/wiki/Windows%207 | Windows 7 is a major release of the Windows NT operating system developed by Microsoft. It was released to manufacturing on July 22, 2009, and became generally available on October 22, 2009. It is the successor to Windows Vista, released nearly three years earlier. Windows 7's server counterpart, Windows Server 2008 R2, was released at the same time. Windows 7 remained an operating system for use on personal computers, including home and business desktops, laptops, tablet PCs and media center PCs, until it was succeeded in October 2012 by Windows 8, just over three years after its release.
Extended support ended on January 14, 2020, over ten years after the release of Windows 7, after which the operating system ceased receiving further updates. A paid support program was available for enterprises, providing security updates for Windows 7 for up to three years after the official end of life.
Windows 7 was intended to be an incremental upgrade to Microsoft Windows, addressing Windows Vista's poor critical reception while maintaining hardware and software compatibility. Windows 7 continued improvements on the Windows Aero user interface with the addition of a redesigned taskbar that allows pinned applications, and new window management features. Other new features were added to the operating system, including libraries, the new file-sharing system HomeGroup, and support for multitouch input. A new "Action Center" was also added to provide an overview of system security and maintenance information, and tweaks were made to the User Account Control system to make it less intrusive. Windows 7 also shipped with updated versions of several stock applications, including Internet Explorer 8, Windows Media Player, and Windows Media Center.
Unlike Windows Vista, Windows 7 received critical acclaim, with critics considering the operating system to be a major improvement over its predecessor because of its improved performance, its more intuitive interface, few |
https://en.wikipedia.org/wiki/Isoperimetric%20inequality | In mathematics, the isoperimetric inequality is a geometric inequality involving the perimeter of a set and its volume. In n-dimensional space R^n the inequality lower bounds the surface area or perimeter per(S) of a set S ⊂ R^n by its volume vol(S),
per(S) ≥ n · vol(S)^((n−1)/n) · vol(B1)^(1/n),
where B1 ⊂ R^n is a unit sphere. The equality holds only when S is a sphere in R^n.
On a plane, i.e. when n = 2, the isoperimetric inequality relates the square of the circumference of a closed curve and the area of a plane region it encloses. Isoperimetric literally means "having the same perimeter". Specifically in R^2, the isoperimetric inequality states, for the length L of a closed curve and the area A of the planar region that it encloses, that
4πA ≤ L^2,
and that equality holds if and only if the curve is a circle.
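A quick numeric sanity check of the planar inequality (our sketch): the circle attains equality, while any other shape, here a square, leaves slack.

```python
from math import pi

# Circle of radius 1: L = 2*pi, A = pi, so L^2 - 4*pi*A = 0 (equality).
print((2 * pi) ** 2 - 4 * pi * pi)   # ~0.0

# Unit square: L = 4, A = 1, so L^2 - 4*pi*A = 16 - 4*pi > 0 (strict).
print(4 ** 2 - 4 * pi * 1)           # ~3.43
```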
The isoperimetric problem is to determine a plane figure of the largest possible area whose boundary has a specified length. The closely related Dido's problem asks for a region of the maximal area bounded by a straight line and a curvilinear arc whose endpoints belong to that line. It is named after Dido, the legendary founder and first queen of Carthage. The solution to the isoperimetric problem is given by a circle and was known already in Ancient Greece. However, the first mathematically rigorous proof of this fact was obtained only in the 19th century. Since then, many other proofs have been found.
The isoperimetric problem has been extended in multiple ways, for example, to curves on surfaces and to regions in higher-dimensional spaces. Perhaps the most familiar physical manifestation of the 3-dimensional isoperimetric inequality is the shape of a drop of water. Namely, a drop will typically assume a symmetric round shape. Since the amount of water in a drop is fixed, surface tension forces the drop into a shape which minimizes the surface area of the drop, namely a round sphere.
The isoperimetric problem in the plane
The classical isoperimetric problem dates back to antiquity. The problem can be stated as follows: Among all closed curves |
https://en.wikipedia.org/wiki/Power%20center%20%28geometry%29 | In geometry, the power center of three circles, also called the radical center, is the intersection point of the three radical axes of the pairs of circles. If the radical center lies outside of all three circles, then it is the center of the unique circle (the radical circle) that intersects the three given circles orthogonally; the construction of this orthogonal circle corresponds to Monge's problem. This is a special case of the three conics theorem.
The three radical axes meet in a single point, the radical center, for the following reason. The radical axis of a pair of circles is defined as the set of points that have equal power with respect to both circles. For example, for every point on the radical axis of circles 1 and 2, the powers to each circle are equal: . Similarly, for every point on the radical axis of circles 2 and 3, the powers must be equal, . Therefore, at the intersection point of these two lines, all three powers must be equal, . Since this implies that , this point must also lie on the radical axis of circles 1 and 3. Hence, all three radical axes pass through the same point, the radical center.
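Because each power equality is linear in the coordinates of the point, the radical center can be computed by solving a 2 × 2 linear system. A self-contained sketch (ours; the circle data are illustrative):

```python
def radical_center(c1, r1, c2, r2, c3, r3):
    """Intersection of the radical axes of three circles given as
    (center, radius) pairs. Equating powers h_a(p) = h_b(p) cancels the
    quadratic terms and leaves a line A*x + B*y = C for each pair."""
    def axis(ca, ra, cb, rb):
        A = 2 * (cb[0] - ca[0])
        B = 2 * (cb[1] - ca[1])
        C = (cb[0] ** 2 + cb[1] ** 2 - rb ** 2) - (ca[0] ** 2 + ca[1] ** 2 - ra ** 2)
        return A, B, C
    a1, b1, k1 = axis(c1, r1, c2, r2)
    a2, b2, k2 = axis(c2, r2, c3, r3)
    det = a1 * b2 - a2 * b1          # zero iff the three centers are collinear
    return (k1 * b2 - k2 * b1) / det, (a1 * k2 - a2 * k1) / det

print(radical_center((0, 0), 1, (4, 0), 1, (0, 4), 1))   # (2.0, 2.0)
```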
The radical center has several applications in geometry. It has an important role in a solution to Apollonius' problem published by Joseph Diaz Gergonne in 1814. In the power diagram of a system of circles, all of the vertices of the diagram are located at radical centers of triples of circles. The Spieker center of a triangle is the radical center of its excircles. Several types of radical circles have been defined as well, such as the radical circle of the Lucas circles.
Notes
Further reading
External links
Radical Center at Cut-the-Knot
Radical Axis and Center at Cut-the-Knot
Elementary geometry
Geometric centers |
https://en.wikipedia.org/wiki/Reverse%20mathematics | Reverse mathematics is a program in mathematical logic that seeks to determine which axioms are required to prove theorems of mathematics. Its defining method can briefly be described as "going backwards from the theorems to the axioms", in contrast to the ordinary mathematical practice of deriving theorems from axioms. It can be conceptualized as sculpting out necessary conditions from sufficient ones.
The reverse mathematics program was foreshadowed by results in set theory such as the classical theorem that the axiom of choice and Zorn's lemma are equivalent over ZF set theory. The goal of reverse mathematics, however, is to study possible axioms of ordinary theorems of mathematics rather than possible axioms for set theory.
Reverse mathematics is usually carried out using subsystems of second-order arithmetic, where many of its definitions and methods are inspired by previous work in constructive analysis and proof theory. The use of second-order arithmetic also allows many techniques from recursion theory to be employed; many results in reverse mathematics have corresponding results in computable analysis. In higher-order reverse mathematics, the focus is on subsystems of higher-order arithmetic, and the associated richer language.
The program was founded by Harvey Friedman and brought forward by Steve Simpson.
General principles
In reverse mathematics, one starts with a framework language and a base theory—a core axiom system—that is too weak to prove most of the theorems one might be interested in, but still powerful enough to develop the definitions necessary to state these theorems. For example, to study the theorem “Every bounded sequence of real numbers has a supremum” it is necessary to use a base system that can speak of real numbers and sequences of real numbers.
For each theorem tha |
https://en.wikipedia.org/wiki/Lagrange%27s%20four-square%20theorem | Lagrange's four-square theorem, also known as Bachet's conjecture, states that every natural number can be represented as a sum of four non-negative integer squares. That is, the squares form an additive basis of order four.
n = a0^2 + a1^2 + a2^2 + a3^2,
where the four numbers a0, a1, a2, a3 are integers. For illustration, 3, 31, and 310, in several ways, can be represented as the sum of four squares as follows:
3 = 1^2 + 1^2 + 1^2 + 0^2
31 = 5^2 + 2^2 + 1^2 + 1^2
310 = 17^2 + 4^2 + 2^2 + 1^2 = 15^2 + 9^2 + 2^2 + 0^2
This theorem was proven by Joseph Louis Lagrange in 1770. It is a special case of the Fermat polygonal number theorem.
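Representations like those above can be found by brute force. A short sketch (ours; an exhaustive search, fine for small n):

```python
from itertools import combinations_with_replacement
from math import isqrt

def four_squares(n: int):
    """Return one (a, b, c, d) with a^2 + b^2 + c^2 + d^2 = n; Lagrange's
    theorem guarantees such a representation exists for every natural n."""
    for a, b, c, d in combinations_with_replacement(range(isqrt(n) + 1), 4):
        if a * a + b * b + c * c + d * d == n:
            return a, b, c, d

print(four_squares(31))    # (1, 1, 2, 5): 1 + 1 + 4 + 25 = 31
print(four_squares(310))   # (0, 2, 9, 15): 0 + 4 + 81 + 225 = 310
```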
Historical development
From examples given in the Arithmetica, it is clear that Diophantus was aware of the theorem. This book was translated in 1621 into Latin by Bachet (Claude Gaspard Bachet de Méziriac), who stated the theorem in the notes of his translation. But the theorem was not proved until 1770 by Lagrange.
Adrien-Marie Legendre extended the theorem in 1797–8 with his three-square theorem, by proving that a positive integer can be expressed as the sum of three squares if and only if it is not of the form 4^k(8m + 7) for integers k and m. Later, in 1834, Carl Gustav Jakob Jacobi discovered a simple formula for the number of representations of an integer as the sum of four squares with his own four-square theorem.
The formula is also linked to Descartes' theorem of four "kissing circles", which involves the sum of the squares of the curvatures of four circles. This is also linked to Apollonian gaskets, which were more recently related to the Ramanujan–Petersson conjecture.
Proofs
The classical proof
Several very similar modern versions of Lagrange's proof exist. The proof below is a slightly simplified version, in which the cases for which m is even or odd do not require separate arguments.
Proof using the Hurwitz integers
Another way to prove the theorem relies on Hurwitz quaternions, which are the analog of integers for quaternions.
Generalizations
Lagrange's four-square theorem is a special case of the Fermat polygonal number theorem and Waring's problem. Another possible gen |
https://en.wikipedia.org/wiki/Finitely%20generated%20module | In mathematics, a finitely generated module is a module that has a finite generating set. A finitely generated module over a ring R may also be called a finite R-module, finite over R, or a module of finite type.
Related concepts include finitely cogenerated modules, finitely presented modules, finitely related modules and coherent modules all of which are defined below. Over a Noetherian ring the concepts of finitely generated, finitely presented and coherent modules coincide.
A finitely generated module over a field is simply a finite-dimensional vector space, and a finitely generated module over the integers is simply a finitely generated abelian group.
Definition
The left R-module M is finitely generated if there exist a1, a2, ..., an in M such that for any x in M, there exist r1, r2, ..., rn in R with x = r1a1 + r2a2 + ... + rnan.
The set {a1, a2, ..., an} is referred to as a generating set of M in this case. A finite generating set need not be a basis, since it need not be linearly independent over R. What is true is: M is finitely generated if and only if there is a surjective R-linear map
R^n → M
for some n (M is a quotient of a free module of finite rank).
If a set S generates a module that is finitely generated, then there is a finite generating set that is included in S, since only finitely many elements in S are needed to express any finite generating set, and these finitely many elements form a generating set. However, it may occur that S does not contain any finite generating set of minimal cardinality. For example, the set of the prime numbers is a generating set of Z viewed as a Z-module, and a generating set formed from prime numbers has at least two elements, while the singleton {1} is also a generating set.
In the case where the module M is a vector space over a field R, and the generating set is linearly independent, n is well-defined and is referred to as the dimension of M (well-defined means that any linearly independent generating set has n elements |
https://en.wikipedia.org/wiki/Free%20module | In mathematics, a free module is a module that has a basis, that is, a generating set consisting of linearly independent elements. Every vector space is a free module, but, if the ring of the coefficients is not a division ring (not a field in the commutative case), then there exist non-free modules.
Given any set S and ring R, there is a free R-module with basis S, which is called the free module on S or module of formal R-linear combinations of the elements of S.
A free abelian group is precisely a free module over the ring of integers.
Definition
For a ring R and an R-module M, the set E ⊆ M is a basis for M if:
E is a generating set for M; that is to say, every element of M is a finite sum of elements of E multiplied by coefficients in R; and
E is linearly independent: for every {e1, ..., en} of distinct elements of E, r1 e1 + ... + rn en = 0_M implies that r1 = ... = rn = 0_R (where 0_M is the zero element of M and 0_R is the zero element of R).
A free module is a module with a basis.
An immediate consequence of the second half of the definition is that the coefficients in the first half are unique for each element of M.
If R has invariant basis number, then by definition any two bases have the same cardinality. For example, nonzero commutative rings have invariant basis number. The cardinality of any (and therefore every) basis is called the rank of the free module M. If this cardinality is finite, the free module is said to be free of finite rank, or free of rank n if the rank is known to be n.
Examples
Let R be a ring.
R is a free module of rank one over itself (either as a left or right module); any unit element is a basis.
More generally, if R is commutative, a nonzero ideal I of R is free if and only if it is a principal ideal generated by a nonzerodivisor, with a generator being a basis.
Over a principal ideal domain (e.g., ), a submodule of a free module is free.
If R is commutative, the polynomial ring R[X] in indeterminate X is a free module with a possible basis 1, X, X2, ....
Let be a polynomial ring over a commutative ring A, f a m |
https://en.wikipedia.org/wiki/Undersampling | In signal processing, undersampling or bandpass sampling is a technique where one samples a bandpass-filtered signal at a sample rate below its Nyquist rate (twice the upper cutoff frequency), but is still able to reconstruct the signal.
When one undersamples a bandpass signal, the samples are indistinguishable from the samples of a low-frequency alias of the high-frequency signal. Such sampling is also known as bandpass sampling, harmonic sampling, IF sampling, and direct IF-to-digital conversion.
Description
The Fourier transforms of real-valued functions are symmetrical around the 0 Hz axis. After sampling, only a periodic summation of the Fourier transform (called discrete-time Fourier transform) is still available. The individual frequency-shifted copies of the original transform are called aliases. The frequency offset between adjacent aliases is the sampling-rate, denoted by fs. When the aliases are mutually exclusive (spectrally), the original transform and the original continuous function, or a frequency-shifted version of it (if desired), can be recovered from the samples. The first and third graphs of Figure 1 depict a baseband spectrum before and after being sampled at a rate that completely separates the aliases.
The second graph of Figure 1 depicts the frequency profile of a bandpass function occupying the band (A, A+B) (shaded blue) and its mirror image (shaded beige). The condition for a non-destructive sample rate is that the aliases of both bands do not overlap when shifted by all integer multiples of fs. The fourth graph depicts the spectral result of sampling at the same rate as the baseband function. The rate was chosen by finding the lowest rate that is an integer sub-multiple of A and also satisfies the baseband Nyquist criterion: fs > 2B. Consequently, the bandpass function has effectively been converted to baseband. All the other rates that avoid overlap are given by these more general criteria, where A and A+B are replaced |
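The general criteria can be enumerated directly: for a band occupying (A, A+B), the standard bandpass-sampling result gives acceptable rate ranges 2(A+B)/n ≤ fs ≤ 2A/(n−1) for integers n from 1 up to floor((A+B)/B), with n = 1 recovering the ordinary Nyquist condition. This formula is supplied here as the standard criterion (the article's statement of it is truncated above), and the 20-25 MHz band in the sketch is illustrative only.

```python
from math import floor, inf

def undersampling_rates(A: float, B: float):
    """Acceptable sample-rate ranges for a signal band occupying (A, A+B),
    using the standard bandpass-sampling criteria:
    2*(A+B)/n <= fs <= 2*A/(n-1), for n = 1 .. floor((A+B)/B)."""
    f_h = A + B
    for n in range(1, floor(f_h / B) + 1):
        lo = 2 * f_h / n
        hi = 2 * A / (n - 1) if n > 1 else inf
        yield n, lo, hi

# Example: a 5 MHz wide band from 20 MHz to 25 MHz.
for n, lo, hi in undersampling_rates(20e6, 5e6):
    print(f"n={n}: {lo / 1e6:.2f} MHz <= fs <= {hi / 1e6:.2f} MHz")
```

For this band the lowest permissible rate is fs = 2B = 10 MHz, reached at n = 5 because A is an integer multiple of B.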
https://en.wikipedia.org/wiki/Mathematics%20education | In contemporary education, mathematics education—known in Europe as the didactics or pedagogy of mathematics—is the practice of teaching, learning, and carrying out scholarly research into the transfer of mathematical knowledge.
Although research into mathematics education is primarily concerned with the tools, methods, and approaches that facilitate practice or the study of practice, it also covers an extensive field of study encompassing a variety of different concepts, theories and methods. National and international organisations regularly hold conferences and publish literature in order to improve mathematics education.
History
Ancient
Elementary mathematics was a core part of education in many ancient civilisations, including ancient Egypt, ancient Babylonia, ancient Greece, ancient Rome, and Vedic India. In most cases, formal education was only available to male children with sufficiently high status, wealth, or caste. The oldest known mathematics textbook is the Rhind papyrus, dated from circa 1650 BCE.
Pythagorean theorem
Historians of Mesopotamia have confirmed that use of the Pythagorean rule dates back to the Old Babylonian Empire (20th–16th centuries BC) and that it was being taught in scribal schools over one thousand years before the birth of Pythagoras.
In Plato's division of the liberal arts into the trivium and the quadrivium, the quadrivium included the mathematical fields of arithmetic and geometry. This structure was continued in the structure of classical education that was developed in medieval Europe. The teaching of geometry was almost universally based on Euclid's Elements. Apprentices to trades such as masons, merchants, and moneylenders could expect to learn such practical mathematics as was relevant to their profession.
Medieval and early modern
In the Middle Ages, the academic status of mathematics declined, because it was strongly associated with trade and commerce, and considered somewhat un-Christian. Although it continued |
https://en.wikipedia.org/wiki/Irritation | Irritation, in biology and physiology, is a state of inflammation or painful reaction to allergy or cell-lining damage. A stimulus or agent which induces the state of irritation is an irritant. Irritants are typically thought of as chemical agents (for example phenol and capsaicin) but mechanical, thermal (heat), and radiative stimuli (for example ultraviolet light or ionising radiations) can also be irritants. Irritation also has non-clinical usages referring to bothersome physical or psychological pain or discomfort.
Irritation can also be induced by some allergic response due to exposure of some allergens for example contact dermatitis, irritation of mucosal membranes and pruritus. Mucosal membrane is the most common site of irritation because it contains secretory glands that release mucous which attracts the allergens due to its sticky nature.
Chronic irritation is a medical term signifying that afflictive health conditions have been present for a while. There are many disorders that can cause chronic irritation, the majority involve the skin, vagina, eyes and lungs.
Irritation in organisms
In higher organisms, an allergic response may be the cause of irritation. An allergen is defined distinctly from an irritant, however, as allergy requires a specific interaction with the immune system and is thus dependent on the (possibly unique) sensitivity of the organism involved while an irritant, classically, acts in a non-specific manner.
It is a form of stress, but conversely, if one is stressed by unrelated matters, mild imperfections can cause more irritation than usual: one is irritable; see also sensitivity (human).
In more basic organisms, the status of pain is the perception of the being stimulated, which is not observable although it may be shared (see gate control theory of pain).
It is not proven that oysters can feel pain, but it is known that they react to irritants. When an irritating object becomes trapped within an oyster's shell, it deposits laye |
https://en.wikipedia.org/wiki/YAML | YAML is a human-readable data serialization language. It is commonly used for configuration files and in applications where data is being stored or transmitted. YAML targets many of the same communications applications as Extensible Markup Language (XML) but has a minimal syntax which intentionally differs from Standard Generalized Markup Language (SGML). It uses both Python-style indentation to indicate nesting, and a more compact format that uses [...] for lists and {...} for maps, but forbids tab characters as indentation; thus only some JSON files are valid YAML 1.2.
Custom data types are allowed, but YAML natively encodes scalars (such as strings, integers, and floats), lists, and associative arrays (also known as maps, dictionaries or hashes). These data types are based on the Perl programming language, though all commonly used high-level programming languages share very similar concepts. The colon-centered syntax, used for expressing key-value pairs, is inspired by electronic mail headers as defined in , and the document separator is borrowed from MIME (). Escape sequences are reused from C, and whitespace wrapping for multi-line strings is inspired by HTML. Lists and hashes can contain nested lists and hashes, forming a tree structure; arbitrary graphs can be represented using YAML aliases (similar to XML in SOAP). YAML is intended to be read and written in streams, a feature inspired by SAX.
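A small example of these constructs, parsed with the third-party PyYAML package (an assumption of this sketch; install with pip install pyyaml):

```python
import yaml  # PyYAML, third-party

document = """
server:
  host: example.org        # nested map via indentation
  ports: [8080, 8443]      # compact flow-style list
users:
  - name: ann              # block-style list of maps
    admin: true
  - name: bob
    admin: false
"""

data = yaml.safe_load(document)
print(data["server"]["ports"])    # [8080, 8443]
print(data["users"][0]["admin"])  # True
```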
Support for reading and writing YAML is available for many programming languages. Some source-code editors such as Vim, Emacs, and various integrated development environments have features that make editing YAML easier, such as folding up nested structures or automatically highlighting syntax errors.
The official recommended filename extension for YAML files has been .yaml since 2006.
History and name
YAML (rhymes with camel) was first proposed by Clark Evans in 2001, who designed it together with Ingy döt Net and Oren Ben-Kiki. Originally YAML was said to mean
https://en.wikipedia.org/wiki/Ramsey%20cardinal | In mathematics, a Ramsey cardinal is a certain kind of large cardinal number introduced by and named after Frank P. Ramsey, whose theorem establishes that ω enjoys a certain property that Ramsey cardinals generalize to the uncountable case.
Let [κ]<ω denote the set of all finite subsets of κ. A cardinal number κ is called Ramsey if, for every function
f: [κ]<ω → {0, 1}
there is a set A of cardinality κ that is homogeneous for f. That is, for every n, the function f is constant on the subsets of cardinality n from A. A cardinal κ is called ineffably Ramsey if A can be chosen to be a stationary subset of κ. A cardinal κ is called virtually Ramsey if for every function
f: [κ]<ω → {0, 1}
there is C, a closed and unbounded subset of κ, so that for every λ in C of uncountable cofinality, there is an unbounded subset of λ that is homogeneous for f; slightly weaker is the notion of almost Ramsey, where homogeneous sets for f are required of order type λ, for every λ < κ.
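For readers who prefer the partition-calculus (Erdős arrow) notation, the defining property of a Ramsey cardinal can also be written as follows; this formalization is standard but is supplied here for clarity, since the text above states it only in words.

```latex
% kappa is Ramsey iff kappa -> (kappa)^{<omega}_2, i.e.
\[
  \forall f\colon [\kappa]^{<\omega}\to 2
  \;\;\exists A\subseteq\kappa,\ |A|=\kappa,\;
  \forall n<\omega:\ f \text{ is constant on } [A]^{n}.
\]
```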
The existence of any of these kinds of Ramsey cardinal is sufficient to prove the existence of 0#, or indeed that every set with rank less than κ has a sharp.
Every measurable cardinal is a Ramsey cardinal, and every Ramsey cardinal is a Rowbottom cardinal.
A property intermediate in strength between Ramseyness and measurability is the existence of a κ-complete normal non-principal ideal I on κ such that for every A ∉ I and for every function
f: [κ]<ω → {0, 1}
there is a set B ⊂ A not in I that is homogeneous for f. This is strictly stronger than κ being ineffably Ramsey.
The existence of a Ramsey cardinal implies the existence of 0# and this in turn implies the falsity of the Axiom of Constructibility of Kurt Gödel.
https://en.wikipedia.org/wiki/Value%20engineering | Value engineering (VE) is a systematic analysis of the functions of various components and materials to lower the cost of goods, products and services with a tolerable loss of performance or functionality. Value, as defined, is the ratio of function to cost. Value can therefore be manipulated by either improving the function or reducing the cost. It is a primary tenet of value engineering that basic functions be preserved and not be reduced as a consequence of pursuing value improvements. The term "value management" is sometimes used as a synonym of "value engineering", and both promote the planning and delivery of projects with improved performance.
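A toy calculation makes the ratio concrete; the scoring scale and numbers below are illustrative assumptions, not part of any VE standard.

```python
# Toy sketch of the value ratio defined above: value = function / cost.
def value(function_score: float, cost: float) -> float:
    """Value rises by improving function or by reducing cost."""
    return function_score / cost

baseline = value(function_score=80, cost=40)    # 2.0
cheaper  = value(function_score=80, cost=32)    # 2.5: same function, lower cost
better   = value(function_score=100, cost=40)   # 2.5: better function, same cost
```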
The reasoning behind value engineering is as follows: if marketers expect a product to become practically or stylistically obsolete within a specific length of time, they can design it to only last for that specific lifetime. The products could be built with higher-grade components, but with value engineering they are not, because this would impose an unnecessary cost on the manufacturer and, to a limited extent, an increased cost on the purchaser. Value engineering will reduce these costs. A company will typically use the least expensive components that satisfy the product's lifetime projections, even at a risk to the product's and company's reputation.
However, the very short life spans that often result from this "value engineering technique" have caused planned obsolescence to become associated with product deterioration and inferior quality. Vance Packard once claimed this practice gave engineering as a whole a bad name, as it directed creative engineering energies toward short-term market ends. Philosophers such as Herbert Marcuse and Jacques Fresco have also criticized the economic and societal implications of this model.
History
Value engineering began at General Electric Co. during World War II. Because of the war, there were shortages of skilled labour, raw materials, and component parts. Lawrence Miles, Jerry Leftow |
https://en.wikipedia.org/wiki/Differential%20amplifier | A differential amplifier is a type of electronic amplifier that amplifies the difference between two input voltages but suppresses any voltage common to the two inputs. It is an analog circuit with two inputs V_in− and V_in+ and one output V_out, in which the output is ideally proportional to the difference between the two voltages:

V_out = A (V_in+ − V_in−)

where A is the gain of the amplifier.
Single amplifiers are usually implemented by either adding the appropriate feedback resistors to a standard op-amp, or with a dedicated integrated circuit containing internal feedback resistors. It is also a common sub-component of larger integrated circuits handling analog signals.
Theory
The output of an ideal differential amplifier is given by

V_out = A_d (V_in+ − V_in−)

where V_in+ and V_in− are the input voltages, and A_d is the differential gain.
In practice, however, the gain is not quite equal for the two inputs. This means, for instance, that if V_in+ and V_in− are equal, the output will not be zero, as it would be in the ideal case. A more realistic expression for the output of a differential amplifier thus includes a second term:

V_out = A_d (V_in+ − V_in−) + A_c (V_in+ + V_in−) / 2

where A_c is called the common-mode gain of the amplifier.
As differential amplifiers are often used to null out noise or bias voltages that appear at both inputs, a low common-mode gain is usually desired.
The common-mode rejection ratio (CMRR), usually defined as the ratio between differential-mode gain and common-mode gain, indicates the ability of the amplifier to accurately cancel voltages that are common to both inputs. It is defined as

CMRR = A_d / A_c (often expressed in decibels as 20 log10(A_d / A_c)).
In a perfectly symmetric differential amplifier, A_c is zero, and the CMRR is infinite. Note that a differential amplifier is a more general form of amplifier than one with a single input; by grounding one input of a differential amplifier, a single-ended amplifier results.
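A short numeric sketch of the two-term output expression and the CMRR follows; the gain values are illustrative assumptions, not taken from the text.

```python
# Sketch: realistic differential-amplifier output and CMRR in decibels.
import math

A_d = 1000.0   # differential-mode gain (assumed for illustration)
A_c = 0.1      # common-mode gain; nonzero because symmetry is imperfect

def v_out(v_plus: float, v_minus: float) -> float:
    """V_out = A_d*(V+ - V-) + A_c*(V+ + V-)/2, as given above."""
    return A_d * (v_plus - v_minus) + A_c * (v_plus + v_minus) / 2.0

# Equal inputs: the ideal output is 0 V, but the common-mode term leaks through.
print(v_out(2.0, 2.0))                    # 0.2 V rather than 0 V

cmrr_db = 20 * math.log10(A_d / A_c)      # CMRR = A_d / A_c, in dB
print(f"CMRR = {cmrr_db:.0f} dB")         # 80 dB
```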
Long-tailed pair
Historical background
Modern differential amplifiers are usually implemented with a basic two-transistor circuit called a “long-tailed” pair or differential |
https://en.wikipedia.org/wiki/Online%20service%20provider | An online service provider (OSP) can, for example, be an Internet service provider, an email provider, a news provider (press), an entertainment provider (music, movies), a search engine, an e-commerce site, an online banking site, a health site, an official government site, social media, a wiki, or a Usenet newsgroup.
In its original more limited definition, it referred only to a commercial computer communication service in which paid members could dial via a computer modem the service's private computer network and access various services and information resources such as bulletin board systems, downloadable files and programs, news articles, chat rooms, and electronic mail services. The term "online service" was also used in references to these dial-up services. The traditional dial-up online service differed from the modern Internet service provider in that it provided a large degree of content that was accessible only to subscribers, while an ISP mostly serves to provide access to the Internet and generally provides little if any exclusive content of its own.
In the U.S., the Online Copyright Infringement Liability Limitation Act (OCILLA) portion of the U.S. Digital Millennium Copyright Act has expanded the legal definition of online service in two different ways for different portions of the law. It states in section 512(k)(1):
(A) As used in subsection (a), the term "service provider" means an entity offering the transmission, routing, or providing of connections for digital online communications, between or among points specified by a user, of material of the user’s choosing, without modification to the content of the material as sent or received.
(B) As used in this section, other than subsection (a), the term "service provider" means a provider of online services or network access, or the operator of facilities therefor, and includes an entity described in subparagraph (A).
These broad definitions make it possible for |
https://en.wikipedia.org/wiki/Chebyshev%20filter | Chebyshev filters are analog or digital filters that have a steeper roll-off than Butterworth filters, and have either passband ripple (type I) or stopband ripple (type II). Chebyshev filters have the property that they minimize the error between the idealized and the actual filter characteristic over the operating frequency range of the filter, but they achieve this with ripples in the passband. This type of filter is named after Pafnuty Chebyshev because its mathematical characteristics are derived from Chebyshev polynomials. Type I Chebyshev filters are usually referred to as "Chebyshev filters", while type II filters are usually called "inverse Chebyshev filters". Because of the passband ripple inherent in Chebyshev filters, filters with a smoother response in the passband but a more irregular response in the stopband are preferred for certain applications.
Type I Chebyshev filters (Chebyshev filters)
Type I Chebyshev filters are the most common types of Chebyshev filters. The gain (or amplitude) response G_n(ω), as a function of angular frequency ω of the nth-order low-pass filter, is equal to the absolute value of the transfer function H_n(s) evaluated at s = jω:

G_n(ω) = |H_n(jω)| = 1 / √(1 + ε² T_n²(ω/ω₀))

where ε is the ripple factor, ω₀ is the cutoff frequency and T_n is a Chebyshev polynomial of the nth order.

The passband exhibits equiripple behavior, with the ripple determined by the ripple factor ε. In the passband, the Chebyshev polynomial alternates between −1 and 1, so the filter gain alternates between maxima at G = 1 and minima at G = 1/√(1 + ε²).

The ripple factor ε is thus related to the passband ripple δ in decibels by:

ε = √(10^(δ/10) − 1)

At the cutoff frequency ω₀ the gain again has the value 1/√(1 + ε²) but continues to drop into the stopband as the frequency increases. The common practice of defining the cutoff frequency at −3 dB is usually not applied to Chebyshev filters; instead the cutoff is taken as the point at which the gain falls to the value of the ripple for the final time.
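The gain formula is easy to evaluate numerically; the sketch below checks the equiripple extremes for an assumed order, ripple, and cutoff (all illustrative choices, not from the text).

```python
# Sketch: evaluate G_n(w) = 1/sqrt(1 + eps^2 * T_n(w/w0)^2) for a type I filter.
import numpy as np

n, ripple_db, w0 = 4, 1.0, 1.0              # assumed order, ripple (dB), cutoff
eps = np.sqrt(10 ** (ripple_db / 10) - 1)   # ripple factor from the dB formula
T_n = np.polynomial.Chebyshev.basis(n)      # nth-order Chebyshev polynomial

def gain(w: float) -> float:
    return 1.0 / np.sqrt(1.0 + eps**2 * T_n(w / w0) ** 2)

print(gain(0.0))   # for even n, T_n(0) = +/-1: a passband minimum, 1/sqrt(1+eps^2)
print(gain(1.0))   # at the cutoff w0, T_n(1) = 1: the gain is again 1/sqrt(1+eps^2)
```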
The 3 dB frequency is r |
https://en.wikipedia.org/wiki/Web%20syndication | Web syndication is making content available from one website to other sites. Most commonly, websites are made available to provide either summaries or full renditions of a website's recently added content. The term may also describe other kinds of content licensing for reuse.
Motivation
For the subscribing sites, syndication is an effective way of adding greater depth and immediacy of information to their pages, making them more attractive to users. For the provider site, syndication increases exposure. This generates new traffic for the provider site—making syndication an easy and relatively cheap, or even free, form of advertisement.
Content syndication has become an effective strategy for link building, as search engine optimization has become an increasingly important topic among website owners and online marketers. Links embedded within the syndicated content are typically optimized around anchor terms that point an optimized link back to the website that the content author is trying to promote. These links tell the search engines' algorithms that the website being linked to is an authority for the keyword used as the anchor text. However, since the rollout of Google's Panda algorithm, that authority may not be reflected in SERP rankings, because Panda weighs quality scores generated by the sites linking to the authority.
The prevalence of web syndication is also of note to online marketers, since web surfers are becoming increasingly wary of providing personal information for marketing materials (such as signing up for a newsletter) and expect the ability to subscribe to a feed instead. Although the format could be anything transported over HTTP, such as HTML or JavaScript, it is more commonly XML. Web syndication formats include RSS, Atom, and JSON Feed.
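As a concrete illustration of consuming such a feed, the sketch below uses the third-party feedparser library and a placeholder feed URL; both are assumptions, and the attributes available on each entry depend on the feed.

```python
# Sketch: subscribe to a syndicated RSS/Atom feed rather than a mailing list.
import feedparser  # pip install feedparser

feed = feedparser.parse("https://example.org/feed.xml")  # placeholder URL
for entry in feed.entries[:5]:
    # Most RSS/Atom feeds expose at least a title and a link per item.
    print(entry.title, "->", entry.link)
```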
History
Syndication first arose in earlier media such as print, radio, and television, allowing content creators to reach a wider audience. In the case of radio, the United States Federal government p |
https://en.wikipedia.org/wiki/Electrical%20efficiency | The efficiency of a system in electronics and electrical engineering is defined as useful power output divided by the total electrical power consumed (a fractional expression), typically denoted by the Greek small letter eta (η – ήτα).
If energy output and input are expressed in the same units, efficiency is a dimensionless number. Where it is not customary or convenient to represent input and output energy in the same units, efficiency-like quantities have units associated with them. For example, the heat rate of a fossil fuel power plant may be expressed in BTU per kilowatt-hour. Luminous efficacy of a light source expresses the amount of visible light for a certain amount of power transfer and has the units of lumens per watt.
Efficiency of typical electrical devices
Efficiency should not be confused with effectiveness: a system that wastes most of its input power but produces exactly what it is meant to is effective but not efficient. The term "efficiency" makes sense only in reference to the wanted effect. A light bulb, for example, might have 2% efficiency at emitting light yet still be 98% efficient at heating a room (in practice it is nearly 100% efficient at heating a room, because the light energy will also be converted to heat eventually, apart from the small fraction that leaves through the windows). An electronic amplifier that delivers 10 watts of power to its load (e.g., a loudspeaker) while drawing 20 watts of power from a power source is 50% efficient (10/20 × 100 = 50%).
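In code form the definition is a one-line ratio; the function below simply restates the amplifier arithmetic from the paragraph above.

```python
# Minimal sketch of electrical efficiency as useful power out / power in.
def efficiency(p_out_watts: float, p_in_watts: float) -> float:
    """Return the dimensionless efficiency eta (a fraction between 0 and 1)."""
    return p_out_watts / p_in_watts

print(efficiency(10, 20))   # 0.5, i.e. 50%, matching the amplifier example
```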
Electric kettle: more than 90% (comparatively little heat energy is lost during the 2 to 3 minutes a kettle takes to boil water).
A premium efficiency electric motor: more than 90% (see Main Article: Premium efficiency).
A large power transformer used in the electrical grid may have efficiency of more than 99%. Early 19th century transformers were much less efficient, wasting up to a third of the energy passing through them.
A steam power plant used to generate electrici |
https://en.wikipedia.org/wiki/Versatile%20Real-Time%20Executive | Versatile Real-Time Executive (VRTX) is a real-time operating system (RTOS) developed and marketed by the company Mentor Graphics. VRTX is suitable for both traditional board-based embedded systems and system on a chip (SoC) architectures. It has been superseded by the Nucleus RTOS.
History
The VRTX operating system began as a product of Hunter & Ready, a company founded by James Ready and Colin Hunter in 1980 which later became Ready Systems. This firm later merged with Microtec Research in 1993, and went public in 1994. This firm was then acquired by Mentor Graphics in 1995 and VRTX became a Mentor product.
The VRTX operating system was released in September 1981.
Since the 1980s, the chief rival to VRTX has been VxWorks, a Wind River Systems product. VxWorks had its start in the mid-1980s as a set of compiler and assembly-language tools to supplement VRTX, named "VRTX works", or VxWorks. Later, Wind River created its own real-time kernel offering similar to VRTX.
VRTX
VRTX comes in several flavors:
VRTX: 16-bit VRTX, for Z8000, 8086, etc.
VRTX-32: 32-bit VRTX, for M68K, AMD29K, etc.
MPV: Multiprocessor VRTX for distributed applications, such as distributed across VME backplanes.
VRTX-mc: Micro-Controller VRTX, for small systems needing minimal memory use.
VRTX-oc: On-chip VRTX, freeware community source code for personal and academic use, license required for commercial use.
VRTX-sa: Scalable Architecture VRTX for full operating system features. Loosely based on Carnegie Mellon University's Mach microkernel principles.
SPECTRA: Virtual machine (VM) implementation for running a VRTX VM on Unix-like hosts. Also includes an open integrated development environment allowing third-party tools open access to cross-development resources.
Most companies developing software with VRTX use reduced instruction set computer (RISC) microprocessors including ARM, MIPS, PowerPC, or others.
Implementations
VRTX runs the Hubble Space Telescope.
VRTX runs the Wide Area Augment |
https://en.wikipedia.org/wiki/MSX%20BASIC | MSX BASIC is a dialect of the BASIC programming language. It is an extended version of Microsoft's MBASIC Version 4.5, adding support for graphics, music, and various peripherals attached to MSX microcomputers. Generally, MSX BASIC is designed to follow GW-BASIC, released the same year for IBM PCs and clones. During the creation of MSX BASIC, effort was made to make the system flexible and expandable.
Distribution
MSX BASIC came bundled in the ROM of all MSX computers. At system start-up MSX BASIC is invoked, causing its command prompt to be displayed, unless other software placed in ROM takes control (which is the typical case of game cartridges and disk interfaces, the latter causing the MSX-DOS prompt to be shown if there is a disk present which contains the DOS system files).
When MSX BASIC is invoked, the ROM code for BIOS and the BASIC interpreter itself are visible on the lower 32K of the Z80 addressing space. The upper 32K are set to RAM, of which about 23K to 28K are available for BASIC code and data (the exact amount depends on the presence of disk controller and on the MSX-DOS kernel version).
Development Environment
The MSX BASIC development environment is very similar to other versions of Microsoft BASIC. It has a command-line-based integrated development environment (IDE); all program lines must be numbered, and all non-numbered lines are considered to be commands in direct mode (i.e., to be executed immediately). The user interface is entirely command-line based.
Versions of MSX BASIC
Every new version of the MSX computer was bundled with an updated version of MSX BASIC. All versions are backward compatible and provide new capabilities to fully explore the new and extended hardware found on the newer MSX computers.
MSX BASIC 1.0
Bundled with MSX1 computers
16 KB in size
No native support for floppy disks; requires the Disk BASIC cartridge extension (4 KB overhead)
Support for all available screen modes:
Screen 0 (text mode 40 x 24 characte |
https://en.wikipedia.org/wiki/Content-addressable%20memory | Content-addressable memory (CAM) is a special type of computer memory used in certain very-high-speed searching applications. It is also known as associative memory or associative storage and compares input search data against a table of stored data, and returns the address of matching data.
CAM is frequently used in networking devices where it speeds up forwarding information base and routing table operations. This kind of associative memory is also used in cache memory. In associative cache memory, both address and content are stored side by side. When the address matches, the corresponding content is fetched from cache memory.
History
Dudley Allen Buck invented the concept of content-addressable memory in 1955. Buck is credited with the idea of the recognition unit.
Hardware associative array
Unlike standard computer memory, random-access memory (RAM), in which the user supplies a memory address and the RAM returns the data word stored at that address, a CAM is designed such that the user supplies a data word and the CAM searches its entire memory to see if that data word is stored anywhere in it. If the data word is found, the CAM returns a list of one or more storage addresses where the word was found. Thus, a CAM is the hardware embodiment of what in software terms would be called an associative array.
A similar concept can be found in the data word recognition unit, as proposed by Dudley Allen Buck in 1955.
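A software model of this behavior is a reverse lookup: supply a word, get back every address holding it. The sketch below is only a behavioral analogy (the stored words are illustrative), since real CAM hardware compares all locations in parallel.

```python
# Behavioral sketch of a CAM: data word in, list of matching addresses out.
from typing import List

def cam_search(memory: List[int], word: int) -> List[int]:
    """Return every address whose stored word equals the query.

    A hardware CAM performs this comparison across all locations at once;
    this loop models only the input/output behavior, not the speed.
    """
    return [addr for addr, stored in enumerate(memory) if stored == word]

memory = [0xCAFE, 0xBEEF, 0xCAFE, 0x0000]   # illustrative contents
print(cam_search(memory, 0xCAFE))           # [0, 2]
```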
Standards
A major interface definition for CAMs and other network search engines was specified in an interoperability agreement called the Look-Aside Interface (LA-1 and LA-1B) developed by the Network Processing Forum. Numerous devices conforming to the interoperability agreement have been produced by Integrated Device Technology, Cypress Semiconductor, IBM, Broadcom and others. On December 11, 2007, the OIF published the serial look-aside (SLA) interface agreement.
Semiconductor implementations
CAM is much faster than RAM in data search applications |
https://en.wikipedia.org/wiki/Paleoethnobotany | Paleoethnobotany (also spelled palaeoethnobotany), or archaeobotany, is the study of past human-plant interactions through the recovery and analysis of ancient plant remains. Both terms are synonymous, though paleoethnobotany (from the Greek words palaios [παλαιός] meaning ancient, ethnos [έθνος] meaning race or ethnicity, and votano [βότανο] meaning plants) is generally used in North America and acknowledges the contribution that ethnographic studies have made towards our current understanding of ancient plant exploitation practices, while the term archaeobotany (from the Greek words archaios [αρχαίος] meaning ancient and votano) is preferred in Europe and emphasizes the discipline's role within archaeology.
As a field of study, paleoethnobotany is a subfield of environmental archaeology. It involves the investigation of both ancient environments and human activities related to those environments, as well as an understanding of how the two co-evolved. Plant remains recovered from ancient sediments within the landscape or at archaeological sites serve as the primary evidence for various research avenues within paleoethnobotany, such as the origins of plant domestication, the development of agriculture, paleoenvironmental reconstructions, subsistence strategies, paleodiets, economic structures, and more.
Paleoethnobotanical studies are divided into two categories: those concerning the Old World (Eurasia and Africa) and those that pertain to the New World (the Americas). While this division has an inherent geographical distinction to it, it also reflects the differences in the flora of the two separate areas. For example, maize only occurs in the New World, while olives only occur in the Old World. Within this broad division, paleoethnobotanists tend to further focus their studies on specific regions, such as the Near East or the Mediterranean, since regional differences in the types of recovered plant remains also exist.
Macrobotanical vs. microbotanical remains
|
https://en.wikipedia.org/wiki/Host%20adapter | In computer hardware, a host controller, host adapter, or host bus adapter (HBA), connects a computer system bus, which acts as the host system, to other network and storage devices. The terms are primarily used to refer to devices for connecting SCSI, SAS, NVMe, Fibre Channel and SATA devices. Devices for connecting to FireWire, USB and other devices may also be called host controllers or host adapters.
Host adapters can be integrated in the motherboard or be on a separate expansion card.
The term network interface controller (NIC) is more often used for devices connecting to computer networks, while the term converged network adapter can be applied when protocols such as iSCSI or Fibre Channel over Ethernet allow storage and network functionality over the same physical connection.
SCSI
A SCSI host adapter connects a host system and a peripheral SCSI device or storage system. These adapters manage service and task communication between the host and target. Typically a device driver, linked to the operating system, controls the host adapter itself.
In a typical parallel SCSI subsystem, each device has assigned to it a unique numerical ID. As a rule, the host adapter appears as SCSI ID 7, which gives it the highest priority on the SCSI bus (priority descends as the SCSI ID descends; on a 16-bit or "wide" bus, ID 8 has the lowest priority, a feature that maintains compatibility with the priority scheme of the 8-bit or "narrow" bus).
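The resulting arbitration order can be spelled out in a few lines; this is a sketch of the priority scheme just described, not of any real driver API.

```python
# Sketch: parallel-SCSI arbitration priority, highest first.
# Narrow (8-bit) bus: 7 down to 0. Wide (16-bit) bus: IDs 15..8 rank
# below ID 0, so ID 8 ends up with the lowest priority of all.
def scsi_priority_order(wide: bool = True) -> list:
    order = list(range(7, -1, -1))        # 7, 6, ..., 0
    if wide:
        order += list(range(15, 7, -1))   # 15, 14, ..., 8
    return order

print(scsi_priority_order())  # [7, 6, 5, 4, 3, 2, 1, 0, 15, 14, ..., 8]
```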
The host adapter usually assumes the role of SCSI initiator, in that it issues commands to other SCSI devices.
A computer can contain more than one host adapter, which can greatly increase the number of SCSI devices available.
Major SCSI adapter manufacturers are HP, ATTO Technology, Promise Technology, Adaptec, and LSI Corporation. LSI, Adaptec, and ATTO offer PCIe SCSI adapters that fit in Apple Macs and Intel PCs, including low-profile motherboards that lack SCSI support due to the inclusion of SAS and/or SATA connectivity.
Fibre Channel
The te |