https://en.wikipedia.org/wiki/Hurewicz%20space | In mathematics, a Hurewicz space is a topological space that satisfies a certain basic selection principle that generalizes σ-compactness. A Hurewicz space is a space in which, for every sequence of open covers U1, U2, ... of the space, there are finite sets F1 ⊆ U1, F2 ⊆ U2, ... such that every point of the space belongs to the union of Fn for all but finitely many n.
History
In 1926, Witold Hurewicz introduced the above property of topological spaces, which is formally stronger than the Menger property. He did not know whether Menger's conjecture was true, or whether his property was strictly stronger than the Menger property, but he conjectured that in the class of metric spaces his property is equivalent to σ-compactness.
Hurewicz's conjecture
Hurewicz conjectured that in ZFC every Hurewicz metric space is σ-compact. Just, Miller, Scheepers, and Szeptycki proved that Hurewicz's conjecture is false, by showing that there is, in ZFC, a set of real numbers that is Hurewicz but not σ-compact. Their proof was dichotomic, and the set witnessing the failure of the conjecture heavily depends on whether a certain (undecidable) axiom holds or not.
Bartoszyński and Shelah (see also Tsaban's solution based on their work) gave a uniform ZFC example of a Hurewicz subset of the real line that is not σ-compact.
Hurewicz's problem
Hurewicz asked whether in ZFC his property is strictly stronger than the Menger property. In 2002, Chaber and Pol, in an unpublished note, showed by a dichotomic argument that there is a Menger subset of the real line that is not Hurewicz. In 2008, Tsaban and Zdomskyy gave a uniform example of a subset of the real line that is Menger but not Hurewicz.
Characterizations
Combinatorial characterization
For subsets of the real line, the Hurewicz property can be characterized using continuous functions into the Baire space ℕ^ℕ. For functions f, g ∈ ℕ^ℕ, write f ≤* g if f(n) ≤ g(n) for all but finitely many natural numbers n. A subset A of ℕ^ℕ is bounded if there is a function g ∈ ℕ^ℕ such that f ≤* g for all functions f ∈ A. A subset of ℕ^ℕ is unbounded |
https://en.wikipedia.org/wiki/Coulomb%20collision | A Coulomb collision is a binary elastic collision between two charged particles interacting through their own electric field. As with any inverse-square law, the resulting trajectories of the colliding particles are hyperbolic Keplerian orbits. This type of collision is common in plasmas, where the typical kinetic energy of the particles is too large to produce a significant deviation from the initial trajectories of the colliding particles, so the cumulative effect of many collisions is considered instead. The importance of Coulomb collisions was first pointed out by Lev Landau in 1936, who also derived the corresponding kinetic equation, which is known as the Landau kinetic equation.
Simplified mathematical treatment for plasmas
In a plasma, a Coulomb collision rarely results in a large deflection. The cumulative effect of the many small angle collisions, however, is often larger than the effect of the few large angle collisions that occur, so it is instructive to consider the collision dynamics in the limit of small deflections.
We can consider an electron of charge −e and mass mₑ passing a stationary ion of charge +Ze and much larger mass at a distance b with a speed v. The perpendicular force is Ze²/(4πε₀b²) at the closest approach, and the duration of the encounter is about b/v. The product of these expressions divided by the mass is the change in perpendicular velocity: Δv⊥ ≈ Ze²/(4πε₀mₑbv).
Note that the deflection angle Δv⊥/v is proportional to 1/v². Fast particles are "slippery" and thus dominate many transport processes. The efficiency of velocity-matched interactions is also the reason that fusion products tend to heat the electrons rather than (as would be desirable) the ions. If an electric field is present, the faster electrons feel less drag and become even faster in a "run-away" process.
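This small-angle estimate can be sketched numerically (function names are this sketch's own; the constants are standard SI values):

```python
import math

E = 1.602176634e-19      # elementary charge, C
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
ME = 9.1093837015e-31    # electron mass, kg

def dv_perp(Z, b, v):
    """Change in perpendicular velocity: Δv⊥ ≈ Z e² / (4 π ε0 mₑ b v)."""
    return Z * E**2 / (4 * math.pi * EPS0 * ME * b * v)

def deflection_angle(Z, b, v):
    # small-angle deflection θ ≈ Δv⊥ / v, hence proportional to 1/v²:
    # doubling the speed quarters the deflection ("slippery" fast particles)
    return dv_perp(Z, b, v) / v
```

Doubling v in `deflection_angle` reduces the result by a factor of four, which is the 1/v² scaling noted above.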
In passing through a field of ions with density n, an electron will have many such encounters simultaneously, with various impact parameters (distance to the ion) and directions. The cumulative effect can be described |
https://en.wikipedia.org/wiki/Head%20grammar | Head grammar (HG) is a grammar formalism introduced in Carl Pollard (1984) as an extension of the context-free grammar class of grammars. Head grammar is therefore a type of phrase structure grammar, as opposed to a dependency grammar. The class of head grammars is a subset of the linear context-free rewriting systems.
One typical way of defining head grammars is to replace the terminal strings of CFGs with indexed terminal strings, where the index denotes the "head" word of the string. Thus, for example, a CF rule such as X → abc might instead be X → (abc, 0), where the 0th terminal, the a, is the head of the resulting terminal string. For convenience of notation, such a rule could be written as just the terminal string, with the head terminal denoted by some sort of mark, as in X → âbc.
Two fundamental operations are then added to all rewrite rules: wrapping and concatenation.
Operations on headed strings
Wrapping
Wrapping is an operation on two headed strings defined as follows:
Let αxβ and γyδ be terminal strings headed by x and y, respectively.
Concatenation
Concatenation is a family of operations on n > 0 headed strings, defined for n = 1, 2, 3 as follows:
Let αxβ, γyδ, and ζzη be terminal strings headed by x, y, and z, respectively.
And so on for n > 3. One can sum up the pattern here simply as "concatenate some number m of terminal strings, with the head of string n designated as the head of the resulting string".
Form of rules
Head grammar rules are defined in terms of these two operations, with rules taking either of the forms
where the arguments α, β, ... are each either a terminal string or a non-terminal symbol.
Example
Head grammars are capable of generating the language { a^n b^n c^n d^n : n ≥ 1 }. We can define the grammar as follows:
The derivation for "abcd" is thus:
And for "":
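The construction can be sketched in code under one common "split string" formalization of headed strings — a pair (left, right) split at the head — with wrapping inserting the second string at the first string's split point. Both this representation and the particular recursion below are illustrative assumptions in the spirit of the operations above, not Pollard's exact notation:

```python
def wrap(u, v):
    # insert headed string v at u's split point; v's head becomes the result's head
    return (u[0] + v[0], v[1] + u[1])

def flat(w):
    # forget the head and return the plain terminal string
    return w[0] + w[1]

def concat3_mid(w1, w2, w3):
    # concatenate three headed strings, designating w2's head as the result head
    return (flat(w1) + w2[0], w2[1] + flat(w3))

def derive(n):
    """Build a^(n+1) b^(n+1) c^(n+1) d^(n+1) by n recursive steps."""
    s = ("ab", "cd")  # "abcd", split between b and c
    for _ in range(n):
        # wrap in one more b...c pair, then concatenate an outer a and d
        s = concat3_mid(("a", ""), wrap(s, ("b", "c")), ("d", ""))
    return flat(s)
```

Each step inserts "b", "c" at the center split and adds "a", "d" at the outside, which is exactly what a context-free concatenation alone cannot do.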
Formal properties
Equivalencies
Vijay-Shanker and Weir (1994) demonstrate that linear indexed grammars, combinatory categorial grammar, tree-adjoining grammars, and head grammars are weakly equivalent formalisms, in that they all define the |
https://en.wikipedia.org/wiki/Plant%20nucleus%20movement | Plant nucleus movement is the movement of the cell nucleus in plants by the cytoskeleton.
In response to stimuli
An important aspect of plant behavior includes responding to directional stimuli, which requires changes in the cellular signaling to control spatial elements. The integration of the stimuli in plant cells is not fully understood, but the movement of the cell nucleus provides one example of a cellular process that underlies plant behavior, and highlights the importance of the cytoskeleton in solving spatial problems within the plant cell. Unlike the static nature typically depicted in textbooks, the plant cell nucleus is a highly dynamic structure, constantly moving around cells via actin networks and myosins. The nucleus undergoes a characteristic program during cell division to guide asymmetric cell division, but there are several stimuli that have been demonstrated to cause movements of the nucleus in the plant cell.
Blue light
A well-studied stimulus is strong blue light, which drives movement of nuclei to anticlinal (perpendicular to the plane of the leaf) cell walls in mesophyll and epidermal cells of Arabidopsis thaliana plants. Chloroplasts moving in response to blue light associate with the nucleus to move the nucleus to the appropriate location. This is highly dependent on the blue light receptor phototropin and the actin cytoskeleton, as actin bundles are seen to form along the anticlinal wall in blue light. A protein called ANGUSTIFOLIA was also recently discovered to regulate nucleus movement in the dark by forming a complex that adjusts the alignment of actin filaments. The movement of the nucleus in response to blue light may serve several physiological purposes. The first is to avoid damaging mutations caused by UV radiation, as the nucleus stores the genetic material of a cell. A key problem faced as photosynthetic organisms transitioned from ocean to land was avoiding excessive mutations caused by UV radiation, but by moving the nucl |
https://en.wikipedia.org/wiki/BESYS | BESYS (Bell Operating System) was an early computing environment originally implemented as a batch processing operating system in 1957 at Bell Labs for the IBM 704 computer.
Overview
The system was developed because Bell recognized a "definite mismatch…between the 704's internal speed, the sluggishness of its on-line unit-record equipment, and the inherent slowness of manual operations associated with stand-alone use." According to Drummond, the name BESYS, though commonly thought to stand for BEll SYStem, is actually a concatenation of the preexisting SHARE-assigned installation code BE for Bell Telephone Laboratories, Murray Hill, NJ and the code assigned by SHARE for systems software, SYS.
The goals of the system were:
Flexible use of hardware, nonstop operation.
Efficient batch processing, tape-to-tape operation with offline spooling of unit-record data.
Use of control cards to minimize the need for operator intervention.
Allow user programs access to input/output functions, system control and program libraries.
Core dump facilities for debugging.
Simulation of L1 and L2 interpreters to provide software compatibility with the IBM 650.
The initial version of the system BESYS-1 was in use by October 16, 1957. It was created by George H. Mealy and Gwen Hansen with Wanda Lee Mammel and utilized IBM's FORTRAN and United Aircraft's Symbolic Assembly Program (SAP) programming languages. It was designed to efficiently deal with a large number of jobs originating on punched cards and producing results suitable for printing on paper and punched cards. The system also provided processing capabilities for data stored on magnetic tapes and magnetic disk storage units. Typically punched card and print processing was handled off line by peripheral Electronic Accounting Machines, IBM 1401 computers, and eventually direct coupled computers.
The first system actually used at Bell Labs was BESYS-2. The system was resident on magnetic tape, and occupied the lowest 64 (36 |
https://en.wikipedia.org/wiki/Sweep%20generator | A sweep generator is a piece of electronic test equipment similar to, and sometimes included on, a function generator which creates an electrical waveform with a linearly varying frequency and a constant amplitude. Sweep generators are commonly used to test the frequency response of electronic filter circuits. These circuits are mostly transistor circuits with inductors and capacitors to create linear characteristics.
Sweeps are a popular method in the field of audio measurement to describe the change in a measured output value over a progressing input parameter. The most commonly-used progressive input parameter is frequency varied over the standard audio bandwidth of 20 Hz to 20 kHz.
Glide Sweep
A glide sweep (or chirp) is a continuous signal in which the frequency increases or decreases logarithmically with time. This provides the complete range of testing frequencies between the start and stop frequency. An advantage over the stepped sweep is that the signal duration can be reduced by the user without any loss of frequency resolution in the results. This allows for rapid testing. Although the theory behind the glide sweep has been known for several decades, its use in audio measuring devices has only evolved over the past several years. The reason for this lies with the high computing power required.
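As a sketch of the glide-sweep idea (function name and parameters are this sketch's own assumptions, not a standard API), the instantaneous frequency f(t) = f0·(f1/f0)^(t/T) can be integrated in closed form to give the phase of a logarithmic chirp:

```python
import math

def glide_sweep(f0, f1, duration, fs):
    """Sample a logarithmic glide sweep (chirp) rising from f0 to f1 Hz.

    The phase is the closed-form integral of the instantaneous frequency
    f(t) = f0 * (f1/f0)**(t/duration).
    """
    k = f1 / f0                      # overall frequency ratio
    n = int(duration * fs)
    out = []
    for i in range(n):
        t = i / fs
        phase = 2 * math.pi * f0 * duration / math.log(k) * (k ** (t / duration) - 1)
        out.append(math.sin(phase))
    return out

# one-second sweep over the standard audio bandwidth, sampled at 48 kHz
s = glide_sweep(20.0, 20000.0, 1.0, 48000)
```

Shortening `duration` compresses the same frequency trajectory into less time, which is the property noted above: the user trades signal length without losing frequency coverage.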
Stepped Sweep
In a stepped sweep, one variable input parameter (frequency or amplitude) is incremented or decremented in discrete steps. After each change, the analyzer waits until a stable reading is detected before switching to the next step. The scaling of the steps is linear or logarithmic. Since the settling time of different test objects cannot be predicted, the duration of a stepped sweep cannot be determined exactly in advance. For the determination of amplitude or frequency response, the stepped sweep has been largely replaced by the glide sweep. The main application for the stepped sweep is to measure the linearity of systems. Here, the frequency of t |
https://en.wikipedia.org/wiki/Magnetospheric%20eternally%20collapsing%20object | The magnetospheric eternally collapsing object (MECO) is an alternative model for black holes initially proposed by Indian scientist Abhas Mitra in 1998 and later generalized by American researchers Darryl J. Leiter and Stanley L. Robertson. A proposed observable difference between MECOs and black holes is that a MECO can produce its own intrinsic magnetic field. An uncharged black hole cannot produce its own magnetic field, though its accretion disc can.
Theoretical model
In the theoretical model a MECO begins to form in much the same way as a black hole, with a large amount of matter collapsing inward toward a single point. However, as it becomes smaller and denser, a MECO does not form an event horizon.
As the matter becomes denser and hotter, it glows more brightly. Eventually its interior approaches the Eddington limit. At this point the internal radiation pressure is sufficient to slow the inward collapse almost to a standstill.
In fact, the collapse gets slower and slower, so a singularity could only form in an infinite future. Unlike a black hole, the MECO never fully collapses. Rather, according to the model it slows down and enters an eternal collapse.
Mitra provides a review of the evolution of black hole alternatives including his model of eternal collapse and MECOs.
Eternal collapse
Mitra's paper claiming non-occurrence of event horizons and exact black holes later appeared in Pramana - Journal of Physics. In this paper, Mitra proposes that so-called black holes are eternally collapsing while Schwarzschild black holes have a gravitational mass M = 0. He argued that all proposed black holes are instead quasi-black holes rather than exact black holes and that during the gravitational collapse to a black hole, the entire mass energy and angular momentum of the collapsing objects is radiated away before formation of exact mathematical black holes. Mitra proposes that in his formulation since a mathematical zero-mass black hole requires infinite pro |
https://en.wikipedia.org/wiki/US%20FWS%20John%20R.%20Manning | US FWS John R. Manning (FWS 1002) was an American fisheries research vessel in commission in the fleet of the United States Fish and Wildlife Service from 1950 to 1969. She explored the Pacific Ocean in search of commercially valuable populations of fish and shellfish. After the end of her Fish and Wildlife Service career, she operated as the commercial fishing vessel MV R. B. Hendrickson until she sank in 1979.
Origin
In August 1947, the United States Congress authorized a new "Pacific Ocean Fishery Program" calling for the "investigation, exploration, and development of the high seas fisheries of the Territories and Island Possessions [of the United States] and intervening areas in the tropical and subtropical Pacific Ocean." The United States Department of the Interior's Fish and Wildlife Service (which in 1956 would become the United States Fish and Wildlife Service) was responsible for carrying out the program, which was to be overseen by a new office, Pacific Ocean Fishery Investigations (POFI), under the direction of Oscar Elton Sette. In addition to the construction of the Pacific Ocean Fisheries Laboratory at the University of Hawaii in Honolulu, Territory of Hawaii, and the development of a Fish and Wildlife Service (FWS) docking and warehouse site at Pearl Harbor, Hawaii, the Congress funded the conversion or construction of three ocean-going vessels to support POFI's work. During 1949 and 1950, these three vessels joined the Fish and Wildlife Service fleet as , and US FWS John R. Manning.
Construction and commissioning
Unlike Henry O’Malley and Hugh M. Smith, which were converted patrol boats the FWS acquired from the United States Navy, John R. Manning was purpose-built for the FWS as a fisheries research vessel. The firm of Pillsbury & Martignoni designed her as a purse-seiner capable of long-distance deployments to remote areas with limited refueling options. The FWS awarded a contract for her construction to the Pacific Boatbuilding Company in T |
https://en.wikipedia.org/wiki/Germs%3A%20Biological%20Weapons%20and%20America%27s%20Secret%20War | Germs: Biological Weapons and America's Secret War is a 2001 book written by New York Times journalists Judith Miller, Stephen Engelberg, and William Broad. It describes how humanity has dealt with biological weapons, and the dangers of bioterrorism. It was the 2001 New York Times #1 Non-Fiction Bestseller the weeks of October 28 and November 4.
Overview
Germs is a work of investigative journalism employing biographical and historical narrative to provide context. The three authors interviewed hundreds of scientists and senior U.S. officials and reviewed recently declassified documents and reports from the former Soviet Union's bioweapons laboratories.
Summary
The book opens with an account of the 1984 salmonella poisonings in The Dalles, Oregon, caused by followers of Bhagwan Shree Rajneesh who sprayed salmonella onto salad bars. Other research shows how Moscow scientists created an untraceable germ that would induce the body to self-destruct, and reveals that the U.S. military planned for germ warfare on Cuba during the 1960s. Three classified U.S. biodefense projects are detailed: Project Bacchus, Project Clear Vision, and Project Jefferson. Germs concludes with an assessment of the United States' ability to deter future bio-attack.
Reviews
The New York Times Book Review was favorable, though it criticized the book's tone as "somewhat alarmist". BusinessWeek was also generally favorable, except for pointing out some conflicting views on bioterrorism. The Guardian's book review by British psychiatrist Simon Wessely cautioned against panic, stating that biological weapons can cause destruction through fear, effectively giving the biodefense industry "the equivalent of a blank cheque".
Adaptations
On November 13, 2001, the science TV series Nova aired an episode entitled Bioterror. Two years in the making, it chronicled Miller, Engelberg, and Broad's research and investigation into biological weapons. |
https://en.wikipedia.org/wiki/Ancillary%20data | Ancillary data is data that has been added to given data and uses the same form of transport. Common examples are cover art images for media files or streams, or digital data added to radio or television broadcasts.
Television
Ancillary data (commonly abbreviated as ANC data), in the context of television systems, refers to a means by which non-video information (such as audio, other forms of essence, and metadata) may be embedded within the serial digital interface. Ancillary data is standardized by SMPTE as SMPTE 291M: Ancillary Data Packet and Space Formatting.
Ancillary data can be located in non-picture portions of horizontal scan lines. This is known as horizontal ancillary data (HANC). Ancillary data can also be located in non-picture regions of the frame. This is known as vertical ancillary data (VANC).
Technical details
Location
Ancillary data packets may be located anywhere within a serial digital data stream, with the following exceptions:
They should not be located in the lines identified as a switch point (which may be lost when switching sources).
They should not be located in the active picture area.
They may not cross the TRS (timing reference signal) packets.
Ancillary data packets are commonly divided into two types, depending on where they are located—specific packet types are often constrained to be in one location or another.
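The placement rules above can be summarized as a toy classifier (a simplified positional model with invented parameter names, not the SMPTE 291M formalism, which is stated in terms of the EAV/SAV timing reference signals):

```python
def classify_anc(in_horizontal_blanking: bool, in_active_picture: bool,
                 on_switch_line: bool, crosses_trs: bool):
    """Return 'HANC', 'VANC', or None if the location is not permitted."""
    # the three exceptions listed above: switch lines, active picture, TRS overlap
    if in_active_picture or on_switch_line or crosses_trs:
        return None
    # otherwise the location decides the type: horizontal vs vertical blanking
    return "HANC" if in_horizontal_blanking else "VANC"
```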
Ancillary packets located in the horizontal blanking region (after EAV but before SAV), regardless of line, are known as horizontal ancillary data, or HANC. HANC is commonly used for higher-bandwidth data, and/or for things that need to be synchronized to a particular line; the most common type of HANC is embedded audio.
Ancillary packets located in the vertical blanking region, and after SAV but before EAV, are known as vertical ancillary data, or VANC. VANC is commonly used for low-bandwidth data, or for things that only need be updated on a per-field or per-frame rate. Closed caption data and VPID are ge |
https://en.wikipedia.org/wiki/Triaugmented%20dodecahedron | In geometry, the triaugmented dodecahedron is one of the Johnson solids (J61). It can be seen as a dodecahedron with three pentagonal pyramids (J2) attached to nonadjacent faces. When pyramids are attached to a dodecahedron in other ways, they may result in an augmented dodecahedron (J58), a parabiaugmented dodecahedron (J59), a metabiaugmented dodecahedron (J60), or even a pentakis dodecahedron if the faces are made to be irregular.
External links
Johnson solids |
https://en.wikipedia.org/wiki/Hashcash | Hashcash is a proof-of-work system used to limit E-mail spam and denial-of-service attacks. Hashcash was proposed in 1997 by Adam Back and described more formally in Back's 2002 paper "Hashcash - A Denial of Service Counter-Measure".
Background
The idea "...to require a user to compute a moderately hard, but not intractable function..." was proposed by Cynthia Dwork and Moni Naor in their 1992 paper "Pricing via Processing or Combatting Junk Mail".
How it works
Hashcash is a cryptographic hash-based proof-of-work algorithm that requires a selectable amount of work to compute, but the proof can be verified efficiently. For email uses, a textual encoding of a hashcash stamp is added to the header of an email to prove the sender has expended a modest amount of CPU time calculating the stamp prior to sending the email. In other words, as the sender has taken a certain amount of time to generate the stamp and send the email, it is unlikely that they are a spammer. The receiver can, at negligible computational cost, verify that the stamp is valid. However, the only known way to find a header with the necessary properties is brute force, trying random values until the answer is found; though testing an individual string is easy, satisfactory answers are rare enough that it will require a substantial number of tries to find the answer.
The hypothesis is that spammers, whose business model relies on their ability to send large numbers of emails with very little cost per message, will cease to be profitable if there is even a small cost for each spam they send. Receivers can verify whether a sender made such an investment and use the results to help filter email.
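The mint/verify cycle can be sketched as follows (a simplified stamp format: a fixed salt, a plain decimal counter, and only 12 difficulty bits instead of the typical 20 so the search stays fast; real stamps use base64-encoded random salt and counter):

```python
import hashlib
from itertools import count

def leading_zero_bits(digest: bytes) -> int:
    """Number of leading zero bits in a hash digest."""
    return len(digest) * 8 - int.from_bytes(digest, "big").bit_length()

def mint(resource: str, bits: int = 12, date: str = "250101") -> str:
    # brute force: bump the counter until the SHA-1 of the stamp
    # has at least `bits` leading zero bits
    for counter in count():
        stamp = f"1:{bits}:{date}:{resource}::somesalt:{counter}"
        if leading_zero_bits(hashlib.sha1(stamp.encode()).digest()) >= bits:
            return stamp

def verify(stamp: str) -> bool:
    # verification is a single hash, regardless of how costly minting was
    claimed = int(stamp.split(":")[1])
    return leading_zero_bits(hashlib.sha1(stamp.encode()).digest()) >= claimed

stamp = mint("alice@example.com", bits=12)
```

The asymmetry is the point: minting takes on average 2^bits hash evaluations, while `verify` is one.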
Technical details
The header line looks something like this:
X-Hashcash: 1:20:1303030600:anni@cypherspace.org::McMybZIhxKXu57jd:ckvi
The header contains:
ver: Hashcash format version, 1 (which supersedes version 0).
bits: Number of "partial pre-image" (zero) bits in the hashed code.
date: The time that the |
https://en.wikipedia.org/wiki/Andre%20G.%20Journel | André Georges Journel is a French American engineer who excelled in formulating and promoting geostatistics in the earth sciences and engineering, first from the Centre of Mathematical Morphology in Fontainebleau, France and later from Stanford University.
In 1998, Journel was elected a member of the National Academy of Engineering for the theory and practice of geostatistics in earth resources and environmental assessment.
Education
D.Sc., University of Nancy, Applied Mathematics, 1977
M.S2., University of Nancy, Doctor of Engineering, 1974
B.Sc., École Nationale Supérieure des Mines de Nancy, Nancy, Mining Engineering, 1967
Early professional career in France
André joined the Paris School of Mines research group at Fontainebleau, under the direction of Georges Matheron, as a Mining Project Engineer in 1969, moving to Head of Research four years later. He remained active primarily in teaching courses, consulting for mining companies around the world, and formulating new methods, often to solve pressing problems. From these days date his contributions to kriging with a trend and to stochastic simulation. The book with Charles Huijbregts is the culmination of his endeavors as a mining geostatistician in his native country.
Second stage at Stanford University
Journel accepted a position as an assistant professor at the Department of Applied Earth Sciences in 1978. The physical relocation was shortly followed by a reorientation of his research from mining to petroleum engineering. He was promoted to full professor in 1986 and within the year appointed chairman of the department, serving for a period of six years. He was the mentor of several generations of students and an enthusiastic promoter of joint research with industry through the Stanford Center for Reservoir Forecasting, which he also founded in 1986, remaining its Director until 2007. Among his numerous research interests and accomplishments, it is worth mentioning his contributions |
https://en.wikipedia.org/wiki/Proton%20computed%20tomography | Proton computed tomography (pCT), or proton CT, is an imaging modality first proposed by Cormack in 1963; initial experimental explorations identified several advantages over conventional X-ray CT (xCT). However, particle interactions such as multiple Coulomb scattering (MCS) and (in)elastic nuclear scattering events deflect the proton trajectory, resulting in nonlinear paths which can only be approximated via statistical assumptions, leading to lower spatial resolution than X-ray tomography. Further experiments were largely abandoned until the advent of proton radiation therapy in the 1990s, which renewed interest in the topic due to the potential benefits of imaging and treating patients with the same particle.
Description
Proton computed tomography (pCT) uses measurements of a proton's position/trajectory and energy before and after traversing an object to reconstruct an image of the object where each voxel represents the relative stopping power (RSP) of the material composition of the corresponding region of the object. The deviations of a proton's path inside the object are primarily due to interactions between the Coulomb fields of the proton and the nuclei in the absorbing material, resulting in many small-angle deflections as it passes through the object. Statistical models of the effect of MCS on the trajectory of a proton were developed to calculate the most likely path (MLP) of a proton given its entry and exit position/trajectory and corresponding uncertainty at intermediate depths within the object. Additional (in)elastic nuclear scattering events can also occur which cause larger angle deviations, which cannot easily be modeled, but these are fairly easy to identify and remove from consideration in the image reconstruction process.
With an approximate path of a proton through the object, one can then identify the voxels through which the proton passed, and the difference between entry and exit energy indicates the energy collectively deposited |
https://en.wikipedia.org/wiki/Van%20der%20Waerden%27s%20theorem | Van der Waerden's theorem is a theorem in the branch of mathematics called Ramsey theory. Van der Waerden's theorem states that for any given positive integers r and k, there is some number N such that if the integers {1, 2, ..., N} are colored, each with one of r different colors, then there is an arithmetic progression of k integers, all of the same color. The least such N is the Van der Waerden number W(r, k), named after the Dutch mathematician B. L. van der Waerden.
Example
For example, when r = 2, you have two colors, say red and blue. W(2, 3) is bigger than 8, because you can color the integers from {1, ..., 8} like this: color 1, 4, 5, and 8 blue, and color 2, 3, 6, and 7 red,
and no three integers of the same color form an arithmetic progression. But you can't add a ninth integer to the end without creating such a progression. If you add a red 9, then the red 3, 6, and 9 are in arithmetic progression. Alternatively, if you add a blue 9, then the blue 1, 5, and 9 are in arithmetic progression.
In fact, there is no way of coloring 1 through 9 without creating such a progression (it can be proved by considering examples). Therefore, W(2, 3) is 9.
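This small case is easy to verify exhaustively; the sketch below (helper names are this sketch's own) checks every 2-coloring:

```python
from itertools import product

def has_mono_3ap(coloring):
    """True if the coloring of 1..n (given as a 0-indexed tuple) contains a
    monochromatic 3-term arithmetic progression."""
    n = len(coloring)
    return any(coloring[a] == coloring[a + d] == coloring[a + 2 * d]
               for a in range(n)
               for d in range(1, (n - a - 1) // 2 + 1))

# every 2-coloring of {1..9} contains a monochromatic 3-term AP...
every_9_has_ap = all(has_mono_3ap(c) for c in product((0, 1), repeat=9))
# ...but some 2-coloring of {1..8} avoids one, so W(2, 3) = 9
some_8_avoids = any(not has_mono_3ap(c) for c in product((0, 1), repeat=8))
```

The coloring blue-red-red-blue-blue-red-red-blue of {1, ..., 8} is one of the witnesses found by the second check.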
Open problem
It is an open problem to determine the values of W(r, k) for most values of r and k. The proof of the theorem provides only an upper bound. For the case of r = 2 and k = 3, for example, the argument given below shows that it is sufficient to color the integers {1, ..., 325} with two colors to guarantee there will be a single-colored arithmetic progression of length 3. But in fact, the bound of 325 is very loose; the minimum required number of integers is only 9. Any coloring of the integers {1, ..., 9} will have three evenly spaced integers of one color.
For r = 3 and k = 3, the bound given by the theorem is 7(2·3^7 + 1)(2·3^(7·(2·3^7 + 1)) + 1), or approximately 4.22·10^14616. But actually, you don't need that many integers to guarantee a single-colored progression of length 3; you only need 27. (And it is possible to color {1, ..., |
https://en.wikipedia.org/wiki/Sesquipower | In mathematics, a sesquipower or Zimin word is a string over an alphabet with identical prefix and suffix. Sesquipowers are unavoidable patterns, in the sense that all sufficiently long strings contain one.
Formal definition
Formally, let A be an alphabet and A∗ be the free monoid of finite strings over A. Every non-empty word w in A+ is a sesquipower of order 1. If u is a sesquipower of order n then any word w = uvu is a sesquipower of order n + 1. The degree of a non-empty word w is the largest integer d such that w is a sesquipower of order d.
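This definition translates directly into a recursive check (the function name is this sketch's own): w is a sesquipower of order n + 1 exactly when some prefix u of length at most |w|/2 is also a suffix of w and is itself a sesquipower of order n.

```python
def is_sesquipower(w: str, order: int) -> bool:
    """Check whether w is a sesquipower of the given order (w = u v u,
    with v possibly empty and u of the previous order)."""
    if not w:
        return False
    if order == 1:
        return True  # every non-empty word has order 1
    # try every prefix u that is also a suffix and short enough to occur twice
    for k in range(1, len(w) // 2 + 1):
        u = w[:k]
        if w.endswith(u) and is_sesquipower(u, order - 1):
            return True
    return False
```

For example, the Zimin word "abacaba" has order 3: it is aba·c·aba, and "aba" is in turn a·b·a.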
Bi-ideal sequence
A bi-ideal sequence is a sequence of words fi where f1 is in A+ and fi+1 = fi gi fi
for some gi in A∗ and i ≥ 1. The degree of a word w is thus the length of the longest bi-ideal sequence ending in w.
Unavoidable patterns
For a finite alphabet A on k letters, there is an integer M depending on k and n, such that any word of length M has a factor which is a sesquipower of order at least n. We express this by saying that the sesquipowers are unavoidable patterns.
Sesquipowers in infinite sequences
Given an infinite bi-ideal sequence, we note that each fi is a prefix of fi+1 and so the fi converge to an infinite sequence
We define an infinite word to be a sesquipower if it is the limit of an infinite bi-ideal sequence. An infinite word is a sesquipower if and only if it is a recurrent word, that is, every factor occurs infinitely often.
Fix a finite alphabet A and assume a total order on the letters. For given integers p and n, every sufficiently long word in A∗ has either a factor which is a p-power or a factor which is an n-sesquipower; in the latter case the factor has an n-factorisation into Lyndon words.
See also
ABACABA pattern |
https://en.wikipedia.org/wiki/Universal%20code%20%28data%20compression%29 | In data compression, a universal code for integers is a prefix code that maps the positive integers onto binary codewords, with the additional property that whatever the true probability distribution on integers, as long as the distribution is monotonic (i.e., p(i) ≥ p(i + 1) for all positive i), the expected lengths of the codewords are within a constant factor of the expected lengths that the optimal code for that probability distribution would have assigned. A universal code is asymptotically optimal if the ratio between actual and optimal expected lengths is bounded by a function of the information entropy of the code that, in addition to being bounded, approaches 1 as entropy approaches infinity.
In general, most prefix codes for integers assign longer codewords to larger integers. Such a code can be used to efficiently communicate a message drawn from a set of possible messages, by simply ordering the set of messages by decreasing probability and then sending the index of the intended message. Universal codes are generally not used for precisely known probability distributions, and no universal code is known to be optimal for any distribution used in practice.
A universal code should not be confused with universal source coding, in which the data compression method need not be a fixed prefix code and the ratio between actual and optimal expected lengths must approach one. However, note that an asymptotically optimal universal code can be used on independent identically-distributed sources, by using increasingly large blocks, as a method of universal source coding.
Universal and non-universal codes
These are some universal codes for integers; an asterisk (*) indicates a code that can be trivially restated in lexicographical order, while a double dagger (‡) indicates a code that is asymptotically optimal:
Elias gamma coding *
Elias delta coding * ‡
Elias omega coding * ‡
Exp-Golomb coding *, which has Elias gamma coding as a special case. (Used in H |
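As a concrete illustration of one entry in the list above, Elias gamma coding writes a positive integer n as floor(log2 n) zeros followed by the binary representation of n. A minimal sketch (the function names are ours, not from any standard library):

```python
def elias_gamma_encode(n: int) -> str:
    """Elias gamma code: for n >= 1, emit floor(log2 n) zeros
    followed by n written in binary."""
    if n < 1:
        raise ValueError("Elias gamma encodes positive integers only")
    binary = bin(n)[2:]                      # binary digits, no '0b' prefix
    return "0" * (len(binary) - 1) + binary  # leading-zero count = length - 1

def elias_gamma_decode(bits: str) -> int:
    """Inverse for a single codeword: count leading zeros k,
    then read the next k + 1 bits as a binary number."""
    k = len(bits) - len(bits.lstrip("0"))
    return int(bits[k:2 * k + 1], 2)

for n in (1, 2, 3, 4, 9):
    print(n, elias_gamma_encode(n))
# 1 -> "1", 2 -> "010", 3 -> "011", 4 -> "00100", 9 -> "0001001"
```

The leading zeros tell the decoder how many bits follow, which is what makes the code a prefix code; the codeword length 2·floor(log2 n) + 1 grows with n, matching the monotonic-distribution assumption in the definition.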
https://en.wikipedia.org/wiki/Emilio%20Aguinaldo | Emilio Aguinaldo y Famy (March 22, 1869 – February 6, 1964) was a Filipino revolutionary, statesman, and military leader who became the youngest president of the Philippines (1899–1901), the country's first president, and the first president of an Asian constitutional republic. He led the Philippine forces first against Spain in the Philippine Revolution (1896–1898), then in the Spanish–American War (1898), and finally against the United States during the Philippine–American War (1899–1901).
Aguinaldo remains a controversial figure in Filipino history. Though he has been recommended as a national hero of the Philippines, some have pointed to his alleged involvement in the deaths of the revolutionary leader Andrés Bonifacio and General Antonio Luna, and to his collaboration with the Japanese Empire during its occupation of the Philippines in World War II.
Early life and career
Emilio Aguinaldo y Famy was born on March 22, 1869 in Cavite el Viejo (present-day Kawit) in the province of Cavite to Carlos Aguinaldo y Jamir and Trinidad Famy y Villanueva, a couple that had eight children, the seventh of whom was Emilio Sr. He was baptized and raised in Roman Catholicism. The Aguinaldo family was quite well-to-do, as his father, Carlos Aguinaldo, was the community's appointed gobernadorcillo (municipal governor) in the Spanish colonial administration; his grandparents were Eugenio Aguinaldo y Kajigas and María Jamir y de los Santos. He studied at Colegio de San Juan de Letran, but could not finish his studies because of an outbreak of cholera in 1882.
He became the "Cabeza de Barangay" in 1895, when the Maura Law, which called for the reorganization of local governments, was enacted. At the age of 25, Aguinaldo became Cavite el Viejo's first gobernadorcillo capitan municipal (municipal governor-captain) while he was on a business trip in Mindoro.
Philippine Revolution
On January 1, 1895, Aguinaldo became a Freemason, joining Pilar Lodge No. 203, Imus, Cavite, under the codename "Colo |
https://en.wikipedia.org/wiki/Principal%20value | In mathematics, specifically complex analysis, the principal values of a multivalued function are the values along one chosen branch of that function, so that it is single-valued. A simple case arises in taking the square root of a positive real number. For example, 4 has two square roots: 2 and −2; of these the positive root, 2, is considered the principal root and is denoted as √4.
Motivation
Consider the complex logarithm function log z. It is defined as the complex number w such that
e^w = z.
Now, for example, say we wish to find log i. This means we want to solve
e^w = i
for w. The value w = iπ/2 is a solution.
However, there are other solutions, which is evidenced by considering the position of i in the complex plane and in particular its argument arg i. We can rotate counterclockwise π/2 radians from 1 to reach i initially, but if we rotate a further 2π we reach i again. So we can conclude that w = i(π/2 + 2π) is also a solution for log i. It becomes clear that we can add any integer multiple of 2πi to our initial solution to obtain all the values for log i.
But this has a consequence that may be surprising in comparison with real-valued functions: log i does not have one definite value. For log z, we have
log z = ln|z| + i(Arg z + 2πk)
for an integer k, where Arg z is the (principal) argument of z, defined to lie in the interval (−π, π]. As the principal argument is unique for a given complex number z, −π is not included in the interval. Each value of k determines what is known as a branch (or sheet), a single-valued component of the multiple-valued log function.
The branch corresponding to k = 0 is known as the principal branch, and along this branch, the values the function takes are known as the principal values.
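Python's standard library implements exactly this convention: `cmath.log` returns the principal value ln|z| + i·Arg(z) with the argument in (−π, π], and any multiple of 2πi added to it exponentiates back to the same number:

```python
import cmath

# Principal value of the complex logarithm of i:
w = cmath.log(1j)
print(w)  # i*pi/2, the k = 0 branch

# Adding 2*pi*i*k for any integer k gives another valid logarithm of i:
k = 3
w_k = w + 2j * cmath.pi * k
print(cmath.exp(w_k))  # exponentiates back to (approximately) 1j

# The principal argument itself lies in (-pi, pi]:
print(cmath.phase(-1.0))  # pi, not -pi
```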
General case
In general, if is multiple-valued, the principal branch of is denoted
such that for in the domain of , is single-valued.
Principal values of standard functions
Complex valued elementary functions can be multiple-valued over some domains. The principal value of some of these functions can be obtained by decomposing the function into simpler ones whereby the principal va |
https://en.wikipedia.org/wiki/Entropy%20of%20network%20ensembles | A set of networks that satisfies given structural characteristics can be treated as a network ensemble. Brought up by Ginestra Bianconi in 2007, the entropy of a network ensemble measures the level of the order or uncertainty of a network ensemble.
The entropy is the logarithm of the number of graphs in the ensemble. Entropy can also be defined for a single network: basin entropy is the logarithm of the number of attractors in a Boolean network.
Employing approaches from statistical mechanics, the complexity, uncertainty, and randomness of networks can be described by network ensembles with different types of constraints.
Gibbs and Shannon entropy
By analogy to statistical mechanics, microcanonical ensembles and canonical ensembles of networks are introduced for the implementation. A partition function Z of an ensemble can be defined as:
where is the constraint, and () are the elements in the adjacency matrix, if and only if there is a link between node i and node j. is a step function with if , and if . The auxiliary fields and have been introduced as analogy to the bath in classical mechanics.
For simple undirected networks, the partition function can be simplified as
where , is the index of the weight, and for a simple network .
Microcanonical ensembles and canonical ensembles are demonstrated with simple undirected networks.
For a microcanonical ensemble, the Gibbs entropy is defined by:
where indicates the cardinality of the ensemble, i.e., the total number of networks in the ensemble.
The probability of having a link between nodes i and j, with weight is given by:
For a canonical ensemble, the entropy is presented in the form of a Shannon entropy:
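As a hedged illustration (the symbols here are ours; the precise displayed formulas are in Bianconi's papers): for a canonical ensemble of simple undirected networks in which each link (i, j) is present independently with probability p_ij, the Shannon entropy reduces to a sum of binary entropies over node pairs, S = −Σ_{i<j} [p_ij ln p_ij + (1 − p_ij) ln(1 − p_ij)]:

```python
import math

def shannon_entropy(p):
    """Shannon entropy of a canonical ensemble of simple undirected
    networks; p[i][j] is the link probability between nodes i and j
    (symmetric matrix, zero diagonal)."""
    n = len(p)
    s = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            q = p[i][j]
            if 0.0 < q < 1.0:  # deterministic links (q = 0 or 1) add nothing
                s -= q * math.log(q) + (1 - q) * math.log(1 - q)
    return s

# Erdos-Renyi-like ensemble on 3 nodes with p = 1/2:
# each of the 3 pairs contributes ln 2, so S = 3 ln 2.
p = [[0, 0.5, 0.5], [0.5, 0, 0.5], [0.5, 0.5, 0]]
print(shannon_entropy(p))  # 3 * ln 2 ≈ 2.0794
```

A microcanonical (Gibbs) entropy would instead count the networks satisfying the hard constraint exactly; the Shannon form above is the soft-constraint analogue.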
Relation between Gibbs and Shannon entropy
A network ensemble with given numbers of nodes and links, and its conjugate canonical ensemble, are characterized as microcanonical and canonical ensembles, respectively, and they have the Gibbs entropy and the Shannon entropy S. The Gibbs entropy in the ensemble is given by:
For |
https://en.wikipedia.org/wiki/NOAA%20Observing%20System%20Architecture | The NOAA Observing System Architecture (NOSA) is a collection of over 100 of the environmental datasets of the National Oceanic and Atmospheric Administration (NOAA). It was established to develop an observational architecture that helps NOAA design observing systems that support its mission, avoid duplication of existing systems, and operate efficiently in a cost-effective manner.
NOSA includes:
NOAA's observing systems (and others) required to support NOAA's mission,
The relationships among observing systems, including how they contribute to NOAA's mission and associated observing requirements, and
The guidelines governing the design of a target architecture and the evolution toward this target architecture
See also
ACARS
AERONET
FluxNet
Coastal-Marine Automated Network
Sources
External links
NOSA Homepage
Meteorological data and networks |
https://en.wikipedia.org/wiki/Armenian%20dram%20sign | The Armenian dram sign (֏, image: ; ; code: AMD) is the currency sign of the Armenian dram. In Unicode, it is encoded at .
After its proclamation of independence, Armenia put into circulation its own national currency, the Armenian dram, and the need for a monetary sign became immediately apparent. The shape chosen is a slightly modified version of one of the capital letters of the Armenian alphabet.
History
Heritage of Mashtots
The shape of the dram sign is widely believed to be a direct projection of the Armenian alphabet, the work of Mesrop Mashtots. The clean-cut geometry of the Armenian letters is easy to notice, as the author of the sign, K. Komendaryan, found when he studied the alphabet. He later became a proponent of the hypothesis that the prototype of the Armenian alphabet is a variety of combinations of resembling curves and horizontal elements. These horizontal elements subsequently played a key role in the development of the sign.
Date of creation
The first known usage of the sign was on 7 September 1995, when it was used in handwritten notes for the cash flow of a start-up company. Although a number of artists and businessmen developed and offered alternate designs for it, the original form has remained in use. It is now part of the Armenian standard for national characters and symbols and in Armenian computer fonts.
Incentives
The objective of the sign is to symbolize the Armenian national currency and come in handy wherever a graphical symbol for the currency is in demand, for instance in: financial documents, price-lists and tags, currency exchange displays, computer fonts, correspondence, etc.
At the time of the sign's development, definite fundamental criteria were outlined for it, namely that the sign has:
to denote the Armenian National Currency by graphical means;
to recall the outlines of Armenian letters;
to include the elements typical for the foreign monetary symbols;
to be easily reproduced with a fe |
https://en.wikipedia.org/wiki/Xenophagy | Xenophagy (Greek "strange" + "eating") and allotrophy (Greek "other" + "nutrient") are changes in established patterns of biological consumption, by individuals or groups.
In entomology, xenophagy is a categorical change in diet, such as an herbivore becoming carnivorous, a predator becoming necrophagous, a coprophage becoming necrophagous or carnivorous, or a reversal of such changes. Allotrophy is a less extreme change in diet, such as in the case of the seven-spot ladybird, which can diversify a diet of aphids to sometimes include pollen. There are several apparent cases of allotrophy in Israeli Longitarsus beetles.
In microbiology, xenophagy is the process by which a cell directs autophagy against pathogens, as reflected in the study of antiviral defenses. Cellular xenophagy is an innate component of immune responses, though the general importance of xenophagy is not yet certain.
In ecology, allotrophy is also reflected in eutrophication, being a change in nutrient source such as an aquatic ecosystem that starts receiving new nutrients from drainage of the surrounding land. |
https://en.wikipedia.org/wiki/Wolf%20Szmuness | Wolf Szmuness (March 12, 1919 – June 6, 1982) was a Polish-born epidemiologist who emigrated to and worked in the United States. He conducted research at the New York Blood Center and, from 1973, he was director of the Center's epidemiology laboratory. He designed and conducted the trials for the first vaccine to prove effective against hepatitis B.
European beginnings
Szmuness was born in Warsaw, Poland on 12 March 1919. He studied medicine in Italy, but returned to be with his family around the time of the Nazi German invasion of Poland in 1939. As the Germans and Soviets occupied Poland, Szmuness was separated from his family, who were later killed by the Germans. Trapped in the Soviet-occupied part of Poland, Szmuness traveled eastward to escape the advancing Nazis. He asked the Soviets to let him fight the Germans but was sent to Siberia as a prisoner.
Following a year of hard labour in the prison camp, Szmuness was appointed head of sanitary conditions. He later became the head epidemiologist in the local district. After release from detention in 1946, Szmuness completed his medical education at the University of Tomsk in Siberia, and earned a degree in epidemiology from the University of Kharkiv.
Szmuness married a Russian woman, Maya, and in 1959 was allowed to return to Poland. There, he continued his education at the University of Lublin and worked as an epidemiologist in municipal and regional health departments.
Szmuness's colleague Aaron Kellner reports that the Polish authorities granted Szmuness a vacation at a rest home, where he shared a room with a Catholic priest, Karol Wojtyła, and began a longtime correspondence with him. Karol Wojtyła would later become Pope John Paul II.
Emigration and life in the United States
In 1969, Szmuness, his wife and their daughter Helena were permitted to attend a scientific meeting in Italy. Upon arriving, Szmuness defected and emigrated to New York City in the United States for religious and political reasons. Th |
https://en.wikipedia.org/wiki/BF%20model | The BF model or BF theory is a topological field theory which, when quantized, becomes a topological quantum field theory. BF stands for "background field"; B and F, as can be seen below, are also the variables appearing in the Lagrangian of the theory, which is helpful as a mnemonic device.
We have a 4-dimensional differentiable manifold M and a gauge group G; the "dynamical" fields of the theory are a 2-form B taking values in the adjoint representation of G, and a connection form A for G.
The action is given by
where K is an invariant nondegenerate bilinear form over (if G is semisimple, the Killing form will do) and F is the curvature form
This action is diffeomorphically invariant and gauge invariant. Its Euler–Lagrange equations are
(no curvature)
and
(the covariant exterior derivative of B is zero).
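Written out explicitly (a standard presentation consistent with the definitions above, not a verbatim quotation of the source), the action is

```latex
S = \int_M K[\mathbf{B} \wedge \mathbf{F}],
\qquad
\mathbf{F} = \mathrm{d}\mathbf{A} + \mathbf{A} \wedge \mathbf{A},
```

and its Euler–Lagrange equations are

```latex
\mathbf{F} = 0, \qquad \mathrm{d}_{\mathbf{A}} \mathbf{B} = 0,
```

i.e., the connection is flat and the covariant exterior derivative of B vanishes, matching the two equations stated above.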
In fact, it is always possible to gauge away any local degrees of freedom, which is why it is called a topological field theory.
However, if M is topologically nontrivial, A and B can have nontrivial solutions globally.
In fact, BF theory can be used to formulate discrete gauge theory. One can add additional twist terms allowed by group cohomology theory such as Dijkgraaf–Witten topological gauge theory. There are many kinds of modified BF theories as topological field theories, which give rise to link invariants in 3 dimensions, 4 dimensions, and other general dimensions.
See also
Background field method
Barrett–Crane model
Dual graviton
Plebanski action
Spin foam
Tetradic Palatini action |
https://en.wikipedia.org/wiki/Mitchell%E2%80%93Netravali%20filters | The Mitchell–Netravali filters or BC-splines are a group of reconstruction filters used primarily in computer graphics, which can be used, for example, for anti-aliasing or for scaling raster graphics. They are also known as bicubic filters in image editing programs because they are bi-dimensional cubic splines.
Definition
The Mitchell–Netravali filters were designed as part of an investigation into artifacts from reconstruction filters. The filters are piecewise-cubic filters with four-pixel-wide supports. After excluding unsuitable filters from this family, such as discontinuous curves, two parameters B and C remain, through which the Mitchell–Netravali filters can be configured. The filters are defined as follows:
It is possible to construct two-dimensional versions of the Mitchell–Netravali filters by separation. In this case the filters can be replaced by a series of interpolations with the one-dimensional filter. From the color values of the four neighboring pixels , , , the color value is then calculated as follows:
lies between and ; is the distance between and .
Subjective effects
Various artifacts may result from certain choices of the parameters B and C, as shown in the following illustration. The researchers recommended values from the family B + 2C = 1 (dashed line), and especially B = C = 1/3, as a satisfactory compromise.
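A sketch of the one-dimensional kernel (our own transcription of the standard piecewise-cubic form; the parameter names B and C follow the article):

```python
def mitchell_netravali(x, B=1/3, C=1/3):
    """Mitchell-Netravali (BC-spline) reconstruction kernel.

    Piecewise cubic with support [-2, 2]; B = C = 1/3 is the
    compromise recommended by Mitchell and Netravali."""
    x = abs(x)
    if x < 1:
        return ((12 - 9*B - 6*C) * x**3
                + (-18 + 12*B + 6*C) * x**2
                + (6 - 2*B)) / 6
    elif x < 2:
        return ((-B - 6*C) * x**3
                + (6*B + 30*C) * x**2
                + (-12*B - 48*C) * x
                + (8*B + 24*C)) / 6
    return 0.0

# The family is normalized so integer translates sum to 1, which is
# what keeps a constant image unchanged under resampling:
print(sum(mitchell_netravali(0.4 - i) for i in range(-2, 3)))  # 1.0
```

Setting B = 1, C = 0 gives the cubic B-spline and B = 0 recovers the cardinal (Catmull-Rom-like) splines mentioned among the well-known special cases.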
Implementations
The following parameters result in well-known cubic splines used in common image editing programs:
Examples
See also
Ringing artifacts
Anisotropic filtering
Kernel (image processing) |
https://en.wikipedia.org/wiki/Melomics | Melomics (derived from "genomics of melodies") is a computational system for the automatic composition of music (with no human intervention), based on bioinspired algorithms.
Technological aspects
Melomics applies an evolutionary approach to music composition, i.e., music pieces are obtained by simulated evolution. These themes compete to better adapt to a proper fitness function, generally grounded on formal and aesthetic criteria. The Melomics system encodes each theme in a genome, and the entire population of music pieces undergoes evo-devo dynamics (i.e., pieces read-out mimicking a complex embryological development process). The system is fully autonomous: once programmed, it composes music without human intervention.
This technology has been transferred to industry as an academic spin-off, Melomics Media, which has provided and reprogrammed a new computer cluster that created a huge collection of popular music. The results of this evolutionary computation are being stored in Melomics' site, which nowadays constitutes a vast repository of music content. A differentiating feature is that pieces are available in three types of formats: playable (MP3), editable (MIDI and MusicXML) and readable (score in PDF).
Computer clusters
The Melomics computational system includes two computer clusters: Melomics109 and Iamus, dedicated to popular and artistic music, respectively.
Melomics109 cluster
Melomics109 is a cluster programmed and integrated into the Melomics system. Its first product is a vast repository of popular music compositions (roughly 1 billion), covering all essential styles. In addition to MP3, all songs are available in editable formats (MIDI), and the music is licensed under CC0, meaning that it is freely downloadable.
0music is the first album published by Melomics109, which is available in MP3 and MIDI formats, under CC0 license.
It has been argued that, by making such amount of editable, original and royalty-free music accessible to people, Melomics m |
https://en.wikipedia.org/wiki/Markov%20chain | A Markov chain or Markov process is a stochastic model describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event. Informally, this may be thought of as, "What happens next depends only on the state of affairs now." A countably infinite sequence, in which the chain moves state at discrete time steps, gives a discrete-time Markov chain (DTMC). A continuous-time process is called a continuous-time Markov chain (CTMC). It is named after the Russian mathematician Andrey Markov.
Markov chains have many applications as statistical models of real-world processes, such as studying cruise control systems in motor vehicles, queues or lines of customers arriving at an airport, currency exchange rates and animal population dynamics.
Markov processes are the basis for general stochastic simulation methods known as Markov chain Monte Carlo, which are used for simulating sampling from complex probability distributions, and have found application in Bayesian statistics, thermodynamics, statistical mechanics, physics, chemistry, economics, finance, signal processing, information theory and speech processing.
The adjectives Markovian and Markov are used to describe something that is related to a Markov process.
Principles
Definition
A Markov process is a stochastic process that satisfies the Markov property (sometimes characterized as "memorylessness"). In simpler terms, it is a process for which predictions can be made regarding future outcomes based solely on its present state and—most importantly—such predictions are just as good as the ones that could be made knowing the process's full history. In other words, conditional on the present state of the system, its future and past states are independent.
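As a minimal illustration of the Markov property (the state names and probabilities below are invented for the example), a discrete-time Markov chain can be simulated by repeatedly sampling the next state from the row of a transition matrix indexed by the current state:

```python
import random

# Hypothetical two-state weather chain: each row gives the
# distribution of the next state given the current one.
states = ["sunny", "rainy"]
P = {
    "sunny": {"sunny": 0.9, "rainy": 0.1},
    "rainy": {"sunny": 0.5, "rainy": 0.5},
}

def step(state, rng):
    """Sample the next state: it depends only on the current state
    (the Markov property), not on the earlier history."""
    r = rng.random()
    cumulative = 0.0
    for nxt, p in P[state].items():
        cumulative += p
        if r < cumulative:
            return nxt
    return nxt  # guard against floating-point round-off

rng = random.Random(0)
state = "sunny"
trajectory = [state]
for _ in range(10):
    state = step(state, rng)
    trajectory.append(state)
print(trajectory)
```

Over a long run this chain spends about 5/6 of its time in "sunny", the stationary distribution obtained by solving pi = pi P.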
A Markov chain is a type of Markov process that has either a discrete state space or a discrete index set (often representing time), but the precise definition of a Markov chain varies. For example, |
https://en.wikipedia.org/wiki/Changeset | In version control software, a changeset (also known as commit and revision) is a set of alterations packaged together, along with meta-information about the alterations. A changeset describes the exact differences between two successive versions in the version control system's repository of changes. Changesets are typically treated as an atomic unit, an indivisible set, by version control systems. This is one synchronization model.
Terminology
In the Git version control system a changeset is called a commit,
not to be confused with the commit operation that is used to commit a changeset (or in Git's case technically a snapshot) to a repository.
Other version control systems also use other names to refer to changesets, for example Darcs calls them "patches",
while Pijul refers to them as "changes".
Metadata
Version control systems attach metadata to changesets. Typical metadata includes a description provided by the programmer (a "commit message" in Git lingo), the name of the author, the date of the commit, etc.
Unique identifiers are an important part of the metadata which version control systems attach to changesets. Centralized version control systems, such as Subversion and CVS simply use incrementing numbers as identifiers. Distributed version control systems, such as Git, generate a unique identifier by applying a cryptographic hash function to the changeset.
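Git's identifiers can be reproduced outside Git. For example, the SHA-1 name of a blob object is the hash of a short header followed by the content; commit objects (changesets) are hashed the same way over the tree, parents, author, date and message. The sketch below covers only the blob case:

```python
import hashlib

def git_blob_hash(content: bytes) -> str:
    """SHA-1 identifier Git assigns to a blob object:
    sha1(b"blob <size>\\0" + content)."""
    header = b"blob %d\0" % len(content)
    return hashlib.sha1(header + content).hexdigest()

print(git_blob_hash(b"hello\n"))
# ce013625030ba8dba906f756967f9e9ca394464a -- the same identifier
# produced by `echo hello | git hash-object --stdin`
```

Because the identifier is a function of the content (and, for commits, of all ancestry), two repositories independently arrive at the same identifier for the same changeset, which is what makes distributed identification work without a central counter.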
Best practices
Because version control systems operate on changesets as atomic units, and because clear communication improves development-team performance, there are certain best practices to follow when creating changesets. Only the two most significant are mentioned here: changeset content atomicity and changeset descriptions.
Changeset content should involve only one task or fix, and contain only code which works and does not knowingly break existing functionality.
Changeset descriptions should be short, recording why the modification was made, the modification's effect or purpose, and |
https://en.wikipedia.org/wiki/Ectodysplasin%20A%20receptor | Ectodysplasin A receptor (EDAR) is a protein that in humans is encoded by the EDAR gene. EDAR is a cell surface receptor for ectodysplasin A which plays an important role in the development of ectodermal tissues such as the skin. It is structurally related to members of the TNF receptor superfamily.
Function
EDAR and other genes provide instructions for making proteins that work together during embryonic development. These proteins form part of a signaling pathway that is critical for the interaction between two cell layers, the ectoderm and the mesoderm. In the early embryo, these cell layers form the basis for many of the body's organs and tissues. Ectoderm-mesoderm interactions are essential for the proper formation of several structures that arise from the ectoderm, including the skin, hair, nails, teeth, and sweat glands.
Clinical significance
Mutations in this gene have been associated with hypohidrotic ectodermal dysplasia, a disorder characterized by a lower density of sweat glands.
Derived EDAR allele
A derived G-allele point mutation (SNP) with pleiotropic effects in EDAR, 370A or rs3827760, found in ancient and modern East Asians, Southeast Asians, Nepalese and Native Americans but not common in African or European populations, is thought to be one of the key genes responsible for a number of differences between these populations, including the thicker hair, more numerous sweat glands, smaller breasts, and the Sinodont dentition (so-called shovel incisors) characteristic of East Asians.
A 2013 study suggested that the EDAR variant (370A) arose about 35,000 years ago in central China, during a period when the region was quite warm and humid. A subsequent study from 2021, based on ancient DNA samples, suggested that the derived variant became dominant among "Ancient Northern East Asians" shortly after the Last Glacial Maximum in Northeast Asia, around 19,000 years ago. Ancient remains from Northern East Asia, such as the Tianyuan Man (40,000 ye
https://en.wikipedia.org/wiki/Semicircle | In mathematics (and more specifically geometry), a semicircle is a one-dimensional locus of points that forms half of a circle. It is a circular arc that measures 180° (equivalently, radians, or a half-turn). It has only one line of symmetry (reflection symmetry).
In non-technical usage, the term "semicircle" is sometimes used to refer to either a closed curve that also includes the diameter segment from one end of the arc to the other or to the half-disk, which is a two-dimensional geometric region that further includes all the interior points.
By Thales' theorem, any triangle inscribed in a semicircle with a vertex at each of the endpoints of the semicircle and the third vertex elsewhere on the semicircle is a right triangle, with a right angle at the third vertex.
All lines intersecting the semicircle perpendicularly are concurrent at the center of the circle containing the given semicircle.
Uses
A semicircle can be used to construct the arithmetic and geometric means of two lengths using straight-edge and compass. For a semicircle with a diameter of a + b, the length of its radius is the arithmetic mean of a and b (since the radius is half of the diameter).
The geometric mean can be found by dividing the diameter into two segments of lengths a and b, and then connecting their common endpoint to the semicircle with a segment perpendicular to the diameter. The length of the resulting segment is the geometric mean. This can be proven by applying the Pythagorean theorem to three similar right triangles, each having as vertices the point where the perpendicular touches the semicircle and two of the three endpoints of the segments of lengths a and b.
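The construction can be checked numerically. Using our own coordinates (center the circle of radius r = (a + b)/2 at the origin, with the diameter on the x-axis), the perpendicular erected where the diameter splits into lengths a and b meets the semicircle at height √(ab):

```python
import math

def geometric_mean_height(a: float, b: float) -> float:
    """Height of the perpendicular chord in a semicircle of
    diameter a + b, erected at the point dividing it into a and b."""
    r = (a + b) / 2        # radius = arithmetic mean of a and b
    x = a - r              # division point, measured from the center
    return math.sqrt(r * r - x * x)  # height of the circle above x

a, b = 4.0, 9.0
print(geometric_mean_height(a, b))  # 6.0
print(math.sqrt(a * b))             # 6.0, the geometric mean
```

The identity r² − (a − r)² = a(2r − a) = ab is exactly the Pythagorean-theorem step in the proof sketched above.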
The construction of the geometric mean can be used to transform any rectangle into a square of the same area, a problem called the quadrature of a rectangle. The side length of the square is the geometric mean of the side lengths of the rectangle. More generally, it is used as a lemma in a general method for tra |
https://en.wikipedia.org/wiki/The%20Dancing%20Wu%20Li%20Masters | The Dancing Wu Li Masters is a 1979 book by Gary Zukav, a popular science work exploring modern physics, and quantum phenomena in particular. It was awarded a 1980 U.S. National Book Award in category of Science. Although it explores empirical topics in modern physics research, The Dancing Wu Li Masters gained attention for leveraging metaphors taken from eastern spiritual movements, in particular the Huayen school of Buddhism with the monk Fazang's treatise on the Golden Lion, to explain quantum phenomena and has been regarded by some reviewers as a New Age work, although the book is mostly concerned with the work of pioneers in western physics down through the ages.
The toneless pinyin phrase Wu Li in the title is most accurately rendered as the Chinese characters 物理, one Chinese translation of the word "physics", in the light of the book's subject matter. This becomes somewhat of a pun, as there are many other Chinese characters that could be rendered as "wu li" in atonal pinyin, and chapters of the book are each titled with alternative translations of Wu Li, such as "Nonsense", "My Way" and "I Clutch My Ideas". Zukav participated as a journalist in a 1976 physics conference of eastern and western scientists at Esalen Institute, California, and he used the occasion as material for his book. At the conference, it was said that the Chinese term for physics is "Wu Li", or "patterns of organic energy." Zukav, among others, conceptualized physics as the dance of the Wu Li Masters – teachers of physical essence. Zukav explains the concept further:
The Wu Li Master dances with his student. The Wu Li Master does not teach, but the student learns. The Wu Li Master always begins at the center, the heart of the matter...
Editions
The Dancing Wu Li Masters: An Overview of the New Physics (1979). New York: William Morrow and Company, hardcover: , paperback: , 352 p.
(1984) Bantam mass market paperback: , 337 p.
(1990) Audio Renaissance audiocassette: (abridged)
(2001) Harpe |
https://en.wikipedia.org/wiki/Secret%20Files%3A%20Tunguska | Secret Files: Tunguska (German: Geheimakte Tunguska) is a 2006 graphic adventure video game developed by German studios Fusionsphere Systems and Animation Arts and published by Deep Silver for Microsoft Windows, Nintendo DS, Wii, iOS, Android, Wii U and Nintendo Switch. The game is the start of the Secret Files trilogy, with a sequel, Secret Files 2: Puritas Cordis, being released in 2008.
Gameplay
The game is viewed from a third person perspective and uses a classic point and click interface. The game features a 'snoop key' tool, which highlights all interactive objects on screen and assists in finding small, easily overlooked objects. Unlike games in the Broken Sword series, however, it is not possible for the player to lose or get into an unwinnable situation. There are also a few parts of the game where players must switch between the main character, Nina Kalenkov, and her boyfriend Max Gruber, to progress.
The Wii version of the game exclusively allows the player to connect a Nunchuk to a Wii Remote and use its analog stick to directly control character movement and also supports cooperative multiplayer in which a second player uses another Wii Remote to easily point out anything that could be crucial to progress.
Plot
One night, Nina's father, Vladimir Kalenkov, is suddenly attacked in his office in a museum in Berlin by a figure in black robes who seemingly possesses psychic powers. A while later, Nina enters the office but finds the place ransacked and her father missing. She calls the police, who refuse to help for bureaucratic reasons, but afterwards detective Kanski arrives at the scene. Nina is finally offered help by Vladimir's assistant, Max Gruber. She returns to her and her father's apartment to find it ransacked too, and is knocked unconscious by an attacker from behind. In the apartment she finds clues about someone named Oleg Kambursky, who is living nearby and had been contacted by her father, and the latter wanted to see him agai
https://en.wikipedia.org/wiki/Bromoviridae | Bromoviridae is a family of viruses. Plants serve as natural hosts. There are six genera in the family.
Taxonomy
The following genera are assigned to the family:
Alfamovirus
Anulavirus
Bromovirus
Cucumovirus
Ilarvirus
Oleavirus
Structure
Viruses in the family Bromoviridae are non-enveloped, with icosahedral and bacilliform geometries. The diameter is around 26-35 nm.
Genomes are linear and segmented, tripartite.
Life cycle
Viral replication is cytoplasmic, and is lysogenic. Entry into the host cell is achieved by penetration. Replication follows the positive-stranded RNA virus replication model, and transcription uses the internal initiation model of subgenomic RNA transcription. The virus exits the host cell by tubule-guided viral movement. Plants serve as the natural host. Transmission routes are mechanical and contact.
https://en.wikipedia.org/wiki/164th%20meridian%20west | The meridian 164° west of Greenwich is a line of longitude that extends from the North Pole across the Arctic Ocean, North America, the Pacific Ocean, the Southern Ocean, and Antarctica to the South Pole.
The 164th meridian west forms a great circle with the 16th meridian east.
From Pole to Pole
Starting at the North Pole and heading south to the South Pole, the 164th meridian west passes through:
{| class="wikitable plainrowheaders"
! scope="col" width="130" | Co-ordinates
! scope="col" | Country, territory or sea
! scope="col" | Notes
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Arctic Ocean
| style="background:#b0e0e6;" |
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Chukchi Sea
| style="background:#b0e0e6;" |
|-
|
! scope="row" |
| Alaska
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Chukchi Sea
| style="background:#b0e0e6;" | Kotzebue Sound
|-
|
! scope="row" |
| Alaska — Seward Peninsula
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Bering Sea
| style="background:#b0e0e6;" | Norton Sound
|-
|
! scope="row" |
| Alaska — Yukon–Kuskokwim Delta
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Bering Sea
| style="background:#b0e0e6;" |
|-
|
! scope="row" |
| Alaska — Unimak Island
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Pacific Ocean
| style="background:#b0e0e6;" |
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Southern Ocean
| style="background:#b0e0e6;" |
|-
|
! scope="row" | Antarctica
| Ross Dependency, claimed by New Zealand
|-
|}
See also
163rd meridian west
165th meridian west |
https://en.wikipedia.org/wiki/Link%20protection | Link protection is designed to safeguard networks from failure. Failures in high-speed networks have always been a concern of utmost importance. A single fiber cut can lead to heavy losses of traffic and protection-switching techniques have been used as the key source to ensure survivability in networks. Survivability can be addressed in many layers in a network and protection can be performed at the physical layer (SONET/SDH, Optical Transport Network), Layer 2 (Ethernet, MPLS) and Layer 3 (IP).
Protection architectures like path protection and link protection safeguard the above-mentioned networks from different kinds of failures. In path protection, a backup path from the source to the destination is used to bypass the failure. In link protection, the end nodes of the failed link initiate the protection: these nodes detect the fault and initiate the protection mechanisms that detour the affected traffic from the failed link onto predetermined reserved paths. Other types of protection are channel, segment, and p-cycle protection.
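The idea of detouring around a single failed link can be sketched with a toy topology. The graph, node names, and search routine below are illustrative assumptions, not part of any particular protection protocol; a breadth-first search stands in for whatever mechanism computes the reserved backup path.

```python
from collections import deque

def backup_path(graph, u, v):
    """Shortest detour from u to v that avoids the direct link (u, v),
    found with breadth-first search. Returns a node list, or None if the
    failed link was the only connection."""
    queue = deque([[u]])
    seen = {u}
    while queue:
        path = queue.popleft()
        node = path[-1]
        for nxt in graph[node]:
            if node == u and nxt == v:
                continue           # never use the failed link itself
            if nxt == v:
                return path + [v]  # BFS: first arrival is a shortest detour
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Toy ring network: if link A-B fails, traffic detours the long way round.
ring = {"A": ["B", "D"], "B": ["A", "C"], "C": ["B", "D"], "D": ["A", "C"]}
print(backup_path(ring, "A", "B"))  # ['A', 'D', 'C', 'B']
```

In a real deployment the backup paths would be precomputed and resources reserved along them, so the end nodes can switch traffic without a path computation at failure time.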
Link Protection in the Optical Transport Layer
In older high-speed transport networks, the SONET layer (also SDH) was the main client of the wavelength-division multiplexing (WDM) layer. For this reason, before WDM protection schemes were defined, SONET protection mechanisms were mainly adopted to guarantee optical network survivability. When the WDM layer was created, the optical network survivability techniques under consideration were mainly based on elements of SONET protection, in order to ensure maximum compatibility with the legacy SONET systems. Hence some of the WDM-layer protection techniques are very similar to SONET/SDH protection techniques in the case of ring networks.
Ring-Based protection
In the case of a link or network failure, the simplest mechanism for network survivability is automatic protection switching (APS). APS techniques involve reserving a protection channel (dedicated or shared) w |
https://en.wikipedia.org/wiki/Standard-Model%20Extension | Standard-Model Extension (SME) is an effective field theory that contains the Standard Model, general relativity, and all possible operators that break Lorentz symmetry.
Violations of this fundamental symmetry can be studied within this general framework. CPT violation implies the breaking of Lorentz symmetry,
and the SME includes operators that both break and preserve CPT symmetry.
Development
In 1989, Alan Kostelecký and Stuart Samuel proved that interactions in string theories could lead to the spontaneous breaking of Lorentz symmetry. Later studies have indicated that loop-quantum gravity, non-commutative field theories, brane-world scenarios, and random dynamics models also involve the breakdown of Lorentz invariance. Interest in Lorentz violation has grown rapidly in the last decades because it can arise in these and other candidate theories for quantum gravity. In the early 1990s, it was shown in the context of bosonic superstrings that string interactions can also spontaneously break CPT symmetry. This work
suggested that experiments with kaon interferometry would be promising for seeking possible signals of CPT violation due to their high sensitivity.
The SME was conceived to facilitate experimental investigations of Lorentz and CPT symmetry, given the theoretical motivation for violation of these symmetries. An initial step, in 1995, was the introduction of effective interactions.
Although Lorentz-breaking interactions are motivated by constructs such as string theory, the low-energy effective action appearing in the SME is independent of the underlying theory. Each term in the effective theory involves the expectation of a tensor field in the underlying theory. These coefficients are small due to Planck-scale suppression, and in principle are measurable in experiments. The first case considered the mixing of neutral mesons, because their interferometric nature makes them highly sensitive to suppressed effects.
In 1997 and 1998, two papers by Don Collad |
https://en.wikipedia.org/wiki/Urachus | The urachus is a fibrous remnant of the allantois, a canal that drains the urinary bladder of the fetus that joins and runs within the umbilical cord. The fibrous remnant lies in the space of Retzius, between the transverse fascia anteriorly and the peritoneum posteriorly.
Development
The part of the urogenital sinus related to the bladder and urethra absorbs the ends of the Wolffian ducts and the associated ends of the renal diverticula. This gives rise to the trigone of the bladder and part of the prostatic urethra.
The remainder of this part of the urogenital sinus forms the body of the bladder and part of the prostatic urethra. The apex of the bladder stretches and is connected to the umbilicus as a narrow canal. This canal is initially open, but later closes as the urachus goes on to definitively form the median umbilical ligament.
Clinical significance
Failure of the lumen of the urachus to close leaves the urachus open. The telltale sign is leakage of urine through the umbilicus. This is often managed surgically. There are four anatomical causes:
Urachal cyst: there is no longer a connection between the bladder and the umbilicus, however a fluid filled cavity with uroepithelium lining persists between these two structures.
Urachal fistula: there is free communication between the bladder and umbilicus
Urachal diverticulum (vesicourachal diverticulum): the bladder exhibits outpouching
Urachal sinus: the pouch opens toward the umbilicus
The urachus is also subject to neoplasia. Urachal adenocarcinoma is histologically similar to adenocarcinoma of the bowel. Rarely, urachus carcinomas can metastasise to other regions of the body, including pelvic bones and the lung.
One urachal mass has been reported that was found to be a manifestation of IgG4-related disease.
Additional images |
https://en.wikipedia.org/wiki/Batrachology | Batrachology is the branch of zoology concerned with the study of amphibians including frogs and toads, salamanders, newts, and caecilians. It is a sub-discipline of herpetology, which also includes non-avian reptiles (snakes, lizards, amphisbaenids, turtles, terrapins, tortoises, crocodilians, and the tuatara). Batrachologists may study the evolution, ecology, ethology, or anatomy of amphibians.
Amphibians are cold-blooded vertebrates largely found in damp habitats, although many species have special behavioural adaptations that allow them to live in deserts, trees, underground, and in regions with wide seasonal variations in temperature. There are over 7,250 species of amphibians.
Notable batrachologists
Jean Marius René Guibé
Gabriel Bibron
Oskar Boettger
George Albert Boulenger
Edward Drinker Cope
François Marie Daudin
Franz Werner
Leszek Berger |
https://en.wikipedia.org/wiki/Mand%20%28psychology%29 | Mand is a term that B.F. Skinner used to describe a verbal operant in which the response is reinforced by a characteristic consequence and is therefore under the functional control of relevant conditions of deprivation or aversive stimulation. One cannot determine, based on form alone, whether a response is a mand; it is necessary to know the kinds of variables controlling a response in order to identify a verbal operant. A mand is sometimes said to "specify its reinforcement" although this is not always the case. Skinner introduced the mand as one of six primary verbal operants in his 1957 work, Verbal Behavior.
Chapter three of Skinner's work, Verbal Behavior, discusses a functional relationship called the mand. A mand is a form of verbal behavior that is controlled by deprivation, satiation, or what is now called motivating operations (MO), as well as a controlling history. An example of this would be asking for water when one is water deprived ("thirsty"). It is tempting to say that a mand describes its reinforcer, which it sometimes does. But many mands have no correspondence to the reinforcer. For example, a loud knock may be a mand "open the door" and a servant may be called by a hand clap as much as a child might "ask for milk."
Mands differ from other verbal operants in that they primarily benefit the speaker, whereas other verbal operants function primarily for the benefit of the listener. This is not to say that mands function exclusively in favor of the speaker, however; Skinner gives the example of the advice "Go west!" as having the potential to yield consequences that will be reinforcing to both speaker and listener. When warnings such as "Look out!" are heeded, the listener may avoid aversive stimulation.
The Lamarre & Holland (1985) study on mands would be one example of a research study in this area.
Dynamic properties
The mand form, being under the control of deprivation and stimulation, will vary in energy level. Dynamic qualities are to b |
https://en.wikipedia.org/wiki/Methylisothiazolinone | Methylisothiazolinone, MIT, or MI, is the organic compound with the formula S(CH)2C(O)NCH3. It is a white solid. Isothiazolinones, a class of heterocycles, are used as biocides in numerous personal care products and other industrial applications. MIT and related compounds have attracted much attention for their allergenic properties, e.g. contact dermatitis.
Preparation
It is prepared by cyclization of cis-N-methyl-3-thiocyanoacrylamide:
NCSCH=CHC(O)NHCH3 -> SCH=CHC(O)NCH3 + HCN
Applications
Methylisothiazolinone is used for controlling microbial growth in water-containing solutions. It is typically used in a formulation with 5-chloro-2-methyl-4-isothiazolin-3-one (CMIT), in a 3:1 mixture (CMIT:MIT) sold commercially as Kathon. Kathon is supplied to manufacturers as a concentrated stock solution containing from 1.5 to 15% of CMIT/MIT.
Kathon also has been used to control slime in the manufacture of paper products that contact food. In addition, this product serves as an antimicrobial agent in latex adhesives and in paper coatings that also contact food.
Hazards
MIT is allergenic and cytotoxic, and this has led to some concern over its use. A report released by the European Scientific Committee on Cosmetic Products and Non-food Products Intended for Consumers (SCCNFP) in 2003 also concluded that insufficient information was available to allow for an adequate risk assessment analysis of MIT.
Rising reports of consumer impact led to new research, including a report released in 2014 by the European Commission Scientific Committee on Consumer Safety which reported:
"The dramatic rise in the rates of reported cases of contact allergy to MI, as detected by diagnostic patch tests, is unprecedented in Europe; there have been repeated warnings about the rise. The increase is primarily caused by increasing consumer exposure to MI from cosmetic products; exposures to MI in household products, paints and in the occupational setting also need to be considered. The delay |
https://en.wikipedia.org/wiki/Linear%20least%20squares | Linear least squares (LLS) is the least squares approximation of linear functions to data.
It is a set of formulations for solving statistical problems involved in linear regression, including variants for ordinary (unweighted), weighted, and generalized (correlated) residuals.
Numerical methods for linear least squares include inverting the matrix of the normal equations and orthogonal decomposition methods.
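The normal-equations approach mentioned above can be sketched in a few lines of NumPy. The data here are synthetic values chosen so the true coefficients are exactly recoverable; in practice a QR- or SVD-based solver such as `numpy.linalg.lstsq` is preferred over forming the normal equations explicitly, for numerical stability.

```python
import numpy as np

# Synthetic noise-free data from the line y = 1 + 2x.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 1.0 + 2.0 * x
X = np.column_stack([np.ones_like(x), x])  # design matrix with intercept column

# Normal equations: (X^T X) beta = X^T y.
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(beta_hat)  # ≈ [1. 2.]
```

Solving the linear system rather than inverting X^T X avoids an unnecessary and less stable explicit matrix inverse.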
Formulations
The three main linear least squares formulations are:
Ordinary least squares (OLS) is the most common estimator. OLS estimates are commonly used to analyze both experimental and observational data. The OLS method minimizes the sum of squared residuals, and leads to a closed-form expression for the estimated value of the unknown parameter vector β: β̂ = (XᵀX)⁻¹Xᵀy, where y is a vector whose ith element is the ith observation of the dependent variable, and X is a matrix whose ij element is the ith observation of the jth independent variable. The estimator is unbiased and consistent if the errors have finite variance and are uncorrelated with the regressors: E[xᵢεᵢ] = 0, where xᵢᵀ is the transpose of row i of the matrix X. It is also efficient under the assumption that the errors have finite variance and are homoscedastic, meaning that E[εᵢ²|xᵢ] does not depend on i. The condition that the errors are uncorrelated with the regressors will generally be satisfied in an experiment, but in the case of observational data, it is difficult to exclude the possibility of an omitted covariate z that is related to both the observed covariates and the response variable. The existence of such a covariate will generally lead to a correlation between the regressors and the response variable, and hence to an inconsistent estimator of β. The condition of homoscedasticity can fail with either experimental or observational data. If the goal is either inference or predictive modeling, the performance of OLS estimates can be poor if multicollinearity is present, unless the sample size is large.
Weighted |
https://en.wikipedia.org/wiki/Non-mevalonate%20pathway | The non-mevalonate pathway—also appearing as the mevalonate-independent pathway and the 2-C-methyl-D-erythritol 4-phosphate/1-deoxy-D-xylulose 5-phosphate (MEP/DOXP) pathway—is an alternative metabolic pathway for the biosynthesis of the isoprenoid precursors isopentenyl pyrophosphate (IPP) and dimethylallyl pyrophosphate (DMAPP). The currently preferred name for this pathway is the MEP pathway, since MEP is the first committed metabolite on the route to IPP.
Isoprenoid precursor biosynthesis
The mevalonate pathway (MVA pathway or HMG-CoA reductase pathway) and the MEP pathway are metabolic pathways for the biosynthesis of the isoprenoid precursors IPP and DMAPP. Whereas plants use both the MVA and MEP pathways, most organisms use only one of the pathways for the biosynthesis of isoprenoid precursors. In plant cells, IPP/DMAPP biosynthesis via the MEP pathway takes place in plastid organelles, while biosynthesis via the MVA pathway takes place in the cytoplasm. Most gram-negative bacteria, the photosynthetic cyanobacteria, and green algae use only the MEP pathway. Bacteria that use the MEP pathway include important pathogens such as Mycobacterium tuberculosis.
IPP and DMAPP serve as precursors for the biosynthesis of isoprenoid (terpenoid) molecules used in processes as diverse as protein prenylation, cell membrane maintenance, the synthesis of hormones, protein anchoring and N-glycosylation in all three domains of life. In photosynthetic organisms MEP-derived precursors are used for the biosynthesis of photosynthetic pigments, such as the carotenoids and the phytol chain of chlorophyll and light harvesting pigments.
Bacteria such as Escherichia coli have been engineered for co-expressing biosynthesis genes of both the MEP and the MVA pathway. Distribution of the metabolic fluxes between the MEP and the MVA pathway can be studied using 13C-glucose isotopomers.
Reactions
The reactions of the MEP pathway are as follows, taken primarily from Eisenreich and co-workers, excep |
https://en.wikipedia.org/wiki/Environmental%20profit%20and%20loss%20account | An environmental profit and loss account (E P&L) is a company's monetary valuation and analysis of its environmental impacts including its business operations and its supply chain from cradle-to-gate.
An E P&L internalizes externalities and monetizes the cost of business to nature by accounting for the ecosystem services a business depends on to operate in addition to the cost of direct and indirect negative impacts on the environment. The primary purpose of an E P&L is to allow managers and stakeholders to see the magnitude of these impacts and where in the supply chain they occur.
The E P&L analysis provides a metric to measure and monitor the footprint of the company's operations and suppliers all the way to the initial raw materials. It is a tool to build awareness of the importance of nature to the sustainability of businesses; enhance visibility across a company's supply chain and deepen understanding to focus sustainability efforts and implement better-informed operational decisions; improve specificity for risk management regarding environmental dependencies and impacts; and support a more holistic view of a company's performance, while bringing clarity and transparency to stakeholders at all levels and identifying new opportunities to enhance the sustainability of a company's products.
Background
Conceived by Puma Chairman, Jochen Zeitz, and launched by Puma and its parent company's sustainability initiative (PPR HOME), the first-ever E P&L was conducted on 2010 data and released in two phases. In May 2011 the valuation of Puma's 2010 Greenhouse Gas Emissions (GHG) and water usage was announced, followed in November 2011, by Puma's overall E P&L, which also included valuation results for other forms of air pollution, land conversion and waste.
Simultaneously, the PPR Group announced in November 2011 that a Group E P&L would be implemented across its Luxury and Sport & Lifestyle brands by 2015.
Methodology
The E P&L and the associated methodology wer |
https://en.wikipedia.org/wiki/Hypholoma%20capnoides | Hypholoma capnoides is an edible mushroom in the family Strophariaceae. Like its poisonous relative H. fasciculare ("sulphur tuft"), H. capnoides grows in clusters on decaying wood, for example in tufts on old tree stumps, in North America, Europe, and Asia.
Edibility
Though edible, the poisonous sulphur tuft is more common in many areas. H. capnoides has greyish gills due to the dark color of its spores, whereas sulphur tuft has greenish gills. It could also perhaps be confused with the deadly Galerina marginata or the good edible Kuehneromyces mutabilis.
Description
Cap: Up to 6 cm in diameter with yellow-to-orange-brownish or matt yellow colour, sometimes viscid.
Gills: Initially pale orangish-yellow, pale grey when mature, later darker purple/brown.
Spore powder: Dark burgundy/brown.
Stipe: Yellowish, somewhat rust-brown below.
Taste: Mild (other Hypholomas mostly have a bitter taste). |
https://en.wikipedia.org/wiki/NNDB | The Notable Names Database (NNDB) is an online database of biographical details of over 40,000 people. Soylent Communications, a sole proprietorship that also hosted the now-defunct Rotten.com, describes NNDB as an "intelligence aggregator" of noteworthy persons, highlighting their interpersonal connections. The Rotten.com domain was registered in 1996 by former Apple and Netscape software engineer Thomas E. Dell, who was also known by his internet alias, "Soylent".
Entries
Each entry has an executive summary with an assessment of the person's notability. It also lists the person's date and cause of death, and risk factors that may affect their life span, such as obesity, cocaine addiction, or dwarfism. Businesspeople and government officials are listed with chronologies of their posts, positions, and board memberships. NNDB has articles on films with user-submitted reviews, discographies of selected music groups, and extensive bibliographies.
NNDB Mapper
The NNDB Mapper, a visual tool for exploring connections between people, was made available in May 2008. It required Adobe Flash 7.
See also
NameBase |
https://en.wikipedia.org/wiki/Nordic%20Labour%20Journal | Nordic Labour Journal is an online magazine published by the Norwegian Work Research Institute in Oslo on commission from the Nordic Council of Ministers. The magazine was launched in 1996. The main focus is the labour market, work environments and labour law within the Nordic models, which are based on collective agreements between unions and employers in cooperation with the authorities.
The editor-in-chief is Björn Lindahl.
The magazine has a Scandinavian version, called Arbeidsliv i Norden, which was established in 1986. |
https://en.wikipedia.org/wiki/Registration%20Data%20Access%20Protocol | The Registration Data Access Protocol (RDAP) is a computer network communications protocol standardized by a working group at the Internet Engineering Task Force in 2015, after experimental developments and thorough discussions. It is a successor to the WHOIS protocol, used to look up relevant registration data from such Internet resources as domain names, IP addresses, and autonomous system numbers.
While WHOIS essentially retrieves free text, RDAP delivers data in a standard, machine-readable JSON format. In order to accomplish this goal, the output of all operative WHOIS servers was analyzed, taking a census of the labels they used. RDAP designers, many of whom are members of number or name registries, strove to keep the protocol as simple as possible, since complexity was considered one of the reasons why previous attempts, such as CRISP, failed. RDAP is based on RESTful web services, so that error codes, user identification, authentication, and access control can be delivered through HTTP.
The biggest delay in getting RDAP done turned out to be the bootstrap: figuring out where the server is for each top-level domain, IP range, or ASN range. IANA agreed to host the bootstrap information in suitable registries and publish it at well-known URLs in JSON format. Those registries started empty and will be gradually populated as registrants of domains and address spaces provide RDAP server information to IANA. For number registries, ARIN set up a public RDAP service which also features a bootstrap URL, similar to what they do for WHOIS. For name registries, ICANN has required RDAP compliance since 2013.
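A client consults the bootstrap registry to map a domain's TLD to an RDAP base URL. The sketch below parses a JSON document modeled on the format of IANA's DNS bootstrap registry; the server URLs and TLD groupings in `BOOTSTRAP_JSON` are illustrative examples, not a live copy of the registry.

```python
import json

# Illustrative snippet in the shape of IANA's RDAP bootstrap registry;
# the URLs and TLD lists are made up for this example.
BOOTSTRAP_JSON = """
{
  "version": "1.0",
  "services": [
    [["com", "net"], ["https://rdap.example.net/rdap/"]],
    [["org"],        ["https://rdap.example.org/"]]
  ]
}
"""

def rdap_server_for(domain, bootstrap_text):
    """Return the RDAP base URL serving the domain's TLD, or None."""
    tld = domain.rstrip(".").rsplit(".", 1)[-1].lower()
    bootstrap = json.loads(bootstrap_text)
    for tlds, urls in bootstrap["services"]:
        if tld in tlds:
            return urls[0]  # a client would then GET <url>domain/<name>
    return None

print(rdap_server_for("example.com", BOOTSTRAP_JSON))
```

Once the base URL is known, the lookup itself is a plain HTTPS GET returning the standard JSON response format.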
Number resources
RDAP databases for assigned IP numbers are maintained by five Regional Internet registries. ARIN maintains a bootstrap database. Thanks to the standard document format, tasks such as, for example, getting the abuse team address of a given IP number can be accomplished in a fully automated manner.
Name resources
RDAP databases fo |
https://en.wikipedia.org/wiki/Test%20plan | A test plan is a document detailing the objectives, resources, and processes for a specific test session for a software or hardware product. The plan typically contains a detailed understanding of the eventual workflow.
Test plans
A test plan documents the strategy that will be used to verify and ensure that a product or system meets its design specifications and other requirements. A test plan is usually prepared by or with significant input from test engineers.
Depending on the product and the responsibility of the organization to which the test plan applies, a test plan may include a strategy for one or more of the following:
Design verification or compliance test – to be performed during the development or approval stages of the product, typically on a small sample of units.
Manufacturing test or production test – to be performed during preparation or assembly of the product in an ongoing manner for purposes of performance verification and quality control.
Acceptance test or commissioning test – to be performed at the time of delivery or installation of the product.
Service and repair test – to be performed as required over the service life of the product.
Regression test – to be performed on an existing operational product, to verify that existing functionality was not negatively affected when other aspects of the environment were changed (e.g., upgrading the platform on which an existing application runs).
A complex system may have a high-level test plan to address the overall requirements and supporting test plans to address the design details of subsystems and components.
Test plan document formats can be as varied as the products and organizations to which they apply. There are three major elements that should be described in the test plan: test coverage, test methods, and test responsibilities. These are also used in a formal test strategy.
Test coverage
Test coverage in the test plan states what requirements will be verified during what stag |
https://en.wikipedia.org/wiki/Magnesium%20hydroxide | Magnesium hydroxide is the inorganic compound with the chemical formula Mg(OH)2. It occurs in nature as the mineral brucite. It is a white solid with low solubility in water. Magnesium hydroxide is a common component of antacids, such as milk of magnesia.
Preparation
Treating the solution of different soluble magnesium salts with alkaline water induces the precipitation of the solid hydroxide Mg(OH)2:
Mg2+ + 2 OH− → Mg(OH)2
As Mg2+ is the second most abundant cation present in seawater after Na+, it can be economically extracted directly from seawater by alkalinisation as described above. On an industrial scale, Mg(OH)2 is produced by treating seawater with lime (Ca(OH)2). A volume of about 600 m3 of seawater gives about one tonne of Mg(OH)2. Ca(OH)2 is far more soluble than Mg(OH)2 and drastically increases the pH value of seawater from 8.2 to 12.5. The less soluble Mg(OH)2 precipitates because of the common ion effect due to the OH− added by the dissolution of Ca(OH)2:
Mg2+ + Ca(OH)2 → Mg(OH)2 + Ca2+
Uses
Precursor to MgO
Most Mg(OH)2 that is produced industrially, as well as the small amount that is mined, is converted to fused magnesia (MgO). Magnesia is valuable because it is both a poor electrical conductor and an excellent thermal conductor.
Medical
Only a small amount of the magnesium from magnesium hydroxide is usually absorbed by the intestine (unless one is deficient in magnesium). However, magnesium is mainly excreted by the kidneys; so long-term, daily consumption of milk of magnesia by someone suffering from kidney failure could lead in theory to hypermagnesemia. Unabsorbed magnesium is excreted in feces; absorbed magnesium is rapidly excreted in urine.
Applications
Antacid
As an antacid, magnesium hydroxide is dosed at approximately 0.5–1.5g in adults and works by simple neutralization, in which the hydroxide ions from the Mg(OH)2 combine with acidic H+ ions (or hydronium ions) produced in the form of hydrochloric acid by parietal cells in the stomach, to produce water.
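The neutralization stoichiometry can be checked with a short calculation. The 1.0 g dose below is an illustrative value within the 0.5–1.5 g range quoted above, and the overall reaction assumed is Mg(OH)2 + 2 HCl → MgCl2 + 2 H2O.

```python
# Back-of-the-envelope antacid stoichiometry: each mole of Mg(OH)2
# supplies two moles of OH-, so it neutralizes two moles of HCl.
MOLAR_MASS_MGOH2 = 24.305 + 2 * (15.999 + 1.008)  # g/mol, ~58.32

dose_g = 1.0                          # illustrative dose
mol_mgoh2 = dose_g / MOLAR_MASS_MGOH2
mol_hcl = 2 * mol_mgoh2               # moles of stomach acid neutralized
print(round(mol_hcl, 4))              # ~0.0343 mol
```

So a 1 g dose can neutralize on the order of 0.03 mol of hydrochloric acid, which is consistent with the simple-neutralization mechanism described above.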
Laxative
As a laxative, |
https://en.wikipedia.org/wiki/Current%20density | In electromagnetism, current density is the amount of charge per unit time that flows through a unit area of a chosen cross section. The current density vector is defined as a vector whose magnitude is the electric current per cross-sectional area at a given point in space, its direction being that of the motion of the positive charges at this point. In SI base units, the electric current density is measured in amperes per square metre.
Definition
Assume that ΔS (SI unit: m2) is a small surface centred at a given point M and orthogonal to the motion of the charges at M. If ΔI (SI unit: A) is the electric current flowing through ΔS, then the electric current density j at M is given by the limit:
j = lim(ΔS→0) ΔI/ΔS,
with surface ΔS remaining centred at M and orthogonal to the motion of the charges during the limit process.
The current density vector j is the vector whose magnitude is the electric current density, and whose direction is the same as the motion of the positive charges at M.
At a given time t, if v is the velocity of the charges at M, and dA is an infinitesimal surface centred at M and orthogonal to v, then during an amount of time dt, only the charge contained in the volume formed by dA and v dt will flow through dA. This charge is equal to dq = ρ ||v|| dt dA, where ρ is the charge density at M. The electric current is I = dq/dt = ρ ||v|| dA; it follows that the current density vector is the vector normal to dA (i.e. parallel to v) with magnitude ||j|| = ρ ||v||, that is, j = ρ v.
The surface integral of j over a surface S, followed by an integral over the time duration t1 to t2, gives the total amount of charge flowing through the surface in that time (t2 − t1):
q = ∫ from t1 to t2 ∬S j · dA dt.
More concisely, this is the integral of the flux of j across S between t1 and t2.
The area required to calculate the flux is real or imaginary, flat or curved, either as a cross-sectional area or a surface. For example, for charge carriers passing through an electrical conductor, the area is the cross-section of the conductor, at the section considered.
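For a uniform current in a straight conductor, the magnitude of the current density reduces to J = I/A with A the cross-sectional area. The 3 A current and 1 mm wire radius below are illustrative values, not taken from the text.

```python
import math

# Magnitude of the current density J = I / A for a uniform current
# in a round wire of radius r.
current = 3.0     # A (illustrative)
radius = 1.0e-3   # m (illustrative, 1 mm)

area = math.pi * radius**2   # cross-sectional area, m^2
j = current / area           # current density, A/m^2
print(f"{j:.3e}")            # ~9.549e+05 A/m^2
```

Even a modest household-scale current thus corresponds to a current density near a million amperes per square metre at the scale of a thin wire.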
The vector area is a combination of the magnitude of the area through which the charge |
https://en.wikipedia.org/wiki/Toomre%27s%20stability%20criterion | In astrophysics, Toomre's stability criterion (also known as the Safronov–Toomre criterion) is a relationship between parameters of a differentially rotating, gaseous accretion disc which can be used to determine approximately whether the system is stable. In the case of a stationary gas, the Jeans stability criterion can be used to compare the strength of gravity with that of thermal pressure. In the case of a differentially rotating disk, the shear force can provide an additional stabilizing force.
The Toomre criterion for a disk to be stable can be expressed as
c_s κ / (π G Σ) > 1,
where c_s is the speed of sound (a measure of the thermal pressure), κ is the epicyclic frequency, G is Newton's gravitational constant, and Σ is the surface density of the disk.
The Toomre Q parameter is often defined as the left-hand side of this inequality,
Q ≡ c_s κ / (π G Σ).
The stability criterion can then simply be stated as Q > 1 for a disk to be stable against collapse.
The previous discussion was for a gaseous disk, but a similar analysis can be applied to a disk of stars (for example, the disk of a galaxy), yielding a kinematic Q parameter,
Q_stars ≡ σ_r κ / (3.36 G Σ),
where σ_r is the radial velocity dispersion, and κ is the local epicyclic frequency.
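The gaseous criterion can be evaluated numerically. The sound speed, epicyclic frequency, and surface density below are illustrative, roughly solar-neighbourhood-like values chosen for this sketch, not measurements from the text.

```python
import math

# Toomre Q for a gas disk, Q = c_s * kappa / (pi * G * Sigma), in SI units.
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
c_s = 7.0e3      # sound speed, m/s (illustrative)
kappa = 1.1e-15  # epicyclic frequency, s^-1 (illustrative)
sigma = 0.021    # surface density, kg/m^2 (~10 solar masses per pc^2)

Q = c_s * kappa / (math.pi * G * sigma)
print(round(Q, 2), "stable" if Q > 1 else "unstable")
```

With these numbers Q comes out somewhat above 1, i.e. marginally stable, which matches the common statement that galactic disks tend to self-regulate near Q ≈ 1–2.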
Background
Many astrophysical objects result from the gravitational collapse of gaseous objects (for example, star formation occurs when molecular clouds collapse under gravity), and thus the stability of gaseous systems is of great interest. In general, a physical system is 'stable' if: 1) It is in equilibrium (there is a balance of forces such that the system is static), and 2) small deviations from equilibrium will tend to damp out, so that the system tends to return to equilibrium.
The most basic gravitational stability analysis is the Jeans criteria, which addresses the balance between self-gravity and thermal pressure in a gas. In terms of the two above stability conditions, the system is stable if: i) thermal pressure balances the force of gravity, and ii) if the system is compressed slightly, th |
https://en.wikipedia.org/wiki/Phobotaxis | Phobotaxis is a random behavioral response to all forms of aversive stimuli. A positive phobic response is one in which either activity is increased or the organism moves toward the stimulus, while a negative phobic response is one in which activity is decreased or the organism moves away from the stimulus. On the bacterial level, phobotaxis is regularly seen in accordance with phototaxis, movement in response to light. In the proteobacterium Rhodospirillum rubrum, the presence of ferric ion does not create a favorable wavelength of light for physiological activity. This elicits a positive photophobotactic response in which the bacterium moves towards blue and near-UV light. While the phobic response is classified as a photophobotactic response, the photochemical product of the ferric complex in the medium acts as a chemical stimulus, making this an example of chemotaxis as well. In the eukaryote Euglena, positive phototaxis and positive phobotaxis exhibit nearly the same action spectra, providing more evidence for their association. There also exists evidence to support photophobotaxis being coupled with the electron transport needed in photosynthesis for two specific algae: Phormidium uncinatum and Ph. autumnale. While there is not much evidence of phobotaxis in response to tactile stimuli, there is evidence to suggest species will respond in ways that maximize necessary resources such as food. An experiment that simulated trail movements of trace fossils in the Ediacaran-Cambrian transition showed that organisms that engaged in phobotaxis, as in avoiding trails which indicate already exploited areas, gained more resources and had higher search efficiency. This foraging for resources involves changes in patchiness, which combines gravitaxis, movement in response to changes in gravity, and chemoreception to identify the spatial pattern of odors and move in response to chemical gradients. |
https://en.wikipedia.org/wiki/El%20Ni%C3%B1o%E2%80%93Southern%20Oscillation | El Niño–Southern Oscillation (ENSO) is an irregular periodic variation in winds and sea surface temperatures over the tropical eastern Pacific Ocean, affecting the climate of much of the tropics and subtropics. The warming phase of the sea temperature is known as El Niño and the cooling phase as La Niña. The Southern Oscillation is the accompanying atmospheric component, coupled with the sea temperature change: El Niño is accompanied by high air surface pressure in the tropical western Pacific and La Niña with low air surface pressure there. The two periods last several months each and typically occur every few years with varying intensity per period.
The two phases relate to the Walker circulation, which was discovered by Gilbert Walker during the early twentieth century. The Walker circulation is caused by the pressure gradient force that results from a high-pressure area over the eastern Pacific Ocean, and a low-pressure system over Indonesia. Weakening or reversal of the Walker circulation (which includes the trade winds) decreases or eliminates the upwelling of cold deep sea water, thus creating El Niño by causing the ocean surface to reach above average temperatures. An especially strong Walker circulation causes La Niña, resulting in cooler ocean temperatures due to increased upwelling.
Mechanisms that cause the oscillation remain under study. The extremes of this climate pattern's oscillations cause extreme weather (such as floods and droughts) in many regions of the world. Developing countries dependent upon agriculture and fishing, particularly those bordering the Pacific Ocean, are the most affected.
Outline
The El Niño–Southern Oscillation is a single climate phenomenon that periodically fluctuates between three phases: Neutral, La Niña or El Niño. La Niña and El Niño are opposite phases which require certain changes to take place in both the ocean and the atmosphere before an event is declared.
Normally the northward flowing Humboldt Current brings |
https://en.wikipedia.org/wiki/Body%20of%20constant%20brightness | In convex geometry, a body of constant brightness is a three-dimensional convex set all of whose two-dimensional projections have equal area. A sphere is a body of constant brightness, but others exist. Bodies of constant brightness are a generalization of curves of constant width, but are not the same as another generalization, the surfaces of constant width.
The name comes from interpreting the body as a shining body with isotropic luminance, then a photo (with focus at infinity) of the body taken from any angle would have the same total light energy hitting the photo.
Properties
A body has constant brightness if and only if the reciprocal Gaussian curvatures at pairs of opposite points of tangency of parallel supporting planes have almost-everywhere-equal sums.
According to an analogue of Barbier's theorem, all bodies of constant brightness that have the same projected area as each other also have the same surface area, equal to four times the common projected area. This can be proved by the Crofton formula.
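The surface-area statement can be traced to Cauchy's surface area formula; the following is a sketch under standard convex-geometry conventions, not a quotation from the source:

```latex
\[
  S(K) \;=\; 4\,\overline{A}(K)
  \qquad\text{(Cauchy's surface area formula)},
\]
where $\overline{A}(K)$ is the average area of the orthogonal projections of
the convex body $K$ over all directions. If $K$ has constant brightness,
every projection has the same area $A$, so $\overline{A}(K) = A$ and
$S(K) = 4A$, the same value for every such body with projection area $A$.
```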
Example
The first known body of constant brightness that is not a sphere was constructed by Wilhelm Blaschke in 1915. Its boundary is a surface of revolution of a curved triangle (but not the Reuleaux triangle). It is smooth except on a circle and at one isolated point where it is crossed by the axis of revolution. The circle separates two patches of different geometry from each other: one of these two patches is a spherical cap, and the other forms part of a football, a surface of constant Gaussian curvature with a pointed tip. Pairs of parallel supporting planes to this body have one plane tangent to a singular point (with reciprocal curvature zero) and the other tangent to one of the two patches, which both have the same curvature. Among bodies of revolution of constant brightness, Blaschke's shape (also called the Blaschke–Firey body) is the one with minimum volume, and the sphere is the one with maximum volume.
Additional examples can be obtained by combining multiple bodies of constan |
https://en.wikipedia.org/wiki/Michael%20Guy | Michael J. T. Guy (born 1 April 1943) is a British computer scientist and mathematician. He is known for early work on computer systems, such as the Phoenix system at the University of Cambridge, and for contributions to number theory, computer algebra, and the theory of polyhedra in higher dimensions. He worked closely with John Horton Conway, and is the son of Conway's collaborator Richard K. Guy.
Mathematical work
With Conway, Guy found the complete solution to the Soma cube of Piet Hein. Also with Conway, an enumeration led to the discovery of the grand antiprism, an unusual uniform polychoron in four dimensions. The two had met at Gonville and Caius College, Cambridge, where Guy was an undergraduate student from 1960 and Conway was a graduate student. It was through Michael that Conway met Richard Guy, who would become a co-author of works in combinatorial game theory. With Conway, Michael Guy made numerous particular contributions to geometry, number theory, and game theory, often published in problem selections by Richard Guy. Some of these are recreational mathematics, others contributions to discrete mathematics. They also worked on the sporadic groups.
Guy began work as a research student of J. W. S. Cassels at the Department of Pure Mathematics and Mathematical Statistics (DPMMS), Cambridge. He did not complete a Ph.D., but joint work with Cassels produced numerical examples on the Hasse principle for cubic surfaces.
Computer science
He subsequently went into computer science. He worked on the filing system for Titan, Cambridge's Atlas 2, being one of a team of four in one office including Roger Needham. In working on ALGOL 68, he was co-author with Stephen R. Bourne of ALGOL 68C.
Bibliography
Notes |
https://en.wikipedia.org/wiki/Altos%20586 | The Altos 586 was a multi-user microcomputer intended for the business market. It was introduced by Altos Computer Systems in 1983. A configuration with 512 kB of RAM, an Intel 8086 processor, Microsoft Xenix, and 10 MB hard drive cost about US$8,000. 3Com offered this Altos 586 product as a file server for their IBM PC networking solution in spring 1983. The network was 10BASE2 (thin-net) based, with an Ethernet AUI port on the Altos 586.
Reception
BYTE in August 1984 called the Altos 586 "an excellent multiuser UNIX system", with "the best performance" for the price among small Unix systems. The magazine reported that an Altos with 512 kB RAM and 40 MB hard drive "under moderate load approaches DEC VAX performance for most tasks that a user would normally invoke". A longer review in March 1985 stated that "despite some bugs, it's a good product". It criticized the documentation and lack of customer service for developers, but praised the multiuser performance. The author reported that his 586 had run a multiuser bulletin board system 24 hours a day for more than two years with no hardware failures. He concluded that "Very few UNIX or XENIX computers can provide all of the features of the 586 for $8990", especially for multiuser turnkey business users.
See also
Fortune XP 20 |
https://en.wikipedia.org/wiki/Almadroc | Almadroc is a garlic-cheese sauce from medieval Catalan cuisine, recorded in the Llibre de Sent Soví. The Llibre del Coch by Rupert de Nola contains a similar recipe for almadrote, a sauce made with garlic, eggs, cheese and broth that was served with partridge. In modern usage the name refers to an oil, garlic and cheese sauce served with eggplant casserole.
Almadrote may have pre-Inquisition Sephardic origins, and served with eggplant it has become widespread in modern Turkish cuisine.
Earlier recipes for almodrote may date back to recipes for a type of green sauce called moretum (believed to be the etymological origin of almodrote) in the Apicius, or the name may be of Arabic etymology. Concerning the latter, the evidence is based on modern lexicographical studies; the term is not used for any of the sauces found in the 13th-century Andalusian cookbook, but it may be derived from the Arabic term almaṭrúq, meaning "to pound".
During the Inquisition in the late 15th century, Fray Gonçalo Bringuylla was reported to have eaten "eggplant with almadrote".
In a later 16th century cookbook the name of the sauce is altered to capirotada, though garlic is only present in one variation of the sauce. Modern Sephardic cookbooks commonly contain recipes for vegetable casseroles with almodrot and almodrote. These recipes may include white cheese or feta, sometimes combined with a firmer cheese like gruyere, and eggs to thicken the sauce. |
https://en.wikipedia.org/wiki/Superluminescent%20diode | A superluminescent diode (SLED or SLD) is an edge-emitting semiconductor light source based on superluminescence. It combines the high power and brightness of laser diodes with the low coherence of conventional light-emitting diodes. Its emission optical bandwidth, also described as full-width at half maximum, can range from 5 up to 750 nm.
History
The superluminescent diode was reported for the first time by Kurbatov et al. (1971) and Lee, Burrus, and Miller (1973). By 1986, Dr. Gerard A. Alphonse at RCA Laboratories (now SRI International) had invented a novel design enabling high-power superluminescent diodes. This light source was developed as a key component in the next generations of fibre optic gyroscopes, low-coherence tomography for medical imaging, and external-cavity tunable lasers with applications to fiber-optic communications. In 1989 the technology was transferred to GE-RCA in Canada, which became a division of EG&G.
Superluminescent light-emitting diodes are also sometimes called superluminescent diodes, superluminescence diodes or superluminescent LEDs.
Principles of operation
A superluminescent light emitting diode is, similar to a laser diode, based on an electrically driven p-n junction that, when biased in forward direction, becomes optically active and generates amplified spontaneous emission over a wide range of wavelengths. The peak wavelength and the intensity of the SLED depend on the active material composition and on the injection current level. SLEDs are designed to have high single pass amplification for the spontaneous emission generated along the waveguide but, unlike laser diodes, insufficient feedback to achieve lasing action. This is obtained very successfully through the joint action of a tilted waveguide and anti-reflection coated (ARC) facets.
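The low temporal coherence that makes SLEDs useful for applications such as low-coherence tomography can be quantified with the standard Gaussian-spectrum relation between bandwidth and coherence length. The formula and the example numbers below (an 840 nm source with 50 nm bandwidth) are a common illustration, not values from the original text:

```python
import math

def coherence_length(center_wavelength_m: float, fwhm_bandwidth_m: float) -> float:
    """Coherence length of a Gaussian-spectrum source, in metres:
    L_c = (2 ln 2 / pi) * lambda^2 / delta_lambda."""
    return (2 * math.log(2) / math.pi) * center_wavelength_m**2 / fwhm_bandwidth_m

# Illustrative SLED parameters: 840 nm centre wavelength, 50 nm FWHM bandwidth.
lc = coherence_length(840e-9, 50e-9)
print(f"coherence length ~ {lc * 1e6:.1f} micrometres")
```

A broader emission bandwidth gives a shorter coherence length, which is why SLEDs (with tens to hundreds of nm of bandwidth) sit between narrow-band laser diodes and broadband LEDs in coherence properties.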
When an electrical forward voltage is applied, an injection current across the active region of the SLED is generated. Like most semiconductor devices, a SLED consists of a positive ( |
https://en.wikipedia.org/wiki/OpenSonATA | Open SonATA stands for Open SETI on the Allen Telescope Array; it is the open-source version of the software used for signal detection by the SETI Institute on the Allen Telescope Array (ATA). The software currently runs on Linux and macOS operating systems and is intended to be ported to multiple platforms. The Allen Telescope Array uses the openSUSE operating system on the SonATA computers.
Before releasing the code to the public, setiQuest had to find all instances of code that conflicted with the GPL license under which they intended to release it.
With the release of Open SonATA 2.1, setiQuest released the source code to the public under the GPL License. setiQuest has included "ways to help" in their documentation of the software. The source code can be found in setiQuest's GitHub repository.
Open SonATA is closely related to the setiQuest project. |
https://en.wikipedia.org/wiki/HERAS-AF |
The HERASAF Project
HERASAF is a well-established open-source XACML 2.0 implementation.
It provides an extended implementation of the XACML 2.0 standard.
It is freely available, established, and built on modern, standards-based technologies. |
https://en.wikipedia.org/wiki/Mobile%20journalism | Mobile journalism is a form of multimedia newsgathering and storytelling that enables journalists to document, edit and share news using small, network connected devices like smartphones.
Mobile journalists report in video, audio, photography, and graphics using apps on their portable devices.
Such reporters, sometimes known as mojos (for mobile journalist), are staff or freelance journalists who may use digital cameras and camcorders, laptop PCs, smartphones or tablet devices. A broadband wireless connection, satellite phone, or cellular network is then used to transmit the story and imagery for publication.
The term mojo has been in use since 2005, originating at the Fort Myers News-Press and then gaining popularity throughout the Gannett newspaper chain in the United States.
Some key benefits of mobile journalism in comparison to conventional methods include affordability, portability, discretion, approachability, and the ease of access for beginners.
History
One of the first recorded instances of mobile journalism comes from wearable-technology pioneer Steve Mann, as a feature in a personal visual assistant that he designed; he identified himself as a roving reporter.
In the beginning, he faced concerns from the press about privacy. He responded with a guest column in The Tech of MIT on July 24, 1996, "Wearcam Helps Address Privacy Issue". In the column, he stated that he was wearing his experimental eyeglass to bring awareness to the huge and growing number of surveillance cameras watching over citizens' activities. He also stated in the article that he "exercises deference to others," and that many of the photos he took were "architecture details, experiments in light and shade, posed shots done at the request of those in the picture".
Every year, hundreds of mobile journalists attend mobile journalism conferences. One of these is MojoFest, which has been organized in association with RTE, the national public services broadcaster of Ireland.
Edi |
https://en.wikipedia.org/wiki/Thurston%20norm | In mathematics, the Thurston norm is a function on the second homology group of an oriented 3-manifold introduced by William Thurston, which measures in a natural way the topological complexity of homology classes represented by surfaces.
Definition
Let $M$ be a differentiable manifold and $c \in H_2(M;\mathbb{Z})$. Then $c$ can be represented by a smooth embedding $S \to M$, where $S$ is a (not necessarily connected) surface that is compact and without boundary. The Thurston norm of $c$ is then defined to be
$\|c\|_T = \min_S \sum_i \chi_-(S_i)$,
where the minimum is taken over all embedded surfaces $S = \bigsqcup_i S_i$ (the $S_i$ being the connected components) representing $c$ as above, and $\chi_-$ is the absolute value of the Euler characteristic for surfaces which are not spheres (and 0 for spheres).
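As a concrete illustration of the quantity being minimized (a standard fact, not stated in the original): for a closed orientable surface of genus $g$,

```latex
\[
  \chi(S_g) = 2 - 2g,
  \qquad
  \chi_-(S_g) = \max\bigl(0,\; 2g - 2\bigr),
\]
```

so sphere ($g=0$) and torus ($g=1$) components contribute nothing to the sum, while a genus-2 component contributes 2.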
This function satisfies the following properties:
$\|c_1 + c_2\|_T \le \|c_1\|_T + \|c_2\|_T$ for $c_1, c_2 \in H_2(M;\mathbb{Z})$;
$\|n c\|_T = n\,\|c\|_T$ for $n \in \mathbb{N}$.
These properties imply that $\|\cdot\|_T$ extends to a function on $H_2(M;\mathbb{Q})$ which can then be extended by continuity to a seminorm on $H_2(M;\mathbb{R})$. By Poincaré duality, one can define the Thurston norm on $H^1(M;\mathbb{R})$.
When $M$ is compact with boundary, the Thurston norm is defined in a similar manner on the relative homology group $H_2(M, \partial M;\mathbb{R})$ and its Poincaré dual $H^1(M;\mathbb{R})$.
It follows from further work of David Gabai that one can also define the Thurston norm using only immersed surfaces. This implies that the Thurston norm is also equal to half the Gromov norm on homology.
Topological applications
The Thurston norm was introduced in view of its applications to fiberings and foliations of 3-manifolds.
The unit ball $B_T$ of the Thurston norm of a 3-manifold $M$ is a polytope with integer vertices. It can be used to describe the structure of the set of fiberings of $M$ over the circle: if $M$ can be written as the mapping torus of a diffeomorphism $\phi$ of a surface $S$, then the embedded surface $S \subset M$ represents a class in a top-dimensional (or open) face of $B_T$; moreover, all other integer points on the same face are also fibers in such a fibration.
Embedded surfaces which minimise the Thurston norm in their homology class are exactly the closed leaves of foliations of $M$.
Notes |
https://en.wikipedia.org/wiki/Fecal%20impaction | A fecal impaction or an impacted bowel is a solid, immobile bulk of feces that can develop in the rectum as a result of chronic constipation (a related term is fecal loading which refers to a large volume of stool in the rectum of any consistency). Fecal impaction is a common result of neurogenic bowel dysfunction and causes immense discomfort and pain. Its treatment includes laxatives, enemas, and pulsed irrigation evacuation (PIE) as well as digital removal. It is not a condition that resolves without direct treatment.
Signs and symptoms
Symptoms of a fecal impaction include the following:
Chronic constipation
Fecal incontinence: paradoxical overflow diarrhea (encopresis) as a result of liquid stool passing around the obstruction
Abdominal pain and bloating
Loss of appetite
Complications may include necrosis and ulcers of the rectal tissue, which if untreated can cause death.
Causes
There are many possible causes; these include a long period of physical inactivity, failure to consume adequate dietary fiber, dehydration, and deliberate retention of fecal matter.
Medications such as fentanyl, buprenorphine, methadone, codeine, oxycodone, hydrocodone, morphine, and hydromorphone as well as certain sedatives that reduce intestinal movement may cause fecal matter to become too large, hard and/or dry to expel.
Specific conditions, such as irritable bowel syndrome, certain neurological disorders, paralytic ileus, gastroparesis, diabetes, enlarged prostate gland, distended colon, an ingested foreign object, inflammatory bowel diseases such as Crohn's disease and colitis, and autoimmune diseases such as amyloidosis, celiac disease, lupus, and scleroderma can cause a fecal impaction. Hypothyroidism can also cause chronic constipation because of sluggish, slower, or weaker colon contractions. Iron supplements or increased blood calcium levels are also potential causes. Spinal cord injury is a common cause of constipation, due to ileus.
Prevention
Reducing opiat |
https://en.wikipedia.org/wiki/Ruble%20sign | The ruble sign, ₽, is the currency sign used for the Russian ruble, the official currency of Russia. Its form is a Cyrillic letter Р with an additional horizontal stroke. The design was approved on 11 December 2013 after a public poll that took place a month earlier.
In Russian orthography, the sign almost always follows the number (the monetary value), and in many cases there is a space between the two. In English orthography, it usually precedes the number.
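The placement conventions can be shown with the sign's Unicode code point (U+20BD); the formatter below is a hypothetical illustration, not an official locale rule:

```python
# The ruble sign is Unicode code point U+20BD.
RUBLE = "\u20bd"

def format_rub(amount: int, style: str = "ru") -> str:
    """Illustrative formatter: Russian orthography puts the sign after
    the number (with a space); English orthography puts it before."""
    if style == "ru":
        return f"{amount} {RUBLE}"
    return f"{RUBLE}{amount}"

print(format_rub(500))        # 500 ₽
print(format_rub(500, "en"))  # ₽500
```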
History
In the 18th and 19th centuries there was a symbol for the Russian ruble consisting of lower case Cyrillic letters — a rotated р on a у. In the 20th century р. was used to abbreviate the ruble.
The debates about adopting a national currency symbol for the Russian ruble began nearly from the start of Russia's transition to a market economy and its economic integration into the global market in the 1990s, soon after the dissolution of the Soviet Union. The idea was to reach the same level of recognition and therefore of influence as well-known currency signs such as $ (the US dollar), € (the euro), ¥ (the Chinese yuan or the Japanese yen) and £ (the Pound sterling). There were several contests to choose the ruble sign, hosted by different organizations. However, the Central Bank of Russia did not adopt one of the winning symbols from these early contests.
In 2007, a group of Russian design bureaus and studios proposed to use ₽, the stroked Cyrillic letter Р to represent the ruble. Soon after, many electronic retailers, restaurants and cafés started to use the sign unofficially. It became very popular and was widely used as a de facto standard.
In November 2013, the Central Bank of Russia finally decided to adopt a national currency sign. It placed a public poll on its website with five pre-chosen options.
The design provided earlier by the design community that was informally yet widely used (₽) was on the poll's list and got the most votes. On 11 December 2013, ₽ was approved as the official si |
https://en.wikipedia.org/wiki/Isometamidium%20chloride | Isometamidium chloride is a triazene trypanocidal agent used in veterinary medicine.
It consists of a single ethidium bromide like subunit linked to a fragment of the diminazene molecule.
Resistance
T. congolense isolated from Boran cattle in the Gibe River Valley in southwest Ethiopia showed universal resistance between July 1989 and February 1993, likely indicating a permanent loss of the drug's efficacy against this target in the area. |
https://en.wikipedia.org/wiki/Properties%20of%20nonmetals%20%28and%20metalloids%29%20by%20group | Nonmetals show more variability in their properties than do metals. Metalloids are included here since they behave predominantly as chemically weak nonmetals.
Physically, they nearly all exist as diatomic or monatomic gases, or polyatomic solids having more substantial (open-packed) forms and relatively small atomic radii, unlike metals, which are nearly all solid and close-packed, and mostly have larger atomic radii. If solid, they have a submetallic appearance (with the exception of sulfur) and are brittle, as opposed to metals, which are lustrous, and generally ductile or malleable; they usually have lower densities than metals; are mostly poorer conductors of heat and electricity; and tend to have significantly lower melting points and boiling points than those of most metals.
Chemically, the nonmetals mostly have higher ionisation energies, higher electron affinities (nitrogen and the noble gases have negative electron affinities) and higher electronegativity values than metals noting that, in general, the higher an element's ionisation energy, electron affinity, and electronegativity, the more nonmetallic that element is. Nonmetals, including (to a limited extent) xenon and probably radon, usually exist as anions or oxyanions in aqueous solution; they generally form ionic or covalent compounds when combined with metals (unlike metals, which mostly form alloys with other metals); and have acidic oxides whereas the common oxides of nearly all metals are basic.
Properties
Abbreviations used in this section are: AR Allred-Rochow; CN coordination number; and MH Mohs hardness
Group 1
Hydrogen is a colourless, odourless, and comparatively unreactive diatomic gas with a density of 8.988 × 10−5 g/cm3 and is about 14 times lighter than air. It condenses to a colourless liquid at −252.879 °C and freezes into an ice- or snow-like solid at −259.16 °C. The solid form has a hexagonal crystalline structure and is soft and easily crushed. Hydrogen is an insulator in all of |
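The "about 14 times lighter than air" figure can be checked from the densities; the sea-level air density used below (1.225 × 10⁻³ g/cm³) is an assumed standard value, not from the text:

```python
h2_density = 8.988e-5   # g/cm^3, hydrogen gas (value given in the text)
air_density = 1.225e-3  # g/cm^3, dry air at sea level (standard value)

ratio = air_density / h2_density
print(f"air is about {ratio:.1f} times denser than hydrogen")  # ~13.6
```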
https://en.wikipedia.org/wiki/Countercurrent%20exchange | Countercurrent exchange is a mechanism occurring in nature and mimicked in industry and engineering, in which there is a crossover of some property, usually heat or some chemical, between two flowing bodies flowing in opposite directions to each other. The flowing bodies can be liquids, gases, or even solid powders, or any combination of those. For example, in a distillation column, the vapors bubble up through the downward flowing liquid while exchanging both heat and mass.
The maximum amount of heat or mass transfer that can be obtained is higher with countercurrent than co-current (parallel) exchange because countercurrent maintains a slowly declining difference or gradient (usually a temperature or concentration difference). In cocurrent exchange the initial gradient is higher but falls off quickly, leading to wasted potential. For example, in the adjacent diagram, the fluid being heated (exiting top) has a higher exiting temperature than the cooled fluid (exiting bottom) that was used for heating. With cocurrent or parallel exchange the temperatures of the heated and cooled fluids can only approach one another. The result is that countercurrent exchange can achieve a greater amount of heat or mass transfer than parallel exchange under otherwise similar conditions.
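The advantage can be made quantitative with the standard effectiveness-NTU relations for a balanced heat exchanger (equal heat-capacity rates on both sides); this is an illustrative sketch, not drawn from the original text:

```python
import math

def effectiveness_parallel(ntu: float) -> float:
    """Effectiveness of a parallel-flow exchanger with equal capacity rates."""
    return (1 - math.exp(-2 * ntu)) / 2

def effectiveness_counter(ntu: float) -> float:
    """Effectiveness of a counter-flow exchanger with equal capacity rates."""
    return ntu / (1 + ntu)

for ntu in (0.5, 1.0, 2.0, 5.0):
    p, c = effectiveness_parallel(ntu), effectiveness_counter(ntu)
    print(f"NTU={ntu}: parallel {p:.2f}  counter {c:.2f}")
```

Counter-flow transfers more heat at every NTU and approaches 100% effectiveness as the exchanger grows, whereas parallel flow saturates at 50% in this balanced case, matching the "wasted potential" described above.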
See: flow arrangement.
Countercurrent exchange when set up in a circuit or loop can be used for building up concentrations, heat, or other properties of flowing liquids. Specifically when set up in a loop with a buffering liquid between the incoming and outgoing fluid running in a circuit, and with active transport pumps on the outgoing fluid's tubes, the system is called a countercurrent multiplier, enabling a multiplied effect of many small pumps to gradually build up a large concentration in the buffer liquid.
Other countercurrent exchange circuits where the incoming and outgoing fluids touch each other are used for retaining a high concentration of a dissolved substance or for retaining heat, or for allowing the external bu |
https://en.wikipedia.org/wiki/The%204%20Percent%20Universe | The 4 Percent Universe: Dark Matter, Dark Energy, and the Race to Discover the Rest of Reality is a nonfiction book by writer and professor Richard Panek and published by Houghton Mifflin Harcourt on January 10, 2011.
In October 2011, the Nobel Prize in Physics was awarded to Saul Perlmutter, Brian Schmidt, and Adam Riess, three of the main figures discussed in the book for the primary discovery that is the topic of The 4 Percent Universe.
Content
The book's title comes from the scientific puzzle of how ordinary matter makes up only four percent of the mass–energy in the universe, with the rest consisting of mysterious dark matter and dark energy that are both invisible and almost impossible to detect. It is due to dark matter that galaxies are able to keep their shape, with the mass of dark matter creating enough gravitational force to hold the stars that make up a galaxy together. Dark energy, however, is a substance or force responsible for the accelerating expansion of the universe over time.
The significant focus of The 4 Percent Universe is on the developments of astronomical science in the 20th century, including the formation of the expanding universe theory by Edwin Hubble in the 1930s. This model, when used in conjunction with Albert Einstein's general relativity helped in the creation of the Big Bang model and the later discovery of the cosmic background radiation in the 1960s. In following this history, Panek also discusses the flaws and missing pieces in the theories and the quest by two major scientific groups to discover the reason for the expansion of the universe not matching the models as expected. The book discusses the science behind the idea of dark matter being made up of weakly interacting massive particles and how scientists tried to determine the existence of dark energy from the 1990s and onward. The two groups involved in this research were the Supernova Cosmology Project headed by Saul Perlmutter and the High-Z Supernova Search |
https://en.wikipedia.org/wiki/Monovalent%20cation%3Aproton%20antiporter-3 | The Monovalent Cation (K+ or Na+):Proton Antiporter-3 (CPA3) Family (TC# 2.A.63) is a member of the Na+ transporting Mrp superfamily. The CPA3 family consists of bacterial multicomponent K+:H+ and Na+:H+ antiporters. The best characterized systems are the PhaABCDEFG system of Sinorhizobium meliloti (TC# 2.A.63.1.1) that functions in pH adaptation and as a K+ efflux system, and the MnhABCDEFG system of Staphylococcus aureus (TC# 2.A.63.1.3) that functions as a Na+ efflux Na+:H+ antiporter.
Homology
A homologous, but only partially sequenced, system was earlier reported to catalyze Na+:H+ antiport in an alkalophilic Bacillus strain. PhaA and PhaD are respectively homologous to the ND5 and ND4 subunits of the H+-pumping NADH:ubiquinone oxidoreductase (TC #3.D.1). Homologous protein subunits from E. coli NADH:quinone oxidoreductase can functionally replace MrpA and MrpD in Bacillus subtilis.
Homologues of PhaA, B, C and D and Nha1, 2, 3 and 4 of an alkalophilic Bacillus strain are the Yuf(Mrp)T, U, V and D genes of Bacillus subtilis. In this system, YufT is believed to be responsible for Na+:H+ antiporter activity, but it does not have activity in the absence of other constituents of the operon.
Structure
The seven Pha proteins have the following sizes (in amino acid residues) and exhibit the following putative numbers of transmembrane α-helical spanners (TMSs):
PhaA - 725 and 17
PhaB - 257 and 5
PhaC - 115 and 3
PhaD - 547 and 13
PhaE - 161 and 3
PhaF - 92 and 3
PhaG - 120 and 3
All are predicted to be integral membrane proteins.
Corresponding values for the S. aureus Mnh system are:
MnhA - 801 and 18
MnhB - 142 and 4
MnhC - 113 and 3
MnhD - 498 and 13
MnhE - 159 and 4
MnhF - 97 and 3
MnhG - 118 and 3
In view of the complexity of the system, large variation in subunit structure, and the homology with NDH family protein constituents, a complicated energy coupling mechanism, possibly involving a redox reaction, cannot be ruled out.
Function
Na+ or Li+ |
https://en.wikipedia.org/wiki/Truthful%20resource%20allocation | Truthful resource allocation is the problem of allocating resources among agents with different valuations over the resources, such that agents are incentivized to reveal their true valuations over the resources.
Model
There are m resources that are assumed to be homogeneous and divisible. Examples are:
Materials, such as wood or metal;
Virtual resources, such as CPU time or computer memory;
Financial resources, such as shares in firms.
There are n agents. Each agent has a function that attributes a numeric value to each "bundle" (combination of resources).
It is often assumed that the agents' value functions are linear, so that if an agent receives a fraction rj of each resource j, then his/her value is the sum over j of rj·vj.
Design goals
The goal is to design a truthful mechanism, that will induce the agents to reveal their true value functions, and then calculate an allocation that satisfies some fairness and efficiency objectives. The common efficiency objectives are:
Pareto efficiency (PE);
Utilitarian social welfare --- defined as the sum of agents' utilities. An allocation maximizing this sum is called utilitarian or max-sum; it is always PE.
Nash social welfare --- defined as the product of agents' utilities. An allocation maximizing this product is called Nash-optimal or max-product or proportionally-fair; it is always PE. When agents have additive utilities, it is equivalent to the competitive equilibrium from equal incomes.
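With linear utilities, the efficiency objectives above are easy to state in code. The valuations and allocation below are made-up numbers for illustration:

```python
from math import prod

# v[i][j]: value of agent i for receiving all of resource j
v = [[3.0, 1.0],
     [1.0, 3.0]]
# r[i][j]: fraction of resource j allocated to agent i (columns sum to 1)
r = [[0.75, 0.25],
     [0.25, 0.75]]

def utility(i: int) -> float:
    """Linear utility: sum of r[i][j] * v[i][j] over resources j."""
    return sum(r[i][j] * v[i][j] for j in range(len(v[i])))

utils = [utility(i) for i in range(len(v))]
utilitarian = sum(utils)   # max-sum objective
nash = prod(utils)         # max-product (Nash) objective
print(utils, utilitarian, nash)
```

In this symmetric example each agent gets utility 2.5, so the utilitarian welfare is 5.0 and the Nash welfare is 6.25; a mechanism targeting either objective would compare candidate allocations by these numbers.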
The most common fairness objectives are:
Equal treatment of equals (ETE) --- if two agents have exactly the same utility function, then they should get exactly the same utility.
Envy-freeness --- no agent should envy another agent. It implies ETE.
Egalitarian welfare --- defined as the minimum of agents' utilities; maximizing it yields an egalitarian (max-min) allocation. |
https://en.wikipedia.org/wiki/Jordan%20and%20Einstein%20frames | The Lagrangian in scalar-tensor theory can be expressed in the Jordan frame, in which the scalar field or some function of it multiplies the Ricci scalar, or in the Einstein frame, in which the Ricci scalar is not multiplied by the scalar field. There exist various transformations between these frames. Although these frames have been around for some time, there has been debate about whether either, both, or neither frame is a 'physical' frame which can be compared to observations and experiment.
Christopher Hill and Graham Ross have shown that there exist "gravitational contact terms" in the Jordan frame, whereby the action is modified by graviton exchange. This modification leads back to the Einstein frame as the effective theory.
Contact interactions arise in Feynman diagrams when a vertex contains a power of the exchanged momentum, $q^2$, which then cancels against the Feynman propagator, $1/q^2$, leading to a point-like interaction. This must be included as part of the effective action of the theory. When the contact term is included, results for amplitudes in the Jordan frame will be equivalent to those in the Einstein frame, and results of physical calculations in the Jordan frame that omit the contact terms will generally be incorrect. This implies that the Jordan frame action is misleading, and the Einstein frame is uniquely correct for fully representing the physics.
Equations and physical interpretation
If we perform the Weyl rescaling g̃_μν = Ω²(x) g_μν, then the Riemann and Ricci tensors are modified as follows.
As an example, consider the transformation of a simple scalar-tensor action with an arbitrary set of matter fields coupled minimally to the curved background.
The tilde fields then correspond to quantities in the Jordan frame and the fields without the tilde correspond to fields in the Einstein frame. Note that the matter action changes only in the rescaling of the metric.
The Jordan and Einstein frames are constructed to render certain parts of physical |
https://en.wikipedia.org/wiki/Protein%20methylation | Protein methylation is a type of post-translational modification featuring the addition of methyl groups to proteins. It can occur on the nitrogen-containing side-chains of arginine and lysine, but also at the amino- and carboxy-termini of a number of different proteins. In biology, methyltransferases catalyze the methylation process, with S-adenosylmethionine (SAM) serving as the primary methyl-group donor.
Protein methylation has been most studied in histones, where the transfer of methyl groups from S-adenosyl methionine is catalyzed by histone methyltransferases. Histones that are methylated on certain residues can act epigenetically to repress or activate gene expression.
Methylation by substrate
Multiple sites of proteins can be methylated. For some types of methylation, such as N-terminal methylation and prenylcysteine methylation, additional processing is required, whereas other types of methylation such as arginine methylation and lysine methylation do not require pre-processing.
Arginine
Arginine can be methylated once (monomethylated arginine) or twice (dimethylated arginine). Methylation of arginine residues is catalyzed by three different classes of protein arginine methyltransferases (PRMTs): Type I PRMTs (PRMT1, PRMT2, PRMT3, PRMT4, PRMT6, and PRMT8) attach two methyl groups to a single terminal nitrogen atom, producing asymmetric dimethylarginine (N^G,N^G-dimethylarginine). In contrast, type II PRMTs (PRMT5 and PRMT9) catalyze the formation of symmetric dimethylarginine with one methyl group on each terminal nitrogen (symmetric N^G,N′^G-dimethylarginine). Type I and II PRMTs both generate N^G-monomethylarginine intermediates; PRMT7, the only known type III PRMT, produces only monomethylated arginine.
Arginine-methylation usually occurs at glycine and arginine-rich regions referred to as "GAR motifs", which is likely due to the enhanced flexibility of these regions that enables insertion of arginine into the PRMT active site. Nevertheless, PRMTs with non-GAR consensus sequ |
https://en.wikipedia.org/wiki/CompTox%20Chemicals%20Dashboard | The CompTox Chemicals Dashboard is a freely accessible online database created and maintained by the U.S. Environmental Protection Agency (EPA). The database provides access to multiple types of data including physicochemical properties, environmental fate and transport, exposure, usage, in vivo toxicity, and in vitro bioassay. EPA and other scientists use the data and models contained within the dashboard to help identify chemicals that require further testing and reduce the use of animals in chemical testing. The Dashboard is also used to provide public access to information from EPA Action Plans, e.g. around perfluorinated alkylated substances.
Originally titled the Chemistry Dashboard, the first version was released in 2016. The latest release of the database (version 3.0.5) contains manually curated data for over 875,000 chemicals and incorporates the latest data generated from the EPA's Toxicity Forecaster (ToxCast) high-throughput screening program. The Chemicals Dashboard incorporates data from several previous EPA databases into one package including the ToxCast Dashboard, the Endocrine Disruption Screening Program (EDSP) Dashboard and the Chemical and Products Database (CPDat).
Scope and Access
The CompTox Chemicals Dashboard database contains high quality chemical structures and information that have been extensively curated and quality checked, which can be used as a resource for analytical scientists involved in structure identification.
Chemical hazard data in the dashboard comes from both traditional laboratory animal studies and high-throughput screening. Biological data from high-throughput screening is generated by EPA's ToxCast program; the ToxCast data in the database provides information about the assays used and their response potency and efficacy. These data can be found in the bioactivity tab.
The Chemicals Dashboard can be accessed via a web interface or sets of data within it can be downloaded for use offline. The Lists tab can be use |
https://en.wikipedia.org/wiki/Human%20embryonic%20development | Human embryonic development, or human embryogenesis, is the development and formation of the human embryo. It is characterised by the processes of cell division and cellular differentiation of the embryo that occur during the early stages of development. In biological terms, the development of the human body entails growth from a one-celled zygote to an adult human being. Fertilization occurs when the sperm cell successfully enters and fuses with an egg cell (ovum). The genetic material of the sperm and egg then combine to form the single-cell zygote and the germinal stage of development commences. Embryonic development in the human covers the first eight weeks of development; at the beginning of the ninth week the embryo is termed a fetus.
These eight weeks are divided into 23 Carnegie stages.
Human embryology is the study of this development during the first eight weeks after fertilization. The normal period of gestation (pregnancy) is about nine months or 40 weeks.
The germinal stage refers to the time from fertilization through the development of the early embryo until implantation is completed in the uterus. The germinal stage takes around 10 days. During this stage, the zygote begins to divide, in a process called cleavage. A blastocyst is then formed and implants in the uterus. Embryogenesis continues with the next stage of gastrulation, when the three germ layers of the embryo form in a process called histogenesis, and the processes of neurulation and organogenesis follow.
In comparison to the embryo, the fetus has more recognizable external features and a more complete set of developing organs. The entire process of embryogenesis involves coordinated spatial and temporal changes in gene expression, cell growth and cellular differentiation. A nearly identical process occurs in other species, especially among chordates.
Germinal stage
Fertilization
Fertilization takes place when the spermatozoon has successfully entered the ovum and the two sets of genetic material carried b |
https://en.wikipedia.org/wiki/Managed%20private%20cloud | Managed private cloud (also known as "hosted private cloud") refers to a principle in software architecture where a single instance of the software runs on a server, serves a single client organization (tenant), and is managed by a third party. The third-party provider is responsible for providing the hardware for the server and also for preliminary maintenance. This is in contrast to multitenancy, where multiple client organizations share a single server, or an on-premises deployment, where the client organization hosts its software instance.
Managed private clouds also fall under the larger umbrella of cloud computing.
Adoption
The need for private clouds arose from enterprises requiring a dedicated service and infrastructure for their cloud computing needs, such as business-critical operations, improved security, and better control over their resources. Managed private cloud adoption has been on the rise because enterprises want a dedicated cloud environment while avoiding the management, maintenance, and future upgrade costs of the associated infrastructure and services. Such operational costs are unavoidable in on-premises private cloud data centers.
Advantages and challenges of managed private cloud
A managed private cloud cuts down on upkeep costs by outsourcing infrastructure management and maintenance to the managed cloud provider. It is easier to integrate an organization's existing software, services, and applications into a dedicated cloud hosting infrastructure, which can be customized to the client's needs, than into a public cloud platform, whose hardware and software infrastructure cannot be individualized for each client.
Customers who choose a managed private cloud deployment usually choose them because of their desire for an efficient cloud deployment, but also have the need for service customization or integration only available in a single-tenant envi |
https://en.wikipedia.org/wiki/Perceptual%20quantizer | The perceptual quantizer (PQ), published by SMPTE as SMPTE ST 2084, is a transfer function that allows for HDR display by replacing the gamma curve used in SDR. It is capable of representing luminance levels up to 10,000 cd/m2 (nits) and down to 0.0001 nits. It was developed by Dolby and standardized in 2014 by SMPTE and in 2016 by the ITU in Rec. 2100. The ITU specifies the use of PQ or HLG as transfer functions for HDR-TV. PQ is the basis of HDR video formats (such as Dolby Vision, HDR10 and HDR10+) and is also used for HDR still picture formats. PQ is not backward compatible with the BT.1886 EOTF (i.e. the gamma curve of SDR), while HLG is compatible.
PQ is a non-linear transfer function based on the human visual perception of banding and is able to produce no visible banding in 12 bits. A power function (used as EOTFs in standard dynamic range applications) extended to 10000 cd/m2 would have required 15 bits.
Technical details
The PQ EOTF (electro-optical transfer function) is as follows:
F_D = 10000 · ( max(E′^(1/m₂) − c₁, 0) / (c₂ − c₃·E′^(1/m₂)) )^(1/m₁)
The PQ inverse EOTF is as follows:
E′ = ( (c₁ + c₂·Y^m₁) / (1 + c₃·Y^m₁) )^m₂
where
m₁ = 2610/16384 = 0.1593017578125, m₂ = 2523/4096 × 128 = 78.84375, c₁ = 3424/4096 = 0.8359375 = c₃ − c₂ + 1, c₂ = 2413/4096 × 32 = 18.8515625, c₃ = 2392/4096 × 32 = 18.6875;
E′ is the non-linear signal value, in the range [0:1];
F_D is the displayed luminance in cd/m2;
Y is the normalized linear displayed value, in the range [0:1] (with Y = 1 representing the peak luminance of 10,000 cd/m2).
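Under the definitions above, both directions of the transfer function can be sketched in a few lines of Python. The function and constant names are ours; the rational constant values are the ones published in ST 2084:

```python
import math

# SMPTE ST 2084 constants (exact rational values).
M1 = 2610 / 16384          # ≈ 0.1593017578125
M2 = 2523 / 4096 * 128     # ≈ 78.84375
C1 = 3424 / 4096           # ≈ 0.8359375 (= C3 - C2 + 1)
C2 = 2413 / 4096 * 32      # ≈ 18.8515625
C3 = 2392 / 4096 * 32      # ≈ 18.6875

def pq_eotf(e: float) -> float:
    """Map a non-linear signal value E' in [0, 1] to displayed luminance in cd/m2."""
    ep = e ** (1 / M2)
    y = (max(ep - C1, 0.0) / (C2 - C3 * ep)) ** (1 / M1)
    return 10000.0 * y

def pq_inverse_eotf(fd: float) -> float:
    """Map displayed luminance in cd/m2 (0..10000) to a signal value in [0, 1]."""
    ym = (fd / 10000.0) ** M1
    return ((C1 + C2 * ym) / (1 + C3 * ym)) ** M2
```

The two functions are exact inverses of each other; a signal value of 1.0 maps to the 10,000 cd/m2 peak and 0.0 maps to zero luminance.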
See also
Transfer functions in imaging
High-dynamic-range television |
https://en.wikipedia.org/wiki/Chirikov%20criterion | The Chirikov criterion or Chirikov resonance-overlap criterion was established by the Russian physicist Boris Chirikov. In 1959, he published a seminal article in which he introduced the first physical criterion for the onset of chaotic motion in deterministic Hamiltonian systems. He then applied this criterion to explain puzzling experimental results on plasma confinement in magnetic bottles obtained by Rodionov at the Kurchatov Institute.
Description
According to this criterion, a deterministic trajectory will begin to move between two nonlinear resonances in a chaotic and unpredictable manner in the parameter range S > 1. Here ε is the perturbation parameter, while
S = Δω_r / Δ_d
is the resonance-overlap parameter, given by the ratio of the unperturbed resonance width in frequency Δω_r (often computed in the pendulum approximation and proportional to the square root of the perturbation ε), and the frequency difference Δ_d between two unperturbed resonances. Since its introduction, the Chirikov criterion has become an important analytical tool for the determination of the chaos border.
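The criterion can be illustrated numerically on the Chirikov standard map, where primary resonances (full width 4√K in the pendulum approximation) are spaced 2π apart in momentum, giving the crude overlap estimate S = 2√K/π. The true chaos border of the standard map, K ≈ 0.9716, is lower than this simple estimate predicts. A rough sketch, with all names ours:

```python
import math

def standard_map(theta, p, K, steps):
    """Iterate the Chirikov standard map p' = p + K sin(theta), theta' = theta + p'."""
    traj = [p]
    for _ in range(steps):
        p = p + K * math.sin(theta)
        theta = (theta + p) % (2 * math.pi)
        traj.append(p)
    return traj

def overlap_parameter(K):
    """Crude resonance-overlap estimate: resonance full width 4*sqrt(K)
    (pendulum approximation) divided by the resonance spacing 2*pi."""
    return 4 * math.sqrt(K) / (2 * math.pi)

# Below the chaos border the momentum stays trapped between invariant (KAM)
# curves; well above it the trajectory diffuses across many resonances.
regular = standard_map(2.0, 0.0, 0.5, 10000)   # S < 1: bounded motion
chaotic = standard_map(2.0, 0.0, 5.0, 10000)   # S > 1: unbounded diffusion
```

For K = 0.5 the momentum never leaves the interval (−2π, 2π), while for K = 5 it wanders far beyond it.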
See also
Chirikov criterion at Scholarpedia
Chirikov standard map and standard map
Boris Chirikov and Boris Chirikov at Scholarpedia |
https://en.wikipedia.org/wiki/Solanidine | Solanidine is a poisonous steroidal alkaloid chemical compound that occurs in plants of the family Solanaceae, such as potato and Solanum americanum. Human ingestion of solanidine also occurs via the consumption of the glycoalkaloids, α-solanine and α-chaconine, present in potatoes. The sugar portion of these glycoalkaloids hydrolyses in the body, leaving the solanidine portion. Solanidine occurs in the blood serum of normal healthy people who eat potato, and serum solanidine levels fall markedly once potato consumption ceases. Solanidine from food is also stored in the human body for prolonged periods of time, and it has been suggested that it could be released during times of metabolic stress with the potential for deleterious consequences. Solanidine is responsible for neuromuscular syndromes via cholinesterase inhibition.
Uses
Solanidine is a very important precursor for the synthesis of hormones and some pharmacologically active compounds.
Synthetic uses
Solanidine to 16-DPA conversion
In 1994, Gunic and coworkers reported the electrochemical oxidation of 3β-acetoxy-solanidine in CH3CN/CH2Cl2 1/1 with pyridine as a base. The corresponding iminium salts 2 and 3 were obtained in a 1/1 ratio in good yield. Performing this electrochemical reaction in DCM with pyridine gives 3 in 95% yield, while the same reaction in acetone gives iminium salt 2 in 95% yield. Iminium ion 2 can be isomerized to the thermodynamically more stable enamine 5. This isomerization is believed to proceed via enamine 4, which is the kinetic product.
In 1997, Gaši et al. reported a short procedure for the degradation of solanidine to 16-Dehydropregnenolone acetate. Instead of applying the electrochemical oxidation, Hg(OAc)2 in acetone was used as oxidizing agent. The advantage of this reagent and solvent system was the ease of use and the selective formation of iminium salt 2, which spontaneously isomerized to enamine 3 (94%). This enamine was then subjected to another isomerization, whi |
https://en.wikipedia.org/wiki/Send%20tape%20echo%20echo%20delay | Send tape echo echo delay (more commonly known as STEED, alternatively known as single tape echo and echo delay) is a technique used in magnetic tape sound recording to apply a delay effect using tape loops and echo chambers.
In 2006, while publicising his memoir (Here, There, and Everywhere: My Life Recording the Music of The Beatles), recording engineer Geoff Emerick stated that "God only knows" how the effect worked.
Technique
The technique was developed at EMI/Abbey Road Studios in the late 1950s, by EMI engineer Gwynne Stock. It involved delaying the recorded (dry) signal, sending it into the studio's echo chamber using a tape machine. The dry signal (without delay) was also sent to the chamber via the tape machine's replay head. The resulting sound was picked up by two condenser microphones. These microphones then fed the wet signal back to the recording console. The amount of feedback could be controlled allowing multiple delays to be sent to the reverb chamber, which could lengthen the effect's decay time.
An identical technique was used for the production of Anthology 1 in 1995, where speakers were used to play the sound within the echo chamber.
Use
One notable example of the use of STEED is on George Harrison's lead vocal on "Everybody's Trying to Be My Baby" (1964). Mark Lewisohn describes the effect as a "vast amount", and likened Harrison's vocal to singing inside a tin can. He notes that some of the musical backing tracks were also affected by the technique due to microphone spill from Harrison's headphones. Other examples of the use of STEED on Beatles recordings include the vocal fermata in "Paperback Writer" (1966), and Paul McCartney's piano on "Birthday" (1968).
The effect was also used on "Revolution 9" (1968), and was used in the mixing of tracks for Anthology 1 in 1995.
See also
Artificial double tracking, a technique developed by EMI/Abbey Road's Ken Townsend
Footnotes
Sources
Audio effects
The Beatles music |
https://en.wikipedia.org/wiki/Discrete%20transform | In signal processing, discrete transforms are mathematical transforms, often linear transforms, of signals between discrete domains, such as between discrete time and discrete frequency.
Many common integral transforms used in signal processing have their discrete counterparts. For example, for the Fourier transform the counterpart is the discrete Fourier transform.
In addition to spectral analysis of signals, discrete transforms play an important role in data compression, signal detection, digital filtering and correlation analysis. The discrete cosine transform (DCT) is the most widely used transform coding compression algorithm in digital media, followed by the discrete wavelet transform (DWT).
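As an illustration, the discrete Fourier transform can be implemented directly from its defining sum (a sketch using only the standard library; the function name is ours):

```python
import cmath
import math

def dft(x):
    """Discrete Fourier transform by the defining sum:
    X[k] = sum_n x[n] * exp(-2*pi*i*k*n/N)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

# A length-8 cosine completing exactly one cycle concentrates its energy
# in the two bins k = 1 and k = N - 1, each of magnitude N/2.
x = [math.cos(2 * math.pi * n / 8) for n in range(8)]
X = dft(x)
```

Production code would use a fast Fourier transform (O(N log N)) rather than this O(N²) direct sum, but the direct form makes the discrete-time/discrete-frequency correspondence explicit.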
Transforms between a discrete domain and a continuous domain are not discrete transforms. For example, the discrete-time Fourier transform and the Z-transform, from discrete time to continuous frequency, and the Fourier series, from continuous time to discrete frequency, are outside the class of discrete transforms.
Classical signal processing deals with one-dimensional discrete transforms. Other application areas, such as image processing, computer vision, high-definition television, visual telephony, etc. make use of two-dimensional and in general, multidimensional discrete transforms.
See also |
https://en.wikipedia.org/wiki/744%20%28number%29 | 744 (seven hundred [and] forty-four) is the natural number following 743 and preceding 745.
744 plays a major role within the moonshine theory of sporadic groups, in the context of the classification of finite simple groups.
Number theory
744 is the nineteenth number of the form p³ × q × r, where p, q, and r represent distinct prime numbers (2, 3, and 31, respectively).
It can be represented as the sum of nonconsecutive factorials (6! + 4! = 720 + 24), as the sum of four consecutive primes (179 + 181 + 191 + 193), and as the product of sums of divisors of consecutive integers (σ(15) × σ(16) = 24 × 31).
744 contains sixteen total divisors (fourteen aside from 1 and the number itself), all of which collectively generate an integer arithmetic mean of 120, itself the first number of the form p³ × q × r.
The number partitions of the square of seven (49) into prime parts is 744, as is the number of partitions of 48 into at most four distinct parts.
It is palindromic in septenary (2112₇), while in binary it is a pernicious number, as its digit representation (1011101000₂) contains a prime count (5) of ones.
744 is abundant and semiperfect, as well as practical. It is the first number to be the sum of nine cubes in eight or more ways. In decimal, 744 is the number of six-digit perfect powers.
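The six-digit perfect-power count can be checked by brute force, enumerating every distinct value m^k (k ≥ 2) with six decimal digits (a sketch; variable names are ours):

```python
# Enumerate the distinct six-digit perfect powers m**k with k >= 2.
LO, HI = 10 ** 5, 10 ** 6
powers = set()
for m in range(2, 1000):        # 1000**2 already has seven digits
    value = m * m
    while value < HI:
        if value >= LO:
            powers.add(value)   # the set deduplicates e.g. 2**10 == 4**5
        value *= m
count = len(powers)
```

Counting distinct values (rather than (m, k) pairs) is what matches the statement, since numbers such as 262144 = 512² = 2¹⁸ are perfect powers in several ways.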
Totients
744 has 240 integers that are relatively prime to (coprime with) and less than itself; this count is its Euler totient, φ(744) = 240.
This totient of 744 is regular, like its sum of divisors, where 744 sets the twenty-ninth record for the sum of divisors, σ(744) = 1920. Both the totient and sum-of-divisors values of 744 contain the same set of distinct prime factors (2, 3, and 5), while the Carmichael function or reduced totient (which counts the least common multiple of the orders of the elements in the multiplicative group of integers modulo n) at 744 is equal to λ(744) = 30. 744 is also a Zumkeller number, whose divisors can be partitioned into two disjoint sets with equal sum (960).
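The arithmetic values quoted in this section can be verified by brute force (a sketch; names are ours):

```python
from functools import reduce
from math import gcd

N = 744  # = 2**3 * 3 * 31

# Euler totient by direct count of units modulo N.
phi = sum(1 for k in range(1, N + 1) if gcd(k, N) == 1)

# Divisors, their sum (sigma), and their integer arithmetic mean.
divisors = [d for d in range(1, N + 1) if N % d == 0]
sigma = sum(divisors)

# Carmichael (reduced totient) function: lcm of the multiplicative
# orders of all units modulo N.
def order(k, n):
    m, acc = 1, k % n
    while acc != 1:
        acc = acc * k % n
        m += 1
    return m

lcm = lambda a, b: a * b // gcd(a, b)
lam = reduce(lcm, (order(k, N) for k in range(1, N + 1) if gcd(k, N) == 1))
```

This confirms φ(744) = 240, σ(744) = 1920 over sixteen divisors (mean 120), λ(744) = 30, and the Zumkeller half-sum 1920/2 = 960.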
Of these 240 totatives, 110 are strictly composite totatives that nearly match the sequen |
https://en.wikipedia.org/wiki/Framework%20for%20integrated%20test | Framework for Integrated Test, or "Fit", is an open-source (GNU GPL v2) tool for automated customer tests. It integrates the work of customers, analysts, testers, and developers.
Customers provide examples of how their software should work. Those examples are then connected to the software with programmer-written test fixtures and automatically checked for correctness. The customers' examples are formatted in tables and saved as HTML using ordinary business tools such as Microsoft Excel. When Fit checks the document, it creates a copy and colors the tables green, red, and yellow according to whether the software behaved as expected.
Fit was invented by Ward Cunningham in 2002. He created the initial Java version of Fit. As of June 2005, up-to-date versions existed for Java, C#, Python, Perl, PHP and Smalltalk.
Although Fit is an acronym, the word "Fit" came first, making it a backronym. Fit is sometimes italicized but should not be capitalized. In other words, "Fit" (plain or italicized) is appropriate usage, but "FIT" is not.
Fit includes a simple command-line tool for checking Fit documents. There are third-party front-ends available. Of these, FitNesse is the most popular. FitNesse is a complete IDE for Fit that uses a Wiki for its front end. As of June 2005, FitNesse had forked Fit, making it incompatible with newer versions of Fit, but plans were underway to re-merge with Fit.
See also
YatSpec - a Java testing framework that supersedes Fit
Concordion - a Java testing framework similar to Fit
Endly - a language agnostic and declarative end to end testing framework |
https://en.wikipedia.org/wiki/GE-200%20series | The GE-200 series was a family of small mainframe computers of the 1960s, built by General Electric (GE). GE marketing called the line Compatibles/200 (GE-205/215/225/235). The GE-210 of 1960 was not compatible with the rest of the 200 series.
200 series models
The main machine in the line was the GE-225 (1961). It used a 20-bit word, of which 13 bits could be used for an address. Along with the basic central processing unit (CPU) the system could also have had a floating-point unit (the "Auxiliary Arithmetic Unit"), or a fixed-point decimal option with three six-bit decimal digits per word. It had eleven I/O channel controllers, and GE sold a variety of add-ons including disks, printers, and other devices. The machines were built using discrete transistors, with a typical machine containing about 10,000 transistors and 20,000 diodes. They used magnetic-core memory, and a standard 8 kiloword system held 186,000 magnetic cores. They weighed about .
The GE-215 (1963) was a scaled-down version of the GE-225, including only six I/O channels and only 4 kilowords or 8 kilowords of core.
The GE-205 (1964).
The GE-235 (1964) was a re-implementation of the GE-225 with three times faster memory than the original. The GE-235 consisted of several major components and options:
Central processor
400 card-per-minute (CPM) or 1000 CPM card reader
100 CPM card punch or 300 CPM card punch
Perforated tape subsystem
Magnetic tape subsystem
12-pocket high-speed document handler
On-line high speed printer or off/on-line speed printer
Disc storage unit
Auxiliary Arithmetic Logic Unit (ALU)
DATANET data communications equipment
Background
The series was designed by a team led by Homer R. “Barney” Oldfield, and which included Arnold Spielberg (father of film director Steven Spielberg). GE chairman Ralph J. Cordiner had forbidden GE from entering the general purpose computer business, rejecting several proposals by Oldfield by simply writing "No" across them and sending them |
https://en.wikipedia.org/wiki/Louvain%20method | The Louvain method for community detection is a method to extract non-overlapping communities from large networks, created by Blondel et al. from the University of Louvain (the source of this method's name). The method is a greedy optimization method that appears to run in time O(n · log n), where n is the number of nodes in the network.
Modularity optimization
The inspiration for this method of community detection is the optimization of modularity as the algorithm progresses. Modularity is a scale value between −0.5 (non-modular clustering) and 1 (fully modular clustering) that measures the relative density of edges inside communities with respect to edges outside communities. Optimizing this value theoretically results in the best possible grouping of the nodes of a given network. But because going through all possible iterations of the nodes into groups is impractical, heuristic algorithms are used.
In the Louvain Method of community detection, first small communities are found by optimizing modularity locally on all nodes, then each small community is grouped into one node and the first step is repeated. The method is similar to the earlier method by Clauset, Newman and Moore that connects communities whose amalgamation produces the largest increase in modularity.
Algorithm
The value to be optimized is modularity, defined as a value in the range [−1/2, 1] that measures the density of links inside communities compared to links between communities. For a weighted graph, modularity is defined as:
Q = (1/2m) Σ_ij [ A_ij − k_i k_j/(2m) ] δ(c_i, c_j)
where
A_ij represents the edge weight between nodes i and j;
k_i and k_j are the sum of the weights of the edges attached to nodes i and j, respectively;
m is the sum of all of the edge weights in the graph;
c_i and c_j are the communities of the nodes; and
δ is the Kronecker delta function (δ(x, y) = 1 if x = y, 0 otherwise).
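The modularity sum Q = (1/2m) Σ_ij [A_ij − k_i k_j/(2m)] δ(c_i, c_j) can be evaluated directly for a toy graph. This sketch (the example graph and all names are ours) scores the natural two-community split of two triangles joined by a single bridge edge:

```python
# Two unit-weight triangles {0, 1, 2} and {3, 4, 5} joined by edge (2, 3);
# the two triangles are taken as the communities.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
community = {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b"}
n = 6

A = [[0.0] * n for _ in range(n)]   # adjacency / weight matrix
for i, j in edges:
    A[i][j] = A[j][i] = 1.0
k = [sum(row) for row in A]         # weighted degrees
m = sum(k) / 2                      # total edge weight

# Sum A_ij - k_i*k_j/(2m) over node pairs in the same community, then normalize.
Q = sum(A[i][j] - k[i] * k[j] / (2 * m)
        for i in range(n) for j in range(n)
        if community[i] == community[j]) / (2 * m)
```

For this graph the split scores Q = 5/14 ≈ 0.357, a clearly positive modularity, reflecting that most edges fall inside the two communities.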
Based on the above equation, the modularity of a community c can be calculated as:
Q_c = Σ_in/(2m) − (Σ_tot/(2m))²
where
Σ_in is the sum of edge weights between nodes within the community c (each edge is considered twice); and
Σ_tot is the sum of all edge weights for nodes within the community (including edges which link to other communities).
https://en.wikipedia.org/wiki/Carbon-carbon%20bond%20activation | Carbon-carbon bond activation refers to the breaking of carbon-carbon bonds in organic molecules. This process is an important tool in organic synthesis, as it allows for the formation of new carbon-carbon bonds and the construction of complex organic molecules. However, C–C bond activation is challenging, mainly for the following reasons: (1) C–H bond activation competes with C–C activation and is both energetically and kinetically more favorable; (2) access of the transition metal center to C–C bonds is generally difficult due to their 'hidden' nature; (3) the C–C bond is relatively stable (about 90 kcal/mol). As a result, most early examples of C–C activation involve strained ring systems, in which ring strain raises the energy of the starting material and makes C–C activation more favorable. C–C activation of unstrained C–C bonds remained challenging until the last two decades.
Examples of C-C bond activation
Due to the difficulty of C-C activation, a driving force is required to facilitate the reaction. One common strategy is to form stable metal complexes. One example is reported by Milstein and coworkers, in which the C(sp2)–C(sp3) bond of bisphosphine ligands was selectively cleaved by a number of metals to afford stable pincer complexes under mild conditions.
Aromatization is another driving force that is utilized for C–C bond activation. For example, Chaudret group reported that the C–C bond of steroid compounds can be cleaved through the Ru-promoted aromatization of the B ring. At the same time, a methane molecule is released, which is possibly another driving force for this reaction.
In addition, the metalloradical has also been proven to have the ability to cleave the C–C single bond. Chan group reported the C–C bond scission of cyclooctane via 1,2-addition with Rh(III) porphyrin hydride, which involved [RhII(ttp)]· radical as the key intermediate.
Mechanism of C-C bond activation
Generally speaking, th |
https://en.wikipedia.org/wiki/Weierstrass%E2%80%93Enneper%20parameterization | In mathematics, the Weierstrass–Enneper parameterization of minimal surfaces is a classical piece of differential geometry.
Alfred Enneper and Karl Weierstrass studied minimal surfaces as far back as 1863.
Let f and g be functions on either the entire complex plane or the unit disk, where g is meromorphic and f is analytic, such that wherever g has a pole of order m, f has a zero of order 2m (or equivalently, such that the product fg² is holomorphic), and let c₁, c₂, c₃ be constants. Then the surface with coordinates (x₁, x₂, x₃) is minimal, where the x_k are defined using the real part of a complex integral, as follows:
x_k(ζ) = Re ∫₀^ζ φ_k(z) dz + c_k, where φ₁ = f(1 − g²)/2, φ₂ = if(1 + g²)/2, φ₃ = fg.
The converse is also true: every nonplanar minimal surface defined over a simply connected domain can be given a parametrization of this type.
For example, Enneper's surface has f(z) = 1, g(z) = z.
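Minimality can be checked numerically: for any Weierstrass data, the derivative components φ₁ = f(1 − g²)/2, φ₂ = if(1 + g²)/2, φ₃ = fg satisfy the isothermal condition φ₁² + φ₂² + φ₃² = 0 identically. A sketch with Enneper's data f = 1, g = z (function name is ours):

```python
def we_phi(f, g, z):
    """Weierstrass-Enneper derivative components φ_k = ∂x_k/∂z."""
    fz, gz = f(z), g(z)
    return (fz * (1 - gz ** 2) / 2,
            1j * fz * (1 + gz ** 2) / 2,
            fz * gz)

# Enneper's surface: f(z) = 1, g(z) = z.
f = lambda z: 1.0
g = lambda z: z

# The isothermal condition φ1² + φ2² + φ3² = 0 holds at every point,
# which is what forces the parameterization to be conformal and minimal.
for z in (0.3 + 0.4j, -1.2 + 0.9j, 2.0 - 0.5j):
    p1, p2, p3 = we_phi(f, g, z)
    assert abs(p1 ** 2 + p2 ** 2 + p3 ** 2) < 1e-12
```

Algebraically the cancellation is exact: f²(1 − g²)²/4 − f²(1 + g²)²/4 + f²g² = 0, so the check only measures floating-point rounding.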
Parametric surface of complex variables
The Weierstrass-Enneper model defines a minimal surface () on a complex plane (). Let (the complex plane as the space), the Jacobian matrix of the surface can be written as a column of complex entries:
where and are holomorphic functions of .
The Jacobian represents the two orthogonal tangent vectors of the surface:
The surface normal is given by
The Jacobian leads to a number of important properties: , , , . The proofs can be found in Sharma's essay: The Weierstrass representation always gives a minimal surface. The derivatives can be used to construct the first fundamental form matrix:
and the second fundamental form matrix
Finally, a point on the complex plane maps to a point on the minimal surface in by
where for all minimal surfaces throughout this paper except for Costa's minimal surface where .
Embedded minimal surfaces and examples
The classical examples of embedded complete minimal surfaces in with finite topology include the plane, the catenoid, the helicoid, and the Costa's minimal surface. Costa's surface involves Weierstrass's elliptic function :
where is a constant.
Helicatenoid
Choosing the functions and , a one parame |
https://en.wikipedia.org/wiki/BESM | BESM (БЭСМ) is a series of Soviet mainframe computers built in the 1950s–60s. The name is an acronym for "Bolshaya (or Bystrodeystvuyushchaya) Elektronno-schotnaya Mashina" ("Большая электронно-счётная машина" or "Быстродействующая электронно-счётная машина"), meaning "Big Electronic Computing Machine" or "High-Speed Electronic Computing Machine". It was designed at the Institute of Precision Mechanics and Computer Engineering.
Models
The BESM series included six models.
BESM-1
BESM-1, originally referred to as simply the BESM or BESM AN ("BESM Akademii Nauk", BESM of the Academy of Sciences), was completed in 1952. Only one BESM-1 machine was built. The machine used approximately 5,000 vacuum tubes. At the time of completion, it was the fastest computer in Europe. The floating-point numbers were represented as 39-bit words: 32 bits for the mantissa, one bit for sign, and 1 + 5 bits for the exponent. It was capable of representing numbers in the range 10⁻⁹ to 10¹⁰. BESM-1 had 1024 words of read–write memory using ferrite cores, and 1024 words of read-only memory based on semiconducting diodes. It also had external storage: four magnetic tape units of 30,000 words each, and fast magnetic drum storage with a capacity of 5120 words and an access rate of 800 words/second. The computer was capable of performing 8–10 KFlops. The energy consumption was approximately 30 kW, not accounting for the cooling systems.
BESM-2
BESM-2 also used vacuum tubes.
BESM-3M and BESM-4
BESM-3M and BESM-4 were built using transistors. Their architecture was similar to that of the M-20 and M-220 series. The word size was 45 bits. Thirty BESM-4 machines were built. BESM-4 was used to create the first ever computer animation. The prototypes of both models were made in 1962–63, and the beginning of the series release was in 1964.
EPSILON (a macro language with high level features including strings and lists, developed by Andrey Ershov at Novosibirsk in 1967) was used to implement ALGOL 68 on t |
https://en.wikipedia.org/wiki/Non-integer%20base%20of%20numeration | A non-integer representation uses non-integer numbers as the radix, or base, of a positional numeral system. For a non-integer radix β > 1, the value of
x = d_n d_(n−1) … d_1 d_0 . d_(−1) d_(−2) …
is
x = β^n d_n + β^(n−1) d_(n−1) + … + β d_1 + d_0 + β^(−1) d_(−1) + β^(−2) d_(−2) + …
The numbers d_i are non-negative integers less than β. This is also known as a β-expansion, a notion introduced by Rényi (1957) and first studied in detail by Parry (1960). Every real number has at least one (possibly infinite) β-expansion. The set of all β-expansions that have a finite representation is a subset of the ring Z[β, β^(−1)].
There are applications of β-expansions in coding theory and models of quasicrystals (; ).
Construction
β-expansions are a generalization of decimal expansions. While infinite decimal expansions are not unique (for example, 1.000... = 0.999...), all finite decimal expansions are unique. However, even finite β-expansions are not necessarily unique, for example φ + 1 = φ² for β = φ, the golden ratio. A canonical choice for the β-expansion of a given real number can be determined by the following greedy algorithm, essentially due to Rényi and formulated as given here by Frougny.
Let β > 1 be the base and x a non-negative real number. Denote by ⌊x⌋ the floor function of x (that is, the greatest integer less than or equal to x) and let {x} = x − ⌊x⌋ be the fractional part of x. There exists an integer k such that β^k ≤ x < β^{k+1}. Set
d_k = ⌊x/β^k⌋
and
r_k = {x/β^k}.
For k − 1 ≥ j > −∞, put
d_j = ⌊β r_{j+1}⌋, r_j = {β r_{j+1}}.
In other words, the canonical β-expansion of x is defined by choosing the largest d_k such that β^k d_k ≤ x, then choosing the largest d_{k−1} such that β^k d_k + β^{k−1} d_{k−1} ≤ x, and so on. Thus it chooses the lexicographically largest string representing x.
With an integer base, this defines the usual radix expansion for the number x. This construction extends the usual algorithm to possibly non-integer values of β.
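The greedy construction above translates directly into code. The sketch below (the function name and digit cutoff are ours) computes the leading exponent k and the first few canonical digits of a non-negative real x in base β > 1:

```python
import math

def beta_expansion(x, beta, ndigits=40):
    """Greedy (canonical) beta-expansion of a non-negative real x, base beta > 1.

    Returns (k, digits) with digits = [d_k, d_{k-1}, ...], so that
    x ~= sum(d * beta**(k - i) for i, d in enumerate(digits)).
    """
    if x == 0:
        return 0, [0]
    # Largest integer k with beta**k <= x < beta**(k+1).
    k = math.floor(math.log(x, beta))
    while beta ** k > x:          # guard against log() rounding error
        k -= 1
    while beta ** (k + 1) <= x:
        k += 1
    digits, r = [], x
    for j in range(k, k - ndigits, -1):
        d = math.floor(r / beta ** j)   # largest digit with d * beta**j <= r
        digits.append(d)
        r -= d * beta ** j
    return k, digits
```

With an integer base this reproduces the ordinary radix expansion: in base 2, x = 5 gives k = 2 and digits 1, 0, 1. With β = φ the truncated expansion converges to x because the remainder after each step is smaller than β^j.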
Conversion
Following the steps above, we can create a β-expansion for a real number (the steps are identical for a negative real number, although it must first be multiplied by −1 to make it positive, then the result must be multiplied by −1 to make it negative again).
First, we must define our value (the exponent of the nearest power of β greater
https://en.wikipedia.org/wiki/Future%20interests%20%28actuarial%20science%29 | Future interests is the subset of actuarial math that divides enjoyment of property -- usually the right to an income stream either from an annuity, a trust, royalties, or rents -- based usually on the future survival of one or more persons (natural humans, not juridical persons such as corporations).
Actuarial science |
https://en.wikipedia.org/wiki/Math%20Blaster%21 | Math Blaster! is a 1983 edutainment video game, and the first entry in the "Math Blaster" series within the Blaster Learning System created by Davidson & Associates. The game was developed by former educator Jan Davidson. It would be revised and ported to newer hardware and operating systems, with enhanced versions rebranded as Math Blaster Plus! (1987), followed by New Math Blaster Plus! (1990). A full redesign was done in 1993 as Math Blaster Episode I: In Search of Spot and again in 1996 as Mega Math Blaster.
The game spawned other Math Blaster titles like Math Blaster Jr. and Math Blaster Mystery: The Great Brain Robbery, as well as math-related spin-offs like Alge Blaster and Geometry Blaster, and forays into other subjects like Reading Blaster, Word Blaster, Spelling Blaster, and Science Blaster Jr.
Gameplay
An arcade-style educational game that offers skill-building mathematical exercises, the title contains minigames that test players' knowledge in subjects such as addition, subtraction, multiplication, division, fractions, percentages, and decimals. A series of mathematics problems appear on the screen, and the player must move to fire the cannon pointing at the correct answer. The game included an editor for teachers and parents to design their own problems.
While this title was purely a drill and practice, its 1987 sequel would wrap the activity around a narrative.
Educational goals
Math Blaster was designed to aid students to master first-to-sixth-grade mathematics in an exciting and interesting manner. The learning activities were advertised as graphically appealing and promised to motivate and challenge students.
Commercial performance
After it was developed, Math Blaster! was extensively tested in classrooms. By November 2, 1985, the game had sustained 92 weeks on the Billboard charts for Top Education Computing Software, and was currently at #2. The game, plus its various sequels and spin-offs, has since become the best-selling piece of math |
https://en.wikipedia.org/wiki/Affect%20control%20theory | In control theory, affect control theory proposes that individuals maintain affective meanings through their actions and interpretations of events. The activity of social institutions occurs through maintenance of culturally based affective meanings.
Affective meaning
Besides a denotative meaning, every concept has an affective meaning, or connotation, that varies along three dimensions: evaluation – goodness versus badness, potency – powerfulness versus powerlessness, and activity – liveliness versus torpidity. Affective meanings can be measured with semantic differentials yielding a three-number profile indicating how the concept is positioned on evaluation, potency, and activity (EPA). Osgood demonstrated that an elementary concept conveyed by a word or idiom has a normative affective meaning within a particular culture.
A stable affective meaning derived either from personal experience or from cultural inculcation is called a sentiment, or fundamental affective meaning, in affect control theory. Affect control theory has inspired assembly of dictionaries of EPA sentiments for thousands of concepts involved in social life – identities, behaviours, settings, personal attributes, and emotions. Sentiment dictionaries have been constructed with ratings of respondents from the US, Canada, Northern Ireland, Germany, Japan, China and Taiwan.
Impression formation
Each concept that is in play in a situation has a transient affective meaning in addition to an associated sentiment. The transient corresponds to an impression created by recent events.
Events modify impressions on all three EPA dimensions in complex ways that are described with non-linear equations obtained through empirical studies.
Here are two examples of impression-formation processes.
An actor who behaves disagreeably seems less good, especially if the object of the behavior is innocent and powerless, like a child.
A powerful person seems desperate when performing extremely forceful acts on anoth |
https://en.wikipedia.org/wiki/Chromatrope | A chromatrope is a type of magic lantern slide that produces dazzling, colorful geometrical patterns set in motion by rotating two painted glass discs in opposite directions, originally with a double pulley mechanism but later usually with a rackwork mechanism.
The chromatrope was probably invented in the year 1841 (or slightly earlier) by English glass painter and showman Henry Langdon Childe, by which year it was listed in the Royal Polytechnic Institution catalogue. It was added as a novelty to the program of the Royal Polytechnic Institution, which had previously included many other types of magic lantern shows with moving images, such as phantasmagoria and dissolving views.
The principle and the effect of the chromatrope is similar to that of the feux pyriques that had gained some popularity in rich North European households at the end of the 18th century. The resulting abstract and everchanging image is also very similar to that of the kaleidoscope, which had caused an enormous international craze in 1818. |
https://en.wikipedia.org/wiki/Hitachi%20Flora%20Prius | The Hitachi Flora Prius was a range of personal computers marketed in Japan by Hitachi, Ltd. during the late 1990s.
The Flora Prius was preinstalled with both Microsoft Windows 98 as well as BeOS. It did not, however, have a dual-boot option as Microsoft reminded Hitachi of the terms of the Windows OEM license. In effect, two thirds of the hard drive was hidden from the end-user, and a series of complicated manipulations was necessary to activate the BeOS partition.
Models
FLORA Prius 330J came in three models:
330N40JB: Base version with no LCD Screen
3304ST40JB: Included a 14.1-inch super TFT color LCD Display
3304ST40JBT: Included a 14.1-inch super TFT color LCD Display and WinTV Video capture board
Base specifications
CPU: Pentium II processor (400 MHz)
RAM: 64 MB SDRAM
Hard Drive: 6.4 GB (2 GB for Windows 98 and 4.6 GB for BeOS)
CD-ROM Drive: 24X speed max.
100BASE-TX/10BASE-T
https://en.wikipedia.org/wiki/Gigamacro | A gigapixel macro image is a digital image bitmap composed of one billion (109) pixels (picture elements), or 1000 times the information captured by a 1 megapixel digital camera. Creating such high-resolution images involves making mosaics (image stitching) of a large number of high-resolution digital photographs which are then combined into a single image.
Gigapixel macro images are made by 'stacking' a number of photographs together in order to increase the depth of field and then stitching the resulting images together in a technique known as 'stack and stitch'. Such images are usually very large in size and cannot be easily viewed. To make such images accessible, they are converted using tiled image techniques so that they may be viewed in a web browser. Such techniques are familiar in everyday use in e.g. Google Maps. |
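As a rough illustration of the mosaic arithmetic involved (the camera resolution and overlap figures below are hypothetical, not taken from any particular rig):

```python
import math

# How many overlapping frames does a one-gigapixel mosaic need?
# Assumed figures: a 24-megapixel camera and 30% overlap on each axis,
# so only 70% of each frame's width and height contributes new pixels.
TARGET_PIXELS = 1_000_000_000   # one gigapixel
FRAME_PIXELS = 24_000_000       # hypothetical per-shot resolution
OVERLAP = 0.30                  # fractional overlap on each axis

effective = FRAME_PIXELS * (1 - OVERLAP) ** 2   # new pixels per frame
frames = math.ceil(TARGET_PIXELS / effective)
print(frames)  # 86
```

Focus stacking multiplies this further: with, say, 20 focus slices per tile, the total shot count runs into the thousands, which is why such captures are usually automated.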
https://en.wikipedia.org/wiki/Lie%20product%20formula | In mathematics, the Lie product formula, named for Sophus Lie (1875), but also widely called the Trotter product formula, named after Hale Trotter, states that for arbitrary m × m real or complex matrices A and B,
e^{A+B} = lim_{n→∞} (e^{A/n} e^{B/n})^n,
where eA denotes the matrix exponential of A. The Lie–Trotter product formula and the Trotter–Kato theorem extend this to certain unbounded linear operators A and B.
This formula is an analogue of the classical exponential law
which holds for all real or complex numbers x and y. If x and y are replaced with matrices A and B, and the exponential replaced with a matrix exponential, it is usually necessary for A and B to commute for the law to still hold. However, the Lie product formula holds for all matrices A and B, even ones which do not commute.
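The formula can be checked numerically. The sketch below (helper names are ours; the matrix exponential is a plain Taylor series, adequate for small matrices) uses the non-commuting nilpotent matrices A = [[0,1],[0,0]] and B = [[0,0],[1,0]]: here e^A e^B = [[2,1],[1,1]], while e^{A+B} has entries cosh 1 ≈ 1.543 and sinh 1 ≈ 1.175, and the Trotter product converges to the latter.

```python
import math

def mat_mul(A, B):
    """Product of two square matrices given as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(A, terms=25):
    """Matrix exponential via the Taylor series I + A + A^2/2! + ..."""
    n = len(A)
    acc = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in acc]
    for k in range(1, terms):
        term = [[t / k for t in row] for row in mat_mul(term, A)]
        acc = [[acc[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return acc

# Two matrices that do not commute.
A = [[0.0, 1.0], [0.0, 0.0]]
B = [[0.0, 0.0], [1.0, 0.0]]

n = 1000
step = mat_mul(expm([[a / n for a in row] for row in A]),
               expm([[b / n for b in row] for row in B]))
trotter = [[1.0, 0.0], [0.0, 1.0]]
for _ in range(n):
    trotter = mat_mul(trotter, step)

# e^{A+B} = [[cosh 1, sinh 1], [sinh 1, cosh 1]]; e^A e^B = [[2, 1], [1, 1]].
print(trotter[0][0], math.cosh(1.0))
```

The first-order Trotter error decays like 1/n, so n = 1000 already agrees with the exact value to about three decimal places.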
The Lie product formula is conceptually related to the Baker–Campbell–Hausdorff formula, in that both are replacements, in the context of noncommuting operators, for the classical exponential law.
The formula has applications, for example, in the path integral formulation of quantum mechanics. It allows one to separate the Schrödinger evolution operator (propagator) into alternating increments of kinetic and potential operators (the Suzuki–Trotter decomposition, after Trotter and Masuo Suzuki). The same idea is used in the construction of splitting methods for the numerical solution of differential equations. Moreover, the Lie product theorem is sufficient to prove the Feynman–Kac formula.
The Trotter–Kato theorem can be used for approximation of linear C0-semigroups.
See also
Time-evolving block decimation |
https://en.wikipedia.org/wiki/Optica%20Fellow | The Optica Fellow is a membership designation of Optica (formerly known as The Optical Society (OSA)) that denotes distinguished scientific accomplishment. The bylaws of this society only allow 10% of its membership to be designated as an Optica Fellow. The Optica Fellow requires peer group nomination.
The nominee
An Optica member can only become an Optica Fellow when nominated by a peer group of other current Optica Fellows. The nomination is then reviewed by the Optica Fellow Members Committee, which recommends candidates to the Board of Directors on an annual basis. The purpose of this award is to designate a member as one who has "made significant contributions to the advancement of optics".
The process
The process includes actively identifying possible candidates who might qualify for this award. Contributing factors for qualification are diverse within the optics community. These factors include significant or distinguishing scientific accomplishments, technical achievements, inventions, technical innovations, technical management, and demonstrated leadership. Qualifying fields include instrument and measurement techniques (including original software), as well as distinguished sustained accomplishments in engineering, education, and service to the global optics community (including photonics and Optica). Other factors may include a record of significant publications, patents, and invited review papers at the various levels of meetings related to the covered fields.
Letters of recommendation are solicited from outside the nominee's field of work. Finally references from at least three and no more than five people familiar with the nominee's work are required. Once all the relevant information has been considered, the Optica Fellow Members Committee votes on the applications and those selected are forwarded to the Board of Directors.
List of notable Optica Fellows |
https://en.wikipedia.org/wiki/Acidophile | Acidophiles or acidophilic organisms are those that thrive under highly acidic conditions (usually at pH 5.0 or below). These organisms can be found in different branches of the tree of life, including Archaea, Bacteria, and Eukarya.
Examples
A list of these organisms includes:
Archaea
Sulfolobales, an order in the Thermoproteota branch of Archaea
Thermoplasmatales, an order in the Euryarchaeota branch of Archaea
ARMAN, in the Euryarchaeota branch of Archaea
Acidianus brierleyi, A. infernus, facultatively anaerobic thermoacidophilic archaebacteria
Halarchaeum acidiphilum, acidophilic member of the Halobacteriacaeae
Metallosphaera sedula, thermoacidophilic
Bacteria
Acidobacteriota, a phylum of Bacteria
Acidithiobacillales, an order of Pseudomonadota e.g. A. ferrooxidans, A. thiooxidans
Thiobacillus prosperus, T. acidophilus, T. organovorus, T. cuprinus
Acetobacter aceti, a bacterium that produces acetic acid (vinegar) from the oxidation of ethanol.
Alicyclobacillus, a genus of bacteria that can contaminate fruit juices.
Eukarya
Mucor racemosus
Urotricha
Dunaliella acidophila
Members of the algal class Cyanidiophyceae, including Cyanidioschyzon merolae
Mechanisms of adaptation to acidic environments
Most acidophile organisms have evolved extremely efficient mechanisms to pump protons out of the intracellular space in order to keep the cytoplasm at or near neutral pH. Therefore, intracellular proteins do not need to develop acid stability through evolution. However, other acidophiles, such as Acetobacter aceti, have an acidified cytoplasm which forces nearly all proteins in the genome to evolve acid stability. For this reason, Acetobacter aceti has become a valuable resource for understanding the mechanisms by which proteins can attain acid stability.
Studies of proteins adapted to low pH have revealed a few general mechanisms by which proteins can achieve acid stability. In most acid stable proteins (such as pepsin and the soxF protein from Sulfol |
https://en.wikipedia.org/wiki/LeetCode | LeetCode is an online platform for coding interview preparation. The service provides coding and algorithmic problems intended for users to practice coding. LeetCode has gained popularity among job seekers and coding enthusiasts as a resource for technical interviews and coding competitions.
Features
LeetCode offers both free and premium access options. While free users have access to a limited number of questions, premium users gain access to additional questions previously used in interviews at large tech companies. The performance of users' solutions is evaluated based on response speed and solution efficiency, and is ranked against other submissions in the LeetCode database.
Additionally, LeetCode provides its users with mock interviews and online assessments. It hosts weekly and biweekly contests, each consisting of four problems, in which users compete against one another. After participating in a contest for the first time, a user is assigned a ranking, which is displayed on their profile.
LeetCode supports multiple programming languages, including Java, Python, JavaScript, and C. The platform features forums where users can engage in discussions related to problems, the interview process, and share their interview experiences.
Types of Problems
Currently, there are sixteen different categories that a LeetCode question can be from. In no particular order, these are: Arrays, Two Pointers, Stack, Binary Search, Sliding Window, Linked List, Trees, Tries, Backtracking, Heaps/Priority Queues, Graphs, Dynamic Programming, Intervals, Greedy Algorithms, Bit Manipulation, and Math/Geometry. Each problem category contains questions at three levels of difficulty; there are 736 easy questions, 1521 medium questions, and 634 hard questions available on LeetCode.
History
LeetCode was founded in Silicon Valley in 2015.
LeetCode expanded its operations to China in 2018. In 2021, LeetCode secured its first round |
https://en.wikipedia.org/wiki/Women%27s%20medicine%20in%20antiquity | Childbirth and obstetrics in Classical Antiquity (here meaning the ancient Greco-Roman world) were studied by the physicians of ancient Greece and Rome. Their ideas and practices during this time endured in Western medicine for centuries and many themes are seen in modern women's health. Classical gynecology and obstetrics were originally studied and taught mainly by midwives in the ancient world, but eventually scholarly physicians of both sexes became involved as well. Obstetrics is traditionally defined as the surgical specialty dealing with the care of a woman and her offspring during pregnancy, childbirth and the puerperium (recovery). Gynecology involves the medical practices dealing with the health of women's reproductive organs (vagina, uterus, ovaries) and breasts.
Midwifery and obstetrics are different but overlap in medical practice that focuses on pregnancy and labor. Midwifery emphasizes the normality of pregnancy along with the reproductive process. Classical Antiquity saw the beginning of attempts to classify various areas of medical research, and the terms gynecology and obstetrics came into use. The Hippocratic Corpus, a large collection of treatises attributed to Hippocrates, features a number of gynecological treatises, which date to the classical period.
Women as doctors
During the era of Classical Antiquity, women practiced as doctors, but they were by far in the minority and typically confined to only gynecology and obstetrics. Aristotle was an important influence on later medical writers in Greece and eventually Europe. Similar to the writers of the Hippocratic Corpus, Aristotle concluded that women's physiology was fundamentally different from that of men primarily because women were physically weaker, and therefore more prone to symptoms caused in some way by weakness, such as the theory of humourism. This belief claimed that both men and women had several "humours" regulating their physical health, and that women had a "cooler" humour |