| source | text |
|---|---|
https://en.wikipedia.org/wiki/Tent%20map | In mathematics, the tent map with parameter μ is the real-valued function fμ defined by fμ(x) = μ min{x, 1 − x},
the name being due to the tent-like shape of the graph of fμ. For values of the parameter μ within [0, 2], fμ maps the unit interval [0, 1] into itself, thus defining a discrete-time dynamical system on it (equivalently, a recurrence relation). In particular, iterating a point x0 in [0, 1] gives rise to a sequence xn with xn+1 = fμ(xn),
where μ is a positive real constant. Choosing for instance the parameter μ = 2, the effect of the function fμ may be viewed as the result of the operation of folding the unit interval in two, then stretching the resulting interval [0, 1/2] to get again the interval [0, 1]. Iterating the procedure, any point x0 of the interval assumes new subsequent positions as described above, generating a sequence xn in [0, 1].
The μ = 2 case of the tent map is a non-linear transformation of both the bit shift map and the r = 4 case of the logistic map.
Behaviour
The tent map with parameter μ = 2 and the logistic map with parameter r = 4 are topologically conjugate, and thus the behaviours of the two maps are in this sense identical under iteration.
Depending on the value of μ, the tent map demonstrates a range of dynamical behaviour ranging from predictable to chaotic.
If μ is less than 1 the point x = 0 is an attractive fixed point of the system for all initial values of x, i.e. the system will converge towards x = 0 from any initial value of x.
If μ is 1 all values of x less than or equal to 1/2 are fixed points of the system.
If μ is greater than 1 the system has two fixed points, one at 0, and the other at μ/(μ + 1). Both fixed points are unstable, i.e. a value of x close to either fixed point will move away from it, rather than towards it. For example, when μ is 1.5 there is a fixed point at x = 0.6 (since 1.5(1 − 0.6) = 0.6) but starting at x = 0.61 we get the sequence 0.585, 0.6225, 0.56625, 0.650625, ..., which moves away from 0.6.
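A minimal numerical sketch of the map and of this instability (plain Python; the function name is illustrative):

```python
def tent(x, mu=2.0):
    """Tent map f_mu(x) = mu * min(x, 1 - x) on [0, 1]."""
    return mu * min(x, 1.0 - x)

# Starting near the unstable fixed point x = 0.6 for mu = 1.5,
# the iterates drift away from 0.6:
x = 0.61
for _ in range(4):
    x = tent(x, mu=1.5)
    print(round(x, 6))  # 0.585, 0.6225, 0.56625, 0.650625
```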
If μ is between 1 and the square root of 2 the system maps a set of intervals between μ − μ²/2 and μ/2 to the |
https://en.wikipedia.org/wiki/Hitori | Hitori (Japanese: "Alone" or "one person"; Hitori ni shite kure; literally "leave me alone") is a type of logic puzzle published by Nikoli.
Hitori is NP-complete.
Rules
Hitori is played with a grid of squares or cells, with each cell initially containing a number. The game is played by eliminating squares/numbers, which is done by blacking them out. The objective is to transform the grid to a state wherein the following three rules all hold:
no row or column can have more than one occurrence of any given number
black cells cannot be horizontally or vertically adjacent, although they can be diagonal to one another.
the remaining numbered cells must all be connected to each other, horizontally or vertically.
Solving techniques
Once it is determined that a cell cannot be black, some players find it useful to circle the number, as it makes the puzzle easier to read as the solution progresses. Below we assume that this convention is followed.
When it is determined that a cell must be black, all orthogonally adjacent cells cannot be black and so can be circled.
If a cell has been circled to show that it cannot be black, any cells containing the same number in that row and column must be black.
If blacking out a cell would cause a connected non-black area to become separated into several unconnected components, the cell cannot be black and so can be circled (a flood-fill check of this rule is sketched after this list).
In a sequence of three identical, adjacent numbers, the centre number cannot be black and the cells on either side must be black. The reason is that if one of the end numbers remains non-black this would result in either two adjacent black cells or two cells with the same number in the same row or column, neither of which is allowed. (This is a special case of the next item.)
In case of two identical, adjacent numbers, if another cell occurs in the same row or column containing the same number, the latter cell must be black. Otherwise, if it remains non-black, this would result in either two cells with the same |
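The connectivity rule above lends itself to a simple flood-fill test. A sketch, assuming a grid indexed by (row, column) tuples and a set of already-black cells (all names illustrative):

```python
from collections import deque

def stays_connected(black, rows, cols, candidate):
    """Check whether blacking out `candidate` keeps all remaining
    (non-black) cells orthogonally connected."""
    blocked = set(black) | {candidate}
    white = [(r, c) for r in range(rows) for c in range(cols)
             if (r, c) not in blocked]
    if not white:
        return True
    seen, queue = {white[0]}, deque([white[0]])
    while queue:                      # breadth-first flood fill
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and (nr, nc) not in blocked and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return len(seen) == len(white)

# On a 3x3 grid with (0, 1) already black, blacking (1, 1) keeps the
# white cells connected, but blacking (1, 0) would cut off (0, 0):
print(stays_connected({(0, 1)}, 3, 3, (1, 1)))  # True
print(stays_connected({(0, 1)}, 3, 3, (1, 0)))  # False
```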
https://en.wikipedia.org/wiki/Costas%20array | In mathematics, a Costas array can be regarded geometrically as a set of n points, each at the center of a square in an n×n square tiling such that each row or column contains only one point, and all of the n(n − 1)/2 displacement vectors between each pair of dots are distinct. This results in an ideal "thumbtack" auto-ambiguity function, making the arrays useful in applications such as sonar and radar. Costas arrays can be regarded as two-dimensional cousins of the one-dimensional Golomb ruler construction, and, as well as being of mathematical interest, have similar applications in experimental design and phased array radar engineering.
Costas arrays are named after John P. Costas, who first wrote about them in a 1965 technical report. Independently, Edgar Gilbert also wrote about them in the same year, publishing what is now known as the logarithmic Welch method of constructing Costas arrays.
The general enumeration of Costas arrays is an open problem in computer science, and no algorithm that solves it in polynomial time is known.
Numerical representation
A Costas array may be represented numerically as an n×n array of numbers, where each entry is either 1, for a point, or 0, for the absence of a point. When interpreted as binary matrices, these arrays have exactly one 1 in each row and each column, so they are also permutation matrices. Thus, the Costas arrays for any given n are a subset of the permutation matrices of order n.
Arrays are usually described as a series of indices specifying the column for each row. Since any row has only one point, it is possible to represent an array one-dimensionally. For instance, the following is a valid Costas array of order N = 4: the permutation whose rows 1 through 4 carry dots in columns 2, 1, 3, 4 respectively,
or simply 2 1 3 4.
There are dots at coordinates: (1,2), (2,1), (3,3), (4,4)
Since the x-coordinate increases linearly, we can write this in shorthand as the set of al |
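Verifying the defining property, that all pairwise displacement vectors are distinct, is a direct computation over the one-dimensional representation just described (function name illustrative):

```python
from itertools import combinations

def is_costas(perm):
    """Check the Costas property for a permutation given as one-indexed
    column values per row, e.g. (2, 1, 3, 4)."""
    points = list(enumerate(perm, start=1))  # (row, column) pairs
    vectors = [(r2 - r1, c2 - c1)
               for (r1, c1), (r2, c2) in combinations(points, 2)]
    return len(vectors) == len(set(vectors))  # all displacements distinct

print(is_costas((2, 1, 3, 4)))  # True: the order-4 example above
print(is_costas((1, 2, 3, 4)))  # False: the identity repeats vector (1, 1)
```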
https://en.wikipedia.org/wiki/Chow-chow%20%28food%29 | Chow-chow (also spelled chowchow or chow chow) is a North American pickled relish.
History
Chow-chow possibly found its way to the Southern United States during the expulsion of the Acadian people from Nova Scotia and their settlement in Louisiana. It is eaten by itself or as a condiment on fish cakes, mashed potatoes, biscuits and gravy, pinto beans, hot dogs, hamburgers and other foods. Southern food historian John Egerton cited a connection to relish recipes of Chinese rail workers in the 19th century.
Preparation
An early 20th-century American recipe for chow chow was made with cucumbers, onions, cauliflower and green peppers left overnight in brine, boiled in (cider) vinegar with whole mustard seed and celery seeds, then mashed into a paste with mustard, flour and turmeric.
Regional variations
Its ingredients vary considerably, depending on whether it is the "Northern" (primarily Pennsylvanian) or "Southern" variety, as well as the separate (and likely original) Canadian variety prevalent in the Maritimes. The Northern variety is made from a combination of vegetables, mainly green and red tomatoes, onions, carrots, beans of various types, asparagus, cauliflower and peas. The Southern variety is entirely or almost entirely green tomatoes or cabbage. These ingredients are pickled in a canning jar. After preserving, chow-chow is served cold, often as a condiment or relish.
See also |
https://en.wikipedia.org/wiki/Stability%20radius | In mathematics, the stability radius of an object (system, function, matrix, parameter) at a given nominal point is the radius of the largest ball, centered at the nominal point, all of whose elements satisfy pre-determined stability conditions. The picture of this intuitive notion is this:
where x̂ denotes the nominal point, X denotes the space of all possible values of the object x, and the shaded area, P(s), represents the set of points that satisfy the stability conditions. The radius of the blue circle, shown in red, is the stability radius.
Abstract definition
The formal definition of this concept varies, depending on the application area. The following abstract definition is quite useful:
ρ(x̂) = max { ρ ≥ 0 : x satisfies the stability conditions for all x ∈ B(ρ, x̂) }
where B(ρ, x̂) denotes a closed ball of radius ρ in X centered at x̂.
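As a concrete illustration of the abstract definition, the sketch below estimates the stability radius of a one-dimensional nominal point by bisecting on the radius and sampling the ball; the helper names and the sampling approach are illustrative, not a standard library routine:

```python
def stability_radius(nominal, stable, lo=0.0, hi=10.0, tol=1e-9, samples=2001):
    """Largest rho such that every sampled point of the interval
    [nominal - rho, nominal + rho] satisfies the `stable` predicate."""
    def ball_ok(rho):
        return all(stable(nominal - rho + 2 * rho * i / (samples - 1))
                   for i in range(samples))
    while hi - lo > tol:             # bisect on the radius
        mid = (lo + hi) / 2
        if ball_ok(mid):
            lo = mid
        else:
            hi = mid
    return lo

# Stability condition x**2 < 2 at the nominal value 1:
print(stability_radius(1.0, lambda x: x * x < 2))  # ~0.41421, i.e. sqrt(2) - 1
```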
History
The concept appears to have been invented in the early 1960s. In the 1980s it became popular in control theory and optimization. It is widely used as a model of local robustness against small perturbations in a given nominal value of the object of interest.
Relation to Wald's maximin model
It was shown that the stability radius model is an instance of Wald's maximin model.
The large penalty is a device to force the player not to perturb the nominal value beyond the stability radius of the system. It is an indication that the stability model is a model of local stability/robustness, rather than a global one.
Info-gap decision theory
Info-gap decision theory is a recent non-probabilistic decision theory. It is claimed to be radically different from all current theories of decision under uncertainty. But it has been shown that its robustness model, namely
is actually a stability radius model characterized by a simple stability requirement of the form r(q, p) ≤ c, where q denotes the decision under consideration, p denotes the parameter of interest, p̃ denotes the estimate of the true value of p, and B(ρ, p̃) denotes a ball of radius ρ centered at p̃.
Since stability radius models are designed to deal with smal |
https://en.wikipedia.org/wiki/Rehearsal%20letter | A rehearsal letter is a boldface letter of the alphabet in an orchestral score, and its corresponding parts, that provides the conductor, who typically leads rehearsals, with a convenient spot to tell the orchestra to begin at places other than the start of movements or pieces. Rehearsal letters are most often used in scores of the Romantic era and onwards, beginning with Louis Spohr. Rehearsal letters are typically placed at structural points in the piece.
Terminology
They may also be generically called rehearsal marks or rehearsal figures, or, when numbers are used instead of letters, rehearsal numbers.
Purpose
In the course of rehearsing a symphony or piece, it is often necessary for the conductor to stop and go back to some point in the middle, in order to master the more difficult passages or sections, or to resolve a challenge that the ensemble is having. Many scores and parts have bar numbers, every five or ten bars, or at the beginning of each page or line. But as pieces and individual movements of works became longer (extending to several hundred bars) as the Romantic era progressed, bar numbers became less practical in rehearsal.
For example, a conductor can tell the musicians to resume at bar 387, but then the musicians have to find the nearest bar number in their parts (e.g. 385 or 390) and count back or forward a couple of measures. Even if the number 387 is written at the appropriate bar, it might not particularly stand out. But if there is, for example, a big, bold letter M in the score and parts, it is much easier for the conductor to just say "begin at letter M". Even if the conductor were to say "one bar before letter M", that would still be more convenient than saying "bar 386". Alternatively the conductor could first say "before M..." and allow the players time to find M and then say "one bar".
In the score of a full orchestra, rehearsal letters are typically placed over the flutes' (or piccolo's) staff, and duplicated above the first viol |
https://en.wikipedia.org/wiki/Triangle%20group | In mathematics, a triangle group is a group that can be realized geometrically by sequences of reflections across the sides of a triangle. The triangle can be an ordinary Euclidean triangle, a triangle on the sphere, or a hyperbolic triangle. Each triangle group is the symmetry group of a tiling of the Euclidean plane, the sphere, or the hyperbolic plane by congruent triangles called Möbius triangles, each one a fundamental domain for the action.
Definition
Let l, m, n be integers greater than or equal to 2. A triangle group Δ(l,m,n) is a group of motions of the Euclidean plane, the two-dimensional sphere, the real projective plane, or the hyperbolic plane generated by the reflections in the sides of a triangle with angles π/l, π/m and π/n (measured in radians). The product of the reflections in two adjacent sides is a rotation by the angle which is twice the angle between those sides, 2π/l, 2π/m and 2π/n. Therefore, if the generating reflections are labeled a, b, c and the angles between them in the cyclic order are as given above, then the following relations hold:
a² = b² = c² = 1, (ab)^l = (bc)^m = (ca)^n = 1.
It is a theorem that all other relations between a, b, c are consequences of these relations and that Δ(l,m,n) is a discrete group of motions of the corresponding space. Thus a triangle group is a reflection group that admits a group presentation
Δ(l,m,n) = ⟨ a, b, c | a² = b² = c² = (ab)^l = (bc)^m = (ca)^n = 1 ⟩
An abstract group with this presentation is a Coxeter group with three generators.
Classification
Given any natural numbers l, m, n > 1, exactly one of the classical two-dimensional geometries (Euclidean, spherical, or hyperbolic) admits a triangle with the angles (π/l, π/m, π/n), and the space is tiled by reflections of the triangle. The sum of the angles of the triangle determines the type of the geometry by the Gauss–Bonnet theorem: it is Euclidean if the angle sum is exactly π, spherical if it exceeds π and hyperbolic if it is strictly smaller than π. Moreover, any two triangles with the given angles are congruent. Each triangle group determines a |
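The Gauss–Bonnet classification above is easy to apply mechanically; a small sketch (function name illustrative):

```python
from fractions import Fraction

def triangle_geometry(l, m, n):
    """Classify the (l, m, n) triangle group by comparing the angle sum
    pi/l + pi/m + pi/n with pi, i.e. 1/l + 1/m + 1/n with 1."""
    s = Fraction(1, l) + Fraction(1, m) + Fraction(1, n)
    if s > 1:
        return "spherical"
    if s == 1:
        return "Euclidean"
    return "hyperbolic"

print(triangle_geometry(2, 3, 3))  # spherical (tetrahedral symmetry)
print(triangle_geometry(2, 4, 4))  # Euclidean
print(triangle_geometry(2, 3, 7))  # hyperbolic
```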
https://en.wikipedia.org/wiki/Lebesgue%27s%20lemma | For Lebesgue's lemma for open covers of compact spaces in topology see Lebesgue's number lemma
In mathematics, Lebesgue's lemma is an important statement in approximation theory. It provides a bound for the projection error, controlling the error of approximation by a linear subspace based on a linear projection relative to the optimal error together with the operator norm of the projection.
Statement
Let (V, ‖·‖) be a normed vector space, U a subspace of V, and P a linear projector onto U. Then for each v in V:
‖v − Pv‖ ≤ (1 + ‖P‖) inf{ ‖v − u‖ : u ∈ U }.
The proof is a one-line application of the triangle inequality: for any u in U, by writing v − Pv as (v − u) + (u − Pu) + P(u − v), it follows that
‖v − Pv‖ ≤ ‖v − u‖ + ‖u − Pu‖ + ‖P‖ ‖u − v‖ = (1 + ‖P‖) ‖v − u‖,
where the last inequality uses the fact that Pu = u together with the definition of the operator norm ‖P‖. Taking the infimum over u in U gives the result.
See also
Lebesgue constant (interpolation) |
https://en.wikipedia.org/wiki/SUNMOS | SUNMOS (Sandia/UNM Operating System) is an operating system jointly developed by Sandia National Laboratories and the Computer Science Department at the University of New Mexico. The goal of the project, started in 1991, is to develop a highly portable, yet efficient, operating system for massively parallel-distributed memory systems.
SUNMOS uses a single-tasking kernel and does not provide demand paging. It takes control of all nodes in the distributed system. Once an application is loaded and running, it can manage all the available memory on a node and use the full resources provided by the hardware. Applications are started and controlled from a process called yod that runs on the host node. Yod runs on a Sun frontend for the nCUBE 2, and on a service node on the Intel Paragon.
SUNMOS was developed as a reaction to the heavyweight version of OSF/1 that ran as a single-system image on the Paragon and consumed 8-12 MB of the 16 MB available on each node, leaving little memory available for the compute applications. In comparison, SUNMOS used 250 KB of memory per node. Additionally, the overhead of OSF/1 limited the network bandwidth to 35 MB/s, while SUNMOS was able to use 170 MB/s of the peak 200 MB/s available.
The ideas in SUNMOS inspired PUMA, a multitasking variant that only ran on the i860 Paragon. Among the extensions in PUMA was the Portals API, a scalable, high performance message passing API. Intel ported PUMA and Portals to the Pentium Pro based ASCI Red system and named it Cougar. Cray ported Cougar to the Opteron based Cray XT3 and renamed it Catamount. A version of Catamount was released to the public named OpenCatamount.
In 2009, the Catamount lightweight kernel was selected for an R&D 100 Award.
See also
Compute Node Linux
CNK operating system |
https://en.wikipedia.org/wiki/SGI%20Crimson | The IRIS Crimson (code-named Diehard2) is a Silicon Graphics (SGI) computer released in 1992. It is the world's first 64-bit workstation.
Crimson is a member of Silicon Graphics's SGI IRIS 4D series of deskside systems; it is also known as the 4D/510 workstation. It is similar to other SGI IRIS 4D deskside workstations, and can use a wide range of graphics options (up to RealityEngine). It is also available as a file server with no graphics.
This machine makes a brief appearance in the movie Jurassic Park (1993) where Lex uses the machine to navigate the IRIX filesystem in 3D using the application fsn to restore power to the compound. The next year, Silicon Graphics released a rebadged, limited edition Crimson R4400/VGXT called the Jurassic Classic, with a special logo and SGI co-founder James H. Clark's signature on the drive door.
Features
One MIPS 100 MHz R4000 or 150 MHz R4400 processor
Choice of seven high performance 3D graphics subsystems
Up to 256 MB memory and internal disk capacity up to 7.2 GB, expandable to greater than 72 GB using additional enclosures
I/O subsystem includes four VMEbus expansion slots, Ethernet and two SCSI channels with disk striping support
Crimson memory is unique to this model. |
https://en.wikipedia.org/wiki/Popular%20mathematics | Popular mathematics is mathematical presentation aimed at a general audience. Sometimes this is in the form of books which require no mathematical background and in other cases it is in the form of expository articles written by professional mathematicians to reach out to others working in different areas.
Notable works of popular mathematics
Some of the most prolific popularisers of mathematics include Keith Devlin, Rintu Nath, Martin Gardner, and Ian Stewart. Titles by these authors can be found on their respective pages.
On zero
On infinity
Rucker, Rudy (1982), Infinity and the Mind: The Science and Philosophy of the Infinite; Princeton, N.J.: Princeton University Press.
On constants
On complex numbers
On the Riemann hypothesis
On recently solved problems
On classification of finite simple groups
On higher dimensions
Rucker, Rudy (1984), The Fourth Dimension: Toward a Geometry of Higher Reality; Houghton Mifflin Harcourt.
On introduction to mathematics for the general reader
Biographies
Magazines and journals
Popular science magazines such as New Scientist and Scientific American sometimes carry articles on mathematics.
Plus Magazine is a free online magazine run under the Millennium Mathematics Project at the University of Cambridge.
The journals listed below can be found in many university libraries.
American Mathematical Monthly is designed to be accessible to a wide audience.
The Mathematical Gazette contains letters, book reviews and expositions of attractive areas of mathematics.
Mathematics Magazine offers lively, readable, and appealing exposition on a wide range of mathematical topics.
The Mathematical Intelligencer is a mathematical journal that aims at a conversational and scholarly tone.
Notices of the AMS - Each issue contains one or two expository articles that describe current developments in mathematical research, written by professional mathematicians. The Notices also carries articles on the history of mathemati |
https://en.wikipedia.org/wiki/156%20%28number%29 | 156 (one hundred [and] fifty-six) is the natural number following 155 and preceding 157.
In mathematics
156 is an abundant number, a pronic number, a dodecagonal number, and a refactorable number.
156 is the number of graphs on 6 unlabeled nodes.
156 is a repdigit in base 5 (1111), and also in bases 25, 38, 51, 77, and 155.
156 degrees is the internal angle of a pentadecagon.
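These arithmetic properties can be checked directly; a quick verification in Python:

```python
n = 156
divisors = [d for d in range(1, n + 1) if n % d == 0]

print(sum(divisors) - n > n)      # True: abundant (proper divisors sum to 236)
print(n == 12 * 13)               # True: pronic (product of consecutive integers)
print(n == 6 * (5 * 6 - 4))       # True: dodecagonal, k*(5k - 4) with k = 6
print(n % len(divisors) == 0)     # True: refactorable (12 divisors, and 12 | 156)
print(int("1111", 5) == n)        # True: repdigit in base 5
```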
In the military
Convoy HX-156 was the 156th of the numbered series of World War II HX convoys of merchant ships from Halifax, Nova Scotia, to Liverpool
The Fieseler Fi 156 Storch was a small German liaison aircraft during World War II
Several United States Navy ships designated 156 served during World War II, among them a T2 tanker and a cargo ship; a fast civilian yacht designated 156 served with the Navy during World War I
In music
156, a song by the Danish rock band Mew, appearing on both their 2000 album Half the World Is Watching Me and their 2003 album Frengers.
NM 156, a 1984 song by the heavy metal band Queensrÿche from the album The Warning
156, a song by the Polish Black Metal band Blaze of Perdition from the 2010 album Towards the Blaze of Perdition
In transportation
The Alfa Romeo 156 car produced from 1997 to 2006.
The Ferrari 156 was a racecar made by Ferrari from 1961 to 1963.
The Ferrari 156/85 was a Formula One car in the 1985 Formula One season.
The Class 156 "Super Sprinter" DMU train.
The Midland Railway 156 Class, a 2-4-0 tender engine built in the United Kingdom between 1866 and 1874.
London Buses route 156.
Martin 156, known as the Russian clipper, was a large flying boat aircraft intended for transoceanic service |
https://en.wikipedia.org/wiki/Heun%20function | In mathematics, the local Heun function is the solution of Heun's differential equation that is holomorphic and takes the value 1 at the singular point z = 0. The local Heun function is called a Heun function, denoted Hf, if it is also regular at z = 1, and is called a Heun polynomial, denoted Hp, if it is regular at all three finite singular points z = 0, 1, a.
Heun's equation
Heun's equation is a second-order linear ordinary differential equation (ODE) of the form
d²w/dz² + [γ/z + δ/(z − 1) + ε/(z − a)] dw/dz + (αβz − q) / [z(z − 1)(z − a)] w = 0
The condition ε = α + β − γ − δ + 1 is taken so that the characteristic exponents for the regular singularity at infinity are α and β (see below).
The complex number q is called the accessory parameter. Heun's equation has four regular singular points: 0, 1, a and ∞ with exponents (0, 1 − γ), (0, 1 − δ), (0, 1 − ϵ), and (α, β). Every second-order linear ODE on the extended complex plane with at most four regular singular points, such as the Lamé equation or the hypergeometric differential equation, can be transformed into this equation by a change of variable.
Coalescence of various regular singularities of the Heun equation into irregular singularities give rise to several confluent forms of the equation, as shown in the table below.
Forms of the Heun equation:

| Form | Singularities |
|---|---|
| General | 0, 1, a, ∞ |
| Confluent | 0, 1, ∞ (irregular, rank 1) |
| Doubly confluent | 0 (irregular, rank 1), ∞ (irregular, rank 1) |
| Biconfluent | 0, ∞ (irregular, rank 2) |
| Triconfluent | ∞ (irregular, rank 3) |
q-analog
A q-analog of Heun's equation has been discovered and subsequently studied.
Symmetries
Heun's equation has a group of symmetries of order 192, isomorphic to the Coxeter group of the Coxeter diagram D4, analogous to the 24 symmetries of the hypergeometric differential equations obtained by Kummer.
The symmetries fixing the local Heun function form a group of order 24 isomorphic to the symmetric group on 4 points, so there are 192/24 = 8 = 2 × 4 essentially diffe |
https://en.wikipedia.org/wiki/Oleoresin | Oleoresins are semi-solid extracts composed of resin and essential or fatty oil, obtained by evaporation of the solvents used for their production. The oleoresin of conifers is known as crude turpentine or gum turpentine, which consists of oil of turpentine and rosin.
Properties
In contrast to essential oils obtained by steam distillation, oleoresins abound in heavier, less volatile and lipophilic compounds, such as resins, waxes, fats and fatty oils. Gummo-oleoresins (oleo-gum resins, gum resins) occur mostly as crude balsams and contain also water-soluble gums. Processing of oleoresins is conducted on a large scale, especially in China (400,000 tons per year in the 1990s), but the technology is too labor-intensive to be viable in countries with high labor costs, such as the US.
Oleoresins are prepared from spices, such as basil, capsicum (paprika), cardamom, celery seed, cinnamon bark, clove bud, fenugreek, fir balsam, ginger, jambu, labdanum, mace, marjoram, nutmeg, parsley, pepper (black/white), pimenta (allspice), rosemary, sage, savory (summer/winter), thyme, turmeric, vanilla, and West Indian bay leaves. The solvents used are nonaqueous and may be polar (alcohols) or nonpolar (hydrocarbons, carbon dioxide).
Oleoresins are similar to perfumery concretes, obtained especially from flowers, and to perfumery resinoids, which are prepared also from animal secretions.
Use
Most oleoresins are used as flavors and perfumes, and some are used medicinally (e.g., oleoresin of dry Cannabis infructescence). Oleoresin capsicum is commonly used as a basis for tear gases. They are also used in the manufacture of soaps and cosmetics, as well as coloring agents for foods. |
https://en.wikipedia.org/wiki/Luis%20Caffarelli | Luis Ángel Caffarelli (born December 8, 1948) is an Argentine–American mathematician. He studies partial differential equations and their applications.
Career
Caffarelli was born and grew up in Buenos Aires. He obtained his Master of Science (1968) and Ph.D. (1972) at the University of Buenos Aires. His Ph.D. advisor was Calixto Calderón. He currently holds the Sid Richardson Chair at the University of Texas at Austin. He also has been a professor at the University of Minnesota, the University of Chicago, and the Courant Institute of Mathematical Sciences at New York University. From 1986 to 1996 he was a professor at the Institute for Advanced Study in Princeton.
Research
Caffarelli received recognition with "The regularity of free boundaries in higher dimensions", published in 1977 in Acta Mathematica. He is considered an expert in free boundary problems and nonlinear partial differential equations. He proved several regularity results for fully nonlinear elliptic equations including the Monge–Ampère equation, and also contributed to homogenization. He is also interested in integro-differential equations.
One of his most cited results concerns the partial regularity of suitable weak solutions of the Navier–Stokes equations; it was obtained in 1982 in collaboration with Louis Nirenberg and Robert V. Kohn.
Awards and recognition
In 1991 he was elected to the U.S. National Academy of Sciences. He was awarded honorary doctorates by the École Normale Supérieure, Paris, the University of Notre Dame, the Universidad Autónoma de Madrid, and the Universidad de La Plata, Argentina. He received the Bôcher Memorial Prize in 1984. He is listed as an ISI highly cited researcher.
In 2003 Konex Foundation from Argentina granted him the Diamond Konex Award, one of the most prestigious awards in Argentina, as the most important Scientist of his country in the last decade. In 2005, he received the prestigious Rolf Schock Prize of the Royal Swedish Academy of Sciences "for his |
https://en.wikipedia.org/wiki/Lebesgue%20constant | In mathematics, the Lebesgue constants (depending on a set of nodes and of its size) give an idea of how good the interpolant of a function (at the given nodes) is in comparison with the best polynomial approximation of the function (the degree of the polynomials is fixed). The Lebesgue constant for polynomials of degree at most n and for the set of nodes T is generally denoted by Λn(T). These constants are named after Henri Lebesgue.
Definition
We fix the interpolation nodes x0, ..., xn and an interval [a, b] containing all the interpolation nodes. The process of interpolation maps the function f to a polynomial p. This defines a mapping X from the space C([a, b]) of all continuous functions on [a, b] to itself. The map X is linear and it is a projection on the subspace Πn of polynomials of degree n or less.
The Lebesgue constant is defined as the operator norm of X. This definition requires us to specify a norm on C([a, b]). The uniform norm is usually the most convenient.
Properties
The Lebesgue constant bounds the interpolation error: let p* denote the best approximation of f among the polynomials of degree n or less. In other words, p* minimizes ‖f − p‖ among all p in Πn. Then
‖f − X(f)‖ ≤ (Λn(T) + 1) ‖f − p*‖.
We will here prove this statement with the maximum norm. By the triangle inequality,
‖f − X(f)‖ ≤ ‖f − p*‖ + ‖p* − X(f)‖.
But X is a projection on Πn, so
p* − X(f) = X(p*) − X(f) = X(p* − f).
This finishes the proof since ‖X(p* − f)‖ ≤ ‖X‖ ‖p* − f‖ = Λn(T) ‖f − p*‖. Note that this relation comes also as a special case of Lebesgue's lemma.
In other words, the interpolation polynomial is at most a factor worse than the best possible approximation. This suggests that we look for a set of interpolation nodes with a small Lebesgue constant.
The Lebesgue constant can be expressed in terms of the Lagrange basis polynomials:
Λn(T) = max over x in [a, b] of Σj |lj(x)|, j = 0, ..., n.
In fact, we have the Lebesgue function
λn(x) = Σj |lj(x)|, j = 0, ..., n,
and the Lebesgue constant (or Lebesgue number) for the grid is its maximum value
Λn(T) = max over x in [a, b] of λn(x).
Nevertheless, it is not easy to find an explicit expression for Λn(T).
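The Lebesgue function can instead be maximised numerically on a fine grid. A sketch comparing equidistant and Chebyshev nodes on [−1, 1] (names and grid resolution are illustrative; printed values are approximate):

```python
import numpy as np

def lebesgue_constant(nodes, grid=10**5):
    """Estimate the Lebesgue constant by maximising the Lebesgue
    function sum_j |l_j(x)| on [-1, 1]."""
    x = np.linspace(-1.0, 1.0, grid)
    lam = np.zeros_like(x)
    for i, xi in enumerate(nodes):
        li = np.ones_like(x)          # Lagrange basis polynomial l_i
        for j, xj in enumerate(nodes):
            if j != i:
                li *= (x - xj) / (xi - xj)
        lam += np.abs(li)
    return lam.max()

n = 10
equi = np.linspace(-1, 1, n + 1)
cheb = np.cos((2 * np.arange(n + 1) + 1) * np.pi / (2 * (n + 1)))
print(lebesgue_constant(equi))  # ~29.9: equidistant nodes grow exponentially
print(lebesgue_constant(cheb))  # ~2.49: Chebyshev nodes grow only logarithmically
```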
Minimal Lebesgue constants
In the case of equidistant nodes, the Lebesgue constant grows exponentially. More precisely, we hav |
https://en.wikipedia.org/wiki/Pumping%20lemma%20for%20regular%20languages | In the theory of formal languages, the pumping lemma for regular languages is a lemma that describes an essential property of all regular languages. Informally, it says that all sufficiently long strings in a regular language may be pumped—that is, have a middle section of the string repeated an arbitrary number of times—to produce a new string that is also part of the language.
Specifically, the pumping lemma says that for any regular language L there exists a constant p such that any string w in L with length at least p can be split into three substrings x, y and z (w = xyz, with y being non-empty), such that the strings xyⁿz constructed by repeating y zero or more times are still in L. This process of repetition is known as "pumping". Moreover, the pumping lemma guarantees that the length of xy will be at most p, imposing a limit on the ways in which w may be split.
Languages with a finite number of strings vacuously satisfy the pumping lemma by having p equal to the maximum string length in L plus one. By doing so, no strings in L have length greater than or equal to p.
The pumping lemma is useful for disproving the regularity of a specific language in question. It was first proven by Michael Rabin and Dana Scott in 1959, and rediscovered shortly after by Yehoshua Bar-Hillel, Micha A. Perles, and Eli Shamir in 1961, as a simplification of their pumping lemma for context-free languages.
Formal statement
Let L be a regular language. Then there exists an integer p ≥ 1 depending only on L such that every string w in L of length at least p (p is called the "pumping length") can be written as w = xyz (i.e., w can be divided into three substrings), satisfying the following conditions: (1) |y| ≥ 1; (2) |xy| ≤ p; and (3) xyⁿz is in L for all n ≥ 0.
y is the substring that can be pumped (removed or repeated any number of times, and the resulting string is always in L). (1) means the loop y to be pumped must be of length at least one, that is, not an empty string; (2) means the loop must occur within the first p characters. |x| must be smaller than p (conclusion of (1) and (2)), but apa |
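The proof idea behind the lemma, that a DFA run on a long string must repeat a state within its first p steps, can be made concrete. A sketch, assuming a DFA given as a transition dictionary (all names illustrative):

```python
def pump_decomposition(delta, start, w):
    """For a DFA with transition dict `delta` and p = number of states,
    split a string w with len(w) >= p into x, y, z by finding the first
    repeated state along the run (pigeonhole on the first p steps)."""
    seen = {start: 0}
    state = start
    for i, ch in enumerate(w, start=1):
        state = delta[(state, ch)]
        if state in seen:              # loop found: pump w[seen[state]:i]
            j = seen[state]
            return w[:j], w[j:i], w[i:]
        seen[state] = i
    raise ValueError("no repeated state; w shorter than pumping length")

# DFA over {a} accepting strings of even length (2 states, so p = 2):
delta = {(0, "a"): 1, (1, "a"): 0}
x, y, z = pump_decomposition(delta, 0, "aaaa")
print(repr(x), repr(y), repr(z))  # '' 'aa' 'aa'
# Pumping y keeps the string in the language: x + y*k + z stays even-length.
print(all(len(x + y * k + z) % 2 == 0 for k in range(5)))  # True
```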
https://en.wikipedia.org/wiki/Rabinovich%E2%80%93Fabrikant%20equations | The Rabinovich–Fabrikant equations are a set of three coupled ordinary differential equations exhibiting chaotic behaviour for certain values of the parameters. They are named after Mikhail Rabinovich and Anatoly Fabrikant, who described them in 1979.
System description
The equations are:
dx/dt = y(z − 1 + x²) + γx
dy/dt = x(3z + 1 − x²) + γy
dz/dt = −2z(α + xy)
where α, γ are constants that control the evolution of the system. For some values of α and γ, the system is chaotic, but for others it tends to a stable periodic orbit.
Danca and Chen note that the Rabinovich–Fabrikant system is difficult to analyse (due to the presence of quadratic and cubic terms) and that different attractors can be obtained for the same parameters by using different step sizes in the integration, for example when the same parameter values and initial conditions are handed to two different solvers. Also, a hidden attractor was recently discovered in the Rabinovich–Fabrikant system.
Equilibrium points
The Rabinovich–Fabrikant system has five hyperbolic equilibrium points, one at the origin and four dependent on the system parameters α and γ. These equilibrium points only exist for certain values of α and γ > 0.
γ = 0.87, α = 1.1
An example of chaotic behaviour is obtained for γ = 0.87 and α = 1.1 with initial conditions of (−1, 0, 0.5). The correlation dimension was found to be 2.19 ± 0.01. The Lyapunov exponents λ are approximately 0.1981, 0, and −0.6581, and the Kaplan–Yorke dimension is DKY ≈ 2.3010.
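A sketch reproducing this parameter regime with SciPy's general-purpose integrator; as noted above, trajectories are highly sensitive to solver settings, so the printed end state should not be treated as authoritative:

```python
from scipy.integrate import solve_ivp

def rabinovich_fabrikant(t, u, alpha, gamma):
    x, y, z = u
    return [y * (z - 1 + x**2) + gamma * x,
            x * (3 * z + 1 - x**2) + gamma * y,
            -2 * z * (alpha + x * y)]

# Chaotic parameter set from the text: gamma = 0.87, alpha = 1.1.
sol = solve_ivp(rabinovich_fabrikant, (0, 100), [-1, 0, 0.5],
                args=(1.1, 0.87), rtol=1e-10, atol=1e-12)
print(sol.y[:, -1])  # end state; changes with tolerances, as Danca and Chen note
```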
γ = 0.1
Danca and Romera showed that for γ = 0.1, the system is chaotic for α = 0.98, but settles onto a stable limit cycle for α = 0.14.
See also
List of chaotic maps |
https://en.wikipedia.org/wiki/Prosector | A prosector is a person with the special task of preparing a dissection for demonstration, usually in medical schools or hospitals. Many important anatomists began their careers as prosectors working for lecturers and demonstrators in anatomy and pathology.
The act of prosecting differs from that of dissecting. A prosection is a professionally prepared dissection prepared by a prosector – a person who is well versed in anatomy and who therefore prepares a specimen so that others may study and learn anatomy from it. A dissection is prepared by a student who is dissecting the specimen for the purpose of learning more about the anatomical structures pertaining to that specimen. The term dissection may also be used to describe the act of cutting. Therefore, a prosector dissects to prepare a prosection.
Prosecting is intricate work where numerous tools are used to produce a desired specimen. Scalpels and scissors allow for sharp dissection where tissue is cut, e.g. the biceps brachii muscle can be removed from the specimen by cutting the origin and insertion with a scalpel. Probes and the prosector's own fingers are examples of tools used for blunt dissection, where tissue may be separated from surrounding structures without cutting, e.g. the bellies of the biceps brachii and coracobrachialis muscles can be made clearer by loosening the fascia between the two muscles with a blunt probe.
Occupational risks
Generally, the risks to prosectors are low. Cadavers used for teaching purposes are embalmed before they are encountered by a prosector and students. Embalming fluid usually contains formaldehyde, phenol, Dettol, and glycerine which disinfect and kill pathogens within the cadaver. With exposure to embalming fluid, tissues and bodily fluids, such as blood, become fixed. Prosectors and students working with embalmed cadavers must always wear protective gloves, but that is more for protection against the harsh chemicals used in embalming, such as formaldehyde and Dettol, whic |
https://en.wikipedia.org/wiki/Java%20Card | Java Card is a software technology that allows Java-based applications (applets) to be run securely on smart cards and, more generally, on similar secure small-memory-footprint devices called "secure elements" (SE). Today, a secure element is not limited to smart cards and other removable cryptographic token form factors; embedded SEs soldered onto a device board and new security designs embedded into general-purpose chips are also widely used. Java Card addresses this hardware fragmentation and these specificities while retaining the code portability brought forward by Java.
Java Card is the tiniest of Java platforms targeted for embedded devices. Java Card gives the user the ability to program the devices and make them application specific. It is widely used in different markets: wireless telecommunications within SIM cards and embedded SIM, payment within banking cards and NFC mobile payment and for identity cards, healthcare cards, and passports. Several IoT products like gateways are also using Java Card based products to secure communications with a cloud service for instance.
The first Java Card was introduced in 1996 by Schlumberger's card division which later merged with Gemplus to form Gemalto. Java Card products are based on the specifications by Sun Microsystems (later a subsidiary of Oracle Corporation). Many Java card products also rely on the GlobalPlatform specifications for the secure management of applications on the card (download, installation, personalization, deletion).
The main design goals of the Java Card technology are portability, security and backward compatibility.
Portability
Java Card aims at defining a standard smart card computing environment allowing the same Java Card applet to run on different smart cards, much like a Java applet runs on different computers. As in Java, this is accomplished using the combination of a virtual machine (the Java Card Virtual Machine), and a well-defined runtime library, which largely abstrac |
https://en.wikipedia.org/wiki/Q-theta%20function | In mathematics, the q-theta function (or modified Jacobi theta function) is a type of q-series which is used to define elliptic hypergeometric series.
It is given by
θ(z; q) = ∏ over n ≥ 0 of (1 − z qⁿ)(1 − qⁿ⁺¹/z)
where one takes 0 ≤ |q| < 1. It obeys the identities
θ(z; q) = θ(q/z; q) = −z θ(1/z; q).
It may also be expressed as:
θ(z; q) = (z; q)∞ (q/z; q)∞
where (·; q)∞ is the q-Pochhammer symbol.
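A truncated-product evaluation is enough to check the identities numerically; a sketch (truncation depth illustrative):

```python
def q_theta(z, q, terms=200):
    """Truncated product theta(z; q) = prod_{n>=0} (1 - z q^n)(1 - q^(n+1)/z)."""
    p = 1.0
    for n in range(terms):
        p *= (1 - z * q**n) * (1 - q**(n + 1) / z)
    return p

z, q = 0.3, 0.5
# Numerical check of the identity theta(z; q) = -z * theta(1/z; q):
print(q_theta(z, q))
print(-z * q_theta(1 / z, q))  # agrees with the line above to many digits
```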
See also
elliptic hypergeometric series
Jacobi theta function
Ramanujan theta function |
https://en.wikipedia.org/wiki/Mixminion | Mixminion is the standard implementation of the Type III anonymous remailer protocol. Mixminion can send and receive anonymous e-mail.
Mixminion uses a mix network architecture to provide strong anonymity, and prevent eavesdroppers and other attackers from linking senders and recipients. Volunteers run servers (called "mixes") that receive messages, decrypt them, re-order them, and re-transmit them toward their eventual destination. Every e-mail passes through several mixes so that no single mix can link message senders with recipients.
To send an anonymous message, mixminion breaks it into uniform-sized chunks (also called "packets"), pads the packets to a uniform size, and chooses a path through the mix network for each packet. The software encrypts every packet with the public keys for each server in its path, one by one. When it is time to transmit a packet, mixminion sends it to the first mix in the path. The first mix decrypts the packet, learns which mix will receive the packet, and relays it. Eventually, the packet arrives at a final (or "exit") mix, which sends it to the chosen recipient. Because no mix sees any more of the path besides the immediately adjacent mixes, they cannot link senders to recipients.
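The layered ("onion") encryption idea can be illustrated with symmetric keys standing in for the per-mix public keys; this is a conceptual sketch only, not Mixminion's actual packet format or cryptography:

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Hypothetical 3-hop path; real Mixminion uses fixed-size packets and
# per-hop public-key (hybrid) encryption, not shared symmetric keys.
path_keys = [Fernet.generate_key() for _ in range(3)]

def wrap(message, keys):
    """Encrypt in reverse path order, so mix 1 can strip the outer layer."""
    for key in reversed(keys):
        message = Fernet(key).encrypt(message)
    return message

packet = wrap(b"anonymous mail body", path_keys)
for i, key in enumerate(path_keys, 1):   # each mix peels exactly one layer
    packet = Fernet(key).decrypt(packet)
    print(f"mix {i} forwards {len(packet)} bytes")
print(packet)  # b'anonymous mail body' at the exit mix
```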
Mixminion supports Single-Use Reply Blocks (or SURBs) to allow anonymous recipients. A SURB encodes a half-path to a recipient, so that each mix in the sequence can unwrap a single layer of the path, and encrypt the message for the recipient. When the message reaches the recipient, the recipient can decode the message and learn which SURB was used to send it; the sender does not know which recipient has received the anonymous message.
The most current version of Mixminion Message Sender is 1.2.7 and was released on 11 February 2009.
On 2 September 2011, a news announcement stated that the source code had been uploaded to GitHub.
See also
Anonymity
Anonymous P2P
Anonymous remailer
Cypherpunk anonymous remailer (Type I)
Mixmaster anonymous remail |
https://en.wikipedia.org/wiki/Elliptic%20gamma%20function | In mathematics, the elliptic gamma function is a generalization of the q-gamma function, which is itself the q-analog of the ordinary gamma function. It is closely related to a function that had been studied earlier, and can be expressed in terms of the triple gamma function. It is given by
Γ(z; p, q) = ∏ over m, n ≥ 0 of (1 − z⁻¹ pᵐ⁺¹ qⁿ⁺¹) / (1 − z pᵐ qⁿ)
It obeys several identities:
Γ(z; p, q) = Γ(z; q, p)
and
Γ(pz; p, q) = θ(z; q) Γ(z; p, q), Γ(qz; p, q) = θ(z; p) Γ(z; p, q)
where θ is the q-theta function.
When p = 0, it essentially reduces to the infinite q-Pochhammer symbol:
Γ(z; 0, q) = 1 / (z; q)∞.
Multiplication Formula
Define
Then the following multiplication formula holds. |
https://en.wikipedia.org/wiki/Constitutional%20growth%20delay | Constitutional delay of growth and puberty (CDGP) is a term describing a temporary delay in the skeletal growth and thus height of a child with no physical abnormalities causing the delay. Short stature may be the result of a growth pattern inherited from a parent (familial) or occur for no apparent reason (idiopathic). Typically at some point during childhood, growth slows down, eventually resuming at a normal rate. CDGP is the most common cause of short stature and delayed puberty.
Synonyms
Constitutional Delay of Growth and Adolescence (CDGA)
Constitutional Growth Delay (CGD)
See also
Idiopathic short stature
Failure to thrive |
https://en.wikipedia.org/wiki/Bessel%20filter | In electronics and signal processing, a Bessel filter is a type of analog linear filter with a maximally flat group delay (i.e., maximally linear phase response), which preserves the wave shape of filtered signals in the passband. Bessel filters are often used in audio crossover systems.
The filter's name is a reference to German mathematician Friedrich Bessel (1784–1846), who developed the mathematical theory on which the filter is based. The filters are also called Bessel–Thomson filters in recognition of W. E. Thomson, who worked out how to apply Bessel functions to filter design in 1949.
The Bessel filter is very similar to the Gaussian filter, and tends towards the same shape as filter order increases. While the time-domain step response of the Gaussian filter has zero overshoot, the Bessel filter has a small amount of overshoot, but still much less than other common frequency-domain filters, such as Butterworth filters. It has been noted that the impulse response of Bessel–Thomson filters tends towards a Gaussian as the order of the filter is increased.
Compared to finite-order approximations of the Gaussian filter, the Bessel filter has better shaping factor, flatter phase delay, and flatter group delay than a Gaussian of the same order, although the Gaussian has lower time delay and zero overshoot.
The transfer function
A Bessel low-pass filter is characterized by its transfer function:
H(s) = θn(0) / θn(s/ω0)
where θn(s) is a reverse Bessel polynomial from which the filter gets its name and ω0 is a frequency chosen to give the desired cut-off frequency. The filter has a low-frequency group delay of 1/ω0. Since θn(0) is indeterminate by the definition of reverse Bessel polynomials, but is a removable singularity, it is defined that θn(0) = lim x→0 θn(x).
Bessel polynomials
The transfer function of the Bessel filter is a rational function whose denominator is a reverse Bessel polynomial, such as the following third-order example:
H(s) = 15 / (s³ + 6s² + 15s + 15)
The reverse Bessel polynomials are given by the recursion
θn(s) = (2n − 1) θn−1(s) + s² θn−2(s)
where θ0(s) = 1 and θ1(s) = s + 1.
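SciPy can design such a filter directly; the sketch below checks the defining flat group delay near DC (the order and the delay normalisation are chosen for illustration):

```python
import numpy as np
from scipy import signal

# 4th-order analog Bessel low-pass, normalised for unit group delay at DC.
b, a = signal.bessel(4, 1.0, btype="low", analog=True, norm="delay")
w, h = signal.freqs(b, a, worN=np.logspace(-1, 1, 500))

phase = np.unwrap(np.angle(h))
group_delay = -np.gradient(phase, w)
print(group_delay[0])   # ~1.0: flat group delay near DC, the defining property
```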
Setting the cutoff attenuation
There is no sta |
https://en.wikipedia.org/wiki/Picard%E2%80%93Fuchs%20equation | In mathematics, the Picard–Fuchs equation, named after Émile Picard and Lazarus Fuchs, is a linear ordinary differential equation whose solutions describe the periods of elliptic curves.
Definition
Let
j = 1728 g₂³ / (g₂³ − 27 g₃²)
be the j-invariant, with g₂ and g₃ the modular invariants of the elliptic curve in Weierstrass form:
y² = 4x³ − g₂x − g₃.
Note that the j-invariant is an isomorphism from the Riemann surface H/Γ to the Riemann sphere C ∪ {∞}, where H is the upper half-plane and Γ is the modular group. The Picard–Fuchs equation is then
Written in Q-form, one has
Solutions
This equation can be cast into the form of the hypergeometric differential equation. It has two linearly independent solutions, called the periods of elliptic functions. The ratio of the two periods is equal to the period ratio τ, the standard coordinate on the upper-half plane. However, the ratio of two solutions of the hypergeometric equation is also known as a Schwarz triangle map.
The Picard–Fuchs equation can be cast into the form of Riemann's differential equation, and thus solutions can be directly read off in terms of Riemann P-functions. One has
At least four methods to find the j-function inverse can be given.
Dedekind defines the j-function by its Schwarzian derivative in his letter to Borchardt. As a partial fraction, it reveals the geometry of the fundamental domain:
where (Sƒ)(x) is the Schwarzian derivative of ƒ with respect to x.
Generalization
In algebraic geometry, this equation has been shown to be a very special case of a general phenomenon, the Gauss–Manin connection. |
https://en.wikipedia.org/wiki/Rate%20of%20heat%20flow | The rate of heat flow is the amount of heat that is transferred per unit of time in some material, usually measured in watts (joules per second). Heat is the flow of thermal energy driven by thermal non-equilibrium, so the term 'heat flow' is a redundancy (i.e. a pleonasm). Heat must not be confused with stored thermal energy, and moving a hot object from one place to another must not be called heat transfer. However, it is common to say ‘heat flow’ to mean ‘heat content’.
The equation of heat flow is given by Fourier's Law of Heat Conduction.
Rate of heat flow = − (thermal conductivity) × (area of the body) × (temperature difference) / (thickness of the material)
The formula for the rate of heat flow is:
Q / Δt = −k A ΔT / Δx
where
Q is the net heat (energy) transfer,
Δt is the time taken,
ΔT is the difference in temperature between the cold and hot sides,
Δx is the thickness of the material conducting heat (distance between hot and cold sides),
k is the thermal conductivity, and
A is the surface area of the surface emitting heat.
If a piece of material whose cross-sectional area is A and thickness is Δx, with a temperature difference ΔT between its faces, is observed, heat flows between the two faces in a direction perpendicular to the faces. The time rate of heat flow, Q/Δt, for small Δt and small ΔT, is proportional to A ΔT/Δx. In the limit of infinitesimal thickness dx, with temperature difference dT, this becomes dQ/dt = −kA dT/dx, where dQ/dt is the time rate of heat flow through the area A, dT/dx is the temperature gradient across the material, and k, the proportionality constant, is the thermal conductivity of the material. People often use k, λ, or the Greek letter κ to represent this constant. The minus sign is there because heat flows down the temperature gradient, from the side at higher temperature to the one at lower temperature, so dQ/dt is positive when dT/dx is negative.
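A worked example of the formula (the conductivity value for brick is an assumed, illustrative figure):

```python
# Steady-state conduction through a brick wall via Fourier's law.
k = 0.6          # W/(m K), assumed thermal conductivity of brick
A = 10.0         # m^2, wall area
dT = 5.0 - 20.0  # K, cold side (5 degC) minus hot side (20 degC)
dx = 0.2         # m, wall thickness

rate = -k * A * dT / dx   # dQ/dt = -k A dT/dx
print(rate, "W")          # 450.0 W, flowing from the hot side to the cold side
```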
See also
Heat transfer coefficient
Heat transfer
Thermal conduction
Thermal conductivity
Heat flux
Watt
Flux |
https://en.wikipedia.org/wiki/Riemann%27s%20differential%20equation | In mathematics, Riemann's differential equation, named after Bernhard Riemann, is a generalization of the hypergeometric differential equation, allowing the regular singular points to occur anywhere on the Riemann sphere, rather than merely at 0, 1, and . The equation is also known as the Papperitz equation.
The hypergeometric differential equation is a second-order linear differential equation which has three regular singular points, 0, 1 and ∞. That equation admits two linearly independent solutions; near a singularity s, the solutions take the form x^ρ f(x), where x is a local variable, and f is locally holomorphic with f(0) ≠ 0. The real number ρ is called the exponent of the solution at s. Let α, β and γ be the exponents of one solution at 0, 1 and ∞ respectively; and let α′, β′ and γ′ be those of the other. Then
α + α′ + β + β′ + γ + γ′ = 1.
By applying suitable changes of variable, it is possible to transform the hypergeometric equation: Applying Möbius transformations will adjust the positions of the regular singular points, while other transformations (see below) can change the exponents at the regular singular points, subject to the exponents adding up to 1.
Definition
The differential equation is given by
d²w/dz² + [(1 − α − α′)/(z − a) + (1 − β − β′)/(z − b) + (1 − γ − γ′)/(z − c)] dw/dz + [αα′(a − b)(a − c)/(z − a) + ββ′(b − c)(b − a)/(z − b) + γγ′(c − a)(c − b)/(z − c)] w / [(z − a)(z − b)(z − c)] = 0
The regular singular points are a, b, and c. The exponents of the solutions at these regular singular points are, respectively, (α, α′), (β, β′), and (γ, γ′). As before, the exponents are subject to the condition
α + α′ + β + β′ + γ + γ′ = 1.
Solutions and relationship with the hypergeometric function
The solutions are denoted by the Riemann P-symbol (also known as the Papperitz symbol)
The standard hypergeometric function may be expressed as
The P-functions obey a number of identities; one of them allows a general P-function to be expressed in terms of the hypergeometric function. It is
In other words, one may write the solutions in terms of the hypergeometric function as
The full complement of Kummer's 24 solutions may be obtained in this way; see the article hypergeometric differential equation for a treatment of Kummer's solutio |
https://en.wikipedia.org/wiki/Causal%20dynamical%20triangulation | Causal dynamical triangulation (abbreviated as CDT), theorized by Renate Loll, Jan Ambjørn and Jerzy Jurkiewicz, is an approach to quantum gravity that, like loop quantum gravity, is background independent.
This means that it does not assume any pre-existing arena (dimensional space) but, rather, attempts to show how the spacetime fabric itself evolves.
There is evidence that, at large scales, CDT approximates the familiar 4-dimensional spacetime but shows spacetime to be 2-dimensional near the Planck scale, and reveals a fractal structure on slices of constant time. These interesting results agree with the findings of Lauscher and Reuter, who use an approach called Quantum Einstein Gravity, and with other recent theoretical work.
Introduction
Near the Planck scale, the structure of spacetime itself is supposed to be constantly changing due to quantum fluctuations and topological fluctuations. CDT theory uses a triangulation process which varies dynamically and follows deterministic rules, to map out how this can evolve into dimensional spaces similar to that of our universe.
The results of researchers suggest that this is a good way to model the early universe, and describe its evolution. Using a structure called a simplex, it divides spacetime into tiny triangular sections. A simplex is the multidimensional analogue of a triangle [2-simplex]; a 3-simplex is usually called a tetrahedron, while the 4-simplex, which is the basic building block in this theory, is also known as the pentachoron. Each simplex is geometrically flat, but simplices can be "glued" together in a variety of ways to create curved spacetimes. Whereas previous attempts at triangulation of quantum spaces have produced jumbled universes with far too many dimensions, or minimal universes with too few, CDT avoids this problem by allowing only those configurations in which the timelines of all joined edges of simplices agree.
Derivation
CDT is a modification of quantum Regge calculus wh |
https://en.wikipedia.org/wiki/Human%20Genetic%20Diversity%3A%20Lewontin%27s%20Fallacy | "Human Genetic Diversity: Lewontin's Fallacy" is a 2003 paper by A. W. F. Edwards. He criticises an argument first made in Richard Lewontin's 1972 article "The Apportionment of Human Diversity", that the practice of dividing humanity into races is taxonomically invalid because any given individual will often have more in common genetically with members of other population groups than with members of their own. Edwards argued that this does not refute the biological reality of race since genetic analysis can usually make correct inferences about the perceived race of a person from whom a sample is taken, and that the rate of success increases when more genetic loci are examined.
Edwards' paper was reprinted, commented upon by experts such as Noah Rosenberg, and given further context in an interview with philosopher of science Rasmus Grønfeldt Winther in a 2018 anthology. Edwards' critique is discussed in a number of academic and popular science books, with varying degrees of support.
Some scholars, including Winther and Jonathan Marks, dispute the premise of "Lewontin's fallacy", arguing that Edwards' critique does not actually contradict Lewontin's argument. A 2007 paper in Genetics by David J. Witherspoon et al. concluded that the two arguments are in fact compatible, and that Lewontin's observation about the distribution of genetic differences across ancestral population groups applies "even when the most distinct populations are considered and hundreds of loci are used".
Lewontin's argument
In the 1972 study "The Apportionment of Human Diversity", Richard Lewontin performed a fixation index (FST) statistical analysis using 17 markers, including blood group proteins, from individuals across classically defined "races" (Caucasian, African, Mongoloid, South Asian Aborigines, Amerinds, Oceanians, and Australian Aborigines). He found that the majority of the total genetic variation between humans (i.e., of the 0.1% of DNA that varies between individuals), 85.4%, is |
https://en.wikipedia.org/wiki/Willmore%20energy | In differential geometry, the Willmore energy is a quantitative measure of how much a given surface deviates from a round sphere. Mathematically, the Willmore energy of a smooth closed surface embedded in three-dimensional Euclidean space is defined to be the integral of the square of the mean curvature minus the Gaussian curvature. It is named after the English geometer Thomas Willmore.
Definition
Expressed symbolically, the Willmore energy of S is:
W = ∫S H² dA − ∫S K dA
where H is the mean curvature, K is the Gaussian curvature, and dA is the area form of S. For a closed surface, by the Gauss–Bonnet theorem, the integral of the Gaussian curvature may be computed in terms of the Euler characteristic χ(S) of the surface:
∫S K dA = 2πχ(S),
which is a topological invariant and thus independent of the particular embedding that was chosen. Thus the Willmore energy can be expressed as
W = ∫S H² dA − 2πχ(S)
An alternative, but equivalent, formula is
W = (1/4) ∫S (k1 − k2)² dA
where k1 and k2 are the principal curvatures of the surface.
Properties
The Willmore energy is always greater than or equal to zero. A round sphere has zero Willmore energy.
The Willmore energy can be considered a functional on the space of embeddings of a given surface, in the sense of the calculus of variations, and one can vary the embedding of a surface, while leaving it topologically unaltered.
Critical points
A basic problem in the calculus of variations is to find the critical points and minima of a functional.
For a given topological space, this is equivalent to finding the critical points of the function
since the Euler characteristic is constant.
One can find (local) minima for the Willmore energy by gradient descent, which in this context is called Willmore flow.
For embeddings of the sphere in 3-space, the critical points have been classified: they are all conformal transforms of minimal surfaces, the round sphere is the minimum, and all other critical values are integers greater than 4. They are called Willmore surfaces.
Willmore flow
The Willmore flow is the |
https://en.wikipedia.org/wiki/Principal%20curvature | In differential geometry, the two principal curvatures at a given point of a surface are the maximum and minimum values of the curvature as expressed by the eigenvalues of the shape operator at that point. They measure how the surface bends by different amounts in different directions at that point.
Discussion
At each point p of a differentiable surface in 3-dimensional Euclidean space one may choose a unit normal vector. A normal plane at p is one that contains the normal vector; each normal plane also contains a unique direction tangent to the surface and cuts the surface in a plane curve, called a normal section. This curve will in general have different curvatures for different normal planes at p. The principal curvatures at p, denoted k1 and k2, are the maximum and minimum values of this curvature.
Here the curvature of a curve is by definition the reciprocal of the radius of the osculating circle. The curvature is taken to be positive if the curve turns in the same direction as the surface's chosen normal, and otherwise negative. The directions in the normal plane where the curvature takes its maximum and minimum values are always perpendicular, if k1 does not equal k2, a result of Euler (1760), and are called principal directions. From a modern perspective, this theorem follows from the spectral theorem because these directions are the principal axes of a symmetric tensor, the second fundamental form. A systematic analysis of the principal curvatures and principal directions was undertaken by Gaston Darboux, using Darboux frames.
The product k1k2 of the two principal curvatures is the Gaussian curvature, K, and the average (k1 + k2)/2 is the mean curvature, H.
If at least one of the principal curvatures is zero at every point, then the Gaussian curvature will be 0 and the surface is a developable surface. For a minimal surface, the mean curvature is zero at every point.
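For a graph surface z = f(x, y) at a point where the gradient of f vanishes, the shape operator reduces to the Hessian of f, so the principal curvatures are simply its eigenvalues. A sketch for f(x, y) = x² + 2y² at the origin:

```python
import numpy as np

# At a critical point of f, the first fundamental form is the identity,
# so the shape operator equals the Hessian of f.
hessian = np.array([[2.0, 0.0],    # f_xx, f_xy
                    [0.0, 4.0]])   # f_yx, f_yy
k1, k2 = np.linalg.eigvalsh(hessian)
print(k1, k2)          # 2.0 and 4.0: the principal curvatures
print(k1 * k2)         # 8.0 = Gaussian curvature K
print((k1 + k2) / 2)   # 3.0 = mean curvature H
```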
Formal definition
Let M be a surface in Euclidean space with second fundamenta |
https://en.wikipedia.org/wiki/Curvature%20collineation | A curvature collineation (often abbreviated to CC) is vector field which preserves the Riemann tensor in the sense that,
where are the components of the Riemann tensor. The set of all smooth curvature collineations forms a Lie algebra under the Lie bracket operation (if the smoothness condition is dropped, the set of all curvature collineations need not form a Lie algebra). The Lie algebra is denoted by and may be infinite-dimensional. Every affine vector field is a curvature collineation.
See also
Conformal vector field
Homothetic vector field
Killing vector field
Matter collineation
Spacetime symmetries
Mathematical methods in general relativity |
https://en.wikipedia.org/wiki/Hair%20clipper | A hair clipper, often individually called the apparent plurale tantum hair clippers (in a similar way to scissors), is a specialised tool used to cut human head hair. Hair clippers work on the same principle as scissors, but are distinct from scissors themselves and razors. Similar but heavier-duty implements are used to shear sheep, but are called handpieces or machine shears.
Operating principle
Hair clippers are made up of a pair of sharpened comb-like blades in close contact, one above the other, which slide sideways relative to each other; a mechanism, which may be manual or electrical, to make the blades oscillate from side to side; and a handle. The clipper is moved so that hair is positioned between the teeth of the comb, and cut with a scissor action when one blade slides sideways relative to the other. Friction between the blades needs to be as low as possible, which is attained by choice of material and finish, and frequent lubrication.
Manual clippers
Hair clippers are operated by a pair of handles which are alternately squeezed together and released. Barbers used them to cut hair close and fast. The hair was picked up in locks and the head was rapidly depilated. Such haircuts became popular among boys, mostly in schools, and young men in the military and in prisons.
Manual clippers were invented around 1855 by Nikola Bizumić, a Serbian barber. While they were widely used in the distant past, the advent and reduction in cost of electric hair clippers has led to them largely replacing manual clippers. Some barbers in Western countries continue to use them for trimming. They are also used in the Russian army: when conscripts enter boot camp, they cut their hair close to the skin, sometimes using manual clippers.
Culture and religion
In Greece, male students had their heads shaved with manual hair clippers from the early 20th century until it was abolished in 1982. The same practice was used in the military, where recruits had their heads s |
https://en.wikipedia.org/wiki/Mitotoxin | A mitotoxin is a cytotoxic molecule targeted to specific cells by a mitogen. Generally found in snake venom. Mitotoxins are responsible for mediating cell death by interfering with protein or DNA synthesis. Some mechanisms by which mitotoxins can interfere with DNA or protein synthesis include the inactivation of ribosomes or the inhibition of complexes in the mitochondrial electron transport chain. These toxins have a very high affinity and level of specificity for the receptors that they bind to. Mitotoxins bind to receptors on cell surfaces and are then internalized into cells via receptor-mediated endocytosis. Once in the endosome, the receptor releases its ligand and a mitotoxin can mediate cell death.
There are different classes of mitotoxins, each acting on a different type of cell or system. The mitotoxin classes that have been identified thus far include: interleukin-based, transferrin-based, epidermal growth factor-based, nerve growth factor-based, insulin-like growth factor-I-based, and fibroblast growth factor-based mitotoxins. Because of the high affinity and specificity of mitotoxin binding, they present the possibility of creating precise therapeutic agents. One major possibility is the use of growth factor-based mitotoxins as anti-neoplastic agents that can modulate the growth of melanomas.
https://en.wikipedia.org/wiki/Capecitabine | Capecitabine, sold under the brand name Xeloda among others, is a anticancer medication used to treat breast cancer, gastric cancer and colorectal cancer. For breast cancer it is often used together with docetaxel. It is taken by mouth.
Common side effects include abdominal pain, vomiting, diarrhea, weakness, and rashes. Other severe side effects include blood clotting problems, allergic reactions, heart problems such as cardiomyopathy, and low blood cell counts. Use during pregnancy may result in harm to the fetus. Capecitabine, inside the body, is converted to 5-fluorouracil (5-FU) through which it acts. It belongs to the class of medications known as fluoropyrimidines, which also includes 5-FU and tegafur.
Capecitabine was patented in 1992 and approved for medical use in 1998. It is on the World Health Organization's List of Essential Medicines.
Medical uses
Capecitabine is indicated for
adjuvant treatment of people with Stage III colon cancer as a single agent or as a component of a combination chemotherapy regimen;
perioperative treatment of adults with locally advanced rectal cancer as a component of chemoradiotherapy;
treatment of people with unresectable or metastatic colorectal cancer as a single agent or as a component of a combination chemotherapy regimen;
treatment of people with advanced or metastatic breast cancer as a single agent if an anthracycline- or taxane-containing chemotherapy is not indicated;
treatment of people with advanced or metastatic breast cancer in combination with docetaxel after disease progression on prior anthracycline-containing chemotherapy;
treatment of adults with unresectable or metastatic gastric, esophageal, or gastroesophageal junction cancer as a component of a combination chemotherapy regimen;
treatment of adults with HER2-overexpressing metastatic gastric or gastroesophageal junction adenocarcinoma who have not received prior treatment for metastatic disease as a component of a combination regimen;
adjuvant |
https://en.wikipedia.org/wiki/Semileptonic%20decay | In particle physics the semileptonic decay of a hadron is a decay caused by the weak force in which one lepton (and the corresponding neutrino) is produced in addition to one or more hadrons. An example for this can be
K⁺ → π⁰ + e⁺ + νₑ
This is to be contrasted with purely hadronic decays, such as K⁺ → π⁺ + π⁰, which are also mediated by the weak force.
Semileptonic decays of neutral kaons have been used to study kaon oscillations.
See also
Kaon
Pion
CP violation
CPT symmetry
Electroweak theory |
https://en.wikipedia.org/wiki/Spallation%20Neutron%20Source | The Spallation Neutron Source (SNS) is an accelerator-based neutron source facility in the U.S. that provides the most intense pulsed neutron beams in the world for scientific research and industrial development. Each year, this facility hosts hundreds of researchers from universities, national laboratories, and industry, who conduct basic and applied research and technology development using neutrons. SNS is part of Oak Ridge National Laboratory, which is managed by UT-Battelle for the United States Department of Energy (DOE). SNS is a DOE Office of Science user facility, and it is open to scientists and researchers from all over the world.
Neutron scattering research
Neutron scattering allows scientists to count scattered neutrons, measure their energies and the angles at which they scatter, and map their final positions. This information can reveal the molecular and magnetic structure and behavior of materials, such as high-temperature superconductors, polymers, metals, and biological samples. In addition to studies focused on fundamental physics, neutron scattering research has applications in structural biology and biotechnology, magnetism and superconductivity, chemical and engineering materials, nanotechnology, complex fluids, and others.
How SNS works
The spallation process at SNS begins with negatively charged hydrogen ions that are produced by an ion source. Each ion consists of a proton orbited by two electrons. The ions are injected into a linear particle accelerator, or linac, which accelerates them to an energy of about one GeV (or to about 90% of the speed of light). The ions pass through a foil, which strips off each ion's two electrons, converting it to a proton. The protons pass into a ring-shaped structure, a proton accumulator ring, where they spin around at very high speeds and accumulate in "bunches." Each bunch of protons is released from the ring as a pulse, at a rate of 60 times per second (60 hertz). The high-energy proton pulses strike a
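As a quick cross-check of those two figures (a back-of-the-envelope sketch, not part of the source), relativistic kinematics relates the 1 GeV kinetic energy to the quoted speed:

m_c2 = 938.272          # proton rest energy in MeV
T = 1000.0              # kinetic energy from the linac in MeV
gamma = 1 + T / m_c2    # Lorentz factor
beta = (1 - 1 / gamma**2) ** 0.5
print(beta)             # ~0.875, i.e. roughly 90% of the speed of light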
https://en.wikipedia.org/wiki/Second-order%20cybernetics | Second-order cybernetics, also known as the cybernetics of cybernetics, is the recursive application of cybernetics to itself and the reflexive practice of cybernetics according to such a critique. It is cybernetics where "the role of the observer is appreciated and acknowledged rather than disguised, as had become traditional in western science". Second-order cybernetics was developed between the late 1960s and mid 1970s by Heinz von Foerster and others, with key inspiration coming from Margaret Mead. Foerster referred to it as "the control of control and the communication of communication" and differentiated first order cybernetics as "the cybernetics of observed systems" and second-order cybernetics as "the cybernetics of observing systems".
The concept of second-order cybernetics is closely allied to radical constructivism, which was developed around the same time by Ernst von Glasersfeld. While it is sometimes considered a break from the earlier concerns of cybernetics, there is much continuity with previous work and it can be thought of as a distinct tradition within cybernetics, with origins in issues evident during the Macy conferences in which cybernetics was initially developed. Its concerns include autonomy, epistemology, ethics, language, reflexivity, self-consistency, self-referentiality, and self-organizing capabilities of complex systems. It has been characterised as cybernetics where "circularity is taken seriously".
Overview
Terminology
Second-order cybernetics can be abbreviated as C2 or SOC, and is sometimes referred to as the cybernetics of cybernetics, or, more rarely, the new cybernetics, or second cybernetics.
These terms are often used interchangeably, but can also stress different aspects:
Most specifically, and especially where phrased as the cybernetics of cybernetics, second-order cybernetics is the recursive application of cybernetics to itself. This is closely associated with Mead's 1967 address to the American Society for Cyberne |
https://en.wikipedia.org/wiki/Manifold | In mathematics, a manifold is a topological space that locally resembles Euclidean space near each point. More precisely, an n-dimensional manifold, or n-manifold for short, is a topological space with the property that each point has a neighborhood that is homeomorphic to an open subset of n-dimensional Euclidean space.
One-dimensional manifolds include lines and circles, but not lemniscates. Two-dimensional manifolds are also called surfaces. Examples include the plane, the sphere, and the torus, and also the Klein bottle and real projective plane.
The concept of a manifold is central to many parts of geometry and modern mathematical physics because it allows complicated structures to be described in terms of well-understood topological properties of simpler spaces. Manifolds naturally arise as solution sets of systems of equations and as graphs of functions. The concept has applications in computer graphics, given the need to associate pictures with coordinates (e.g. CT scans).
Manifolds can be equipped with additional structure. One important class of manifolds are differentiable manifolds; their differentiable structure allows calculus to be done. A Riemannian metric on a manifold allows distances and angles to be measured. Symplectic manifolds serve as the phase spaces in the Hamiltonian formalism of classical mechanics, while four-dimensional Lorentzian manifolds model spacetime in general relativity.
The study of manifolds requires working knowledge of calculus and topology.
Motivating examples
Circle
After a line, a circle is the simplest example of a topological manifold. Topology ignores bending, so a small piece of a circle is treated the same as a small piece of a line. Consider, for instance, the top part of the unit circle, x² + y² = 1, where the y-coordinate is positive (indicated by the yellow arc in Figure 1). Any point of this arc can be uniquely described by its x-coordinate. So, projection onto the first coordinate is a continuous and inv
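A minimal sketch of that chart (the helper names are ours): the projection and its inverse identify the upper arc with the open interval (−1, 1):

import math

def chart_top(p):
    # project a point (x, y) of the upper arc (y > 0) to its x-coordinate
    x, y = p
    return x

def chart_top_inverse(x):
    # recover the arc point from an x in (-1, 1)
    return (x, math.sqrt(1.0 - x * x))

p = chart_top_inverse(0.6)              # (0.6, 0.8)
assert abs(chart_top(p) - 0.6) < 1e-12  # the round trip is the identity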
https://en.wikipedia.org/wiki/Beta%20oxidation | In biochemistry and metabolism, beta oxidation (also β-oxidation) is the catabolic process by which fatty acid molecules are broken down in the cytosol in prokaryotes and in the mitochondria in eukaryotes to generate acetyl-CoA, which enters the citric acid cycle, and NADH and FADH2, which are co-enzymes used in the electron transport chain. It is named as such because the beta carbon of the fatty acid undergoes oxidation to a carbonyl group. Beta-oxidation is primarily facilitated by the mitochondrial trifunctional protein, an enzyme complex associated with the inner mitochondrial membrane, although very long chain fatty acids are oxidized in peroxisomes.
The overall reaction for one cycle of beta oxidation is:
Cn-acyl-CoA + FAD + NAD+ + H2O + CoA → Cn-2-acyl-CoA + FADH2 + NADH + H+ + acetyl-CoA
Activation and membrane transport
Free fatty acids cannot penetrate any biological membrane due to their negative charge. Free fatty acids must cross the cell membrane through specific transport proteins, such as the SLC27 family fatty acid transport protein. Once in the cytosol, the following processes bring fatty acids into the mitochondrial matrix so that beta-oxidation can take place.
Long-chain-fatty-acid—CoA ligase catalyzes the reaction between a fatty acid with ATP to give a fatty acyl adenylate, plus inorganic pyrophosphate, which then reacts with free coenzyme A to give a fatty acyl-CoA ester and AMP.
If the fatty acyl-CoA has a long chain, then the carnitine shuttle must be utilized:
Acyl-CoA is transferred to the hydroxyl group of carnitine by carnitine palmitoyltransferase I, located on the cytosolic faces of the outer and inner mitochondrial membranes.
Acyl-carnitine is shuttled inside by a carnitine-acylcarnitine translocase, as a carnitine is shuttled outside.
Acyl-carnitine is converted back to acyl-CoA by carnitine palmitoyltransferase II, located on the interior face of the inner mitochondrial membrane. The liberated carnitine is shuttled back to the cytosol, as |
https://en.wikipedia.org/wiki/Television%20encryption | Television encryption, often referred to as scrambling, is encryption used to control access to pay television services, usually cable, satellite, or Internet Protocol television (IPTV) services.
History
Pay television exists to make revenue from subscribers, and sometimes those subscribers do not pay. The prevention of piracy on cable and satellite networks has been one of the main factors in the development of Pay TV encryption systems.
The early cable-based Pay TV networks used no security. This led to problems with people connecting to the network without paying. Consequently, some methods were developed to frustrate these self-connectors. The early Pay TV systems for cable television were based on a number of simple measures. The most common of these was a channel-based filter that would effectively stop the channel being received by those who had not subscribed. These filters would be added or removed according to the subscription. As the number of television channels on these cable networks grew, the filter-based approach became increasingly impractical.
Other techniques, such as adding an interfering signal to the video or audio, began to be used as the simple filter solutions were easily bypassed. As the technology evolved, addressable set-top boxes became common, and more complex scrambling techniques such as digital encryption of the audio or video cut and rotate (where a line of video is cut at a particular point and the two parts are then reordered around this point) were applied to signals.
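To make the cut-and-rotate idea concrete, here is a toy sketch (ours, not from any real system; actual scramblers vary the secret cut point line by line under conditional-access control):

import numpy as np

def cut_and_rotate(line, cut):
    # scramble one line of video: cut at `cut` and swap the two parts
    return np.concatenate([line[cut:], line[:cut]])

def descramble(line, cut):
    # a receiver that knows the cut point simply reverses the rotation
    return np.concatenate([line[-cut:], line[:-cut]])

line = np.arange(10)
assert (descramble(cut_and_rotate(line, 3), 3) == line).all()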
Encryption was used to protect satellite-distributed feeds for cable television networks. Some of the systems used for cable feed distribution were expensive. As the DTH market grew, less secure systems began to be used. Many of these systems (such as Oak Orion) were variants of cable television scrambling systems that affected the synchronisation part of the video, inverted the video signal, or added an interfering frequency to the video. All of these analogue |
https://en.wikipedia.org/wiki/Homeotropic%20alignment | In liquid crystals, homeotropic alignment is one of the ways of alignment of liquid crystalline molecules. Homeotropic alignment is the state in which a rod-like liquid crystalline molecule aligns perpendicularly to the substrate. In the polydomain state, the parts also are called homeotropic domains. In contrast, the state in which the molecule aligns parallel to the substrate is called homogeneous alignment.
There are various other ways of alignment in liquid crystals. Because a homeotropically aligned sample shows no optical anisotropy for light travelling along the surface normal (the optic axis), a dark field is observed between crossed polarizers in polarizing optical microscopy.
By conoscope observation, however, a cross image is observed in the homeotropic alignments. Homeotropic alignment often appears in the smectic A phase (SA).
In discotic liquid crystals homeotropic alignment is defined as the state in which an axis of the column structure, which is formed by disc-like liquid crystalline molecules, aligns perpendicularly to the substrate. In other words, this alignment looks like a state in which columns formed by piled-up coins are arranged in an orderly way on a table.
In practice, homeotropic alignment is usually achieved with surfactants and detergents, for example lecithin, some silanes, or special polyimides (such as PI 1211). Generally liquid crystals align homeotropically at an air or glass interface.
https://en.wikipedia.org/wiki/Stationary%20spacetime | In general relativity, specifically in the Einstein field equations, a spacetime is said to be stationary if it admits a Killing vector that is asymptotically timelike.
Description and analysis
In a stationary spacetime, the metric tensor components, g_μν, may be chosen so that they are all independent of the time coordinate. The line element of a stationary spacetime has the form
ds² = λ(dt − ω_i dx^i)² − λ⁻¹ h_ij dx^i dx^j,
where t is the time coordinate, x^i (i = 1, 2, 3) are the three spatial coordinates and h_ij is the metric tensor of 3-dimensional space. In this coordinate system the Killing vector field ξ has the components ξ^μ = (1, 0, 0, 0). λ is a positive scalar representing the norm of the Killing vector, i.e., λ = ξ^μ ξ_μ, and ω_i is a 3-vector, called the twist vector, which vanishes when the Killing vector is hypersurface orthogonal. The latter arises as the spatial components of the twist 4-vector ω_μ = ε_μνρσ ξ^ν ∇^ρ ξ^σ (see, for example, p. 163), which is orthogonal to the Killing vector ξ, i.e., satisfies ω_μ ξ^μ = 0. The twist vector measures the extent to which the Killing vector fails to be orthogonal to a family of 3-surfaces. A non-zero twist indicates the presence of rotation in the spacetime geometry.
The coordinate representation described above has an interesting geometrical interpretation. The time translation Killing vector generates a one-parameter group of motion G in the spacetime M. By identifying the spacetime points that lie on a particular trajectory (also called orbit) one gets a 3-dimensional space (the manifold of Killing trajectories) V = M/G, the quotient space. Each point of V represents a trajectory in the spacetime M. This identification, called a canonical projection, is a mapping that sends each trajectory in M onto a point in V and induces a metric on V via pullback. The quantities λ, ω_i and h_ij are all fields on V and are consequently independent of time. Thus, the geometry of a stationary spacetime does not change in time. In the special case ω_i = 0 the spacetime is said to be static. By definition, every static spacetime is stationary, but the converse is not generall
https://en.wikipedia.org/wiki/Daniel%20Pedoe | Dan Pedoe (29 October 1910, London – 27 October 1998, St Paul, Minnesota, USA) was an English-born mathematician and geometer with a career spanning more than sixty years. In the course of his life he wrote approximately fifty research and expository papers in geometry. He is also the author of various core books on mathematics and geometry some of which have remained in print for decades and been translated into several languages. These books include the three-volume Methods of Algebraic Geometry (which he wrote in collaboration with W. V. D. Hodge), The Gentle Art of Mathematics, Circles: A Mathematical View, Geometry and the Visual Arts and most recently Japanese Temple Geometry Problems: San Gaku (with Hidetoshi Fukagawa).
Early life
Daniel Pedoe was born in London in 1910, the youngest of thirteen children of Szmul Abramski, a Jewish immigrant from Poland who found himself in London in the 1890s: he had boarded a cattleboat not knowing whether it was bound for New York or London, so his final destination was one of blind chance. Pedoe's mother, Ryfka Raszka Pedowicz, was the only child of Wolf Pedowicz, a corn merchant and his wife, Sarah Haimnovna Pecheska from Łomża then in Congress Poland (that part of Poland then under Russian control). The family name requires some explanation. The father, Abramski, was one of the Kohanim, a priestly group, and once in Britain, he changed his surname to Cohen. At first, all thirteen children took the surname Cohen, but later, to avoid any potential antisemitism, some of the Cohen children changed their surname to Pedoe, a contraction of their mother's maiden name; this happened while Daniel was at school, aged 12.
"Danny" was the youngest child in a family of thirteen children and his childhood was spent in relative poverty in the East End of London, despite their father being a skilled cabinetmaker. He attended the Central Foundation Boys' School where he was first influenced in his love of geometry by the headmaster N |
https://en.wikipedia.org/wiki/Hydra%20%28chess%29 | Hydra was a chess machine, designed by a team with Dr. Christian "Chrilly" Donninger, Dr. Ulf Lorenz, GM Christopher Lutz and Muhammad Nasir Ali. Since 2006 the development team consisted only of Donninger and Lutz. Hydra was under the patronage of the PAL Group and Sheikh Tahnoon Bin Zayed Al Nahyan of Abu Dhabi. The goal of the Hydra Project was to dominate the computer chess world, and finally have an accepted victory over humans.
Hydra represented a potentially significant leap in the strength of computer chess. Design team member Lorenz estimates its FIDE equivalent playing strength to be over Elo 3000, and this is in line with its results against Michael Adams and Shredder 8, the former micro-computer chess champion.
Hydra began competing in 2002 and played its last game in June 2006. In June 2009, Christopher Lutz stated that "unfortunately the Hydra project is discontinued." The sponsors decided to end the project.
Architecture
The Hydra team originally planned to have Hydra appear in four versions: Orthus, Chimera, Scylla and then the final Hydra version – the strongest of them all. The original version of Hydra evolved from an earlier design called Brutus and works in a similar fashion to Deep Blue, utilising large numbers of purpose-designed chips (in this case implemented as a field-programmable gate array or FPGA). In Hydra, there are multiple computers, each with its own FPGA acting as a chess coprocessor. These co-processors enabled Hydra to search enormous numbers of positions per second, making each processor more than ten times faster than an unaided computer.
Hydra ran on a 32-node Intel Xeon with a Xilinx FPGA accelerator card cluster, with a total of 64 gigabytes of RAM. It evaluates about 150,000,000 chess positions per second, roughly the same as the 1997 Deep Blue which defeated Garry Kasparov, but with several times more overall computing power. Whilst FPGAs generally have a lower performance level than ASIC chips, modern-day FPGAs run |
https://en.wikipedia.org/wiki/Association%20for%20Behavior%20Analysis%20International | The Association for Behavior Analysis International (ABAI) is a nonprofit organization dedicated to promoting behavior analysis. The organization has over 9,000 members. The group organizes conferences and publishes journals on the topic of applied behavior analysis (ABA). ABAI has issued detailed, specific position papers intended to guide practitioners of ABA. The ABAI publishes six scholarly journals including The Psychological Record and their primary organ, Perspectives on Behavior Science, formerly The Behavior Analyst. They also publish an informational journal, Education and Treatment of Children, describing practical treatment of children with behavioral problems.
ABAI has been criticized for its connections to the Judge Rotenberg Center (JRC), a school that has been condemned by the United Nations for torture. According to the Autistic Self Advocacy Network (ASAN), ABAI has endorsed the methods of the JRC, including its use of the Graduated Electronic Decelerator, a device that delivers painful electric skin shocks, by allowing them to present at ABAI's annual conferences. ABAI has honored Robert A. Sherman for his legal defense of the JRC's use of aversive punishments on its students. In 2022, ABAI's membership voted to support a position that strongly opposed contingent electric skin shock.
History
The Association for Behavior Analysis International (ABAI) was founded in 1974 as the MidWestern Association for Behavior Analysis (MABA) to serve as an interdisciplinary group of professionals, paraprofessionals, and students. The first annual conference was a response by a group of behavior analysts who were having problems presenting their work at psychology conferences and other related events. Some of the members included Sidney Bijou, James Dinsmoor, Bill Hopkins, and Roger Ulrich. The first headquarters were located on the campus of Western Michigan University (WMU) in Kalamazoo, Michigan. The association changed its name to the Association for Behav |
https://en.wikipedia.org/wiki/Batcher%20odd%E2%80%93even%20mergesort | Batcher's odd–even mergesort
is a generic construction devised by Ken Batcher for sorting networks of size O(n (log n)2) and depth O((log n)2), where n is the number of items to be sorted. Although it is not asymptotically optimal, Knuth concluded in 1998, with respect to the AKS network that "Batcher's method is much better, unless n exceeds the total memory capacity of all computers on earth!"
It is popularized by the second GPU Gems book, as an easy way of doing reasonably efficient sorts on graphics-processing hardware.
Pseudocode
Various recursive and iterative schemes are possible to calculate the indices of the elements to be compared and sorted. This is one iterative technique for sorting a list a of n elements, here written out as runnable Python:
# note: the input sequence a is indexed from 0 to (n-1)
n = len(a)
p = 1
while p < n:
    k = p
    while k >= 1:
        for j in range(k % p, n - k, 2 * k):
            for i in range(min(k, n - j - k)):
                if (i + j) // (2 * p) == (i + j + k) // (2 * p):
                    # compare and sort elements (i+j) and (i+j+k)
                    if a[i + j] > a[i + j + k]:
                        a[i + j], a[i + j + k] = a[i + j + k], a[i + j]
        k //= 2
    p *= 2
Non-recursive calculation of the partner node index is also possible.
See also
Bitonic sorter
Pairwise sorting network |
https://en.wikipedia.org/wiki/Correlation%20dimension | In chaos theory, the correlation dimension (denoted by ν) is a measure of the dimensionality of the space occupied by a set of random points, often referred to as a type of fractal dimension.
For example, if we have a set of random points on the real number line between 0 and 1, the correlation dimension will be ν = 1, while if they are distributed on, say, a triangle embedded in three-dimensional space (or m-dimensional space), the correlation dimension will be ν = 2. This is what we would intuitively expect from a measure of dimension. The real utility of the correlation dimension is in determining the (possibly fractional) dimensions of fractal objects. There are other methods of measuring dimension (e.g. the Hausdorff dimension, the box-counting dimension, and the information dimension) but the correlation dimension has the advantage of being straightforwardly and quickly calculated, of being less noisy when only a small number of points is available, and of often being in agreement with other calculations of dimension.
For any set of N points in an m-dimensional space, the correlation integral C(ε) is calculated by:
C(ε) = lim_{N→∞} g / N²,
where g is the total number of pairs of points which have a distance between them that is less than distance ε (a graphical representation of such close pairs is the recurrence plot). As the number of points tends to infinity, and the distance between them tends to zero, the correlation integral, for small values of ε, will take the form:
C(ε) ~ ε^ν.
If the number of points is sufficiently large, and evenly distributed, a log-log graph of the correlation integral versus ε will yield an estimate of ν. This idea can be qualitatively understood by realizing that for higher-dimensional objects, there will be more ways for points to be close to each other, and so the number of pairs close to each other will rise more rapidly for higher dimensions.
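The following sketch (ours; a brute-force version of the Grassberger–Procaccia estimate) computes C(ε) for a few radii and reads ν off the log-log slope:

import numpy as np

def correlation_integral(points, eps):
    # fraction of distinct pairs closer than eps; points is an (N, m) array
    diffs = points[:, None, :] - points[None, :, :]
    dist = np.sqrt((diffs ** 2).sum(-1))
    N = len(points)
    g = (dist[np.triu_indices(N, k=1)] < eps).sum()
    return 2.0 * g / (N * (N - 1))

pts = np.random.rand(1000, 1)   # random points on a line segment
eps = np.array([0.01, 0.02, 0.04, 0.08])
C = np.array([correlation_integral(pts, e) for e in eps])
print(np.polyfit(np.log(eps), np.log(C), 1)[0])  # slope ~ 1, the expected dimension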
Grassberger and Procaccia introduced the technique in 1983; the article gives the results of such estimates for a nu |
https://en.wikipedia.org/wiki/Process%20%28anatomy%29 | In anatomy, a process is a projection or outgrowth of tissue from a larger body. For instance, in a vertebra, a process may serve for muscle attachment and leverage (as in the case of the transverse and spinous processes), or to fit with another vertebra, forming a synovial joint (as in the case of the articular processes). The word is also used at the microanatomic level, where cells can have processes such as cilia or pedicels. Depending on the tissue, processes may also be called by other terms, such as apophysis, tubercle, or protuberance.
Examples
Examples of processes include:
The many processes of the human skull:
The mastoid and styloid processes of the temporal bone
The zygomatic process of the temporal bone
The zygomatic process of the frontal bone
The orbital, temporal, lateral, frontal, and maxillary processes of the zygomatic bone
The anterior, middle, and posterior clinoid processes and the petrosal process of the sphenoid bone
The uncinate process of the ethmoid bone
The jugular process of the occipital bone
The alveolar, frontal, zygomatic, and palatine processes of the maxilla
The ethmoidal and maxillary processes of the inferior nasal concha
The pyramidal, orbital, and sphenoidal processes of the palatine bone
The coronoid and condyloid processes of the mandible
The xiphoid process at the end of the sternum
The acromion and coracoid processes of the scapula
The coronoid process of the ulna
The radial and ulnar styloid processes
The uncinate processes of ribs found in birds and reptiles
The uncinate process of the pancreas
The spinous, articular, transverse, accessory, uncinate, and mammillary processes of the vertebrae
The trochlear process of the heel
The appendix, which is sometimes called the "vermiform process", notably in Gray's Anatomy
The olecranon process of the ulna
See also
Eminence
Tubercle
Appendage
Pedicle of vertebral arch
Notes |
https://en.wikipedia.org/wiki/The%20Elements%20of%20Programming%20Style | The Elements of Programming Style, by Brian W. Kernighan and P. J. Plauger, is a study of programming style, advocating the notion that computer programs should be written not only to satisfy the compiler or personal programming "style", but also for "readability" by humans, specifically software maintenance engineers, programmers and technical writers. It was originally published in 1974.
The book pays explicit homage, in title and tone, to The Elements of Style, by Strunk & White and is considered a practical template promoting Edsger Dijkstra's structured programming discussions. It has been influential and has spawned a series of similar texts tailored to individual languages, such as The Elements of C Programming Style, The Elements of C# Style, The Elements of Java(TM) Style, The Elements of MATLAB Style, etc.
The book is built on short examples from actual, published programs in programming textbooks. This results in a practical treatment rather than an abstract or academic discussion. The style is diplomatic and generally sympathetic in its criticism, and unabashedly honest as well—some of the examples with which it finds fault are from the authors' own work (one example in the second edition is from the first edition).
Lessons
Its lessons are summarized at the end of each section in pithy maxims, such as "Let the machine do the dirty work":
Write clearly – don't be too clever.
Say what you mean, simply and directly.
Use library functions whenever feasible.
Avoid too many temporary variables.
Write clearly – don't sacrifice clarity for efficiency.
Let the machine do the dirty work.
Replace repetitive expressions by calls to common functions.
Parenthesize to avoid ambiguity.
Choose variable names that won't be confused.
Avoid unnecessary branches.
If a logical expression is hard to understand, try transforming it.
Choose a data representation that makes the program simple.
Write first in easy-to-understand pseudo language; then translate into |
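The book's own examples are in Fortran and PL/I; as a language-neutral illustration (ours, not the book's), the maxim "Replace repetitive expressions by calls to common functions" might look like this in Python:

import math

r1, r2 = 1.0, 2.5

# repetitive: the formula (and any typo in it) is duplicated at every use
area1 = 3.14159 * r1 * r1
area2 = 3.14159 * r2 * r2

# factored: one named function states the intent exactly once
def circle_area(r):
    return math.pi * r * r

area1, area2 = circle_area(r1), circle_area(r2)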
https://en.wikipedia.org/wiki/Natriuresis | Natriuresis is the process of sodium excretion in the urine through the action of the kidneys. It is promoted by ventricular and atrial natriuretic peptides as well as calcitonin, and inhibited by chemicals such as aldosterone. Natriuresis lowers the concentration of sodium in the blood and also tends to lower blood volume because osmotic forces drag water out of the body's blood circulation and into the urine along with the sodium. Many diuretic drugs take advantage of this mechanism to treat medical conditions like hypernatremia and hypertension, which involve excess blood volume.
Excess natriuresis can be caused by:
Medullary cystic disease
Bartter syndrome
Diuretic phase of acute tubular necrosis
Some diuretics
Primary renal diseases
Congenital adrenal hyperplasia
Syndrome of inappropriate antidiuretic hormone hypersecretion
Endogenous natriuretic hormones include:
Atrial natriuretic peptide
Brain natriuretic peptide
C-type natriuretic peptide
This is a natural process in infants at the time of birth. |
https://en.wikipedia.org/wiki/F5b | F5b is a type of radio control electric model aircraft contest that consists of doing as many laps as possible between 2 poles 150 meters apart in 200 seconds, followed by 10 minutes of thermalling, and then landing on a 30-meter landing circle. The laps must be made while gliding only, no motor allowed, so the motor is used to climb rapidly and power into the course. There is a limit of 10 climbs, so to get more than 20 laps (a complete circuit, to the far pole and back, is 2 laps) the plane must climb high and glide 4 laps. To score more than 40 laps, the plane must glide a combination of 4-lap sets and 6-lap sets.
A typical F5b aircraft is commonly referred to as a hotliner. Competition rules are set by the Fédération Aéronautique Internationale (FAI) in Sporting Code Volume F5, Radio Control Electric Powered Model Aircraft, 2013 Edition.
https://en.wikipedia.org/wiki/List%20of%20battery%20sizes | This is a list of the sizes, shapes, and general characteristics of some common primary and secondary battery types in household, automotive and light industrial use.
The complete nomenclature for a battery specifies size, chemistry, terminal arrangement, and special characteristics. The same physically interchangeable cell size or battery size may have widely different characteristics; physical interchangeability is not the sole factor in substituting a battery.
The full battery designation identifies not only the size, shape and terminal layout of the battery but also the chemistry (and therefore the voltage per cell) and the number of cells in the battery. For example, a CR123 battery is always LiMnO2 ('Lithium') chemistry, in addition to its unique size.
The following tables give the common battery chemistry types for the current common sizes of batteries. See Battery chemistry for a list of other electrochemical systems.
Cylindrical batteries
Rectangular batteries
Camera batteries
As well as other types, digital and film cameras often use specialized primary batteries to produce a compact product. Flashlights and portable electronic devices may also use these types.
Button cells – coin, watch
Lithium cells
Coin-shaped cells are thin compared to their diameter. Polarity is usually stamped on the metal casing.
The IEC prefix "CR" denotes lithium manganese dioxide chemistry. Since LiMnO2 cells produce 3 volts there are no widely available alternative chemistries for a lithium coin battery. The "BR" prefix indicates a round lithium/carbon monofluoride cell. See lithium battery for discussion of the different performance characteristics. One LiMnO2 cell can replace two alkaline or silver-oxide cells.
IEC designation numbers indicate the physical dimensions of the cylindrical cell. Cells less than one centimeter in height are assigned four-digit numbers, where the first two digits are the diameter in millimeters, while the last two digits are the height in |
https://en.wikipedia.org/wiki/Glaisher%E2%80%93Kinkelin%20constant | In mathematics, the Glaisher–Kinkelin constant or Glaisher's constant, typically denoted A, is a mathematical constant, related to the K-function and the Barnes G-function. The constant appears in a number of sums and integrals, especially those involving gamma functions and zeta functions. It is named after mathematicians James Whitbread Lee Glaisher and Hermann Kinkelin.
Its approximate value is:
A = 1.28242712... .
The Glaisher–Kinkelin constant A can be given by the limit:
A = lim_{n→∞} H(n) / (n^(n²/2 + n/2 + 1/12) · e^(−n²/4)),
where H(n) = 1¹·2²·3³···nⁿ is the hyperfactorial. This formula displays a similarity between A and √(2π) which is perhaps best illustrated by noting Stirling's formula:
√(2π) = lim_{n→∞} n! / (n^(n + 1/2) · e^(−n)),
which shows that just as √(2π) is obtained from approximation of the factorials, A can also be obtained from a similar approximation to the hyperfactorials.
An equivalent definition for A involving the Barnes G-function, given by G(n + 1) = Γ(n)·G(n) with G(1) = 1, where Γ is the gamma function, is:
A = lim_{n→∞} (2π)^(n/2) · n^(n²/2 − 1/12) · e^(−3n²/4 + 1/12) / G(n + 1).
The Glaisher–Kinkelin constant also appears in evaluations of the derivatives of the Riemann zeta function, such as:
ζ′(−1) = 1/12 − ln A
ζ′(2) = (π²/6)·(γ + ln(2π) − 12 ln A),
where γ is the Euler–Mascheroni constant. The latter formula leads directly to the following product found by Glaisher:
∏_{k=1}^∞ k^(1/k²) = (A¹² / (2π·e^γ))^(π²/6).
An alternative product formula, defined over the prime numbers, reads
∏_{k=1}^∞ p_k^(1/(p_k² − 1)) = A¹² / (2π·e^γ),
where p_k denotes the kth prime number.
The following are some integrals that involve this constant:
∫₀^∞ (x ln x)/(e^(2πx) − 1) dx = 1/24 − (1/2)·ln A
∫₀^(1/2) ln Γ(x) dx = (3/2)·ln A + (5/24)·ln 2 + (1/4)·ln π
A series representation for this constant follows from a series for the Riemann zeta function given by Helmut Hasse. |
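A quick numerical check of the ζ′(−1) identity above (a sketch using the mpmath library, which also provides the constant directly):

from mpmath import mp, mpf, exp, zeta, glaisher

mp.dps = 30
A = exp(mpf(1) / 12 - zeta(-1, derivative=1))  # from ln A = 1/12 - zeta'(-1)
print(A)          # 1.28242712910062263687534256887...
print(+glaisher)  # mpmath's built-in value of Glaisher's constant, for comparison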
https://en.wikipedia.org/wiki/Spanish%20fess | In heraldry and vexillology, a Spanish fess is a term occasionally used to describe the central horizontal stripe of a tricolour or triband flag that is twice the width of the stripes on either side of it.
The name is based on the most well-known example of this style of flag, the flag of Spain, and in analogy to the equivalent term for vertically striped flags, the Canadian pale.
Looser definition
As with the Canadian pale, a looser definition of Spanish fess also exists, in which the central stripe is considerably larger than, but not necessarily twice the width of the two outer stripes.
Other flags featuring a Spanish fess include the national flags of Lebanon, Cambodia, Laos and Tajikistan, the restored flag of Libya, the flag of French Polynesia, the flag of Prussia, and the proposed national flag of Cyprus. Had the flag of Israel lacked the top and bottom white bands, it too would have featured a Spanish fess.
Flag gallery
1:2:1 proportions
Other proportions
See also
The flag of Colombia, with a ratio of 2:1:1, instead of 1:2:1
The flag of Ecuador, with a ratio of 2:1:1, instead of 1:2:1
The bisexual pride flag, with a ratio of 2:1:2, instead of 1:2:1
The Canadian pale, with a vertical 1:2:1 ratio
Flags by design
Heraldic ordinaries |
https://en.wikipedia.org/wiki/Electrolaser | An electrolaser is a type of electroshock weapon that is also a directed-energy weapon. It uses lasers to form an electrically conductive laser-induced plasma channel (LIPC). A fraction of a second later, a powerful electric current is sent down this plasma channel and delivered to the target, thus functioning overall as a large-scale, high energy, long-distance version of the Taser electroshock gun.
Alternating current is sent through a series of step-up transformers, increasing the voltage and decreasing the current. The final voltage may be between 10⁸ and 10⁹ volts. This current is fed into the plasma channel created by the laser beam.
Laser-induced plasma channel
A laser-induced plasma channel (LIPC) is formed by the following process:
A laser emits a laser beam into the air.
The laser beam rapidly heats and ionizes surrounding gases to form plasma.
The plasma forms an electrically conductive plasma channel.
Because a laser-induced plasma channel relies on ionization, gas must exist between the electrolaser weapon and its target. If a laser beam is intense enough, its electromagnetic field is strong enough to rip electrons off of air molecules, or whatever gas happens to be in between, creating plasma. Similar to lightning, the rapid heating also creates a sonic boom.
Uses
Methods of use:
To kill or incapacitate a living target through electric shock.
To seriously damage, disable, or destroy any electric or electronic devices in the target.
As electrolasers and natural lightning both use plasma channels to conduct electric current, an electrolaser can set up a light-induced plasma channel for uses such as:
To study lightning
During a thunderstorm, to make lightning discharge at a safe time and place, as with a lightning conductor.
Directing atmospheric lightning to a terrestrial collection station for the purpose of electrical power generation.
As a weapon, to make a thunderhead deliver a precise lightning strike onto a target from an aircraft; in |
https://en.wikipedia.org/wiki/AES%20key%20schedule | AES uses a key schedule to expand a short key into a number of separate round keys. The three AES variants have a different number of rounds. Each variant requires a separate 128-bit round key for each round plus one more. The key schedule produces the needed round keys from the initial key.
Round constants
The round constant rcon_i for round i of the key expansion is the 32-bit word:
rcon_i = [rc_i 00 00 00],
where rc_i is an eight-bit value defined as:
rc_1 = 1, and for i > 1: rc_i = 2·rc_{i−1} if rc_{i−1} < 80, otherwise rc_i = (2·rc_{i−1}) ⊕ 11B,
where ⊕ is the bitwise XOR operator and constants such as 00 and 11B are given in hexadecimal. Equivalently:
rc_i = x^(i−1),
where the bits of rc_i are treated as the coefficients of an element of the finite field GF(2)[x]/(x⁸ + x⁴ + x³ + x + 1), so that e.g. rc_10 = 36 (hexadecimal) = 00110110 (binary) represents the polynomial x⁵ + x⁴ + x² + x.
AES uses up to rcon_10 for AES-128 (as 11 round keys are needed), up to rcon_8 for AES-192, and up to rcon_7 for AES-256.
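The rc recurrence is short enough to run directly (a sketch; the 11B reduction is multiplication by x in the field named above):

def rcon(n):
    # round constants rcon_1 .. rcon_n as 32-bit words [rc_i 00 00 00]
    rc, out = 1, []
    for _ in range(n):
        out.append(rc << 24)
        rc = (rc << 1) ^ (0x11B if rc & 0x80 else 0)  # multiply rc by x in GF(2^8)
    return out

print([hex(w >> 24) for w in rcon(10)])
# ['0x1', '0x2', '0x4', '0x8', '0x10', '0x20', '0x40', '0x80', '0x1b', '0x36']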
The key schedule
Define:
N as the length of the key in 32-bit words: 4 words for AES-128, 6 words for AES-192, and 8 words for AES-256
K_0, K_1, ... K_{N−1} as the 32-bit words of the original key
R as the number of round keys needed: 11 round keys for AES-128, 13 keys for AES-192, and 15 keys for AES-256
W_0, W_1, ... W_{4R−1} as the 32-bit words of the expanded key
Also define RotWord as a one-byte left circular shift:
RotWord([b_0, b_1, b_2, b_3]) = [b_1, b_2, b_3, b_0],
and SubWord as an application of the AES S-box S to each of the four bytes of the word:
SubWord([b_0, b_1, b_2, b_3]) = [S(b_0), S(b_1), S(b_2), S(b_3)].
Then for i = 0, ..., 4R − 1:
W_i = K_i, if i < N
W_i = W_{i−N} ⊕ SubWord(RotWord(W_{i−1})) ⊕ rcon_{i/N}, if i ≥ N and i ≡ 0 (mod N)
W_i = W_{i−N} ⊕ SubWord(W_{i−1}), if i ≥ N, N > 6, and i ≡ 4 (mod N)
W_i = W_{i−N} ⊕ W_{i−1}, otherwise
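The recurrence translates directly into code. The sketch below assumes a 256-entry S-box table sbox and the rcon list from the previous section (both come from the cipher specification; neither is reproduced here):

def expand_key(key_words, sbox, rcons):
    # key_words: K_0..K_{N-1} as 32-bit integers; returns W_0..W_{4R-1}
    N = len(key_words)
    R = {4: 11, 6: 13, 8: 15}[N]                      # round keys per variant
    rot = lambda w: ((w << 8) | (w >> 24)) & 0xFFFFFFFF                                  # RotWord
    sub = lambda w: int.from_bytes(bytes(sbox[b] for b in w.to_bytes(4, 'big')), 'big')  # SubWord
    W = list(key_words)
    for i in range(N, 4 * R):
        t = W[i - 1]
        if i % N == 0:
            t = sub(rot(t)) ^ rcons[i // N - 1]       # rcon_{i/N}
        elif N > 6 and i % N == 4:
            t = sub(t)                                # extra SubWord step for AES-256
        W.append(W[i - N] ^ t)
    return W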
Notes |
https://en.wikipedia.org/wiki/Knuckle-walking | Knuckle-walking is a form of quadrupedal walking in which the forelimbs hold the fingers in a partially flexed posture that allows body weight to press down on the ground through the knuckles. Gorillas and chimpanzees use this style of locomotion, as do anteaters and platypuses.
Knuckle-walking helps with actions other than locomotion on the ground. For the gorilla, the fingers are used for the manipulation of food, and in chimpanzees, for the manipulation of food and climbing. In anteaters and pangolins, the fingers have large claws for opening the mounds of social insects. Platypus fingers have webbing that extends past the fingers to aid in swimming, thus knuckle-walking is used to prevent stumbling. Gorillas move around by knuckle-walking, although they sometimes walk bipedally for short distances while carrying food or in defensive situations. Mountain gorillas use knuckle-walking plus other parts of their hand: fist-walking, which rests the weight on the backs of the hands rather than the knuckles, and walking on their palms.
Anthropologists once thought that the common ancestor of chimpanzees and humans engaged in knuckle-walking, and humans evolved upright walking from knuckle-walking, a view thought to be supported by reanalysis of overlooked features on hominid fossils. Since then, scientists discovered Ardipithecus ramidus, a human-like hominid descended from the common ancestor of chimpanzees and humans. Ar. ramidus engaged in upright walking, but not knuckle-walking. This leads to the conclusion that chimpanzees evolved knuckle-walking after they split from humans six million years ago, and humans evolved upright walking without knuckle-walking. This would imply that knuckle-walking evolved independently in the African great apes, which would mean a homoplasic evolution of this locomotor behaviour in gorillas and chimpanzees. However, other studies have argued the opposite by pointing out that the differences in knuckle-walking between gorillas and chimpanzees can be explaine |
https://en.wikipedia.org/wiki/Baire%20space%20%28set%20theory%29 | In set theory, the Baire space is the set of all infinite sequences of natural numbers with a certain topology. This space is commonly used in descriptive set theory, to the extent that its elements are often called "reals". It is denoted N^N, ω^ω, or by the symbol 𝒩, not to be confused with the countable ordinal ω^ω obtained by ordinal exponentiation.
The Baire space is defined to be the Cartesian product of countably infinitely many copies of the set of natural numbers, and is given the product topology (where each copy of the set of natural numbers is given the discrete topology). The Baire space is often represented using the tree of finite sequences of natural numbers.
The Baire space can be contrasted with Cantor space, the set of infinite sequences of binary digits.
Topology and trees
The product topology used to define the Baire space can be described more concretely in terms of trees. The basic open sets of the product topology are cylinder sets, here characterized as:
If any finite set of natural number coordinates I={i} is selected, and for each i a particular natural number value vi is selected, then the set of all infinite sequences of natural numbers that have value vi at position i is a basic open set. Every open set is a countable union of a collection of these.
Using more formal notation, one can define the individual cylinders as
C_n(v) = {(a_1, a_2, ...) ∈ ω^ω : a_n = v}
for a fixed integer location n and integer value v. The cylinders are then the generators for the cylinder sets: the cylinder sets then consist of all intersections of a finite number of cylinders. That is, given any finite set of natural number coordinates {n_i} and corresponding natural number values v_{n_i} for each n_i, one considers the intersection of cylinders
⋂_i C_{n_i}(v_{n_i}).
This intersection is called a cylinder set, and the set of all such cylinder sets provides a basis for the product topology. Every open set is a countable union of such cylinder sets.
By moving to a different basis for the same topology, an alternate charac |
https://en.wikipedia.org/wiki/Autotomy | Autotomy (from the Greek auto-, "self-" and tome, "severing", αὐτοτομία) or 'self-amputation', is the behaviour whereby an animal sheds or discards one or more of its own appendages, usually as a self-defense mechanism to elude a predator's grasp or to distract the predator and thereby allow escape. Some animals have the ability to regenerate the lost body part later. Autotomy has multiple evolutionary origins and is thought to have evolved at least nine times independently in animals. The term was coined in 1883 by Leon Fredericq.
Vertebrates
Reptiles and amphibians
Some lizards, salamanders and tuatara when caught by the tail will shed part of it in attempting to escape. In many species the detached tail will continue to wriggle, creating a deceptive sense of continued struggle, and distracting the predator's attention from the fleeing prey animal. In addition, many species of lizards, such as Plestiodon fasciatus, Cordylosaurus subtessellatus, Holaspis guentheri, Phelsuma barbouri, and Ameiva wetmorei, have elaborately colored blue tails which have been shown to divert predatory attacks toward the tail and away from the body and head. Depending upon the species, the animal may be able to partially regenerate its tail, typically over a period of weeks or months. Though functional, the new tail section often is shorter and will contain cartilage rather than regenerated vertebrae of bone, and in color and texture the skin of the regenerated organ generally differs distinctly from its original appearance. However, some salamanders can regenerate a morphologically complete and identical tail. Some reptiles such as the crested gecko do not regenerate the tail after autotomy.
Mechanism
The technical term for this ability to drop the tail is 'caudal autotomy'. In most lizards that sacrifice the tail in this manner, breakage occurs only when the tail is grasped with sufficient force, but some animals, such as some species of geckos, can perform true autotomy, throwing |
https://en.wikipedia.org/wiki/CO-OPN | The CO-OPN (Concurrent Object-Oriented Petri Nets) specification language is based on both algebraic specifications and algebraic Petri nets formalisms. The former formalism represent the data structures aspects, while the latter stands for the behavioral and concurrent aspects of systems. In order to deal with large specifications some structuring capabilities have been introduced. The object-oriented paradigm has been adopted, which means that a CO-OPN specification is a collection of objects which interact concurrently. Cooperation between the objects is achieved by means of a synchronization mechanism, i.e., each object event may request to be synchronized with some methods (parameterized events) of one or a group of partners by means of a synchronization expression.
A CO-OPN specification consists of a collection of two different kinds of modules: the abstract data type modules and the object modules. The abstract data type modules concern the data structure component of the specifications, and many-sorted algebraic specifications are used when describing these modules. Furthermore, the object modules represent the concept of encapsulated entities that possess an internal state and provide the exterior with various services. For this second sort of modules, an algebraic net formalism has been adopted. Algebraic Petri nets, a kind of high-level net, are a great improvement over Petri nets, i.e. Petri net tokens are replaced with data structures which are described by means of algebraic abstract data types. For managing visibility, both abstract data type modules and object modules are composed of an interface (which allows some operations to be visible from the outside) and a body (which mainly encapsulates the operations' properties and some operations which are used for building the model). In the case of the object modules, the state
and the behavior of the objects remain concealed within the body section.
To develop models using the CO-OPN language it is poss |
https://en.wikipedia.org/wiki/History%20of%20quantum%20field%20theory | In particle physics, the history of quantum field theory starts with its creation by Paul Dirac, when he attempted to quantize the electromagnetic field in the late 1920s. Heisenberg was awarded the 1932 Nobel Prize in Physics "for the creation of quantum mechanics". Major advances in the theory were made in the 1940s and 1950s, leading to the introduction of renormalized quantum electrodynamics (QED). QED was so successful and accurately predictive that efforts were made to apply the same basic concepts for the other forces of nature. By the late 1970s, these efforts successfully utilized gauge theory in the strong nuclear force and weak nuclear force, producing the modern Standard Model of particle physics.
Efforts to describe gravity using the same techniques have, to date, failed. The study of quantum field theory is still flourishing, as are applications of its methods to many physical problems. It remains one of the most vital areas of theoretical physics today, providing a common language to several different branches of physics.
Early developments
Quantum field theory originated in the 1920s from the problem of creating a quantum mechanical theory of the electromagnetic field. In particular, de Broglie in 1924 introduced the idea of a wave description of elementary systems in the following way: "we proceed in this work from the assumption of the existence of a certain periodic phenomenon of a yet to be determined character, which is to be attributed to each and every isolated energy parcel".
In 1925, Werner Heisenberg, Max Born, and Pascual Jordan constructed just such a theory by expressing the field's internal degrees of freedom as an infinite set of harmonic oscillators, and by then utilizing the canonical quantization procedure to these oscillators; their paper was published in 1926. This theory assumed that no electric charges or currents were present and today would be called a free field theory.
The first reasonably complete theory of quantum e |
https://en.wikipedia.org/wiki/Lemniscate%20constant | In mathematics, the lemniscate constant ϖ is a transcendental mathematical constant that is the ratio of the perimeter of Bernoulli's lemniscate to its diameter, analogous to the definition of π for the circle. Equivalently, the perimeter of the lemniscate (x² + y²)² = x² − y² is 2ϖ. The lemniscate constant is closely related to the lemniscate elliptic functions and approximately equal to 2.62205755. The symbol ϖ is a cursive variant of π; see Pi § Variant pi.
Gauss's constant, denoted by G, is equal to ϖ/π ≈ 0.8346268.
John Todd named two more lemniscate constants, the first lemniscate constant A = ϖ/2 ≈ 1.3110 and the second lemniscate constant B = π/(2ϖ) ≈ 0.5990.
Sometimes the quantities or are referred to as the lemniscate constant.
History
Gauss's constant is named after Carl Friedrich Gauss, who calculated it via the arithmetic–geometric mean as G = 1/M(1, √2). By 1799, Gauss had two proofs of the theorem that M(1, √2) = π/ϖ, where ϖ is the lemniscate constant.
The lemniscate constant and first lemniscate constant were proven transcendental by Theodor Schneider in 1937 and the second lemniscate constant and Gauss's constant were proven transcendental by Theodor Schneider in 1941. In 1975, Gregory Chudnovsky proved that the set {π, ϖ} is algebraically independent over ℚ, which implies that A and B are algebraically independent as well. But the set {π, M(1, √2), M′(1, √2)} (where the prime denotes the derivative with respect to the second variable) is not algebraically independent over ℚ. In fact,
Forms
Usually, ϖ is defined by the first equality below.
ϖ = 2∫₀¹ dt/√(1 − t⁴) = √2·K(1/√2) = (1/2)·B(1/4, 1/2) = Γ(1/4)²/(2√(2π)),
where K(k) is the complete elliptic integral of the first kind with modulus k, B is the beta function, and Γ is the gamma function.
The lemniscate constant can also be computed by the arithmetic–geometric mean M,
ϖ = π/M(1, √2).
Moreover,
which is analogous to
where β is the Dirichlet beta function and ζ is the Riemann zeta function.
Gauss's constant is typically defined as the reciprocal of the arithmetic–geometric mean of 1 and the square root of 2, after his calculation of M(1, √2) published in 1800:
G = 1/M(1, √2).
Gauss's constant is equal t |
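As a numerical sanity check of the formulas above (a sketch using mpmath):

from mpmath import mp, mpf, agm, gamma, pi, sqrt

mp.dps = 30
print(pi / agm(1, sqrt(2)))                        # varpi via the AGM: 2.62205755429...
print(gamma(mpf(1) / 4)**2 / (2 * sqrt(2 * pi)))   # the Gamma(1/4) form, same value
print(1 / agm(1, sqrt(2)))                         # Gauss's constant G = varpi / pi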
https://en.wikipedia.org/wiki/Neutron-induced%20swelling | Neutron-induced swelling is the increase of volume and decrease of density of materials subjected to intense neutron radiation. Neutrons impacting the material's lattice rearrange its atoms, causing buildup of dislocations, voids, and Wigner energy. Together with the resulting strength reduction and embrittlement, it is a major concern for materials for nuclear reactors.
Materials show significant differences in their swelling resistance.
See also
Radiation hardening
Radiation effects
Nuclear technology
Materials degradation |
https://en.wikipedia.org/wiki/Undeletion | Undeletion is a feature for restoring computer files which have been removed from a file system by file deletion. Deleted data can be recovered on many file systems, but not all file systems provide an undeletion feature. Recovering data without an undeletion facility is usually called data recovery, rather than undeletion. Undeletion can both help prevent users from accidentally losing data and pose a computer security risk, since users may not be aware that deleted files remain accessible.
Support
Not all file systems or operating systems support undeletion. Undeletion is possible on all FAT file systems, with undeletion utilities provided since MS-DOS 5.0 and DR DOS 6.0 in 1991. It is not supported by most modern UNIX file systems, though AdvFS is a notable exception. The ext2 file system has an add-on program called e2undel which allows file undeletion. The similar ext3 file system does not officially support undeletion, but utilities like ext4magic, extundelete, PhotoRec and ext3grep were written to automate undeletion on ext3 volumes. Undelete was proposed for ext4, but has yet to be implemented. However, a trash bin feature was posted as a patch on December 4, 2006. The trash bin feature uses undelete attributes in ext2/3/4 and Reiser file systems.
Command-line tools
Norton Utilities
Norton UNERASE was an important component in Norton Utilities version 1.0 in 1982.
MS-DOS
Microsoft included a similar UNDELETE program in versions 5.0 to 6.22 of MS-DOS, but applied the Recycle Bin approach instead in later operating systems using FAT.
DR DOS
DR DOS 6.0 and higher support UNDELETE as well, but optionally offer additional protection utilizing the FAT snapshot utility DISKMAP and the resident DELWATCH deletion tracking component, which actively maintains deleted files' date and time stamps and keeps the contents of deleted files from being overwritten unless running out of disk space. DELWATCH also supports undeletion of remote files on file servers. |
https://en.wikipedia.org/wiki/Anatomical%20theatre | An anatomical theatre (Latin: ) was a specialised building or room, resembling a theatre, used in teaching anatomy at early modern universities. They were typically constructed with a tiered structure surrounding a central table, allowing a larger audience to see the dissection of cadavers more closely than would have been possible in a non-specialized setting.
Description
An anatomical theatre was usually a room of roughly amphitheatrical shape, in the centre of which would stand a table on which the dissection of human or animal bodies took place. Around this table were several circular, elliptic or octagonal tiers with railings, steeply tiered so that observers (typically students) could stand and observe the dissection below, without spectators in the front-most rows blocking their view. It was common to display skeletons in some location within the theatre.
The first anatomical theatre, the Anatomical Theatre of Padua, was built at the University of Padua in 1594, and has been preserved into the modern day. Other early examples include the Theatrum Anatomicum of Leiden University, built in 1596 and reconstructed in 1988, and the Anatomical Theatre of the Archiginnasio in Bologna (whose building dates from 1563 and the anatomical theatre from 1637).
The anatomical theatre of the University of Uppsala is well known, having been completed in 1663 by medical professor and amateur architect Olaus Rudbeck (1630–1702). The theatre is housed in the idiosyncratic cupola constructed on top of the Gustavianum building, one of the older buildings of the university. Rudbeck had spent time in the Dutch city of Leiden, and the construction of both the anatomical theatre and the botanical garden he founded in Uppsala in 1655 were influenced by his experiences there. The anatomical theatre is preserved as part of the Gustavianum, which now serves as a museum for the general public under the name Museum Gustavianum.
Thomas Jefferson built an anatomical theatre for the |
https://en.wikipedia.org/wiki/Cauchy%20index | In mathematical analysis, the Cauchy index is an integer associated to a real rational function over an interval. By the Routh–Hurwitz theorem, we have the following interpretation: the Cauchy index of
r(x) = p(x)/q(x)
over the real line is the difference between the number of roots of f(z) located in the right half-plane and those located in the left half-plane. The complex polynomial f(z) is such that
f(iy) = q(y) + ip(y).
We must also assume that p has degree less than the degree of q.
Definition
The Cauchy index was first defined for a pole s of the rational function r by Augustin-Louis Cauchy in 1837 using one-sided limits as: I_s r = +1 if r(x) → −∞ as x → s⁻ and r(x) → +∞ as x → s⁺; I_s r = −1 if r(x) → +∞ as x → s⁻ and r(x) → −∞ as x → s⁺; and I_s r = 0 otherwise.
A generalization over the compact interval [a, b] is direct (when neither a nor b are poles of r(x)): it is the sum of the Cauchy indices I_s of r for each s located in the interval. We usually denote it by I_a^b r.
We can then generalize to the unbounded interval (−∞, +∞), since the number of poles of r is finite (by taking the limit of the Cauchy index over [a, b] for a and b going to infinity).
Examples
Consider the rational function r(x) = p(x)/q(x) = (4x³ − 3x)/(16x⁵ − 20x³ + 5x).
We recognize in p(x) and q(x) respectively the Chebyshev polynomials of degree 3 and 5. Therefore, r(x) has poles at x_k = cos((2k − 1)π/10) for k = 1, …, 5, i.e. at ±cos(π/10), ±cos(3π/10), and 0. One can check that the index is +1 at each of the two positive poles and −1 at each of the two negative poles. For the pole in zero, the index is 0 since the left and right limits are equal (which is because p(x) also has a root in zero).
Summing these indices, we conclude that the Cauchy index of r over [−1, 1], and hence over the whole real line, equals 0, since q(x) has only five roots, all in [−1, 1]. We cannot use here the Routh–Hurwitz theorem as each complex polynomial f(z) with f(iy) = q(y) + ip(y) has a zero on the imaginary line (namely at the origin).
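These index computations can be reproduced numerically. The sketch below is not part of the original article; it samples r just to the left and right of each real root of q, with an illustrative offset eps:

```python
# Numerically estimates the Cauchy index of r = p/q over the real line
# by inspecting one-sided signs at each real pole. Illustrative sketch.
import numpy as np

p = np.polynomial.Polynomial([0, -3, 0, 4])          # T3(x) = 4x^3 - 3x
q = np.polynomial.Polynomial([0, 5, 0, -20, 0, 16])  # T5(x) = 16x^5 - 20x^3 + 5x

def cauchy_index(p, q, eps=1e-6):
    total = 0
    for s in sorted(z.real for z in q.roots() if abs(z.imag) < 1e-9):
        left, right = p(s - eps) / q(s - eps), p(s + eps) / q(s + eps)
        if left < 0 < right:      # jump from -infinity to +infinity: index +1
            total += 1
        elif right < 0 < left:    # jump from +infinity to -infinity: index -1
            total -= 1
    return total

print(cauchy_index(p, q))  # 0, matching the conclusion above
```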
External links
The Cauchy Index
Mathematical analysis |
https://en.wikipedia.org/wiki/Carrier%20generation%20and%20recombination | In the solid-state physics of semiconductors, carrier generation and carrier recombination are processes by which mobile charge carriers (electrons and electron holes) are created and eliminated. Carrier generation and recombination processes are fundamental to the operation of many optoelectronic semiconductor devices, such as photodiodes, light-emitting diodes and laser diodes. They are also critical to a full analysis of p-n junction devices such as bipolar junction transistors and p-n junction diodes.
The electron–hole pair is the fundamental unit of generation and recombination in inorganic semiconductors, corresponding to an electron transitioning between the valence band and the conduction band: generation of an electron–hole pair is a transition from the valence band to the conduction band, and recombination is the reverse transition.
Overview
Like other solids, semiconductor materials have an electronic band structure determined by the crystal properties of the material. Energy distribution among electrons is described by the Fermi level and the temperature of the electrons. At absolute zero temperature, all of the electrons have energy below the Fermi level; but at non-zero temperatures the energy levels are filled following a Fermi-Dirac distribution.
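As a concrete illustration of that distribution, the sketch below (not from the original article) evaluates the Fermi–Dirac occupation probability; the 1.12 eV gap and mid-gap Fermi level are assumptions roughly matching intrinsic silicon:

```python
# Fermi-Dirac occupation probability f(E) = 1/(exp((E - E_F)/(k_B*T)) + 1).
from math import exp

K_B = 8.617333262e-5  # Boltzmann constant, eV/K

def occupancy(E, E_fermi, T):
    """Probability that a state at energy E (eV) is occupied at temperature T (K)."""
    if T == 0:
        return 1.0 if E < E_fermi else 0.0
    return 1.0 / (exp((E - E_fermi) / (K_B * T)) + 1.0)

# A state 0.56 eV above the Fermi level (about the conduction-band edge in
# intrinsic Si) is empty at 0 K and only sparsely occupied at 300 K and 600 K:
for T in (0, 300, 600):
    print(T, occupancy(1.12, 0.56, T))
```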
In undoped semiconductors the Fermi level lies in the middle of a forbidden band or band gap between two allowed bands called the valence band and the conduction band. The valence band, immediately below the forbidden band, is normally very nearly completely occupied. The conduction band, above the Fermi level, is normally nearly completely empty. Because the valence band is so nearly full, its electrons are not mobile, and cannot flow as electric current.
However, if an electron in the valence band acquires enough energy to reach the conduction band (as a result of interaction with other electrons, holes, photons, or the vibrating crystal lattice itself), it can flow freely among the nearly empty conduction |
https://en.wikipedia.org/wiki/Syn%20and%20anti%20addition | In organic chemistry, syn- and anti-addition are different ways in which substituent molecules can be added to an alkene () or alkyne (). The concepts of syn and anti addition are used to characterize the different reactions of organic chemistry by reflecting the stereochemistry of the products in a reaction.
The type of addition that occurs depends on multiple different factors of a reaction, and is defined by the final orientation of the substituents on the parent molecule. Syn and anti addition are related to Markovnikov's rule for the orientation of a reaction, which refers to the bonding preference of different substituents for different carbons on an alkene or alkyne. In order for a reaction to follow Markovnikov's rule, the intermediate carbocation of the mechanism of a reaction must be on the more-substituted carbon, allowing the substituent to bond to the more-stable carbocation and the more-substituted carbon.
Syn addition is the addition of two substituents to the same side (or face) of a double bond or triple bond, resulting in a decrease in bond order but an increase in number of substituents. Generally the substrate will be an alkene or alkyne. An example of syn addition would be the oxidation of an alkene to a diol by way of a suitable oxidizing agent such as osmium tetroxide, , or potassium permanganate, .
Anti addition is in direct contrast to syn addition. In anti addition, two substituents are added to opposite sides (or faces) of a double bond or triple bond, once again resulting in a decrease in bond order and increase in number of substituents. The classical example of this is bromination (any halogenation) of alkenes. An anti addition reaction results in a trans-isomer of the products, as the substituents are on opposite faces of the bond.
Depending on the substrate double bond, addition can have different effects on the molecule. After addition to a straight-chain alkene such as ethene (), the resulting alkane will rapidly and freely ro |
https://en.wikipedia.org/wiki/Tanada%20effect | The Tanada effect refers to the adhesion of root tips to glass surfaces. It is believed to involve electric potentials. It is named for the scientist who first described the effect, Takuma Tanada.
The phenomenon was observed while Dr. Tanada was rinsing glassware and noticed that excised root tips occasionally stuck to Pyrex beakers. Upon investigating the phenomenon closely, he determined that this process could be studied in a mixture of ATP, ascorbate, auxin, magnesium, manganese and potassium. The tips would stick when the beaker was swirled slowly.
Most importantly, the reaction was light-dependent. Exposure to red light would cause the tips to stick, while exposure to far-red would allow them to release. This simple experiment was indicative of phytochrome function, and the rapid nature of the response suggested that changes in bioelectric potential were seminal events in phytochrome signal propagation.
Root tips stick to glass surfaces because they acquire a positive electrostatic charge due to some unknown effect from exposure to red light. The glass surface has a negative charge due to adsorbed phosphate ions. The opposite charges attract each other.
This phenomenon is the first reported generation of a bioelectric potential by a photomorphogenic pigment.
Several years later, Dr. Tanada found that the electric charge is generated by the trace element boron. Root tips from plants deficient in boron fail to stick to glass. In a dilute solution of boric acid, these tips gradually stick to the glass. |
https://en.wikipedia.org/wiki/Delayed%20nuclear%20radiation | Delayed nuclear radiation is a form of nuclear decay. When an isotope decays into a very short-lived isotope and then decays again to a relatively long-lived isotope, the products of the second decay are delayed. The short-lived isotope is usually a meta-stable nuclear isomer.
For example, gallium-73 decays via beta decay into germanium-73m2, which is short-lived (499 ms). The germanium isotope emits two weak gamma rays and a conversion electron.
73Ga → 73m2Ge + e⁻ + ν̄ + 2γ;   73m2Ge → 73Ge + γ (53.4 keV) + γ (13.3 keV) + e⁻
Because the middle isotope is so short-lived, the gamma rays are considered part of the gallium decay. Therefore, the above equations are combined.
73Ga → 73Ge + 4γ + 2e⁻ + ν̄
However, since there is a short time delay between the beta decay (with its high-energy gamma emissions) and the third and fourth gamma rays, the lower-energy gamma rays are said to be delayed.
Delayed gamma emissions are the most common form of delayed radiation, but are not the only form. It is common for the short-lived isotopes to have delayed emissions of various particles. In these cases, it is commonly called a beta-delayed emission, because the emission is delayed until a beta decay takes place. For instance, nitrogen-17 emits two beta-delayed neutrons after its primary beta emission. Just as in the above delayed gamma emission, the nitrogen is not the actual source of the neutrons; the source of the neutrons is a short-lived isotope of oxygen.
See also
Prompt neutron
External links
Flash animation of beta-delayed neutron emission
Flash animation of beta-delayed proton emission
Flash animation of beta-delayed alpha emission
Radioactivity |
https://en.wikipedia.org/wiki/Antilocution | Antilocution describes a form of prejudice in which negative verbal remarks against a person, group, or community, are made but not addressed directly to the subject.
History
American psychologist Gordon Allport coined this term in his 1954 book, The Nature of Prejudice. Antilocution is the first point on Allport's Scale, which can be used to measure the degree of bias or prejudice in a society. Allport's stages of prejudice are antilocution, avoidance, discrimination, physical attack, and extermination.
Antilocution is a compound noun consisting of the word 'locution' and prefix 'anti' which expresses locution's antithesis.
Description
Allport considered antilocution to be the least aggressive form of prejudice. It can nevertheless be destructive and life-changing for its object/target. Those who employ antilocution may neither know what they are doing nor consider themselves to be committing a prejudicial act. A subject may feel the need to join in if antilocution is employed by the majority. This can bind the subject to the group and/or spread biased information that engenders discriminatory behaviors toward the object.
Antilocution is similar to 'talking behind someone's back,' though antilocution may result in an in-group ostracizing an out-group on a biased basis.
"Antilocution" is used less often than "hate speech", which has a similar but more aggressive meaning and which places no regard on the fact that the out-group is unaware of the discrimination.
Causes, employment, and danger
Individuals may engage in prejudicial conversation when they feel threatened. Such conversations may be based on misperceptions held by the subject. For example, a group may apply stereotypes to a new, unknown member. Such individuals may deny that their behavior is prejudicial, insisting that it is instead a matter of expressing opinions. Antilocution can lead to widespread discrimination toward the object as the subject(s) do not feel that they are transgressing. Facts are neede |
https://en.wikipedia.org/wiki/Chkrootkit | Chkrootkit (Check Rootkit) is a widely used Unix-based utility designed to aid system administrators in examining their systems for rootkits. Operating as a shell script, it leverages common Unix/Linux tools such as the strings and grep commands. Its primary purpose is to scan core system programs for identifying signatures and to compare the data obtained from traversing the /proc filesystem with the output of the ps (process status) command, aiming to identify inconsistencies. It offers flexibility in execution, allowing it to run from a rescue disc, often a live CD, and to use an optional alternative directory from which to execute its commands. These approaches increase the trustworthiness of the commands on which chkrootkit relies.
It's crucial to recognize the inherent limitations of any program that strives to detect compromises, including rootkits and malware. Modern rootkits might deliberately attempt to identify and target copies of the chkrootkit program, or adopt other strategies to elude detection by it.
See also
Host-based intrusion detection system comparison
Hardening (computing)
Linux malware
MalwareMustDie
rkhunter
Lynis
OSSEC
Samhain (software) |
https://en.wikipedia.org/wiki/DShield | DShield is a community-based collaborative firewall log correlation system. It receives logs from volunteers worldwide and uses them to analyze attack trends. It is used as the data collection engine behind the SANS Internet Storm Center (ISC). DShield was officially launched at the end of November 2000 by Johannes Ullrich. Since then, it has grown to be a dominant attack correlation engine with worldwide coverage.
DShield is regularly used by the media to cover current events. Analysis provided by DShield has been used in the early detection of several worms, like "Ramen", Code Red, "Leaves", "SQL Snake" and more. DShield data is regularly used by researchers to analyze attack patterns.
The goal of the DShield project is to allow access to its correlated information to the public at no charge, to raise awareness and provide accurate and current snapshots of internet attacks. Several data feeds are provided to users to either include in their own web sites or to use as an aid in analyzing events.
See also
SANS Institute (SysAdmin, Audit, Network and Security – SANS)
Comparison of network monitoring systems
ShieldsUP
SPEWS |
https://en.wikipedia.org/wiki/Latch-up | In electronics, a latch-up is a type of short circuit which can occur in an integrated circuit (IC). More specifically, it is the inadvertent creation of a low-impedance path between the power supply rails of a MOSFET circuit, triggering a parasitic structure which disrupts proper functioning of the part, possibly even leading to its destruction due to overcurrent. A power cycle is required to correct this situation.
The parasitic structure is usually equivalent to a thyristor (or SCR), a PNPN structure which acts as a PNP and an NPN transistor stacked next to each other. During a latch-up when one of the transistors is conducting, the other one begins conducting too. They both keep each other in saturation for as long as the structure is forward-biased and some current flows through it - which usually means until a power-down. The SCR parasitic structure is formed as a part of the totem-pole PMOS and NMOS transistor pair on the output drivers of the gates.
The latch-up does not have to happen between the power rails - it can happen at any place where the required parasitic structure exists. A common cause of latch-up is a positive or negative voltage spike on an input or output pin of a digital chip that exceeds the rail voltage by more than a diode drop. Another cause is the supply voltage exceeding the absolute maximum rating, often from a transient spike in the power supply. It leads to a breakdown of an internal junction. This frequently happens in circuits which use multiple supply voltages that do not come up in the required sequence on power-up, leading to voltages on data lines exceeding the input rating of parts that have not yet reached a nominal supply voltage. Latch-ups can also be caused by an electrostatic discharge event.
Another common cause of latch-ups is ionizing radiation which makes this a significant issue in electronic products designed for space (or very high-altitude) applications. A single event latch-up is a latch-up caused by a si |
https://en.wikipedia.org/wiki/Mathematical%20psychology | Mathematical psychology is an approach to psychological research that is based on mathematical modeling of perceptual, thought, cognitive and motor processes, and on the establishment of law-like rules that relate quantifiable stimulus characteristics with quantifiable behavior (in practice often constituted by task performance). The mathematical approach is used with the goal of deriving hypotheses that are more exact and thus yield stricter empirical validations. There are five major research areas in mathematical psychology: learning and memory, perception and psychophysics, choice and decision-making, language and thinking, and measurement and scaling.
Although psychology, as an independent subject of science, is a more recent discipline than physics, the application of mathematics to psychology has been done in the hope of emulating the success of this approach in the physical sciences, which dates back to at least the seventeenth century. Mathematics in psychology is used extensively roughly in two areas: one is the mathematical modeling of psychological theories and experimental phenomena, which leads to mathematical psychology, the other is the statistical approach of quantitative measurement practices in psychology, which leads to psychometrics.
As quantification of behavior is fundamental in this endeavor, the theory of measurement is a central topic in mathematical psychology. Mathematical psychology is therefore closely related to psychometrics. However, where psychometrics is concerned with individual differences (or population structure) in mostly static variables, mathematical psychology focuses on process models of perceptual, cognitive and motor processes as inferred from the 'average individual'. Furthermore, where psychometrics investigates the stochastic dependence structure between variables as observed in the population, mathematical psychology almost exclusively focuses on the modeling of data obtained from experimental paradigms and is ther |
https://en.wikipedia.org/wiki/Hardware%20overlay | In computing, hardware overlay, a type of video overlay, provides a method of rendering an image to a display screen with a dedicated memory buffer inside computer video hardware. The technique aims to improve the display of a fast-moving video image — such as a computer game, a DVD, or the signal from a TV card. Most video cards manufactured since about 1998 and most media players support hardware overlay.
The overlay is a dedicated buffer into which one app can render (typically video), without incurring the significant performance cost of checking for clipping and overlapping rendering by other apps. The framebuffer has hardware support for importing and rendering the buffer contents without going through the GPU.
Overview
The use of a hardware overlay is important for several reasons:
In a graphical user interface (GUI) operating system such as Windows, one display-device can typically display multiple applications simultaneously.
Consider how a display works without a hardware overlay. When each application draws to the screen, the operating system's graphical subsystem must constantly check to ensure that the objects being drawn appear on the appropriate location on the screen, and that they don't collide with overlapping and neighboring windows. The graphical subsystem must clip objects while they are being drawn when a collision occurs. This constant checking and clipping ensures that different applications can cooperate with one another in sharing a display, but also consumes a significant proportion of computing power.
A computer draws on its display by writing a bitmapped representation of the graphics into a special portion of its memory known as video memory. Without any hardware overlays, only one chunk of video memory exists which all applications must share - and the location of a given application's video memory moves whenever the user changes the position of the application's window. With shared video memory, an application must constan |
https://en.wikipedia.org/wiki/Chapman%20function | A Chapman function describes the integration of atmospheric absorption along a slant path on a spherical earth, relative to the vertical case. It applies to any quantity with a concentration decreasing exponentially with increasing altitude. To a first approximation, valid at small zenith angles, the Chapman function for optical absorption is equal to
ch(z) ≈ sec z,
where z is the zenith angle and sec denotes the secant function.
The Chapman function is named after Sydney Chapman, who introduced the function in 1931.
Definition
In an isothermal model of the atmosphere, the density varies exponentially with altitude according to the barometric formula:
n(h) = n₀ exp(−h/H),
where n₀ denotes the density at sea level (h = 0) and H the so-called scale height.
The total amount of matter traversed by a vertical ray starting at altitude h₀ towards infinity is given by the integrated density ("column depth")
X_v(h₀) = ∫ from h₀ to ∞ of n(h) dh = n₀ H exp(−h₀/H).
For inclined rays having a zenith angle z, the integration is not straightforward due to the non-linear relationship between altitude and path length when considering the curvature of Earth. Here, the integral reads
X_z(h₀) = n₀ ∫ from 0 to ∞ of exp(−(r(s) − R_E)/H) ds,   with r(s) = √((R_E + h₀)² + s² + 2 s (R_E + h₀) cos z),
where R_E denotes the Earth radius.
The Chapman function is defined as the ratio between slant depth X_z and vertical column depth X_v. Defining x = (R_E + h₀)/H, it can be written as
ch(x, z) = X_z(h₀)/X_v(h₀).
Representations
A number of different integral representations have been developed in the literature. Chapman's original representation reads
.
Huestis developed the representation
,
which does not suffer from numerical singularities present in Chapman's representation.
Special cases
For z = 90° (horizontal incidence), the Chapman function reduces to
ch(x, 90°) = x eˣ K₁(x).
Here, K₁ refers to the modified Bessel function of the second kind of the first order. For large values of x, this can further be approximated by
ch(x, 90°) ≈ √(πx/2).
For x → ∞ and z < 90°, the Chapman function converges to the secant function:
ch(x, z) → sec z.
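These limiting forms can be checked against a direct numerical evaluation of the defining integral. The sketch below is not part of the original article; it works in units of the scale height H, so the vertical column depth is 1 and the slant integral equals ch(x, z) directly, and x = 800 is an illustrative value of order R_E/H for Earth:

```python
# Chapman function by numerical integration of the slant column depth,
# compared against sec(z) and the closed form x*exp(x)*K1(x) at z = 90 deg.
import numpy as np
from scipy.integrate import quad
from scipy.special import k1e  # exponentially scaled K1: k1e(x) = exp(x)*K1(x)

def chapman(x, z):
    """ch(x, z) with x = (R_E + h0)/H and z the zenith angle in radians."""
    integrand = lambda s: np.exp(-(np.sqrt(x*x + s*s + 2.0*x*s*np.cos(z)) - x))
    slant, _ = quad(integrand, 0.0, np.inf)
    return slant

x = 800.0
print(chapman(x, np.radians(60.0)), 1.0 / np.cos(np.radians(60.0)))  # ~2.00 vs 2.0
print(chapman(x, np.pi / 2.0), x * k1e(x))  # horizontal case: both ~35.4
```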
In practical applications related to the terrestrial atmosphere, where x ≈ 1000, ch(x, z) ≈ sec z is a good approximation for zenith angles up to 60° to 70°, depending on the accuracy |
https://en.wikipedia.org/wiki/KPDX | KPDX (channel 49) is a television station licensed to Vancouver, Washington, United States, serving the Portland, Oregon area as an affiliate of MyNetworkTV. It is the only major commercial station in Portland that is licensed to the Washington side of the market.
KPDX is owned by Gray Television alongside Fox affiliate KPTV (channel 12). Both stations share studios on NW Greenbrier Parkway in Beaverton, while KPDX's transmitter is located in the Sylvan-Highlands section of Portland. KPDX's signal is relayed in Central Oregon through translator station KUBN-LD (channel 9) in Bend, making the station available in about two-thirds of the state.
Since February 2018, KPDX has been branded as Fox 12 Plus, an extension of the branding used by KPTV.
History
As an independent station
In August 1980, the local KLRK Broadcasting Corporation filed an application to construct a new TV station on channel 49 at Vancouver. The construction permit was granted by the Federal Communications Commission (FCC) on January 5, 1981, and took the KLRK call letters, representing Clark County. KLRK foresaw an independent station emphasizing Southwest Washington sports and news. However, work came to a halt when KLRK ran out of money to build the facility. In late 1981, Camellia City Telecasters, the owner of KTXL-TV in Sacramento, California, filed to buy the construction permit, an action decried by newly built KECH-TV (channel 22 in Salem) and Cascade Video, applicant for a station on channel 40. Camellia's entry in the Portland market was significant because it bought rights to $10 million of films and syndicated programs, which particularly harmed KECH.
Channel 49 would miss several planned launch dates due to multiple factors. The station was forced by Multnomah County to allow other interested broadcasters to rent tower space, and Oregon Public Broadcasting's KOAP-TV and KOPB-FM used the opportunity to consolidate their transmission facilities with the new transmitter. There were a |
https://en.wikipedia.org/wiki/Potential%20temperature | The potential temperature of a parcel of fluid at pressure P is the temperature that the parcel would attain if adiabatically brought to a standard reference pressure P₀, usually 1000 hPa. The potential temperature is denoted θ and, for a gas well-approximated as ideal, is given by
θ = T (P₀/P)^(R/c_p),
where T is the current absolute temperature (in K) of the parcel, R is the gas constant of air, and c_p is the specific heat capacity at a constant pressure.
R/c_p ≈ 0.286 for air (meteorology). The reference point for potential temperature in the ocean is usually at the ocean's surface, which has a water pressure of 0 dbar. The potential temperature in the ocean doesn't account for the varying heat capacities of seawater, therefore it is not a conservative measure of heat content. In a temperature-versus-depth graph, the potential temperature curve will always lie below the actual temperature line.
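A direct transcription of the formula above (a sketch, not part of the original article; the parameter names are illustrative):

```python
# Potential temperature theta = T * (p0/p)**(R/c_p), with R/c_p ~ 0.286 for dry air.
def potential_temperature(T, p, p0=1000.0, kappa=0.286):
    """T in kelvin; p and p0 in the same pressure units (here hPa)."""
    return T * (p0 / p) ** kappa

# Air at 253 K and 500 hPa is potentially warmer than air at 288 K at the surface:
print(potential_temperature(253.0, 500.0))   # about 308 K
print(potential_temperature(288.0, 1000.0))  # 288.0 K
```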
Contexts
The concept of potential temperature applies to any stratified fluid. It is most frequently used in the atmospheric sciences and oceanography. The reason that it is used in both fields is that changes in pressure can result in warmer fluid residing under colder fluid – examples being dropping air temperature with altitude and increasing water temperature with depth in very deep ocean trenches, as well as within the ocean mixed layer. When the potential temperature is used instead, these apparently unstable conditions vanish, as a parcel of fluid is invariant along its isolines. In the oceans, the potential temperature referenced to the surface will be slightly less than the in-situ temperature (the temperature that a water volume has at the specific depth at which the instrument measured it), since the expansion due to reduction in pressure leads to cooling. The numeric difference between the in-situ and potential temperature is almost always less than 1.5 degrees Celsius. However, it's important to use potential temperature when comparing temperatures of water from very different depths.
Comments
Pot |
https://en.wikipedia.org/wiki/Wireworld | Wireworld, alternatively WireWorld, is a cellular automaton first proposed by Brian Silverman in 1987, as part of his program Phantom Fish Tank. It subsequently became more widely known as a result of an article in the "Computer Recreations" column of Scientific American. Wireworld is particularly suited to simulating transistors, and is Turing-complete.
Rules
A Wireworld cell can be in one of four different states, usually numbered 0–3 in software, modeled by colors in the examples here:
empty (black),
electron head (blue),
electron tail (red),
conductor (yellow).
As in all cellular automata, time proceeds in discrete steps called generations (sometimes "gens" or "ticks"). Cells behave as follows:
empty → empty,
electron head → electron tail,
electron tail → conductor,
conductor → electron head if exactly one or two of the neighbouring cells are electron heads, otherwise remains conductor.
Wireworld uses what is called the Moore neighborhood, which means that in the rules above, neighbouring means one cell away (range value of one) in any direction, both orthogonal and diagonal.
These simple rules can be used to construct logic gates (see below).
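The rules translate directly into code. Below is a compact sketch (not part of the original article) of one generation on a bounded grid, with the four states encoded as the integers 0–3 as above:

```python
# One Wireworld generation: 0 empty, 1 electron head, 2 electron tail, 3 conductor.
EMPTY, HEAD, TAIL, CONDUCTOR = range(4)

def step(grid):
    rows, cols = len(grid), len(grid[0])
    def heads_around(r, c):  # Moore neighbourhood: all 8 surrounding cells
        return sum(grid[i][j] == HEAD
                   for i in range(max(r - 1, 0), min(r + 2, rows))
                   for j in range(max(c - 1, 0), min(c + 2, cols))
                   if (i, j) != (r, c))
    new = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == HEAD:
                new[r][c] = TAIL
            elif grid[r][c] == TAIL:
                new[r][c] = CONDUCTOR
            elif grid[r][c] == CONDUCTOR and heads_around(r, c) in (1, 2):
                new[r][c] = HEAD
    return new

wire = [[TAIL, HEAD, CONDUCTOR, CONDUCTOR, CONDUCTOR, CONDUCTOR]]
print(step(wire))  # [[3, 2, 1, 3, 3, 3]] -- the electron moved one cell right
```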
Applications
Entities built within Wireworld universes include Langton's Ant (allowing any Langton's Ant pattern to be built within Wireworld) and the Wireworld computer, a Turing-complete computer implemented as a cellular automaton.
See also
von Neumann's cellular automaton |
https://en.wikipedia.org/wiki/B%C3%A9zout%20matrix | In mathematics, a Bézout matrix (or Bézoutian or Bezoutiant) is a special square matrix associated with two polynomials, introduced by James Joseph Sylvester in 1853 and Arthur Cayley in 1857 and named after Étienne Bézout. Bézoutian may also refer to the determinant of this matrix, which is equal to the resultant of the two polynomials. Bézout matrices are sometimes used to test the stability of a given polynomial.
Definition
Let f(z) and g(z) be two complex polynomials of degree at most n,
f(z) = a_n z^n + … + a_1 z + a_0,   g(z) = b_n z^n + … + b_1 z + b_0.
(Note that any coefficient a_i or b_i could be zero.) The Bézout matrix of order n associated with the polynomials f and g is the n × n matrix B_n(f, g) = (b_ij), where the entries result from the identity
(f(x) g(y) − f(y) g(x)) / (x − y) = Σ over i, j = 1, …, n of b_ij x^(i−1) y^(j−1).
It is an n × n complex matrix, and its entries are such that if we let m_ij = a_i b_j − a_j b_i for each i, j, then:
To each Bézout matrix, one can associate the following bilinear form, called the Bézoutian:
Examples
For n = 3, we have for any polynomials f and g of degree (at most) 3:
Let and be the two polynomials. Then:
The last row and column are all zero as f and g have degree strictly less than n (which is 4). The other zero entries are because for each , either or is zero.
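The defining identity can be used to build the matrix symbolically. The sketch below (not part of the original article) does this with SymPy; the polynomials f and g are arbitrary illustrative choices, not the example used above:

```python
# Builds the Bezout matrix from (f(x)g(y) - f(y)g(x))/(x - y) = sum b_ij x^i y^j.
import sympy as sp

x, y = sp.symbols('x y')

def bezout_matrix(f, g, n):
    kernel = sp.expand(sp.cancel((f * g.subs(x, y) - f.subs(x, y) * g) / (x - y)))
    poly = sp.Poly(kernel, x, y)
    return sp.Matrix(n, n, lambda i, j: poly.coeff_monomial(x**i * y**j))

f, g = x**2 - 1, x**2 - 4
B = bezout_matrix(f, g, 2)
print(B)                               # Matrix([[0, -3], [-3, 0]]) -- symmetric
print(B.det(), sp.resultant(f, g, x))  # -9 and 9
```

Note that conventions for the sign of the kernel differ in the literature, which is why the determinant here matches the resultant only up to sign.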
Properties
B(f, g) is symmetric (as a matrix);
B(f, g) = −B(g, f);
B(f, f) = 0;
(f, g) ↦ B(f, g) is a bilinear function;
B(f, g) is a real matrix if f and g have real coefficients;
B(f, g) is nonsingular with n = max(deg f, deg g) if and only if f and g have no common roots;
B(f, g) with n = max(deg f, deg g) has a determinant which is the resultant of f and g.
Applications
An important application of Bézout matrices can be found in control theory. To see this, let f(z) be a complex polynomial of degree n and denote by q and p the real polynomials such that f(iy) = q(y) + ip(y) (where y is real). We also denote by r the rank and by σ the signature of the Bézout matrix B(p, q). Then, we have the following statements:
f(z) has n − r roots in common with its conjugate;
the remaining r roots of f(z) are located in such a way that:
(r + σ)/2 of them lie in the open left half-plane, and
(r − σ)/2 lie in the open right half-plane;
f is Hurwitz stable if and only if B(p, q) is positive definite.
The third statement gives a necessary and sufficient |
https://en.wikipedia.org/wiki/WESH | WESH (channel 2) is a television station licensed to Daytona Beach, Florida, United States, serving the Orlando area as an affiliate of NBC. It is owned by Hearst Television alongside Clermont-licensed CW affiliate WKCF (channel 18). The stations share studios on North Wymore Road in Eatonville (using a Winter Park address), while WESH's transmitter is located near Christmas, Florida.
WESH formerly served as a default NBC affiliate for the Gainesville market as the station's analog transmitter provided a city-grade off-air signal in Gainesville proper (and also provided Grade B signal coverage in the fringes of the Tampa Bay and Jacksonville markets). However, since January 1, 2009, Gainesville has been served by an in-market affiliate, WNBW (channel 9); although Cox Communications continues to carry WESH on its Gainesville area system.
History
WESH-TV first signed on the air on June 11, 1956. At first, it ran as an independent station, but on October 27, 1957, it became an NBC affiliate, and has been with NBC ever since. Businessman W. Wright Esch (for whom the station is named) won the license, but sold it to Perry Publications of Palm Beach just before the station made its debut. The station's original studios were located on Corporation Street in Holly Hill, near Daytona Beach.
The station's original transmitter tower was only high, which was tiny even by 1950s' standards, and limited channel 2's signal coverage to Volusia County. As such, it shared the NBC affiliation in Central Florida with primary CBS affiliate WDBO-TV (channel 6, now WKMG-TV). It finally became the market's exclusive NBC affiliate on November 5, 1957, when WDBO-TV relinquished its secondary affiliation with the network. On that day, the station activated a new transmitter tower in Orange City. The tower was located farther north than the other major Orlando stations' transmitters because of Federal Communications Commission (FCC) rules at the time that required a station's transmitter t |
https://en.wikipedia.org/wiki/R.%20Duncan%20Luce | Robert Duncan Luce (May 16, 1925 – August 11, 2012) was an American mathematician and social scientist, and one of the most preeminent figures in the field of mathematical psychology. At the end of his life, he held the position of Distinguished Research Professor of Cognitive Science at the University of California, Irvine.
Education and career
Luce received a Bachelor of Science degree in Aeronautical Engineering from the Massachusetts Institute of Technology in 1945, and PhD in Mathematics from the same university in 1950 under Irvin S. Cohen with thesis On Semigroups.
He began his professorial career at Columbia University in 1954, where he was an assistant professor in mathematical statistics and sociology. Following a lectureship at Harvard University from 1957 to 1959, he became a professor at the University of Pennsylvania in 1959, and was awarded the Benjamin Franklin Professorship of Psychology in 1968. After visiting the Institute for Advanced Study beginning in 1969, he joined the UC Irvine faculty in 1972, but returned to Harvard in 1976 as Alfred North Whitehead Professor of Psychology and then later as Victor S. Thomas Professor of Psychology. In 1988 Luce rejoined the UC Irvine faculty as Distinguished Professor of Cognitive Sciences and (from 1988 to 1998) director of UCI's Institute for Mathematical Behavioral Sciences.
Contributions
Contributions for which Luce is known include formulating Luce's choice axiom formalizing the principle that additional options should not affect the probability of selecting one item over another, defining semiorders, introducing graph-theoretic methods into the social sciences, and coining the term "clique" for a complete subgraph in graph theory.
Recognition
In 1966, Luce was elected to the American Academy of Arts and Sciences. Luce was elected to the National Academy of Sciences in 1972 for his work on fundamental measurement, utility theory, global psychophysics, and mathematical behavioral sciences. He rece |
https://en.wikipedia.org/wiki/Beck%27s%20monadicity%20theorem | In category theory, a branch of mathematics, Beck's monadicity theorem gives a criterion that characterises monadic functors, introduced by Jonathan Mock Beck in about 1964. It is often stated in dual form for comonads. It is sometimes called the Beck tripleability theorem because of the older term triple for a monad.
Beck's monadicity theorem asserts that a functor U : C → D is monadic if and only if
U has a left adjoint;
U reflects isomorphisms (if U(f) is an isomorphism then so is f); and
C has coequalizers of U-split parallel pairs (those parallel pairs of morphisms in C, which U sends to pairs having a split coequalizer in D), and U preserves those coequalizers.
There are several variations of Beck's theorem: if U has a left adjoint then any of the following conditions ensure that U is monadic:
U reflects isomorphisms and C has coequalizers of reflexive pairs (those with a common right inverse) and U preserves those coequalizers. (This gives the crude monadicity theorem.)
Every diagram in C which is sent by U to a split coequalizer sequence in D is itself a coequalizer sequence in C. In other words, U creates (preserves and reflects) U-split coequalizer sequences.
Another variation of Beck's theorem characterizes strictly monadic functors: those for which the comparison functor is an isomorphism rather than just an equivalence of categories. For this version the definitions of what it means to create coequalizers is changed slightly: the coequalizer has to be unique rather than just unique up to isomorphism.
Beck's theorem is particularly important in its relation with the descent theory, which plays a role in sheaf and stack theory, as well as in the Alexander Grothendieck's approach to algebraic geometry. Most cases of faithfully flat descent of algebraic structures (e.g. those in FGA and in SGA1) are special cases of Beck's theorem. The theorem gives an exact categorical description of the process of 'descent', at this level. In 1970 the Grothendieck approach via fibere |
https://en.wikipedia.org/wiki/Beck%27s%20theorem%20%28geometry%29 | In discrete geometry, Beck's theorem is any of several different results, two of which are given below. Both appeared, alongside several other important theorems, in a well-known paper by József Beck. The two results described below primarily concern lower bounds on the number of lines determined by a set of points in the plane. (Any line containing at least two points of point set is said to be determined by that point set.)
Erdős–Beck theorem
The Erdős–Beck theorem is a variation of a classical result by L. M. Kelly and W. O. J. Moser involving configurations of n points of which at most n − k are collinear, for some 0 < k < O(√n). They showed that if n is sufficiently large, relative to k, then the configuration spans at least kn − (1/2)(3k + 2)(k − 1) lines.
Elekes and Csaba Tóth noted that the Erdős–Beck theorem does not easily extend to higher dimensions. Take, for example, a set of 2n points in R³ all lying on two skew lines. Assume that these two lines are each incident to n points. Such a configuration of points spans only 2n planes. Thus, a trivial extension of the hypothesis to point sets in R^d is not sufficient to obtain the desired result.
This result was first conjectured by Erdős, and proven by Beck. (See Theorem 5.2 in Beck's paper.)
Statement
Let S be a set of n points in the plane. If no more than n − k points lie on any line for some 0 ≤ k < n − 2, then there exist Ω(nk) lines determined by the points of S.
Proof
Beck's theorem
Beck's theorem says that finite collections of points in the plane fall into one of two extremes; one where a large fraction of points lie on a single line, and one where a large number of lines are needed to connect all the points.
Although not mentioned in Beck's paper, this result is implied by the Erdős–Beck theorem.
Statement
The theorem asserts the existence of positive constants C, K such that given any n points in the plane, at least one of the following statements is true:
There is a line which contains at least n/C of |
https://en.wikipedia.org/wiki/Scratch%20building | Scratch building is the process of building a scale model "from scratch", i.e. from raw materials, rather than building it from a commercial kit, kitbashing or buying it pre-assembled.
Scratch building is easiest if original plans of the subject exist; however, many models have been built from photographs by measuring a known object in the photograph and extrapolating the rest of the dimensions. The necessary parts are then fashioned out of a suitable material, such as wood, plastic, plaster, clay, metal, polymer clay, or even paper, and then assembled. Some purists consider a model not to be truly scratchbuilt unless all of the parts were made from raw materials. However most modellers would consider a model including commercial detail parts as scratchbuilt. Scratchbuilding a new body onto an altered ready-to-run chassis is also acceptable.
Motives
The reasons hobbyists scratchbuild may vary. Often a desired model is unavailable in kit form in the desired scale, or entirely non-existent. Sometimes the hobbyist may be dissatisfied with the accuracy or detail of kits that are available. Other times a hobbyist will opt to scratchbuild simply for the challenge. Less frequently a hobbyist will scratchbuild out of economy, as often the raw materials cost less than a packaged commercial kit.
Techniques
Most hobbyists develop their skills by building kits, then progress to kitbashing, where various kits are combined to create a unique model before attempting to scratchbuild. Sometimes scratchbuilders utilize discarded parts of other models or toys, with or without modification, either in order to speed up the building process or to allow the process to continue in spite of certain parts being difficult to make. Some companies sell parts that are of little use to anyone but scratchbuilders.
Building stock
Building stock, in whichever material, can be plain sheets, strips, bars, tubes, rods, or even structural shapes such as L or T girders. Stock can also be embossed o |
https://en.wikipedia.org/wiki/Parallels%20Workstation | Parallels Workstation is the first commercial software product released by Parallels, Inc., a developer of desktop and server virtualization software. The Workstation software consists of a virtual machine suite for Intel x86-compatible computers (running Microsoft Windows or Linux) (for Mac version, see Parallels Desktop for Mac) which allows the simultaneous creation and execution of multiple x86 virtual computers. The product is distributed as a download package. Parallels Workstation has been discontinued for Windows and Linux as of 2013.
Implementation
Like other virtualization software, Parallels Workstation uses hypervisor technology, a thin software layer between the primary OS and the host computer's hardware. The hypervisor directly controls some of the host machine's hardware resources and provides an interface to them for both virtual machine monitors and the primary OS. This allows the virtualization software to reduce overhead. Parallels Workstation's hypervisor also supports hardware virtualization technologies like Intel VT-x and AMD-V.
Features
Parallels Workstation is a hardware emulation virtualization software, in which a virtual machine engine enables each virtual machine to work with its own processor, RAM, floppy drive, CD drive, I/O devices, and hard disk – everything a physical computer contains. Parallels Workstation virtualizes all devices within the virtual environment, including the video adapter, network adapter, and hard disk adapters. It also provides pass-through drivers for parallel port and USB devices.
Because all guest virtual machines use the same hardware drivers irrespective of the actual hardware on the host computer, virtual machine instances are highly portable between computers. For example, a running virtual machine can be stopped, copied to another physical computer, and restarted.
Parallels Workstation is able to virtualize a full set of standard PC hardware, including:
A 64-bit processor with NX and AES-NI instructions.
A generic |
https://en.wikipedia.org/wiki/Nuclear%20Power%202010%20Program | The "Nuclear Power 2010 Program" was launched in 2002 by President George W. Bush, 13 months after the beginning of his presidency, in order to restart orders for nuclear power reactors in the U.S. by providing subsidies for a handful of Generation III+ demonstration plants. The expectation was that these plants would come online by 2010, but it was not met.
In March 2017, the leading nuclear-plant maker, Westinghouse Electric Company, filed for bankruptcy after incurring over $9 billion in losses from the construction of two nuclear plants. This was partly caused by safety concerns arising from the Fukushima disaster, as well as by Germany's Energiewende, the growth of solar and wind power, and low natural gas prices.
Overview
The "Nuclear Power 2010 Program" was unveiled by the U.S. Secretary of Energy Spencer Abraham on February 14, 2002 as one means towards addressing the expected need for new power plants. The program was a joint government/industry cost-shared effort to identify sites for new nuclear power plants, to develop and bring to market advanced nuclear plant technologies, evaluate the business case for building new nuclear power plants, and demonstrate untested regulatory processes leading to an industry decision in the next few years to seek Nuclear Regulatory Commission (NRC) approval to build and operate at least one new advanced nuclear power plant in the United States.
Three consortia responded in 2004 to the U.S. Department of Energy's solicitation under the Nuclear Power 2010 initiative and were awarded matching funds.
The Dominion-led consortium includes General Electric (GE) Energy, Hitachi America, and Bechtel Corporation, and has selected General Electric's Economic Simplified Boiling Water Reactor (ESBWR, a passively safe version of the BWR).
The NuStart Energy Development, LLC consortium consists of DTE Energy, Duke Energy, EDF International North America, Entergy Nuclear, Exelon Generation, Florida Power & Light Co., Progress Energy, SCA |
https://en.wikipedia.org/wiki/Fimbriation | In heraldry and vexillology, fimbriation is the placement of small stripes of contrasting colour around common charges or ordinaries, usually in order for them to stand out from the background, but often simply due to the designer's subjective aesthetic preferences, or for a more technical reason (in heraldry only) to avoid what would otherwise be a violation of the rule of tincture. While fimbriation almost invariably applies to both or all sides of a charge, there are very unusual examples of fimbriation on one side only. Another rather rare form is double fimbriation (blazoned "double fimbriated"), where the charge or ordinary is accompanied by two stripes of colour instead of only one. In cases of double fimbriation the outer colour is blazoned first. The municipal flag of Mozirje, in Slovenia, shows an example of fimbriation that is itself fimbriated.
Fimbriation may also be used when a charge is the same colour as the field on which it is placed. A red charge placed on a red background may be necessary, for instance where the charge and field are both a specific colour for symbolic or historical reasons, and in these cases fimbriation becomes a necessity in order for the charge to be visible. In some cases, such as a fimbriated cross placed on a field of the same colour as the cross, the effect is identical to the use of cross voided, i.e. a cross shown in outline only.
According to the rule of tincture, one of the fundamental rules of heraldic design, colour may not be placed on colour nor metal on metal. (In heraldry, "metal" refers to gold and silver, frequently represented using yellow and white respectively. "Colour" refers to all other colours.) Sometimes, however, it is desired to do something like this, so fimbriation is used to comply with the rule.
In vexillology that is not specifically heraldic, the rules of heraldry do not apply, yet fimbriation is still frequently seen. The reason for this is largely the same as the reason for the heraldic rule |
https://en.wikipedia.org/wiki/D-Bus | D-Bus (short for "Desktop Bus") is a message-oriented middleware mechanism that allows communication between multiple processes running concurrently on the same machine. D-Bus was developed as part of the freedesktop.org project, initiated by GNOME developer Havoc Pennington to standardize services provided by Linux desktop environments such as GNOME and KDE.
The freedesktop.org project also developed a free and open-source software library called libdbus, as a reference implementation of the specification. This library should not be confused with D-Bus itself, as other implementations of the D-Bus specification also exist, such as GDBus (GNOME), QtDBus (Qt/KDE), dbus-java and sd-bus (part of systemd).
Overview
D-Bus is an inter-process communication (IPC) mechanism initially designed to replace the software component communications systems used by the GNOME and KDE Linux desktop environments (CORBA and DCOP respectively). The components of these desktop environments are normally distributed in many processes, each one providing only a few—usually one—services. These services may be used by regular client applications or by other components of the desktop environment to perform their tasks.
Due to the large number of processes involved—adding up processes providing the services and clients accessing them—establishing one-to-one IPC between all of them becomes an inefficient and quite unreliable approach. Instead, D-Bus provides a software-bus abstraction that gathers all the communications between a group of processes over a single shared virtual channel. Processes connected to a bus do not know how it is internally implemented, but D-Bus specification guarantees that all processes connected to the bus can communicate with each other through it.
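As a concrete illustration, a client can ask the bus itself which names are currently registered. The sketch below is not part of the original article; it uses the dbus-python bindings and the standard org.freedesktop.DBus interface:

```python
# Lists the bus names currently registered on the session bus via dbus-python.
import dbus

bus = dbus.SessionBus()
proxy = bus.get_object("org.freedesktop.DBus", "/org/freedesktop/DBus")
iface = dbus.Interface(proxy, dbus_interface="org.freedesktop.DBus")

for name in iface.ListNames():
    print(name)  # e.g. org.freedesktop.Notifications, and other connected services
```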
Linux desktop environments take advantage of the D-Bus facilities by instantiating multiple buses, notably:
a single system bus, available to all users and processes of the system, that provides access to system servic |
https://en.wikipedia.org/wiki/Isogonal%20conjugate | In geometry, the isogonal conjugate of a point P with respect to a triangle ABC is constructed by reflecting the lines PA, PB, PC about the angle bisectors of A, B, C respectively. These three reflected lines concur at the isogonal conjugate of P. (This definition applies only to points not on a sideline of triangle ABC.) This is a direct result of the trigonometric form of Ceva's theorem.
The isogonal conjugate of a point P is sometimes denoted by P*. The isogonal conjugate of P* is P.
The isogonal conjugate of the incentre I is itself. The isogonal conjugate of the orthocentre H is the circumcentre O. The isogonal conjugate of the centroid G is (by definition) the symmedian point K. The isogonal conjugates of the Fermat points are the isodynamic points and vice versa. The Brocard points are isogonal conjugates of each other.
In trilinear coordinates, if X = x : y : z is a point not on a sideline of triangle ABC, then its isogonal conjugate is 1/x : 1/y : 1/z. For this reason, the isogonal conjugate of X is sometimes denoted by X⁻¹. The set of triangle centers under the trilinear product, defined by
(p : q : r) * (u : v : w) = pu : qv : rw,
is a commutative group, and the inverse of each X in this group is X⁻¹.
As isogonal conjugation is a function, it makes sense to speak of the isogonal conjugate of sets of points, such as lines and circles. For example, the isogonal conjugate of a line is a circumconic; specifically, an ellipse, parabola, or hyperbola according as the line intersects the circumcircle in 0, 1, or 2 points. The isogonal conjugate of the circumcircle is the line at infinity. Several well-known cubics (e.g., Thompson cubic, Darboux cubic, Neuberg cubic) are self-isogonal-conjugate, in the sense that if is on the cubic, then is also on the cubic.
Another construction for the isogonal conjugate of a point
For a given point P in the plane of triangle ABC, let the reflections of P in the sidelines be P_a, P_b, P_c. Then the center of the circle through P_a, P_b, P_c is the isogonal conjugate of P.
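The trilinear characterization makes the conjugation easy to compute numerically. The sketch below is not part of the original article (the triangle coordinates are arbitrary illustrative choices); it converts a point to trilinears, inverts them, and converts back, checking that the orthocenter maps to the circumcenter:

```python
# Isogonal conjugation via trilinears x:y:z -> 1/x:1/y:1/z.
import numpy as np

A, B, C = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([1.0, 3.0])
a, b, c = np.linalg.norm(B - C), np.linalg.norm(C - A), np.linalg.norm(A - B)

def signed_distance(P, Q, R):
    d = R - Q
    n = np.array([-d[1], d[0]]) / np.linalg.norm(d)  # left-hand normal of QR
    return float(np.dot(P - Q, n))

def trilinears(P):  # signed distances to sides BC, CA, AB
    return np.array([signed_distance(P, B, C),
                     signed_distance(P, C, A),
                     signed_distance(P, A, B)])

def from_trilinears(t):  # trilinears scale by a, b, c to barycentric weights
    w = np.array([a, b, c]) * t
    return (w[0] * A + w[1] * B + w[2] * C) / w.sum()

def isogonal_conjugate(P):
    return from_trilinears(1.0 / trilinears(P))

O = np.linalg.solve(2 * np.array([B - A, C - A]),
                    np.array([B @ B - A @ A, C @ C - A @ A]))  # circumcenter
H = A + B + C - 2 * O                                          # orthocenter
print(isogonal_conjugate(H), O)  # both (2, 1): H and O are conjugates
```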
See also
Isotomic conjugate
Central line (geometry)
Triangle center |
https://en.wikipedia.org/wiki/Navigation%20mesh | A navigation mesh, or navmesh, is an abstract data structure used in artificial intelligence applications to aid agents in pathfinding through complicated spaces. This approach has been known since at least the mid-1980s in robotics, where it has been called a meadow map, and was popularized in video game AI in 2000.
Description
A navigation mesh is a collection of two-dimensional convex polygons (a polygon mesh) that define which areas of an environment are traversable by agents. In other words, a character in a game could freely walk around within these areas unobstructed by trees, lava, or other barriers that are part of the environment. Adjacent polygons are connected to each other in a graph.
Pathfinding within one of these polygons can be done trivially in a straight line because the polygon is convex and traversable. Pathfinding between polygons in the mesh can be done with one of the large number of graph search algorithms, such as A*. Agents on a navmesh can thus avoid computationally expensive collision detection checks with obstacles that are part of the environment.
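A minimal sketch of that graph-search step follows (not from the original article; the tiny mesh, its polygon names, and the center-to-center cost metric are illustrative choices):

```python
# A* over a navmesh's polygon adjacency graph, with straight-line distance
# between polygon centers as both the step cost and the admissible heuristic.
import heapq
from math import dist

def astar(adjacency, centers, start, goal):
    h = lambda p: dist(centers[p], centers[goal])
    frontier = [(h(start), 0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt in adjacency[node]:
            ng = g + dist(centers[node], centers[nxt])
            if ng < best.get(nxt, float("inf")):
                best[nxt] = ng
                heapq.heappush(frontier, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None  # goal not reachable

adjacency = {"A": ["B"], "B": ["A", "C", "D"], "C": ["B"], "D": ["B"]}
centers = {"A": (0, 0), "B": (2, 0), "C": (4, 0), "D": (2, 2)}
print(astar(adjacency, centers, "A", "C"))  # ['A', 'B', 'C']
```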
Representing traversable areas in a 2D-like form simplifies calculations that would otherwise need to be done in the "true" 3D environment, yet unlike a 2D grid it allows traversable areas that overlap above and below at different heights. The polygons of various sizes and shapes in navigation meshes can represent arbitrary environments with greater accuracy than regular grids can.
Creation
Navigation meshes can be created manually, automatically, or by some combination of the two. In video games, a level designer might manually define the polygons of the navmesh in a level editor. This approach can be quite labor intensive. Alternatively, an application could be created that takes the level geometry as input and automatically outputs a navmesh.
It is commonly assumed that the environment represented by a navmesh is static – it does not change over time – and thus the navmesh can be crea |
https://en.wikipedia.org/wiki/Slipstream%20%28computer%20science%29 | A slipstream processor is an architecture designed to reduce the length of a running program by removing the non-essential instructions.
It is a form of speculative computing.
Non-essential instructions include such things as results that are not written to memory, or compare operations that will always return true. Also, since statistically most branch instructions are taken, it makes sense to assume this will always be the case.
Because of the speculation involved slipstream processors are generally described as having two parallel executing streams. One is an optimized faster A-stream (advanced stream) executing the reduced code, the other is the slower R-stream (redundant stream), which runs behind the A-stream and executes the full code. The R-stream runs faster than if it were a single stream due to data being prefetched by the A-stream effectively hiding memory latency, and due to the A-stream's assistance with branch prediction. The two streams both complete faster than a single stream would. As of 2005, theoretical studies have shown that this configuration can lead to a speedup of around 20%.
The main problem with this approach is accuracy: as the A-stream becomes more accurate and less speculative, the overall system runs slower. Furthermore, a large enough distance is needed between the A-stream and the R-stream so that cache misses generated by the A-stream do not slow down the R-stream.
https://en.wikipedia.org/wiki/Colortrak | Colortrak was a trademark used on several RCA color televisions beginning in the 1970s and lasting into the 1990s. After RCA was acquired by General Electric in 1986, GE began marketing sets identical to those from RCA. GE sold both the RCA and GE consumer electronics lines to Thomson SA in 1988. RCA televisions with the Colortrak branding were mid-range models, positioned above the low-end XL-100 series but below the high-end Dimensia and Colortrak 2000 series. RCA discontinued the Colortrak name in the late 1990s, with newer models badged as the Entertainment Series.
Design quirks
During the early 1980s, RCA responded to increased demand for component televisions with monitor capabilities by adding composite and S-video inputs to the Colortrak lineup. These inputs allowed owners to easily connect a stereo audio/video source, such as a videocassette recorder, LaserDisc player, or RCA SelectaVision CED videodisc player, to the television. Early composite-video-equipped RCA sets had to be tuned to non-broadcast channel 91 to display a composite video signal; if a set was equipped with more than one input, subsequent inputs were assigned to channels 92 to 95, which were usually accessed from the remote control. Later models abandoned this design in favor of A/V inputs accessible by pressing the channel up/down buttons, or A/V inputs controlled by their own button.
Tuner issues
After Thomson SA acquired the GE and RCA brand names, it began designing a new chassis for RCA and GE televisions, which debuted in 1993 models. Instead of using a tuner module soldered to the circuit board, Thomson integrated the tuner into the board itself. Due to the heating and cooling cycles of the circuit board and tuner from normal use, the solder connections between the tuner and the board would fail, causing an intermittent picture or no signal from the coaxial connector. This is easily repairable by resoldering the tuner's connections to the board.
https://en.wikipedia.org/wiki/Lazarus%20%28software%29 | Lazarus is a free, cross-platform, integrated development environment (IDE) for rapid application development (RAD) using the Free Pascal compiler. Its goal is to provide an easy-to-use development environment, as close as possible to Delphi, for programmers developing with the Object Pascal language.
Software developers use Lazarus to create native-code console and graphical user interface (GUI) applications for the desktop, and also for mobile devices, web applications, web services, visual components and function libraries for a number of different platforms, including Mac, Linux and Windows.
A project created by using Lazarus on one platform can be compiled on any other platform that the Free Pascal compiler supports. For desktop applications, a single source can target macOS, Linux, and Windows, with little or no modification. An example is the Lazarus IDE itself, created from a single code base and available on all major platforms, including the Raspberry Pi.
Features
Lazarus provides a WYSIWYG development environment for the creation of rich user interfaces, application logic, and other supporting code artifacts, similar to Borland Delphi. Along with project management features, the Lazarus IDE also provides:
A visual windows layout designer
GUI widgets or visual components such as edit boxes, buttons, dialogs, menus, etc.
Non-visual components for common behaviors such as persistence of application settings
Data-connectivity components for MySQL, PostgreSQL, Firebird, Oracle, SQLite, Sybase, and others
Data-aware widget set that allows the developer to see data in visual components in the designer to assist with development
Interactive debugger
Code completion
Code templates
Syntax highlighting
Context-sensitive help
Text resource manager for internationalization
Automatic code formatting
Extensibility via custom components
Cross-platform development
Lazarus uses Free Pascal as its back-end compiler. As Free Pascal supports cross-compiling, applications built with Lazarus can be cross-compiled from one supported platform to another.
https://en.wikipedia.org/wiki/Neutral%20particle%20oscillation | In particle physics, neutral particle oscillation is the transmutation of a particle with zero electric charge into another neutral particle due to a change of a non-zero internal quantum number, via an interaction that does not conserve that quantum number. Neutral particle oscillations were first investigated in 1954 by Murray Gell-Mann and Abraham Pais.
For example, a neutron cannot transmute into an antineutron as that would violate the conservation of baryon number. But in those hypothetical extensions of the Standard Model which include interactions that do not strictly conserve baryon number, neutron–antineutron oscillations are predicted to occur.
Such oscillations can be classified into two types:
Particle–antiparticle oscillation (for example, oscillation between the neutral kaon K0 and its antiparticle K̄0).
Flavor oscillation (for example, neutrino flavor oscillation such as νe ↔ νμ).
In those cases where the particles decay to some final product, the system is not purely oscillatory, and an interference between oscillation and decay is observed.
History and motivation
CP violation
After the striking evidence for parity violation provided by Wu et al. in 1957, it was assumed that CP (charge conjugation–parity) is the quantity which is conserved. However, in 1964 Cronin and Fitch reported CP violation in the neutral kaon system. They observed the long-lived KL (with CP = −1) undergoing decays into two pions (a final state with CP = +1), thereby violating CP conservation.
In 2001, CP violation in the neutral B meson system was confirmed by the BaBar and Belle experiments. Direct CP violation in the B0 system was reported by both labs by 2005.
The K0–K̄0 and the B0–B̄0 systems can be studied as two-state systems, considering the particle and its antiparticle as the two states.
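As a sketch of that two-state treatment (standard notation assumed here, in natural units with ℏ = 1; it is not spelled out in this text), the particle and antiparticle amplitudes a(t) and b(t) evolve under an effective non-Hermitian Hamiltonian:

$$ i\,\frac{d}{dt}\begin{pmatrix} a \\ b \end{pmatrix} \;=\; \Big(\mathbf{M} - \tfrac{i}{2}\,\boldsymbol{\Gamma}\Big)\begin{pmatrix} a \\ b \end{pmatrix}. $$

Neglecting CP violation and any lifetime splitting between the two eigenstates, a particle born as P0 is found as its antiparticle at proper time t with probability

$$ P_{P^0 \to \bar{P}^0}(t) \;=\; e^{-\Gamma t}\,\sin^2\!\Big(\frac{\Delta m\, t}{2}\Big), $$

where Δm is the mass difference of the eigenstates and Γ their common decay width; the exponential factor is the decay and the sine factor the oscillation, exhibiting the combination of the two effects noted above.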
The solar neutrino problem
The pp chain in the Sun produces an abundance of electron neutrinos (νe). In 1968, R. Davis et al. first reported the results of the Homestake experiment. Also known as the Davis experiment, it used a huge tank of perchloroethylene in the Homestake mine (sited deep underground to eliminate background from cosmic rays).
https://en.wikipedia.org/wiki/Specific%20volume | In thermodynamics, the specific volume of a substance (symbol: ν, nu) is a mass-specific intrinsic property of the substance, defined as the quotient of the substance's volume (V) to its mass (m). It is the reciprocal of density ρ (rho), and it is also related to the molar volume Vm and molar mass M:

$$ \nu \;=\; \frac{V}{m} \;=\; \frac{1}{\rho} \;=\; \frac{V_{\mathrm{m}}}{M}. $$
The standard unit of specific volume is cubic meters per kilogram (m3/kg), but other units include ft3/lb, ft3/slug, or mL/g.
Specific volume for an ideal gas is related to the molar gas constant (R) and the gas's temperature (T), pressure (P), and molar mass (M) as shown:

$$ \nu \;=\; \frac{V}{m} \;=\; \frac{RT}{PM}, $$

since $PV = nRT$ and $m = nM$.
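As a quick numerical check of this relation, here is a short Python sketch; the input values are assumed (oxygen near standard conditions), not taken from the text.

```python
# Ideal-gas specific volume, nu = R*T / (P*M).
R = 8.314        # J/(mol*K), molar gas constant
T = 273.15       # K, temperature
P = 101_325      # Pa, pressure
M = 0.032        # kg/mol, molar mass of O2

nu = R * T / (P * M)
print(f"{nu:.4f} m^3/kg")   # ~0.70 m^3/kg, i.e. density ~1.43 kg/m^3
```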
Applications
Specific volume is commonly applied to:
Molar volume
Volume (thermodynamics)
Partial molar volume
Imagine a variable-volume, airtight chamber containing a certain number of atoms of oxygen gas. Consider the following four examples:
If the chamber is made smaller without allowing gas in or out, the density increases and the specific volume decreases.
If the chamber expands without letting gas in or out, the density decreases and the specific volume increases.
If the size of the chamber remains constant and new atoms of gas are injected, the density increases and the specific volume decreases.
If the size of the chamber remains constant and some atoms are removed, the density decreases and the specific volume increases.
Specific volume is a property of materials, defined as the number of cubic meters occupied by one kilogram of a particular substance. The standard unit is the meter cubed per kilogram (m3/kg or m3·kg−1).
Sometimes specific volume is expressed in terms of the number of cubic centimeters occupied by one gram of a substance. In this case, the unit is the centimeter cubed per gram (cm3/g or cm3·g−1). To convert m3/kg to cm3/g, multiply by 1000; conversely, multiply by 0.001.
Specific volume is inversely proportional to density. If the density of a substance doubles, its specific volume, as expressed in the same base units, is cut in half. If the density drops to 1/10 of its former value, the specific volume increases by a factor of 10.
https://en.wikipedia.org/wiki/OpenID | OpenID is an open standard and decentralized authentication protocol promoted by the non-profit OpenID Foundation. It allows users to be authenticated by co-operating sites (known as relying parties, or RP) using a third-party identity provider (IDP) service, eliminating the need for webmasters to provide their own ad hoc login systems, and allowing users to log in to multiple unrelated websites without having to have a separate identity and password for each. Users create accounts by selecting an OpenID identity provider, and then use those accounts to sign on to any website that accepts OpenID authentication. Several large organizations either issue or accept OpenIDs on their websites.
The OpenID standard provides a framework for the communication that must take place between the identity provider and the OpenID acceptor (the "relying party"). An extension to the standard (the OpenID Attribute Exchange) facilitates the transfer of user attributes, such as name and gender, from the OpenID identity provider to the relying party (each relying party may request a different set of attributes, depending on its requirements). The OpenID protocol does not rely on a central authority to authenticate a user's identity. Moreover, neither services nor the OpenID standard may mandate a specific means by which to authenticate users, allowing for approaches ranging from the common (such as passwords) to the novel (such as smart cards or biometrics).
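As a rough illustration of one leg of that communication, the Python sketch below assembles the authentication redirect that a relying party sends the user's browser to. The endpoint and identifier URLs are hypothetical placeholders; the openid.* parameter names themselves are defined by the OpenID 2.0 specification.

```python
from urllib.parse import urlencode

# Provider endpoint the relying party found via discovery (placeholder URL).
OP_ENDPOINT = "https://provider.example/op"

# Standard OpenID 2.0 request parameters; all URL values are hypothetical.
params = {
    "openid.ns": "http://specs.openid.net/auth/2.0",
    "openid.mode": "checkid_setup",               # interactive sign-in
    "openid.claimed_id": "https://alice.example/",
    "openid.identity": "https://alice.example/",
    "openid.return_to": "https://rp.example/finish",
    "openid.realm": "https://rp.example/",
}

# The relying party redirects the user's browser to this URL; the provider
# authenticates the user and redirects back to return_to with a signed
# assertion that the relying party must then verify.
print(OP_ENDPOINT + "?" + urlencode(params))
```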
The final version of OpenID is OpenID 2.0, finalized and published in December 2007. The term OpenID may also refer to an identifier as specified in the OpenID standard; these identifiers take the form of a unique Uniform Resource Identifier (URI), and are managed by some "OpenID provider" that handles authentication.
Adoption
There are over 1 billion OpenID-enabled accounts on the Internet (see below) and approximately 1,100,934 sites have integrated OpenID consumer support: AOL, Flickr, Google, Amazon.com, Canonical (provider name Ubuntu One), and others.