https://en.wikipedia.org/wiki/Ap%C3%A9ry%27s%20constant | In mathematics, Apéry's constant is the sum of the reciprocals of the positive cubes. That is, it is defined as the number
$$\zeta(3) = \sum_{n=1}^{\infty} \frac{1}{n^3},$$
where $\zeta$ is the Riemann zeta function. It has an approximate value of
$$\zeta(3) \approx 1.202056903159594.$$
The constant is named after Roger Apéry. It arises naturally in a number of physical problems, including in the second- and third-order terms of the electron's gyromagnetic ratio using quantum electrodynamics. It also arises in the analysis of random minimum spanning trees and in conjunction with the gamma function when solving certain integrals involving exponential functions in a quotient, which appear occasionally in physics, for instance, when evaluating the two-dimensional case of the Debye model and the Stefan–Boltzmann law.
Irrational number
The number $\zeta(3)$ was named Apéry's constant after the French mathematician Roger Apéry, who proved in 1978 that it is an irrational number. This result is known as Apéry's theorem. The original proof is complex and hard to grasp, and simpler proofs were found later.
Beukers's simplified irrationality proof involves approximating the integrand of the known triple integral for $\zeta(3)$,
$$\zeta(3) = \int_0^1 \int_0^1 \int_0^1 \frac{1}{1-xyz}\,dx\,dy\,dz,$$
by the Legendre polynomials.
In particular, van der Poorten's article chronicles this approach by noting that
where $P_n(x)$ are the Legendre polynomials, and the subsequences involved are integers or almost integers.
It is still not known whether Apéry's constant is transcendental.
Series representations
Classical
In addition to the fundamental series
$$\zeta(3) = \sum_{k=1}^{\infty} \frac{1}{k^3},$$
Leonhard Euler gave the series representation
$$\zeta(3) = \frac{\pi^2}{7}\left[1 - 4\sum_{k=1}^{\infty} \frac{\zeta(2k)}{2^{2k}(2k+1)(2k+2)}\right]$$
in 1772, which was subsequently rediscovered several times.
Fast convergence
Since the 19th century, a number of mathematicians have found convergence acceleration series for calculating decimal places of $\zeta(3)$. Since the 1990s, this search has focused on computationally efficient series with fast convergence rates (see section "Known digits").
The following series representation was found by A. A. Markov in 1890, rediscovered by Hjortnaes in 1953, and rediscovered once more |
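The Markov/Hjortnaes series mentioned above is truncated in the text; it is usually stated (in standard references, not recovered from this extract) as $\zeta(3) = \frac{5}{2}\sum_{n=1}^{\infty} \frac{(-1)^{n-1}(n!)^2}{n^3 (2n)!}$. A short numerical sketch of its fast convergence:

```python
from math import factorial

def zeta3(terms=30):
    """Approximate Apery's constant via the Markov/Hjortnaes series."""
    total = 0.0
    for n in range(1, terms + 1):
        total += (-1) ** (n - 1) * factorial(n) ** 2 / (n ** 3 * factorial(2 * n))
    return 2.5 * total

print(zeta3())  # agrees with 1.2020569... to full float precision
```

Each term shrinks roughly like $4^{-n}$, so a few dozen terms already exhaust double precision, in contrast to the fundamental series, whose error decays only like $1/N^2$.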
https://en.wikipedia.org/wiki/Native%20capacity | In computing, native capacity refers to the uncompressed storage capacity of any medium that is usually spoken of in compressed sizes. For example, tape cartridges are rated in compressed capacity, which usually assumes a 2:1 compression ratio over the native capacity. |
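The rating convention amounts to a single multiplication; the 400 GB cartridge below is a hypothetical example, not a figure from the text:

```python
native_gb = 400                       # uncompressed (native) capacity, hypothetical
assumed_ratio = 2                     # the typical 2:1 marketing compression ratio
rated_gb = native_gb * assumed_ratio  # the "compressed" capacity printed on the box
print(rated_gb)  # → 800
```

Data that does not compress at 2:1 (already-compressed video, for instance) will fit only up to the native figure.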
https://en.wikipedia.org/wiki/Linear%20amplifier | A linear amplifier is an electronic circuit whose output is proportional to its input, but capable of delivering more power into a load. The term usually refers to a type of radio-frequency (RF) power amplifier, some of which have output power measured in kilowatts, and are used in amateur radio. Other types of linear amplifier are used in audio and laboratory equipment. Linearity refers to the ability of the amplifier to produce signals that are accurate copies of the input. A linear amplifier responds to different frequency components independently, and tends not to generate harmonic distortion or intermodulation distortion. No amplifier can provide perfect linearity, however, because the amplifying devices—transistors or vacuum tubes—follow nonlinear transfer functions; circuit techniques are relied on to reduce those effects. There are a number of amplifier classes providing various trade-offs between implementation cost, efficiency, and signal accuracy.
Explanation
Linearity refers to the ability of the amplifier to produce signals that are accurate copies of the input, generally at increased power levels. Load impedance, supply voltage, input base current, and power output capabilities can affect the efficiency of the amplifier.
Class-A amplifiers can be designed to have good linearity in both single ended and push-pull topologies. Amplifiers of classes AB1, AB2 and B can be linear only when a tuned tank circuit is employed, or in the push-pull topology, in which two active elements (tubes, transistors) are used to amplify positive and negative parts of the RF cycle respectively. Class-C amplifiers are not linear in any topology.
Amplifier classes
There are a number of amplifier classes providing various trade-offs between implementation cost, efficiency, and signal accuracy. Their use in RF applications is described briefly below:
Class-A amplifiers are very inefficient; they can never have an efficiency better than 50%. The semiconductor or vacuum tube cond |
https://en.wikipedia.org/wiki/Circular%20symmetry | In geometry, circular symmetry is a type of continuous symmetry for a planar object that can be rotated by any arbitrary angle and map onto itself.
Rotational circular symmetry is isomorphic with the circle group in the complex plane, or the special orthogonal group SO(2), and unitary group U(1). Reflective circular symmetry is isomorphic with the orthogonal group O(2).
Two dimensions
A 2-dimensional object with circular symmetry would consist of concentric circles and annular domains.
Rotational circular symmetry has every cyclic symmetry Zn as a subgroup symmetry. Reflective circular symmetry has every dihedral symmetry Dihn as a subgroup symmetry.
Three dimensions
In three dimensions, a surface or solid of revolution has circular symmetry around an axis, also called cylindrical symmetry or axial symmetry. An example is a right circular cone. Circular symmetry in three dimensions has every pyramidal symmetry Cnv as a subgroup.
A double-cone, bicone, cylinder, toroid and spheroid have circular symmetry, and in addition have a bilateral symmetry perpendicular to the axis of the system (or half-cylindrical symmetry). These reflective circular symmetries have every discrete prismatic symmetry Dnh as a subgroup.
Four dimensions
In four dimensions, an object can have circular symmetry, on two orthogonal axis planes, or duocylindrical symmetry. For example, the duocylinder and Clifford torus have circular symmetry in two orthogonal axes. A spherinder has spherical symmetry in one 3-space, and circular symmetry in the orthogonal direction.
Spherical symmetry
The analogous term in three dimensions is spherical symmetry.
Rotational spherical symmetry is isomorphic with the rotation group SO(3), and can be parametrized by the Davenport chained rotations pitch, yaw, and roll. Rotational spherical symmetry has all the discrete chiral 3D point groups as subgroups. Reflectional spherical symmetry is isomorphic with the orthogonal group O(3) and has the 3-dimensional discrete po |
https://en.wikipedia.org/wiki/Solenoid%20%28mathematics%29 | This page discusses a class of topological groups. For the wrapped loop of wire, see Solenoid.
In mathematics, a solenoid is a compact connected topological space (i.e. a continuum) that may be obtained as the inverse limit of an inverse system of topological groups and continuous homomorphisms
where each $X_i$ is a circle and $f_i$ is the map that uniformly wraps the circle $X_{i+1}$ $n_{i+1}$ times ($n_{i+1} \ge 2$) around the circle $X_i$. This construction can be carried out geometrically in the three-dimensional Euclidean space R3. A solenoid is a one-dimensional homogeneous indecomposable continuum that has the structure of a compact topological group.
Solenoids were first introduced by Vietoris for the $n_i = 2$ case, and by van Dantzig for the $n_i = p$ case, where $p \ge 2$ is fixed. Such a solenoid arises as a one-dimensional expanding attractor, or Smale–Williams attractor, and forms an important example in the theory of hyperbolic dynamical systems.
Construction
Geometric construction and the Smale–Williams attractor
Each solenoid may be constructed as the intersection of a nested system of embedded solid tori in R3.
Fix a sequence of natural numbers {ni}, ni ≥ 2. Let T0 = S1 × D be a solid torus. For each i ≥ 0, choose a solid torus Ti+1 that is wrapped longitudinally ni times inside the solid torus Ti. Then their intersection
is homeomorphic to the solenoid constructed as the inverse limit of the system of circles with the maps determined by the sequence {ni}.
Here is a variant of this construction isolated by Stephen Smale as an example of an expanding attractor in the theory of smooth dynamical systems. Denote the angular coordinate on the circle S1 by t (it is defined mod 2π) and consider the complex coordinate z on the two-dimensional unit disk D. Let f be the map of the solid torus T = S1 × D into itself given by the explicit formula
This map is a smooth embedding of T into itself that preserves the foliation by meridional disks (the constants 1/2 and 1/4 are somewhat arbitrary, but it is essential tha |
https://en.wikipedia.org/wiki/Crush%20syndrome | Crush syndrome (also traumatic rhabdomyolysis or Bywaters' syndrome) is a medical condition characterized by major shock and kidney failure after a crushing injury to skeletal muscle. Crush injury is compression of the arms, legs, or other parts of the body that causes muscle swelling and/or neurological disturbances in the affected areas of the body, while crush syndrome is localized crush injury with systemic manifestations. Cases occur commonly in catastrophes such as earthquakes, in individuals who have been trapped under fallen or moving masonry.
People with crushing damage present some of the greatest challenges in field medicine, and may need a physician's attention on the site of their injury. Appropriate physiological preparation of the injured is mandatory. It may be possible to free the patient without amputation; however, field amputations may be necessary in drastic situations.
Pathophysiology
Seigo Minami, a Japanese physician, first reported the crush syndrome in 1923. He studied the pathology of three soldiers who died in World War I due to kidney failure. The renal changes were due to the buildup of excess myoglobin, resulting from the destruction of muscles from lack of oxygen. The progressive acute kidney failure is because of acute tubular necrosis.
The syndrome was later described by British physician Eric Bywaters in patients during the 1941 wartime bombing of London (the Blitz). It is a reperfusion injury that appears after the release of the crushing pressure. The mechanism is believed to be the release into the bloodstream of muscle breakdown products—notably myoglobin, potassium and phosphorus—that are the products of rhabdomyolysis (the breakdown of skeletal muscle damaged by ischemic conditions).
The specific action on the kidneys is not understood completely, but may be due partly to nephrotoxic metabolites of myoglobin.
The most devastating systemic effects can occur when the crushing pressure is suddenly released, without prope |
https://en.wikipedia.org/wiki/J.%20Heinrich%20Matthaei | Johannes Heinrich Matthaei (born 4 May 1929) is a German biochemist. He is best known for his unique contribution to solving the genetic code on 15 May 1961.
Career
Whilst a post-doctoral visitor in the laboratory of Marshall Warren Nirenberg at the NIH in Bethesda, Maryland, he discovered that a synthetic RNA polynucleotide, composed of repeating uridylic acid residues (uracil), coded for a polypeptide chain containing just one kind of amino acid, phenylalanine. In scientific terms, he discovered that polyU codes for polyphenylalanine and hence the coding unit for this amino acid is composed of a series of Us; as we now know, the genetic code is read in triplets, so the codon for phenylalanine is UUU. This single experiment opened the way to the solution of the genetic code. It was for this and later work on the genetic code that Nirenberg shared the Nobel Prize in Physiology or Medicine. In addition, Matthaei and his co-workers in the following years published a multitude of results concerning the early understanding of the form and function of the genetic code.
Why Matthaei, who personally deciphered the genetic code, was excluded from this scientific prize is one of the Nobel Prize controversies.
Later, Matthaei was a member of the Max Planck Society in Göttingen as a director.
Bibliography |
https://en.wikipedia.org/wiki/STN%20display | A super-twisted nematic (STN) display is a type of monochrome passive-matrix liquid crystal display (LCD).
History
This type of LCD was first patented by C. M. Waters and E. P. Raynes in 1982 whilst work was also conducted at the Brown Boveri Research Center, Baden, Switzerland, in 1983. For years a better scheme for multiplexing was sought. Standard twisted nematic (TN) LCDs with a 90 degrees twisted structure of the molecules have a contrast vs. voltage characteristic unfavorable for passive-matrix addressing as there is no distinct threshold voltage. STN displays, with the molecules twisted from 180 to 270 degrees, have superior characteristics.
Features
The main advantage of STN LCDs is their more pronounced electro-optical threshold allowing for passive-matrix addressing with many more lines and columns. For the first time, a prototype STN matrix display with 540x270 pixels was made by Brown Boveri (today ABB) in 1984, which was considered a breakthrough for the industry.
STN LCDs require less power and are less expensive to manufacture than TFT LCDs, another popular type of LCD that has largely superseded STN for mainstream laptops. STN displays typically suffer from lower image quality and slower response time than TFT displays. However, STN LCDs can be made purely reflective for viewing under direct sunlight. STN displays are used in some inexpensive mobile phones and informational screens of some digital products. In the early 1990s, they had been used in some portable computers such as Amstrad's PPC512 and PPC640, and in Nintendo's Game Boy.
Variants
CSTN (color super-twist nematic) is a color form for electronic display screens originally developed by Sharp Electronics. The CSTN uses red, green and blue filters to display color. The original CSTN displays developed in the early 1990s suffered from slow response times and ghosting (where text or graphic changes are blurred because the pixels cannot turn off and on fast enough). Recent advances in |
https://en.wikipedia.org/wiki/Reverse%20video | Reverse video (or invert video or inverse video or reverse screen) is a computer display technique whereby the background and text color values are inverted. On older computers, displays were usually designed to display text on a black background by default. For emphasis, the color scheme was swapped to bright background with dark text. Nowadays the two tend to be switched, since most computers today default to white as a background color. The opposite of reverse video is known as true video.
Video is usually reversed by inverting the brightness values of the pixels of the involved region of the display. If there are 256 levels of brightness, encoded as 0 to 255, the 255 value becomes 0 and vice versa. A value of 1 becomes 254, 2 becomes 253, and so on: n is swapped for (r − 1) − n, for r levels of brightness. This is occasionally called a ones' complement. If the source image is of middle brightness, reverse video can be difficult to see: 127 becomes 128, for example, which is only one level of brightness different. The computer displays where it was most commonly used were monochrome and displayed only two values, so this issue seldom arose.
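The inversion (each value n becomes (levels − 1) − n for a display with `levels` brightness levels) can be sketched directly; this is a minimal illustration, not tied to any particular graphics API:

```python
def reverse_video(pixels, levels=256):
    """Invert brightness: each value n becomes (levels - 1) - n."""
    return [levels - 1 - n for n in pixels]

print(reverse_video([0, 1, 127, 128, 254, 255]))  # → [255, 254, 128, 127, 1, 0]
```

Note how the middle values 127 and 128 merely swap places, which is why mid-brightness regions are barely changed by the inversion.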
Reverse video is commonly used in software programs as a visual aid to highlight a selection that has been made as an aid in preventing description errors, where an intended action is performed on an object that is not the one intended. It is more common in modern desktop environments to change the background to other colors such as blue, or to use a semi-transparent background to "highlight" the selected text.
On a terminal understanding ANSI escape sequences, the reverse video function is activated using the escape sequence CSI 7 m (which equals SGR 7).
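On such a terminal the sequence can be emitted directly; SGR 0 resets attributes afterwards:

```python
CSI = "\x1b["         # Control Sequence Introducer (ESC followed by "[")
REVERSE = CSI + "7m"  # SGR 7: reverse video on
RESET = CSI + "0m"    # SGR 0: all attributes off

print(f"{REVERSE}selected text{RESET} normal text")
```

The highlighted span renders with foreground and background colors swapped on any ANSI-compatible terminal.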
Accessibility
Reverse video is also sometimes used for accessibility reasons. When most computer displays were light-on-dark, it was found that users looking back and forth between a white paper and dark screen would experience eyestrain due to their pupils constantly dilating and c |
https://en.wikipedia.org/wiki/Crystallographic%20restriction%20theorem | The crystallographic restriction theorem in its basic form was based on the observation that the rotational symmetries of a crystal are usually limited to 2-fold, 3-fold, 4-fold, and 6-fold. However, quasicrystals can occur with other diffraction pattern symmetries, such as 5-fold; these were not discovered until 1982 by Dan Shechtman.
Crystals are modeled as discrete lattices, generated by a list of independent finite translations. Because discreteness requires that the spacings between lattice points have a lower bound, the group of rotational symmetries of the lattice at any point must be a finite group (alternatively, the point is the only system allowing for infinite rotational symmetry). The strength of the theorem is that not all finite groups are compatible with a discrete lattice; in any dimension, we will have only a finite number of compatible groups.
Dimensions 2 and 3
The special cases of 2D (wallpaper groups) and 3D (space groups) are most heavily used in applications, and they can be treated together.
Lattice proof
A rotation symmetry in dimension 2 or 3 must move a lattice point to a succession of other lattice points in the same plane, generating a regular polygon of coplanar lattice points. We now confine our attention to the plane in which the symmetry acts, illustrated with lattice vectors in the figure.
Now consider an 8-fold rotation, and the displacement vectors between adjacent points of the polygon. If a displacement exists between any two lattice points, then that same displacement is repeated everywhere in the lattice. So collect all the edge displacements to begin at a single lattice point. The edge vectors become radial vectors, and their 8-fold symmetry implies a regular octagon of lattice points around the collection point. But this is impossible, because the new octagon is about 80% as large as the original. The significance of the shrinking is that it is unlimited. The same construction can be repeated with the new octagon, |
https://en.wikipedia.org/wiki/Gecos%20field | The gecos field, or GECOS field is a field in each record in the /etc/passwd file on Unix and similar operating systems. On UNIX, it is the 5th of 7 fields in a record.
It is typically used to record general information about the account or its user(s) such as their real name and phone number.
Format
The typical format for the GECOS field is a comma-delimited list with this order:
User's full name (or application name, if the account is for a program)
Building and room number or contact person
Office telephone number
Home telephone number
Any other contact information (pager number, fax, external e-mail address, etc.)
In most UNIX systems, non-root users can change their own information using the chfn or chsh command.
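Extracting the field from a passwd record is a simple split on the two delimiters; the sample record below is hypothetical:

```python
# A passwd record has 7 colon-separated fields; GECOS is the 5th (index 4).
record = "alice:x:1000:1000:Alice Example,Room 101,555-0100,555-0199:/home/alice:/bin/sh"
gecos = record.split(":")[4]

# The GECOS field itself is comma-delimited, in the conventional order;
# pad with empty strings in case trailing subfields are omitted.
full_name, office, work_phone, home_phone = (gecos.split(",") + [""] * 4)[:4]
print(full_name)  # → Alice Example
```

Padding matters in practice: many accounts populate only the full-name subfield, leaving the rest empty.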
History
Some early Unix systems at Bell Labs used GECOS machines for print spooling and various other services, so this field was added to carry information on a user's GECOS identity.
Other uses
On Internet Relay Chat (IRC), the real name field is sometimes referred to as the gecos field. IRC clients are able to supply this field when connecting. Hexchat, an X-Chat fork, defaults to 'realname', TalkSoup.app on GNUstep defaults to 'John Doe', and irssi reads the operating system user's full name, replacing it with 'unknown' if not defined. Some IRC clients use this field for advertising; for example, ZNC defaulted to "Got ZNC?", but changed it to "RealName = " to match its configuration syntax in 2015.
See also
General Comprehensive Operating System |
https://en.wikipedia.org/wiki/Vessel%20element | A vessel element or vessel member (also called a xylem vessel) is one of the cell types found in xylem, the water conducting tissue of plants. Vessel elements are found in angiosperms (flowering plants) but absent from gymnosperms such as conifers. Vessel elements are the main feature distinguishing the "hardwood" of angiosperms from the "softwood" of conifers.
Anatomy
Xylem is the tissue in vascular plants that conducts water (and substances dissolved in it) upwards from the roots to the shoots. Two kinds of cell are involved in xylem transport: tracheids and vessel elements. Vessel elements are the building blocks of vessels, the conducting pathways that constitute the major part of the water transporting system in flowering plants. Vessels form an efficient system for transporting water (including necessary minerals) from the root to the leaves and other parts of the plant.
In secondary xylem – the xylem that is produced as a stem thickens rather than when it first appears – a vessel element originates from the vascular cambium. A long cell, oriented along the axis of the stem, called a "fusiform initial", divides along its length forming new vessel elements. The cell wall of a vessel element becomes strongly "lignified", i.e. it develops reinforcing material made of lignin. The side walls of a vessel element have pits: more or less circular regions in contact with neighbouring cells. Tracheids also have pits, but only vessel elements have openings at both ends that connect individual vessel elements to form a continuous tubular vessel. These end openings are called perforations or perforation plates. They have a variety of shapes: the most common are the simple perforation (a simple opening) and the scalariform perforation (several elongated openings in a ladder-like design). Other types include the foraminate perforation plate (several round openings) and the reticulate perforation plate (a net-like pattern, with many openings).
At maturity, the protoplast |
https://en.wikipedia.org/wiki/Polymersome | In biotechnology, polymersomes are a class of artificial vesicles, tiny hollow spheres that enclose a solution. Polymersomes are made using amphiphilic synthetic block copolymers to form the vesicle membrane, and have radii ranging from 50 nm to 5 µm or more. Most reported polymersomes contain an aqueous solution in their core and are useful for encapsulating and protecting sensitive molecules, such as drugs, enzymes, other proteins and peptides, and DNA and RNA fragments. The polymersome membrane provides a physical barrier that isolates the encapsulated material from external materials, such as those found in biological systems.
Synthosomes are polymersomes engineered to contain channels (transmembrane proteins) that allow certain chemicals to pass through the membrane, into or out of the vesicle. This allows for the collection or enzymatic modification of these substances.
The term "polymersome" for vesicles made from block copolymers was coined in 1999. Polymersomes are similar to liposomes, which are vesicles formed from naturally occurring lipids. While having many of the properties of natural liposomes, polymersomes exhibit increased stability and reduced permeability. Furthermore, the use of synthetic polymers enables designers to manipulate the characteristics of the membrane and thus control permeability, release rates, stability and other properties of the polymersome.
Preparation
Several different morphologies of the block copolymer used to create the polymersome have been used. The most frequently used are the linear diblock or triblock copolymers. In these cases, the block copolymer has one block that is hydrophobic; the other block or blocks are hydrophilic. Other morphologies used include comb copolymers, where the backbone block is hydrophilic and the comb branches are hydrophobic, and dendronized block copolymers, where the dendrimer portion is hydrophilic.
In the case of diblock, comb and dendronized copolymers the polymersome membrane has the |
https://en.wikipedia.org/wiki/Blum%20integer | In mathematics, a natural number n is a Blum integer if n = p × q is a semiprime for which p and q are distinct prime numbers congruent to 3 mod 4. That is, p and q must be of the form 4t + 3, for some integer t. Integers of this form are referred to as Blum primes. This means that the factors of a Blum integer are Gaussian primes with no imaginary part. The first few Blum integers are
21, 33, 57, 69, 77, 93, 129, 133, 141, 161, 177, 201, 209, 213, 217, 237, 249, 253, 301, 309, 321, 329, 341, 381, 393, 413, 417, 437, 453, 469, 473, 489, 497, ...
The integers were named for computer scientist Manuel Blum.
Properties
Given a Blum integer n, let Qn be the set of all quadratic residues modulo n that are coprime to n, and let a ∈ Qn. Then:
a has four square roots modulo n, exactly one of which is also in Qn
The unique square root of a in Qn is called the principal square root of a modulo n
The function f : Qn → Qn defined by f(x) = x2 mod n is a permutation. The inverse function of f is: f−1(x) = x^(((p−1)(q−1)+4)/8) mod n.
For every Blum integer n, −1 has a Jacobi symbol mod n of +1, although −1 is not a quadratic residue of n.
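The definition can be checked directly against the list of initial Blum integers given above; trial division suffices at this scale (an illustration, not production code):

```python
def is_prime(n):
    """Deterministic trial-division primality test (fine for small n)."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

def is_blum(n):
    """True if n = p * q with p, q distinct primes, both congruent to 3 mod 4."""
    for p in range(3, int(n ** 0.5) + 1, 2):
        if n % p == 0:       # p is the smallest odd prime factor of n
            q = n // p
            return (p != q and is_prime(p) and is_prime(q)
                    and p % 4 == 3 and q % 4 == 3)
    return False

print([n for n in range(2, 150) if is_blum(n)])
# → [21, 33, 57, 69, 77, 93, 129, 133, 141]
```

Returning on the first divisor found is sound: the smallest divisor located is prime, and if n has more than two prime factors the cofactor q fails the primality check.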
History
Before modern factoring algorithms, such as MPQS and NFS, were developed, it was thought to be useful to select Blum integers as RSA moduli. This is no longer regarded as a useful precaution, since MPQS and NFS are able to factor Blum integers with the same ease as RSA moduli constructed from randomly selected primes. |
https://en.wikipedia.org/wiki/Astronomia%20nova | Astronomia nova (English: New Astronomy) is a book, published in 1609, that contains the results of the astronomer Johannes Kepler's ten-year-long investigation of the motion of Mars.
One of the most significant books in the history of astronomy, the Astronomia nova provided strong arguments for heliocentrism and contributed valuable insight into the movement of the planets. This included the first mention of the planets' elliptical paths and the explanation of their movement as that of free-floating bodies, as opposed to objects on rotating spheres. It is recognized as one of the most important works of the Scientific Revolution.
Background
Prior to Kepler, Nicolaus Copernicus proposed in 1543 that the Earth and other planets orbit the Sun. The Copernican model of the Solar System was regarded as a device to explain the observed positions of the planets rather than a physical description.
Kepler sought for and proposed physical causes for planetary motion. His work is primarily based on the research of his mentor, Tycho Brahe. The two, though close in their work, had a tumultuous relationship. Regardless, in 1601 on his deathbed, Brahe asked Kepler to make sure that he did not "die in vain," and to continue the development of his model of the Solar System. Kepler would instead write the Astronomia nova, in which he rejects the Tychonic system, as well as the Ptolemaic system and the Copernican system. Some scholars have speculated that Kepler's dislike for Brahe may have had a hand in his rejection of the Tychonic system and formation of a new one.
By 1602, Kepler set to work on determining the orbit pattern of Mars, keeping David Fabricius informed of his progress. He suggested the possibility of an oval orbit to Fabricius by early 1604, though Fabricius did not believe it. Later in the year, Kepler wrote back with his discovery of Mars's elliptical orbit. The manuscript for Astronomia nova was completed by September 1607, and was in pr |
https://en.wikipedia.org/wiki/De%20motu%20corporum%20in%20gyrum | De motu corporum in gyrum (from Latin: "On the motion of bodies in an orbit"; abbreviated De Motu) is the presumed title of a manuscript by Isaac Newton sent to Edmond Halley in November 1684. The manuscript was prompted by a visit from Halley earlier that year when he had questioned Newton about problems then occupying the minds of Halley and his scientific circle in London, including Sir Christopher Wren and Robert Hooke.
This manuscript gave important mathematical derivations relating to the three relations now known as "Kepler's laws of planetary motion" (before Newton's work, these had not been generally regarded as scientific laws). Halley reported the communication from Newton to the Royal Society on 10 December 1684 (Old Style). After further encouragement from Halley, Newton went on to develop and write his book Philosophiæ Naturalis Principia Mathematica (commonly known as the Principia) from a nucleus that can be seen in De Motu, of which nearly all of the content also reappears in the Principia.
Contents
One of the surviving copies of De Motu was made by being entered in the Royal Society's register book, and its (Latin) text is available online.
For ease of cross-reference to the contents of De Motu that appeared again in the Principia, there are online sources for the Principia in English translation, as well as in Latin.
De motu corporum in gyrum is short enough to set out here the contents of its different sections. It contains 11 propositions, labelled as 'theorems' and 'problems', some with corollaries. Before reaching this core subject-matter, Newton begins with some preliminaries:
3 Definitions:
1: 'Centripetal force' (Newton originated this term, and its first occurrence is in this document) impels or attracts a body to some point regarded as a center. (This reappears in Definition 5 of the Principia.)
2: 'Inherent force' of a body is defined in a way that prepares for the idea of inertia and of Newton's first law (in the absence of external force, a body continues in its state of motion either at rest or in uniform motion along a stra |
https://en.wikipedia.org/wiki/Adatom | An adatom is an atom that lies on a crystal surface, and can be thought of as the opposite of a surface vacancy. This term is used in surface chemistry and epitaxy, when describing single atoms lying on surfaces and surface roughness. The word is a portmanteau of "adsorbed atom". A single atom, a cluster of atoms, or a molecule or cluster of molecules may all be referred to by the general term "adparticle". This is often a thermodynamically unfavorable state. However, cases such as graphene may provide counter-examples.
Adatom growth
"Adatom" is a portmanteau word, short for adsorbed atom. When the atom arrives at a crystal surface, it is adsorbed by the periodic potential of the crystal, thus becoming an adatom. The minima of this potential form a network of adsorption sites on the surface. There are different types of adsorption sites. Each of these sites corresponds to a different structure of the surface. There are five different types of adsorption sites, which are: on a terrace, where the adsorption site is on top of the surface layer that is growing; at the step edge, which is next to the growing layer; in the kink of a growing layer; in the step edge of a growing layer; and in the surface layer, where the adsorption site is inside the lower layer.
Out of these adsorption site types, kink sites play the most important role in crystal growth. Kink density is a major factor of growth kinetics. Attachment of an atom to the kink site, or removal of the atom from the kink, does not change the free surface energy of the crystal, since the number of broken bonds does not change. It follows that the chemical potential of an atom in the kink site is equal to that of the crystal, which means that the kink site is the one adsorption site type where an adatom becomes a part of the crystal.
Depending on the crystallography, or if the growth temperatures are higher, which gives an entropy effect, the crystal surface becomes rough, causing a greater number of kinks. Thi |
https://en.wikipedia.org/wiki/Mental%20poker | Mental poker is the common name for a set of cryptographic problems that concerns playing a fair game over distance without the need for a trusted third party. The term is also applied to the theories surrounding these problems and their possible solutions. The name comes from the card game poker which is one of the games to which this kind of problem applies. Similar problems described as two party games are Blum's flipping a coin over a distance, Yao's Millionaires' Problem, and Rabin's oblivious transfer.
The problem can be described thus: "How can one allow only authorized actors to have access to certain information while not using a trusted arbiter?" (Eliminating the trusted third-party avoids the problem of trying to determine whether the third party can be trusted or not, and may also reduce the resources required.)
In poker, this could translate to: "How can we make sure no player is stacking the deck or peeking at other players' cards when we are shuffling the deck ourselves?". In a physical card game, this would be relatively simple if the players were sitting face to face and observing each other, at least if the possibility of conventional cheating can be ruled out. However, if the players are not sitting at the same location but instead are at widely separate locations and pass the entire deck between them (using the postal mail, for instance), this suddenly becomes very difficult. And for electronic card games, such as online poker, where the mechanics of the game are hidden from the user, this is impossible unless the method used is such that it cannot allow any party to cheat by manipulating or inappropriately observing the electronic "deck".
Several protocols for doing this have been suggested, the first by Adi Shamir, Ron Rivest and Len Adleman (the creators of the RSA-encryption protocol). This protocol was the first example of two parties conducting secure computation rather than secure message transmission, employing cryptography; later on |
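The commutative-encryption idea at the heart of the SRA scheme can be sketched as follows. This is a toy with deliberately small, insecure parameters (the prime, keys, and card encoding are invented for illustration; a real implementation needs large primes and countermeasures against known leaks such as quadratic-residue information):

```python
import math

# Toy sketch of commutative encryption, the core trick of the
# Shamir-Rivest-Adleman mental poker scheme: E_k(m) = m^k mod p,
# so encrypting with two keys in either order gives the same result.
p = 8191  # a small prime (2**13 - 1); illustrative only

def make_key(e):
    # e is usable when gcd(e, p - 1) == 1, so a decryption
    # exponent d with e * d = 1 (mod p - 1) exists.
    assert math.gcd(e, p - 1) == 1
    return e, pow(e, -1, p - 1)

def enc(m, e):
    return pow(m, e, p)

def dec(c, d):
    return pow(c, d, p)

alice_e, alice_d = make_key(11)
bob_e, bob_d = make_key(17)

card = 1234  # some agreed encoding of a card, 1 <= card < p
both = enc(enc(card, alice_e), bob_e)
# The order of encryption does not matter...
assert both == enc(enc(card, bob_e), alice_e)
# ...and each party can remove their own "lock" independently.
assert dec(dec(both, bob_d), alice_d) == card
```

Because the locks commute, one player can encrypt and shuffle the deck, the other can re-encrypt and shuffle again, and a card can later be revealed by removing both locks in any order.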
https://en.wikipedia.org/wiki/Veritas%20Volume%20Manager | The Veritas Volume Manager (VVM or VxVM) is a proprietary logical volume manager from Veritas (which was part of Symantec until January 2016).
Details
It is available for Windows, AIX, Solaris, Linux, and HP-UX. A modified version is bundled with HP-UX as its built-in volume manager. It offers volume management and multipath I/O functionality (when used with the Veritas Dynamic Multi-Pathing feature). The Veritas Volume Manager Storage Administrator (VMSA) is a GUI manager.
Versions
Veritas Volume Manager 7.4.1
Release date (Windows): February 2019
Veritas Volume Manager 6.0
Release date (Windows): December 2011
Release date (UNIX): December 2011
Veritas Volume Manager 5.1
Release date (Windows): August 2008
Release date (UNIX): December 2009
Veritas Volume Manager 5.0
Release date (UNIX): August 2006
Release date (Windows): January 2007
Veritas Volume Manager 4.1
Release date (UNIX): April 2005
Release date (Windows): June 2004
Veritas Volume Manager 4.0
Release date: February 2004
Veritas Volume Manager 3.5
Release date: September 2002
Veritas Volume Manager 3.2
Veritas Volume Manager 3.1
Release date: August 2000
Veritas Volume Manager 3.0
Microsoft once licensed a version of Veritas Volume Manager for Windows 2000, allowing the operating system to store and modify large amounts of data. Symantec acquired Veritas on July 2, 2005, and claimed that Microsoft had misused its intellectual property to develop functionality in Windows Server 2003, and later Windows Vista and Windows Server 2008, that competed with Veritas' Storage Foundation, according to Michael Schallop, the director of legal affairs at Symantec. A Microsoft representative claimed that Microsoft had bought the "intellectual property rights for all relevant technologies from Veritas in 2004". The lawsuit was dropped in 2008; terms were not disclosed.
See also
Veritas Storage Foundation
Veritas Volume Replicator
Symantec Operations Readiness Tools (SORT) |
https://en.wikipedia.org/wiki/Routing%20and%20wavelength%20assignment | The routing and wavelength assignment (RWA) problem is an optical networking problem with the goal of maximizing the number of optical connections.
Definition
The general objective of the RWA problem is to maximize the number of established connections. Each connection request must be given a route and a wavelength. The wavelength must be consistent along the entire path, unless the use of wavelength converters is assumed. Two connection requests can share the same optical link, provided a different wavelength is used.
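When routes are fixed in advance, the wavelength-assignment half of the problem reduces to coloring a conflict graph: lightpaths that share a fiber link need distinct wavelengths. A minimal greedy sketch (the topology and requests are invented; real studies use ILP formulations or more refined heuristics):

```python
# Greedy wavelength assignment for fixed routes: two lightpaths that
# share a fiber link must use different wavelengths, so assignment is
# a graph-coloring problem on the paths' conflict graph.
def assign_wavelengths(paths, num_wavelengths):
    """paths: list of paths, each a list of links like ('A', 'B').
    Returns one wavelength index per path, or None if blocked."""
    assignment = []
    for path in paths:
        links = set(path)
        # wavelengths already used on any link this path traverses
        used = {w for q, w in zip(paths, assignment)
                if w is not None and links & set(q)}
        free = [w for w in range(num_wavelengths) if w not in used]
        assignment.append(free[0] if free else None)
    return assignment

# Hypothetical line network A-B-C-D with three connection requests:
paths = [
    [('A', 'B'), ('B', 'C')],   # request 1
    [('B', 'C'), ('C', 'D')],   # request 2: shares link B-C with 1
    [('A', 'B')],               # request 3: shares link A-B with 1
]
print(assign_wavelengths(paths, num_wavelengths=2))  # → [0, 1, 1]
```

Such greedy heuristics are fast but can block requests that an optimal solution would accept.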
The RWA problem can be formally defined in an integer linear program (ILP). The ILP formulation given here is taken from.
Maximize:
subject to
is the number of source-destination pairs, while is the number of connections established for each source-destination pair. is the number of links and is the number of wavelengths. is the set of paths to route connections. is a matrix which shows which source-destination pairs are active, is a matrix which shows which links are active, and is a route and wavelength assignment matrix.
Note that the above formulation assumes that the traffic demands are known a priori. This type of problem is known as Static Lightpath Establishment (SLE). The above formulation also does not consider the signal quality.
It has been shown that the SLE RWA problem is NP-complete in. The proof involves a reduction to the -graph colorability problem. In other words, solving the SLE RWA problem is as complex as finding the chromatic number of a general graph. Given that dynamic RWA is more complex than static RWA, it must be the case that dynamic RWA is also NP-complete.
Another NP-complete proof is given in. This proof involves a reduction to the Multi-commodity Flow Problem.
The RWA problem is further complicated by the need to consider signal quality. Many of the optical impairments are nonlinear, so a standard shortest path algorithm can't be used to solve them optimally even if we know the exact state of the netw |
https://en.wikipedia.org/wiki/Proofs%20involving%20the%20addition%20of%20natural%20numbers | This article contains mathematical proofs for some properties of addition of the natural numbers: the additive identity, commutativity, and associativity. These proofs are used in the article Addition of natural numbers.
Definitions
This article will use the Peano axioms for the definition of natural numbers. With these axioms, addition is defined from the constant 0 and the successor function S(a) by the two rules
[A1] a + 0 = a,
[A2] a + S(b) = S(a + b).
For the proof of commutativity, it is useful to give the name "1" to the successor of 0; that is,
1 = S(0).
For every natural number a, one has a + 1 = a + S(0) = S(a + 0) = S(a).
Proof of associativity
We prove associativity by first fixing natural numbers a and b and applying induction on the natural number c.
For the base case c = 0,
(a+b)+0 = a+b = a+(b+0)
Each equation follows by definition [A1]; the first with a + b, the second with b.
Now, for the induction. We assume the induction hypothesis, namely we assume that for some natural number c,
(a+b)+c = a+(b+c)
Then it follows that
(a + b) + S(c) = S((a + b) + c) [by the second addition rule]
= S(a + (b + c)) [by the induction hypothesis]
= a + S(b + c) [by the second addition rule]
= a + (b + S(c)) [by the second addition rule].
In other words, the statement also holds for S(c). Therefore, the induction on c is complete.
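The recursive definition and the inductive identities can be mirrored computationally. A small sketch (not part of the formal development) encoding naturals as nested applications of a successor constructor:

```python
# Peano naturals encoded structurally: ZERO is the empty tuple and
# S(n) wraps its argument, so no machine arithmetic is involved.
ZERO = ()
def S(n):
    return (n,)

def add(a, b):
    # addition defined only by the article's two rules
    if b == ZERO:
        return a            # a + 0 = a
    return S(add(a, b[0]))  # a + S(b') = S(a + b')

def to_int(n):
    # decode to a familiar integer, for readability only
    return 0 if n == ZERO else 1 + to_int(n[0])

one, two, three = S(ZERO), S(S(ZERO)), S(S(S(ZERO)))
assert add(add(one, two), three) == add(one, add(two, three))  # associativity
assert add(two, three) == add(three, two)                      # commutativity
assert add(ZERO, three) == three                               # left identity
print(to_int(add(two, three)))  # → 5
```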
Proof of identity element
Definition [A1] states directly that 0 is a right identity.
We prove that 0 is a left identity by induction on the natural number a.
For the base case a = 0, 0 + 0 = 0 by definition [A1].
Now we assume the induction hypothesis, that 0 + a = a.
Then
0 + S(a) = S(0 + a) [by the second addition rule]
= S(a) [by the induction hypothesis].
This completes the induction on a.
Proof of commutativity
We prove commutativity (a + b = b + a) by applying induction on the natural number b. First we prove the base cases b = 0 and b = S(0) = 1 (i.e. we prove that 0 and 1 commute with everything).
The base case b = 0 follows immediately from the identity element property (0 is an additive identity), which has been proved above:
a + 0 = a = 0 + a.
Next we will prove the base case b = 1, that 1 commutes with everything, i.e. for all natural numbers a, we have a + 1 = 1 + a. We will prove this by induction on a (an induction proof within an induction proof). We have pro |
https://en.wikipedia.org/wiki/Antonio%20Scarpa | Antonio Scarpa (9 May 1752 – 31 October 1832) was an Italian anatomist and professor.
Biography
Scarpa was born to an impoverished family in the frazione of Lorenzaga, Motta di Livenza, Veneto. An uncle, who was a member of the priesthood, gave him instruction until the age of 15, when he passed the entrance exam for the University of Padua. He was a pupil of Giovanni Battista Morgagni and Marc Antonio Caldani. Under the former, he became doctor of medicine on 19 May 1770; in 1772, he became professor at the University of Modena.
For a time he chose to travel, visiting Holland, France and England. When he returned to Italy, he was made professor of anatomy at the University of Pavia in 1783, on the strong recommendation of Emperor Joseph II. His lectures were so popular with students that Emperor Joseph II commissioned Leopoldo Pollack to build a new anatomic theater, now called Aula Scarpa, inside the Old Campus of the University of Pavia. He remained in that post until 1804, when he stepped down to allow his student Santo Fattori to assume the chair.
In May 1791, he was elected a Fellow of the Royal Society on account of being the "Author of some ingenious observations on the Ganglions of the Nerves, on the structure of the organs of hearing and smell, and other subjects of anatomy and Physiology".
In 1805, Napoleon was made King of Italy. He chose to visit the University of Pavia, upon which he inquired as to the whereabouts of Dr. Scarpa. He was informed that the doctor had been dismissed because of his political opinions and his refusal to take oaths, whereupon Dr. Scarpa was restored to his position as the chair. In 1821, he was elected a foreign member of the Royal Swedish Academy of Sciences.
During his lifetime he became a rich man, acquiring a collection of valuable paintings and living a wealthy lifestyle. He was a confirmed bachelor, and fathered several sons out of wedlock (whom he favoured through nepotism). In his career, he earned a reputation fo |
https://en.wikipedia.org/wiki/Deficits%20in%20attention%2C%20motor%20control%20and%20perception | DAMP—deficits in attention, motor control, and perception—is a psychiatric concept conceived by Christopher Gillberg. DAMP is defined by the presence of five properties: Problems of attention, gross and fine motor skills, perceptual deficits, and speech-language impairments. While routinely diagnosed in Scandinavian countries, the diagnosis has been rejected in the rest of the world. Minor cases of DAMP are roughly defined as a combination of developmental coordination disorder (DCD) and a pervading attention deficit.
DAMP is similar to minimal brain dysfunction (MBD), a concept that was formulated in the 1960s and has since been recognised as attention deficit hyperactivity disorder. Both concepts are related to certain psychiatric conditions, such as hyperactivity. The concept of MBD was strongly criticized by Sir Michael Rutter [Gillberg, 2003, p. 904] and several other researchers, and this led to its abandonment in the 1980s. At the same time, research showed that something similar was needed. One alternative concept was attention-deficit hyperactivity disorder (ADHD). Gillberg proposed another alternative: DAMP. Gillberg's concept was formulated in the early 1980s, and the term itself was introduced in a paper that Gillberg published in 1986 (see Gillberg [1986]). (DAMP is essentially MBD without the etiological assumptions.)
The concept of DAMP met with considerable criticism. For example, Sir Michael Rutter stated that the concept of DAMP (unlike ADHD) was "muddled" and "lacks both internal coherence and external discriminative validity ... it has no demonstrated treatment or prognostic implications"; he concluded that the concept should be abandoned. Another example is the criticism of Per-Anders Rydelius, Professor of Child Psychiatry at the Karolinska Institute, who argued that the definition of DAMP was too vague: "the borderline between DAMP and conduct disorders [is] unclear ... the borderline between DAMP and ADHD [is] unclear"; he concluded |
https://en.wikipedia.org/wiki/Calcium%20signaling | Calcium signaling is the use of calcium ions (Ca2+) to communicate and drive intracellular processes, often as a step in signal transduction. Ca2+ is important for cellular signalling because, once it enters the cytosol, it exerts allosteric regulatory effects on many enzymes and proteins. Ca2+ can act in signal transduction resulting from activation of ion channels, or as a second messenger in indirect signal transduction pathways such as those of G protein-coupled receptors.
Concentration regulation
The resting concentration of Ca2+ in the cytoplasm is normally maintained around 100 nM. This is 20,000- to 100,000-fold lower than typical extracellular concentration. To maintain this low concentration, Ca2+ is actively pumped from the cytosol to the extracellular space, the endoplasmic reticulum (ER), and sometimes into the mitochondria. Certain proteins of the cytoplasm and organelles act as buffers by binding Ca2+. Signaling occurs when the cell is stimulated to release Ca2+ ions from intracellular stores, and/or when Ca2+ enters the cell through plasma membrane ion channels. Under certain conditions, the intracellular Ca2+ concentration may begin to oscillate at a specific frequency.
Phospholipase C pathway
Specific signals can trigger a sudden increase in the cytoplasmic Ca2+ levels to 500–1,000 nM by opening channels in the ER or the plasma membrane. The most common signaling pathway that increases cytoplasmic calcium concentration is the phospholipase C (PLC) pathway.
Many cell surface receptors, including G protein-coupled receptors and receptor tyrosine kinases, activate the PLC enzyme.
PLC uses hydrolysis of the membrane phospholipid PIP2 to form IP3 and diacylglycerol (DAG), two classic secondary messengers.
DAG attaches to the plasma membrane and recruits protein kinase C (PKC).
IP3 diffuses to the ER and is bound to the IP3 receptor.
The IP3 receptor serves as a Ca2+ channel, and releases Ca2+ from the ER.
The Ca2+ binds to PKC and o |
https://en.wikipedia.org/wiki/Check%20Point | Check Point is an American-Israeli multinational provider of software and combined hardware and software products for IT security, including network security, endpoint security, cloud security, mobile security, data security and security management.
The company has approximately 6,000 employees worldwide. Headquartered in Tel Aviv, Israel, and San Carlos, California, the company has development centers in Israel and Belarus, and previously had centers in the United States (ZoneAlarm) and Sweden (the former Protect Data development centre) following acquisitions of the companies that owned them. The company has offices in over 70 locations worldwide, including main offices in North America: 10 in the United States (including San Carlos, California, and Dallas, Texas) and 4 in Canada (including Ottawa, Ontario), as well as in Europe (London, Paris, Munich, Madrid) and Asia Pacific (Singapore, Japan, Bengaluru, Sydney).
History
Check Point was established in Ramat Gan, Israel in 1993 by Gil Shwed (CEO), Marius Nacht (chairman), and Shlomo Kramer (who left Check Point in 2003). Shwed had the initial idea for the company's core technology, known as stateful inspection, which became the foundation for the company's first product, FireWall-1; soon afterwards they also developed one of the world's first VPN products, VPN-1. Shwed developed the idea while serving in Unit 8200 of the Israel Defense Forces, where he worked on securing classified networks.
Initial funding of US$250,000 was provided by venture capital fund BRM Group.
In 1994 Check Point signed an OEM agreement with Sun Microsystems, followed by a distribution agreement with HP in 1995. The same year, the U.S. head office was established in Redwood City, California.
By February 1996, the company was named worldwide firewall market leader by IDC, with a market share of 40 percent.
In June 1996 Check Point raised $67 million from its initial public offering on NASDAQ.
In 1998, Check Point established a partnership |
https://en.wikipedia.org/wiki/Negative%20refraction | Negative refraction is the electromagnetic phenomenon where light rays become refracted at an interface that is opposite to their more commonly observed positive refractive properties. Negative refraction can be obtained by using a metamaterial which has been designed to achieve a negative value for (electric) permittivity (ε) and (magnetic) permeability (μ); in such cases the material can be assigned a negative refractive index. Such materials are sometimes called "double negative" materials.
Negative refraction occurs at interfaces between materials at which one has an ordinary positive phase velocity (i.e., a positive refractive index), and the other has the more exotic negative phase velocity (a negative refractive index).
Negative phase velocity
Negative phase velocity (NPV) is a property of light propagation in a medium. There are different definitions of NPV; the most common is Victor Veselago's original proposal of opposition between the wave vector and the (Abraham) Poynting vector. Other definitions include the opposition of the wave vector to the group velocity, and of the energy to the velocity. "Phase velocity" is used conventionally, as phase velocity has the same sign as the wave vector.
A typical criterion used to determine Veselago's NPV is that the dot product of the Poynting vector and the wave vector is negative (i.e., that S · k < 0), but this definition is not covariant. While this restriction is not practically significant, the criterion has been generalized into a covariant form. Veselago NPV media are also called "left-handed (meta)materials", as the components of plane waves passing through (electric field, magnetic field, and wave vector) follow the left-hand rule instead of the right-hand rule. The terms "left-handed" and "right-handed" are generally avoided as they are also used to refer to chiral media.
Negative refractive index
One can choose to avoid directly considering the Poynting vector and wave vector of a propagating light field, and instead directly conside |
https://en.wikipedia.org/wiki/Rotating-wave%20approximation | The rotating-wave approximation is an approximation used in atom optics and magnetic resonance. In this approximation, terms in a Hamiltonian that oscillate rapidly are neglected. This is a valid approximation when the applied electromagnetic radiation is near resonance with an atomic transition, and the intensity is low. Explicitly, terms in the Hamiltonians that oscillate with frequencies ω_L + ω₀ are neglected, while terms that oscillate with frequencies ω_L − ω₀ are kept, where ω_L is the light frequency and ω₀ is a transition frequency.
The name of the approximation stems from the form of the Hamiltonian in the interaction picture, as shown below. By switching to this picture, the evolution of an atom due to the corresponding atomic Hamiltonian is absorbed into the system ket, leaving only the evolution due to the interaction of the atom with the light field to consider. It is in this picture that the rapidly oscillating terms mentioned previously can be neglected. Since in some sense the interaction picture can be thought of as rotating with the system ket, only that part of the electromagnetic wave that approximately co-rotates is kept; the counter-rotating component is discarded.
The rotating-wave approximation is closely related to, but different from, the secular approximation.
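The effect of dropping the fast terms can be illustrated numerically: near resonance, the counter-rotating phase factor averages to nearly zero over a window that is long compared with 1/(ω_L + ω₀) but short compared with 1/|ω_L − ω₀|, while the co-rotating factor barely moves. The frequencies and window below are arbitrary illustrative values:

```python
import cmath

# Light at frequency w_L drives a transition at w_0 (arbitrary units,
# nearly resonant). Average the co- and counter-rotating phase factors
# over a window T: many fast periods, yet short versus 1/|w_L - w_0|.
w_L, w_0 = 1.00, 0.98
T, N = 50.0, 200000
ts = [T * k / N for k in range(N + 1)]

def avg_magnitude(freq):
    # magnitude of the time average of exp(i * freq * t) over [0, T]
    total = sum(cmath.exp(1j * freq * t) for t in ts)
    return abs(total / len(ts))

print(round(avg_magnitude(w_L - w_0), 3))  # ~0.959: kept by the RWA
print(round(avg_magnitude(w_L + w_0), 3))  # ~0.014: safely neglected
```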
Mathematical formulation
For simplicity consider a two-level atomic system with ground and excited states |g⟩ and |e⟩, respectively (using the Dirac bracket notation). Let the energy difference between the states be ħω₀, so that ω₀ is the transition frequency of the system. Then the unperturbed Hamiltonian of the atom can be written as
H₀ = ħω₀|e⟩⟨e|.
Suppose the atom experiences an external classical electric field of frequency ω_L, given by
E(t) = E₀ cos(ω_L t);
e.g., a plane wave propagating in space. Then under the dipole approximation the interaction Hamiltonian between the atom and the electric field can be expressed as
H₁ = −d · E,
where d is the dipole moment operator of the atom. The total Hamiltonian for the atom-light system is therefore |
https://en.wikipedia.org/wiki/Rudder%20ratio | Rudder ratio refers to a value monitored by the computerized flight control systems of modern aircraft. The ratio relates the aircraft's airspeed to the rudder deflection setting in effect at the time. As an aircraft accelerates, the rudder deflection available within the range of rudder pedal depression by the pilot must be reduced proportionately. This automatic reduction is needed because full rudder deflection in high-speed flight would cause the aircraft to yaw sharply and violently (swing from side to side), leading to loss of control and damage to the rudder, tail, and other structures, and possibly causing the aircraft to crash.
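As a rough illustration of the idea (the numbers and the linear schedule are invented, not any real aircraft's control law), a rudder-ratio limiter can be modeled as a deflection ceiling that shrinks with airspeed:

```python
# Hypothetical rudder-ratio schedule: the rudder deflection available
# for full pedal input shrinks as airspeed grows. All numbers are
# invented for illustration and taken from no real aircraft.
def max_deflection(airspeed_kt, low=30.0, high=250.0,
                   full_deg=30.0, min_deg=5.0):
    if airspeed_kt <= low:
        return full_deg   # full rudder authority at low speed
    if airspeed_kt >= high:
        return min_deg    # strongly limited at high speed
    frac = (airspeed_kt - low) / (high - low)
    return full_deg + frac * (min_deg - full_deg)  # linear blend

def commanded_deflection(pedal_frac, airspeed_kt):
    # pedal_frac in [-1, 1]: full pedal travel maps onto the current limit
    return pedal_frac * max_deflection(airspeed_kt)

print(commanded_deflection(1.0, 30))   # → 30.0
print(commanded_deflection(1.0, 250))  # → 5.0
```

The same pedal input thus commands progressively less rudder as airspeed increases.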
See also
American Airlines Flight 587
Aerospace engineering
Engineering ratios |
https://en.wikipedia.org/wiki/Max%20Planck%20Institute%20for%20Gravitational%20Physics | The Max Planck Institute for Gravitational Physics (Albert Einstein Institute) is a Max Planck Institute whose research is aimed at investigating Einstein's theory of relativity and beyond: Mathematics, quantum gravity, astrophysical relativity, and gravitational-wave astronomy. The institute was founded in 1995 and is located in the Potsdam Science Park in Golm, Potsdam and in Hannover where it closely collaborates with the Leibniz University Hannover. Both the Potsdam and the Hannover parts of the institute are organized in three research departments and host a number of independent research groups.
The institute conducts fundamental research in mathematics, data analysis, astrophysics and theoretical physics as well as research in laser physics, vacuum technology, vibration isolation and classical and quantum optics.
When the LIGO Scientific Collaboration announced the first detection of gravitational waves, researchers of the institute were involved in modeling, detecting, analysing and characterising the signals. The institute is part of a number of collaborations and projects: it is a main partner in the gravitational-wave detector GEO600, and institute scientists develop the waveform models used in gravitational-wave detectors to detect and characterise gravitational waves. They develop detector technology and also analyze data from the detectors of the LIGO Scientific Collaboration, the Virgo Collaboration and the KAGRA Collaboration. They also play a leading role in planning and preparing the space-based detector LISA (planned launch date: 2034) and are involved in developing the third generation of ground-based gravitational-wave detectors (Einstein Telescope, Cosmic Explorer). The institute is also a major player in the Einstein@Home and PyCBC projects.
From 1998 to 2015, the institute published the open access review journal Living Reviews in Relativity.
History
The newly founded institute started its work in Ap |
https://en.wikipedia.org/wiki/Electric%20catfish | Electric catfish or Malapteruridae is a family of catfishes (order Siluriformes). This family includes two genera, Malapterurus and Paradoxoglanis, with 21 species. Several species of this family can generate electricity, delivering shocks of up to 350 volts from their electric organs. Electric catfish are found in tropical Africa and the Nile River. Electric catfish are usually nocturnal and carnivorous. Some species feed primarily on other fish, incapacitating their prey with electric discharges, but others are generalist bottom foragers, feeding on things like invertebrates, fish eggs, and detritus. The largest can grow to about 1.2 meters (3 ft) long, but most species are far smaller.
Description
The Malapteruridae are the only group of catfish with well-developed electrogenic organs; however, electroreceptive systems are widespread in catfishes. The electrogenic organ is derived from anterior body musculature and lines the body cavity. Electric catfish do not have dorsal fins or fin spines. They have three pairs of barbels (the nasal pair is absent). The swim bladder has elongate posterior chambers, two chambers in Malapterurus and three in Paradoxoglanis.
Malapterurus have been conditioned by means of reward to discharge on signal. As reported in the New York Times, April 2, 1967, a researcher, Dr. Frank J. Mandriota of City College, New York, conditioned an M. electricus to discharge on a light signal for a reward of live worms delivered automatically. This is the first conditioning that modified neither glandular nor muscular responses.
The largest can grow to about 1.2 meters (3 ft) long. Most Malapterurus and all Paradoxoglanis species are much smaller.
Relationship to humans
The electric catfish of the Nile was well known to the ancient Egyptians. The Egyptians reputedly used the electric shock from them when treating arthritis pain. They would use only smaller fish, as a large fish may generate an electric sh |
https://en.wikipedia.org/wiki/Electron%20optics | Electron optics is a mathematical framework for the calculation of electron trajectories in the presence of electromagnetic fields. The term optics is used because magnetic and electrostatic lenses act upon a charged particle beam similarly to optical lenses upon a light beam.
Electron optics calculations are crucial for the design of electron microscopes and particle accelerators. In the paraxial approximation, trajectory calculations can be carried out using ray transfer matrix analysis.
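A minimal sketch of ray transfer matrix analysis in the paraxial approximation, with a ray represented as (position, slope) and a thin lens and a drift as 2×2 matrices (the focal length and ray offset are illustrative values):

```python
# Paraxial ray transfer matrices: a ray is (x, x') -- transverse
# position and slope -- and each optical element is a 2x2 matrix
# acting on that vector.
def drift(d):
    return [[1.0, d], [0.0, 1.0]]         # free propagation over distance d

def thin_lens(f):
    return [[1.0, 0.0], [-1.0 / f, 1.0]]  # thin lens of focal length f

def apply(m, ray):
    x, xp = ray
    return (m[0][0] * x + m[0][1] * xp,
            m[1][0] * x + m[1][1] * xp)

f = 2.0                      # focal length (illustrative)
ray = (0.002, 0.0)           # ray parallel to the axis, 2 mm off axis
ray = apply(thin_lens(f), ray)
ray = apply(drift(f), ray)   # propagate one focal length past the lens
print(ray[0])                # → 0.0: parallel rays cross the axis at the focus
```

Whole systems are modeled by multiplying the element matrices together, which is why the formalism is convenient for lens-column design.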
Electron properties
Electrons are charged particles (point charges with rest mass) with spin 1/2 (hence they are fermions). Electrons can be accelerated by suitable electric (or magnetic) fields, thereby acquiring kinetic energy. Given sufficient voltage, electrons can be accelerated fast enough to exhibit measurable relativistic effects. According to wave-particle duality, electrons can also be considered as matter waves with properties such as wavelength, phase and amplitude.
Geometric electron optics
Hamilton's optico-mechanical analogy shows that electron beams can be modeled using the concepts and mathematical formulas of light beams. The electron particle trajectory formula matches the formula of geometrical optics with a suitable electron-optical index of refraction. This index of refraction functions like the material properties of glass in altering the direction of ray propagation. In light optics, the refractive index changes abruptly at a surface between regions of constant index: the rays are controlled by the shape of the interface. In electron optics, the index varies throughout space and is controlled by electromagnetic fields created outside the electron trajectories.
Magnetic fields
Electrons interact with magnetic fields according to the second term of the Lorentz force: a cross product between the magnetic field and the electron velocity. In an infinite uniform field this results in a circular motion of the electron around the field directi |
https://en.wikipedia.org/wiki/Cycles%20and%20fixed%20points | In mathematics, the cycles of a permutation π of a finite set S correspond bijectively to the orbits of the subgroup generated by π acting on S. These orbits are subsets of S that can be written as { c1, ..., cn }, such that
π(ci) = ci+1 for i = 1, ..., n − 1, and π(cn) = c1.
The corresponding cycle of π is written as ( c1 c2 ... cn ); this expression is not unique since c1 can be chosen to be any element of the orbit.
The size n of the orbit is called the length of the corresponding cycle; when n = 1, the single element in the orbit is called a fixed point of the permutation.
A permutation is determined by giving an expression for each of its cycles, and one notation for permutations consists of writing such expressions one after another in some order. For example, let
π = ( 1 2 3 4 5 6 7 8
      2 4 1 3 5 8 7 6 )
be a permutation that maps 1 to 2, 6 to 8, etc. Then one may write
π = ( 1 2 4 3 ) ( 5 ) ( 6 8 ) ( 7 ) = ( 7 ) ( 1 2 4 3 ) ( 6 8 ) ( 5 ) = ( 4 3 1 2 ) ( 8 6 ) ( 5 ) ( 7 ) = ...
Here 5 and 7 are fixed points of π, since π(5) = 5 and π(7) = 7. It is typical, but not necessary, to omit the cycles of length one in such an expression. Thus, π = ( 1 2 4 3 ) ( 6 8 ) would be an appropriate way to express this permutation.
There are different ways to write a permutation as a list of its cycles, but the number of cycles and their contents are given by the partition of S into orbits, and these are therefore the same for all such expressions.
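Computing a cycle decomposition is straightforward; a sketch for the example permutation above (1→2, 2→4, 4→3, 3→1, 6→8, 8→6, with 5 and 7 fixed), starting each cycle at its smallest element not yet visited:

```python
# Decompose a permutation (given as a dict) into its cycles, starting
# each cycle at its smallest unvisited element.
def cycles(perm):
    seen, result = set(), []
    for start in sorted(perm):
        if start in seen:
            continue
        cycle, x = [], start
        while x not in seen:
            seen.add(x)
            cycle.append(x)
            x = perm[x]
        result.append(tuple(cycle))
    return result

# the example permutation ( 1 2 4 3 )( 5 )( 6 8 )( 7 ) from the text
perm = {1: 2, 2: 4, 3: 1, 4: 3, 5: 5, 6: 8, 7: 7, 8: 6}
print(cycles(perm))  # → [(1, 2, 4, 3), (5,), (6, 8), (7,)]
fixed_points = [c[0] for c in cycles(perm) if len(c) == 1]
print(fixed_points)  # → [5, 7]
```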
Counting permutations by number of cycles
The unsigned Stirling number of the first kind, s(k, j) counts the number of permutations of k elements with exactly j disjoint cycles.
Properties
(1) For every k > 0: s(k, k) = 1.
(2) For every k > 0: s(k, 1) = (k − 1)!.
(3) For every k > j > 1: s(k, j) = s(k − 1, j − 1) + (k − 1) · s(k − 1, j).
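The recurrence s(k, j) = s(k − 1, j − 1) + (k − 1) · s(k − 1, j), together with s(k, k) = 1, can be checked against a brute-force count of permutations by cycle number (a sketch, exponential in k, for small k only):

```python
from itertools import permutations
from math import factorial

def stirling(k, j):
    # unsigned Stirling numbers of the first kind via the recurrence
    if k == j:
        return 1
    if j < 1 or j > k:
        return 0
    return stirling(k - 1, j - 1) + (k - 1) * stirling(k - 1, j)

def count_cycles(p):
    # number of cycles (fixed points included) of a permutation tuple
    seen, count = set(), 0
    for start in range(len(p)):
        if start not in seen:
            count += 1
            x = start
            while x not in seen:
                seen.add(x)
                x = p[x]
    return count

# s(k, j) agrees with a brute-force count over all k! permutations
for k in range(1, 6):
    for j in range(1, k + 1):
        brute = sum(1 for p in permutations(range(k))
                    if count_cycles(p) == j)
        assert stirling(k, j) == brute

assert stirling(4, 2) == 11
assert stirling(5, 1) == factorial(4)  # property (2): s(k, 1) = (k - 1)!
print("recurrence verified")
```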
Reasons for properties
(1) There is only one way to construct a permutation of k elements with k cycles: Every cycle must have length 1 so every element must be a fixed point.
(2.a) Every cycle of length k may be written as a permutation of the numbers 1 to k; there are k! of these permutations.
(2.b) There are k different ways to write a given cycle of length k, e.g. ( 1 2 |
https://en.wikipedia.org/wiki/Perfect%20set%20property | In descriptive set theory, a subset of a Polish space has the perfect set property if it is either countable or has a nonempty perfect subset (Kechris 1995, p. 150). Note that having the perfect set property is not the same as being a perfect set.
As nonempty perfect sets in a Polish space always have the cardinality of the continuum, and the reals form a Polish space, a set of reals with the perfect set property cannot be a counterexample to the continuum hypothesis, stated in the form that every uncountable set of reals has the cardinality of the continuum.
The Cantor–Bendixson theorem states that closed sets of a Polish space X have the perfect set property in a particularly strong form: any closed subset of X may be written uniquely as the disjoint union of a perfect set and a countable set. In particular, every uncountable Polish space has the perfect set property, and can be written as the disjoint union of a perfect set and a countable open set.
The axiom of choice implies the existence of sets of reals that do not have the perfect set property, such as Bernstein sets. However, in Solovay's model, which satisfies all axioms of ZF but not the axiom of choice, every set of reals has the perfect set property, so the use of the axiom of choice is necessary. Every analytic set has the perfect set property. It follows from the existence of sufficiently large cardinals that every projective set has the perfect set property. |
https://en.wikipedia.org/wiki/Electroceramics | Electroceramics are a class of ceramic materials used primarily for their electrical properties.
While ceramics have traditionally been admired and used for their mechanical, thermal and chemical stability, their unique electrical, optical and magnetic properties have become of increasing importance in many key technologies including communications, energy conversion and storage, electronics and automation. Such materials are now classified under electroceramics, as distinguished from other functional ceramics such as advanced structural ceramics.
Historically, developments in the various subclasses of electroceramics have paralleled the growth of new technologies. Examples include: ferroelectrics - high dielectric capacitors, non-volatile memories; ferrites - data and information storage; solid electrolytes - energy storage and conversion; piezoelectrics - sonar; semiconducting oxides - environmental monitoring. Recent advances in these areas are described in the Journal of Electroceramics.
Dielectric ceramics
Dielectric materials used for the construction of ceramic capacitors include: lead zirconate titanate (PZT), barium titanate (BT), strontium titanate (ST), calcium titanate (CT), magnesium titanate (MT), calcium magnesium titanate (CMT), zinc titanate (ZT), lanthanum titanate (LT), neodymium titanate (NT), barium zirconate (BZ), calcium zirconate (CZ), lead magnesium niobate (PMN), lead zinc niobate (PZN), lithium niobate (LN), barium stannate (BS), calcium stannate (CS), magnesium aluminium silicate, magnesium silicate, barium tantalate, titanium dioxide, niobium oxide, zirconia, silica, sapphire, beryllium oxide, and zirconium tin titanate.
Some piezoelectric materials can be used as well; the EIA Class 2 dielectrics are based on mixtures rich in barium titanate. In turn, EIA Class 1 dielectrics contain little or no barium titanate.
Electronically conductive ceramics
Indium tin oxide (ITO), lanthanum-doped strontium titanate (SLT), yttrium-doped strontiu |
https://en.wikipedia.org/wiki/Solid%20solution | A solid solution, a term popularly used for metals, is a homogeneous mixture of two different kinds of atoms in the solid state, sharing a single crystal structure. Many examples can be found in metallurgy, geology, and solid-state chemistry. The word "solution" describes the intimate mixing of components at the atomic level and distinguishes these homogeneous materials from physical mixtures of components. Two terms are mainly associated with solid solutions – solvents and solutes – depending on the relative abundance of the atomic species.
In general, if two compounds are isostructural, then a solid solution will exist between the end members (also known as parents). For example, sodium chloride and potassium chloride have the same cubic crystal structure, so it is possible to make a pure compound with any ratio of sodium to potassium, (Na1-xKx)Cl, by dissolving that ratio of NaCl and KCl in water and then evaporating the solution. A member of this family is sold under the brand name Lo Salt, which is (Na0.33K0.66)Cl; hence it contains 66% less sodium than normal table salt (NaCl). The pure minerals are called halite and sylvite; a physical mixture of the two is referred to as sylvinite.
Because minerals are natural materials, they are prone to large variations in composition. In many cases specimens are members of a solid solution family, and geologists find it more helpful to discuss the composition of the family than that of an individual specimen. Olivine is described by the formula (Mg, Fe)2SiO4, which is equivalent to (Mg1−xFex)2SiO4. The ratio of magnesium to iron varies between the two endmembers of the solid solution series: forsterite (Mg endmember: Mg2SiO4) and fayalite (Fe endmember: Fe2SiO4), but the ratio in olivine is not normally defined. With increasingly complex compositions the geological notation becomes significantly easier to manage than the chemical notation.
Nomenclature
The IUPAC definition of a solid solution is a "solid in which components ar |
https://en.wikipedia.org/wiki/Stanis%C5%82aw%20Radziszowski | Stanisław P. Radziszowski (born June 7, 1953) is a Polish-American mathematician and computer scientist, best known for his work in Ramsey theory.
Radziszowski was born in Gdańsk, Poland, and received his PhD from the Institute of Informatics of the University of Warsaw in 1980. His thesis topic was "Logic and Complexity of Synchronous Parallel Computations". From 1976 to 1980 he worked as a visiting professor at various universities in Mexico City. In 1984, he moved to the United States, where he took up a position in the Department of Computer Science at the Rochester Institute of Technology.
Radziszowski has published many papers in graph theory, Ramsey theory, block designs, number theory and computational complexity.
In a 1995 paper with Brendan McKay he determined the Ramsey number R(4,5)=25. His survey of Ramsey numbers, last updated in March 2017, is a standard reference on the subject and is published in the Electronic Journal of Combinatorics.
https://en.wikipedia.org/wiki/Queen%20Saovabha%20Memorial%20Institute | The Queen Saovabha Memorial Institute (QSMI) (; ) in Bangkok, Thailand, is an institute that specialises in the husbandry of venomous snakes, the extraction and research of snake venom, and vaccines, especially rabies vaccine. It houses the snake farm, a popular tourist attraction.
The origins of the institute can be traced back to 1912, when King Rama VI granted permission for a government institute to manufacture and distribute rabies vaccine at the suggestion of Prince Damrong, whose daughter, , died from rabies infection. It was officially opened on 26 October 1913 in the Luang Building on Bamrung Muang Road as the Pastura Institute after Louis Pasteur, who discovered the first vaccine against rabies. In 1917 it was renamed the Pasteur Institute and placed under the supervision of the Thai Red Cross Society. The institute also produced vaccine against smallpox. The Travel and Immunization Clinic is now located here. It offers vaccines and pre-travel consultation.
In the early-1920s the king offered his private property for the construction of a new home for the institute on Rama IV Road. The new buildings were officially opened on 7 December 1922, now named for the king's mother, Queen Saovabha Phongsri. At the same time, the institute's first director, Dr. Leopold Robert, requested contributions from foreigners living in Thailand for the establishment of a snake farm, which would enable the institute to manufacture antivenom for snake bites. Reportedly the second snake farm in the world after Instituto Butantan in São Paulo, Brazil, it was opened on 22 November 1923 by Queen Savang Vadhana, then President of the Thai Red Cross, on the institute's premises.
Research into snake venom is highly important, since many people fall victim to venomous snake bites. Normally only an antidote based on the same snake's venom can save the victim's life.
The snake farm houses thousands of some of the most venomous snakes in the world, such as the king cobra |
https://en.wikipedia.org/wiki/Plantronics%20Colorplus | The Plantronics Colorplus is a graphics card for IBM PC computers, first sold in 1982. It is a superset of the then-current CGA standard, using the same monitor standard (4-bit digital TTL RGBI monitor) and providing the same pixel resolutions. It was produced by Frederick Electronics (of Frederick, Maryland), a subsidiary of Plantronics since 1968, and sold by Plantronics' Enhanced Graphics Products division.
The Colorplus has twice the memory of a standard CGA board (32k, compared to 16k). The additional memory can be used in graphics modes to double the color depth, giving two additional graphics modes—16 colors at resolution, or 4 colors at resolution.
It uses the same Motorola MC6845 display controller as the previous MDA and CGA adapters.
The original card also includes a parallel printer port.
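The memory arithmetic behind the doubled color depth can be sketched as follows. This is a hedged illustration: the CGA resolutions used (320×200 at 4 colors, 640×200 at 2 colors) are the standard CGA graphics modes, not figures taken from this text, and the helper function name is hypothetical.

```python
# Bytes needed for a packed-pixel framebuffer: width * height * bits-per-pixel / 8.
# Resolutions below are the standard CGA graphics modes (an outside assumption);
# doubling the color depth at the same resolution doubles the memory needed,
# which is why the Colorplus carries 32k rather than CGA's 16k.
def framebuffer_bytes(width, height, bits_per_pixel):
    return width * height * bits_per_pixel // 8

cga_4color = framebuffer_bytes(320, 200, 2)          # 16,000 bytes -> fits in 16k
plantronics_16color = framebuffer_bytes(320, 200, 4) # 32,000 bytes -> needs 32k
plantronics_4color = framebuffer_bytes(640, 200, 2)  # 32,000 bytes -> needs 32k

print(cga_4color, plantronics_16color, plantronics_4color)  # 16000 32000 32000
```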
Output capabilities
CGA compatible modes:
16 color mode (actually a text mode using the ▌, ▐ and █ block characters)
in 4 colors from a 16 color hardware palette. Pixel aspect ratio of 1:1.2.
in 2 colors. Pixel aspect ratio of 1:2.4.
with pixel font text mode (effective resolution of )
with pixel font text mode (effective resolution of )
In addition to the CGA modes, it offers:
with 16 colors
with 4 colors
"New high-resolution" text font, selectable by hardware jumper
The "new" font was actually the unused "thin" font already present in the IBM CGA ROMs, with 1-pixel-wide vertical strokes. It offered greater clarity on RGB monitors than the default "thick" 2-pixel font, which was more suitable for output to composite monitors and over RF to televisions; contrary to Plantronics' advertising claims, however, it was drawn at the same pixel resolution.
Software support
Little software made use of the enhanced Plantronics modes, for which there was no BIOS support.
A 1984 advertisement listed the following software as compatible:
Color-It
UCSD P-system
Peachtree Graphics Language
Business Graphics System
Graph Power
The Draftsman
Videogram
Stock View
GSX
CompuShow ( mode)
Some contempora |
https://en.wikipedia.org/wiki/Society%20for%20Cryobiology | The Society for Cryobiology is an international scientific society that was founded in 1964. Its objectives are to promote research in low temperature biology, to improve scientific understanding in this field, and to disseminate and aid in the application of this knowledge. The Society also publishes a journal called Cryobiology. |
https://en.wikipedia.org/wiki/Atmospheric%20electricity | Atmospheric electricity describes the electrical charges in the Earth's atmosphere (or that of another planet). The movement of charge between the Earth's surface, the atmosphere, and the ionosphere is known as the global atmospheric electrical circuit. Atmospheric electricity is an interdisciplinary topic with a long history, involving concepts from electrostatics, atmospheric physics, meteorology and Earth science.
Thunderstorms act as a giant battery in the atmosphere, charging up the electrosphere to about 400,000 volts with respect to the surface. This sets up an electric field throughout the atmosphere, which decreases with increasing altitude. Atmospheric ions created by cosmic rays and natural radioactivity move in the electric field, so a very small current flows through the atmosphere, even away from thunderstorms. Near the surface of the Earth, the magnitude of the field is on average around 100 V/m, oriented such that it drives positive charges down.
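An order-of-magnitude sketch of the "very small current" can be made from the figures above. The ~100 V/m field comes from the text; the near-surface air conductivity and Earth's surface area are outside assumptions, so the result is illustrative only.

```python
# Fair-weather current via Ohm's law in a weakly conducting medium: J = sigma * E.
E = 100.0      # V/m, fair-weather field near the surface (from the text)
sigma = 2e-14  # S/m, assumed near-surface air conductivity (typical order of magnitude)
area = 5.1e14  # m^2, Earth's surface area (outside assumption)

J = sigma * E         # current density, A/m^2
I_global = J * area   # total fair-weather current over the globe, A
print(J, I_global)    # ~2e-12 A/m^2 and ~1000 A
```

With these assumed values the global fair-weather current comes out near a kiloampere, consistent with the picture of thunderstorms continuously recharging the electrosphere.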
Atmospheric electricity involves both thunderstorms, which create lightning bolts to rapidly discharge huge amounts of atmospheric charge stored in storm clouds, and the continual electrification of the air due to ionization from cosmic rays and natural radioactivity, which ensure that the atmosphere is never quite neutral.
History
Sparks drawn from electrical machines and from Leyden jars suggested to early experimenters Hauksbee, Newton, Wall, Nollet, and Gray that lightning was caused by electric discharges. In 1708, Dr. William Wall was one of the first to observe that spark discharges resembled miniature lightning, after observing the sparks from a charged piece of amber.
Benjamin Franklin's experiments showed that electrical phenomena of the atmosphere were not fundamentally different from those produced in the laboratory, by listing many similarities between electricity and lightning. By 1749, Franklin observed lightning to possess almost all the properties observable in electrical machines.
I |
https://en.wikipedia.org/wiki/Baryon%20asymmetry | In physical cosmology, the baryon asymmetry problem, also known as the matter asymmetry problem or the matter–antimatter asymmetry problem, is the observed imbalance between baryonic matter (the type of matter experienced in everyday life) and antibaryonic matter in the observable universe. Neither the standard model of particle physics nor the theory of general relativity provides a known explanation for why this should be so, and it is natural to assume that the universe is neutral with respect to all conserved charges. The Big Bang should have produced equal amounts of matter and antimatter. Since this does not seem to have been the case, it is likely that some physical laws acted differently on, or did not apply equally to, matter and antimatter. Several competing hypotheses exist to explain the imbalance of matter and antimatter that resulted in baryogenesis. However, there is as yet no consensus theory to explain the phenomenon, which has been described as "one of the great mysteries in physics".
Sakharov conditions
In 1967, Andrei Sakharov proposed a set of three necessary conditions that a baryon-generating interaction must satisfy to produce matter and antimatter at different rates. These conditions were inspired by the recent discoveries of the cosmic background radiation and CP violation in the neutral kaon system. The three necessary "Sakharov conditions" are:
Baryon number violation.
C-symmetry and CP-symmetry violation.
Interactions out of thermal equilibrium.
Baryon number violation
Baryon number violation is a necessary condition to produce an excess of baryons over anti-baryons. But C-symmetry violation is also needed so that the interactions which produce more baryons than anti-baryons will not be counterbalanced by interactions which produce more anti-baryons than baryons. CP-symmetry violation is similarly required because otherwise equal numbers of left-handed baryons and right-handed anti-baryons would be produced, as well as equal numbers of left-hande |
https://en.wikipedia.org/wiki/SPARCstation | The SPARCstation, SPARCserver and SPARCcenter product lines are a series of SPARC-based computer workstations and servers in desktop, desk side (pedestal) and rack-based form factor configurations, that were developed and sold by Sun Microsystems.
The first SPARCstation was the SPARCstation 1 (also known as the Sun 4/60), introduced in 1989. The series was very popular and introduced the Sun-4c architecture, a variant of the Sun-4 architecture previously introduced in the Sun 4/260. Thanks in part to the delay in the development of more modern processors from Motorola, the SPARCstation series was very successful across the entire industry. The last model bearing the SPARCstation name was the SPARCstation 4. The workstation series was replaced by the Sun Ultra series in 1995; the next Sun server generation was the Sun Enterprise line introduced in 1996.
Models
Desktop and deskside SPARCstations and SPARCservers of the same model number were essentially identical systems, the only difference being that systems designated as servers were usually "headless" (that is, configured without a graphics card and monitor), and were sold with a "server" rather than a "desktop" OS license. For example, the SPARCstation 20 and SPARCserver 20 were almost identical in motherboard, CPU, case design and most other hardware specifications.
Most desktop SPARCstations and SPARCservers shipped in either "pizzabox" or "lunchbox" enclosures, a significant departure from earlier Sun and competing systems of the time. The SPARCstation 1, 2, 4, 5, 10 and 20 were "pizzabox" machines. The SPARCstation SLC and ELC were integrated into Sun monochrome monitor enclosures, and the SPARCstation IPC, IPX, SPARCclassic, SPARCclassic X and SPARCstation LX were "lunchbox" machines.
SPARCserver models ending in "30" or "70" were housed in deskside pedestal enclosures (respectively 5-slot and 12-slot VMEbus chassis); models ending in "90" and the SPARCcenter 2000 came in rackmount cabinet enclosures. T |
https://en.wikipedia.org/wiki/Mass%20flow%20rate | In physics and engineering, mass flow rate is the mass of a substance which passes per unit of time. Its unit is the kilogram per second in SI units, and the slug per second or pound per second in US customary units. The common symbol is ṁ (pronounced "m-dot"), although sometimes μ (Greek lowercase mu) is used.
Sometimes, mass flow rate is termed mass flux or mass current; see, for example, Schaum's Outline of Fluid Mechanics. In this article, the (more intuitive) definition is used.
Mass flow rate is defined by the limit ṁ = lim_{Δt→0} Δm/Δt = dm/dt, i.e., the flow of mass m through a surface per unit time t.
The overdot on the is Newton's notation for a time derivative. Since mass is a scalar quantity, the mass flow rate (the time derivative of mass) is also a scalar quantity. The change in mass is the amount that flows after crossing the boundary for some time duration, not the initial amount of mass at the boundary minus the final amount at the boundary, since the change in mass flowing through the area would be zero for steady flow.
Alternative equations
Mass flow rate can also be calculated by ṁ = ρ·V̇ = ρ·v·A, where ρ is the mass density of the fluid, V̇ the volume flow rate, v the flow velocity of the mass elements, and A the cross-sectional vector area.
The above equation is only true for a flat, plane area. In general, including cases where the area is curved, the equation becomes a surface integral: ṁ = ∬_A ρ v · dA.
The area required to calculate the mass flow rate is real or imaginary, flat or curved, either a cross-sectional area or a surface. E.g., for substances passing through a filter or a membrane, the real surface is the (generally curved) surface area of the filter, macroscopically, ignoring the area spanned by the holes in the filter/membrane. The spaces would be cross-sectional areas. For liquids passing through a pipe, the area is the cross-section of the pipe, at the section considered. The vector area is a combination of the magnitude of the area through which the mass passes, A, and a unit vector normal to the area, n̂. The relation is A = A n̂.
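For a uniform flow through a flat surface, ṁ = ρ·v·A·cos θ, where θ is the angle between the velocity and the surface normal. A minimal numerical sketch, with the water density and pipe dimensions chosen purely as illustrative assumptions:

```python
import math

# Mass flow rate m-dot = rho * v * A * cos(theta) for uniform flow through a
# flat surface; theta is the angle between the velocity and the area normal.
def mass_flow_rate(rho, speed, area, angle_rad=0.0):
    return rho * speed * area * math.cos(angle_rad)

rho_water = 1000.0               # kg/m^3 (assumed density of water)
pipe_radius = 0.05               # m (illustrative pipe)
area = math.pi * pipe_radius**2  # cross-sectional area of the pipe

print(mass_flow_rate(rho_water, 2.0, area))  # ~15.7 kg/s for a 2 m/s flow
```

When the surface is tilted, only the velocity component along the normal contributes, which is exactly what the dot product in the surface-integral form captures.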
The reason for the dot product is as follows. The only mass flowing through the cross-section |
https://en.wikipedia.org/wiki/Somos%27%20quadratic%20recurrence%20constant | In mathematics, Somos' quadratic recurrence constant, named after Michael Somos, is the number σ = √(1 √(2 √(3 √(4 ⋯)))) ≈ 1.661687…
This can be easily re-written into the far more quickly converging product representation
which can then be compactly represented in infinite product form by σ = ∏_{k=1}^∞ k^(1/2^k).
The constant σ arises when studying the asymptotic behaviour of the sequence g₀ = 1, gₙ = n·gₙ₋₁² (n ≥ 1), with first few terms 1, 1, 2, 12, 576, 1658880, ... . This sequence can be shown to have asymptotic behaviour as follows:
Guillera and Sondow give a representation in terms of the derivative of the Lerch transcendent:
where ln is the natural logarithm and Φ(z, s, q) is the Lerch transcendent.
Finally,
.
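The constant and the recurrence above can both be checked numerically. This is a sketch, not a high-precision computation: the product σ = ∏ k^(1/2^k) is summed in log space, where the 1/2^k weights make it converge very quickly in double precision.

```python
import math

# (1) Evaluate sigma via the infinite product prod k^(1/2^k), in log space;
#     terms beyond k ~ 60 are below double-precision resolution.
log_sigma = sum(math.log(k) / 2**k for k in range(1, 60))
sigma = math.exp(log_sigma)

# (2) Reproduce the sequence g_0 = 1, g_n = n * g_{n-1}^2.
g = [1]
for n in range(1, 6):
    g.append(n * g[-1] ** 2)

print(sigma)  # ~1.6616879
print(g)      # [1, 1, 2, 12, 576, 1658880]
```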
Notes |
https://en.wikipedia.org/wiki/Saethre%E2%80%93Chotzen%20syndrome | Saethre–Chotzen syndrome (SCS), also known as acrocephalosyndactyly type III, is a rare congenital disorder associated with craniosynostosis (premature closure of one or more of the sutures between the bones of the skull). This affects the shape of the head and face, resulting in a cone-shaped head and an asymmetrical face. Individuals with SCS also have droopy eyelids (ptosis), widely spaced eyes (hypertelorism), and minor abnormalities of the hands and feet (syndactyly). Individuals with more severe cases of SCS may have mild to moderate intellectual or learning disabilities. Depending on the level of severity, some individuals with SCS may require some form of medical or surgical intervention. Most individuals with SCS live fairly normal lives, regardless of whether medical treatment is needed or not.
Signs and symptoms
SCS presents in a variable fashion. The majority of individuals with SCS are moderately affected, with uneven facial features and a relatively flat face due to underdeveloped eye sockets, cheekbones, and lower jaw. In addition to the physical abnormalities, people with SCS also experience growth delays, which result in relatively short stature. Although most individuals with SCS are of normal intelligence, some may have mild to moderate mental delays. More severe cases of SCS, with more serious facial deformities, occur when multiple cranial sutures close prematurely.
Cranial defects
Flat, asymmetric head and face
Head is typically cone-shaped (acrocephaly) or flat (brachycephaly) but can also be long and narrow (dolichocephaly)
Head is short from front to back
Lopsided face
Low-set hairline causing forehead to appear tall and wide
Defects of the hands and feet
Webbing (syndactyly) between the second and third finger and between the second and third toes
Short fingers and toes (brachydactyly)
Broad thumb and/or a broad hallux (big toe) with a valgus deformity (outward angulation of the distal segment of a bone/joint)
|
https://en.wikipedia.org/wiki/Digi%20International | Digi International is an American Industrial Internet of Things (IIoT) technology company with headquarters based in Hopkins, Minnesota. The company was founded in 1985 and went public as Digi International in 1989. The company initially offered intelligent ISA/PCI boards (the 'DigiBoard') with multiple asynchronous serial interfaces for PCs. Multiport serial boards are still sold, but the company focuses on embedded and external network (wired and wireless) communications as well as scalable USB products. The company's products also include radio modems and embedded modules based on LTE (4G) communications platforms.
Acquisition history
Since going public, Digi International has acquired a number of companies.
2021 Digi acquired Ventus Holdings.
2021 Digi acquired Ctek, a company specializing in remote monitoring and industrial controls.
2021 Digi acquired Haxiot, a provider of wireless connection services.
2019 Digi acquired Opengear.
2018 Digi acquired Accelerated Concepts, a provider of secure, enterprise-grade, cellular (LTE) networking equipment for primary and backup connectivity.
2017 Digi acquired TempAlert, a provider of temperature and task management for retail pharmacy, food service, and industrial applications.
2017 Digi acquired SMART Temps, LLC, a provider of real-time food service temperature management for restaurant, grocery, education and hospital settings as well as real-time temperature management for healthcare.
2016 Digi acquired FreshTemps, temperature monitoring and task management for the food industry.
2015 Digi acquired Bluenica, Toronto-based company focused on temperature monitoring of perishable goods in the food industry.
2012 Digi acquired Etherios, a Chicago-based salesforce.com Platinum Partner.
2009 Digi acquired Mobiapps, a fabless manufacturer of satellite modems on the Orbcomm satellite network.
2008 Digi acquired Spectrum Design Solutions Inc. for $10 million, a design services company specializing in Wireless Design techn |
https://en.wikipedia.org/wiki/Strong%20cryptography | Strong cryptography or cryptographically strong are general terms used to designate cryptographic algorithms that, when used correctly, provide a very high (usually insurmountable) level of protection against any eavesdropper, including government agencies. There is no precise definition of the boundary line between strong cryptography and (breakable) weak cryptography, as this border constantly shifts due to improvements in hardware and cryptanalysis techniques. These improvements eventually place the capabilities once available only to the NSA within the reach of a skilled individual, so in practice there are only two levels of cryptographic security: "cryptography that will stop your kid sister from reading your files, and cryptography that will stop major governments from reading your files" (Bruce Schneier).
Strong cryptographic algorithms have high security strength, for practical purposes usually defined as the number of bits in the key. For example, the United States government, when dealing with export control of encryption, considers any implementation of a symmetric encryption algorithm with a key length above 56 bits, or its public key equivalent, to be strong and thus potentially subject to export licensing. To be strong, an algorithm needs to have a sufficiently long key and be free of known mathematical weaknesses, as exploitation of these effectively reduces the key size. At the beginning of the 21st century, the typical security strength of strong symmetric encryption algorithms is 128 bits (slightly lower values can still be strong, but usually there is little technical gain in using smaller key sizes).
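The gap between 56-bit and 128-bit keys can be made concrete with a back-of-the-envelope brute-force estimate. The guesses-per-second figure below is an arbitrary assumption for illustration, not a claim about any real attacker's capability.

```python
# Rough brute-force cost of exhausting a key space of 2^n keys.
# 1e12 guesses/second is an assumed (hypothetical) attacker throughput.
GUESSES_PER_SECOND = 1e12
SECONDS_PER_YEAR = 3.15576e7

def years_to_exhaust(key_bits):
    return 2**key_bits / GUESSES_PER_SECOND / SECONDS_PER_YEAR

print(years_to_exhaust(56))   # ~0.0023 years (about 20 hours): breakable
print(years_to_exhaust(128))  # ~1.1e19 years: far beyond reach
```

The point of the sketch is that each added key bit doubles the work, so the 72-bit difference between the two key lengths multiplies the attack cost by 2^72, regardless of the assumed hardware speed.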
Demonstrating the resistance of any cryptographic scheme to attack is a complex matter, requiring extensive testing and reviews, preferably in a public forum. Good algorithms and protocols are required, and good system design and implementation is needed as well. For instance, the operating system on which the cryptogra |
https://en.wikipedia.org/wiki/Spatial%20capacity | Spatial capacity is an indicator of "data intensity" in a transmission medium. It is usually used in conjunction with wireless transport mechanisms. This is analogous to the way that lumens per square meter determine illumination intensity.
Spatial capacity focuses not only on bit rates for data transfer but on bit rates available in confined spaces defined by short transmission ranges. It is measured in bits per second per square meter.
Among those leading research in spatial capacity is Jan Rabaey at the University of California, Berkeley. Some have suggested the term "spatial efficiency" as more descriptive. Mark Weiser, former chief technologist of Xerox PARC, was another contributor to the field who commented on the importance of spatial capacity.
The system spectral efficiency is the spatial capacity divided by the bandwidth in hertz of the available frequency band.
Relative spatial capacities
Engineers at Intel and elsewhere have reported the relative spatial capacities of various wireless technologies as follows:
IEEE 802.11b 1,000 (bit/s)/m²
Bluetooth 30,000 (bit/s)/m²
IEEE 802.11a 83,000 (bit/s)/m²
Ultra-wideband 1,000,000 (bit/s)/m²
IEEE 802.11g N/A
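The metric itself is a simple ratio of aggregate bit rate to coverage area. The numbers below are illustrative assumptions for a single 802.11b cell, not the inputs Intel used for the table above.

```python
import math

# Spatial capacity = aggregate bit rate / coverage area, in (bit/s)/m^2.
def spatial_capacity(total_bps, area_m2):
    return total_bps / area_m2

# Hypothetical example: one 11 Mbit/s 802.11b cell covering a 60 m radius disk.
area = math.pi * 60**2  # ~11,310 m^2
print(spatial_capacity(11e6, area))  # ~973 (bit/s)/m^2, near the 1,000 figure
```

Under these assumptions the single-cell figure lands near the tabulated 1,000 (bit/s)/m² for 802.11b; the much higher figures for short-range technologies reflect many small, dense cells sharing the same floor space.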
See also
System spectral efficiency |
https://en.wikipedia.org/wiki/Bitrate%20peeling | Bitrate peeling is a technique used in Ogg Vorbis audio encoded streams, wherein a stream can be encoded at one bitrate but can be served at that or any lower bitrate.
The purpose is to provide access to the clip for people with slower Internet connections, and yet still allow people with faster connections to enjoy the higher quality content. The server automatically chooses which stream to deliver to the user, depending on user's connection speed.
Ogg Vorbis bitrate peeling existed only as a concept, as there was not yet an encoder capable of producing peelable datastreams.
Difference from other technologies
The difference between SureStream and bitrate peeling is that SureStream is limited to only a handful of pre-defined bitrates with significant differences between them, and SureStream-encoded files are large because they contain all of the bitrates used. Bitrate peeling, in contrast, uses much smaller steps between available bitrates and qualities, and only the highest bitrate is encoded, which results in smaller files on servers.
A technique related to the SureStream approach is hierarchical modulation, used in broadcasting, where several different streams at different qualities (and bitrates) are all broadcast; the highest-quality stream is used if possible, with the lower-quality streams fallen back on if not.
Lossy and correction
A similar technology is to feature a combination of a lossy format and a lossless correction; this allows stripping the correction to easily obtain a lossy file. Such formats include MPEG-4 SLS (scalable to lossless), WavPack, DTS-HD Master Audio and OptimFROG DualStream.
SureStream example
A SureStream encoded file is encoded at bitrates of 16 kbit/s, 32 kbit/s and 96 kbit/s. The file will be about the same in size as three separate files encoded at those bitrates and put together, or one file encoded at the sum of those bitrates, which is about 144 kbit/s (16 + 32 + 96). When a dial-up |
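The sizing arithmetic in this example can be sketched directly; the clip duration below is an illustrative assumption, and the helper variables are hypothetical.

```python
# SureStream stores every bitrate; a peelable stream stores only the highest.
rates_kbit = [16, 32, 96]  # the bitrates from the example above
duration_s = 60            # illustrative clip length (assumption)

surestream_kbits = sum(rates_kbit) * duration_s  # all three streams stored
peelable_kbits = max(rates_kbit) * duration_s    # only the top bitrate stored

print(sum(rates_kbit))                     # 144 kbit/s combined, as in the text
print(surestream_kbits / peelable_kbits)   # 1.5: the SureStream file is 50% larger
```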
https://en.wikipedia.org/wiki/Gasoline%20gallon%20equivalent | Gasoline gallon equivalent (GGE) or gasoline-equivalent gallon (GEG) is the amount of an alternative fuel it takes to equal the energy content of one liquid gallon of gasoline. GGE allows consumers to compare the energy content of competing fuels against a commonly known fuel, namely gasoline.
It is difficult to compare the cost of gasoline with other fuels if they are sold in different units and physical forms. GGE attempts to solve this. One GGE of CNG and one GGE of electricity have exactly the same energy content as one gallon of gasoline. In this way, GGE provides a direct comparison of gasoline with alternative fuels, including those sold as a gas (natural gas, propane, hydrogen) and as metered electricity.
Definition
In 1994, the US National Institute of Standards and Technology (NIST) defined "gasoline gallon equivalent (GGE) [as] 5.660 pounds of natural gas." Compressed natural gas (CNG), for example, is a gas rather than a liquid. It can be measured by its volume in standard cubic feet (ft3) at atmospheric conditions, by its weight in pounds (lb), or by its energy content in joules (J), British thermal units (BTU), or kilowatt-hours (kW·h). CNG sold at filling stations in the US is priced in dollars per GGE.
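The NIST definition makes price conversion a one-line calculation. The CNG price per pound below is an illustrative assumption, not a quoted market price.

```python
# Convert a CNG price per pound into dollars per GGE using the NIST
# definition: 1 gasoline gallon equivalent = 5.660 lb of natural gas.
LB_PER_GGE = 5.660

def cng_price_per_gge(price_per_lb):
    return price_per_lb * LB_PER_GGE

# Hypothetical example: CNG sold at $0.40 per pound.
print(round(cng_price_per_gge(0.40), 3))  # 2.264 dollars per GGE
```

A station pricing CNG this way lets a driver compare the pump price directly against the per-gallon price of gasoline.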
Using GGE as a measure to compare the stored energy of various fuels for use in an internal combustion engine is only one input for consumers, who typically are interested in the annual cost of driving a vehicle, which requires considering the amount of useful work that can be extracted from a given fuel. This is measured by the car's overall efficiency. In the context of GGE, a real world measure of overall efficiency is the fuel economy or fuel consumption advertised by motor vehicle manufacturers.
Efficiency and consumption
To start, only a fraction of the stored energy of a given fuel (measured in BTU or kW-hr) can be converted to useful work by the vehicle's engine. The measure of this is engine efficiency, often called thermal efficiency in |
https://en.wikipedia.org/wiki/International%20Society%20for%20the%20Interdisciplinary%20Study%20of%20Symmetry | The International Symmetry Society ("International Society for the Interdisciplinary Study of Symmetry"; abbreviated name SIS) is an international non-governmental, non-profit organization registered in Hungary (Budapest, Vármegye u. 7. II. 3., H-1052).
Its main objectives are:
to bring together artists and scientists, educators and students devoted to, or interested in, the research and understanding of the concept and application of symmetry (asymmetry, dissymmetry);
to provide regular information to the general public about events in symmetry studies;
to ensure a regular forum (including the organization of symposia, and the publication of a periodical) for all those interested in symmetry studies.
The topic was first introduced by Russian and Polish scholars. Then in 1952, Hermann Weyl published his fascinating book Symmetry, which was later translated into 10 languages; it has since become an attractive subject of research in various fields. A variety of manifestations of the principle of symmetry in sculpture, painting, architecture, ornament, and design, and in organic and inorganic nature, has been revealed, and the philosophical and mathematical significance of this principle has been studied.
During the 1980s, discussions concerning the nature of the world, whether it was essentially probabilistic or naturally geometric, revived researchers' interest in the topic. The intellectual atmosphere of this period facilitated the idea of establishing a new institution devoted to the study of all forms of complexity and of the patterns of symmetry and orderly structure pervading science, nature and society, which ultimately led to the establishment of the International Society for the Interdisciplinary Study of Symmetry.
The Society's community comprises several branches of science and art, while symmetry studies have gained the rank of an individual interdisciplinary field in the judgement of the scientific community. The Society has member |
https://en.wikipedia.org/wiki/Phytolith | Phytoliths (from Greek, "plant stone") are rigid, microscopic structures made of silica, found in some plant tissues and persisting after the decay of the plant. These plants take up silica from the soil, whereupon it is deposited within different intracellular and extracellular structures of the plant. Phytoliths come in varying shapes and sizes. Although some use "phytolith" to refer to all mineral secretions by plants, it more commonly refers to siliceous plant remains. In contrast, mineralized calcium secretions in cacti are composed of calcium oxalates.
The silica is absorbed in the form of monosilicic acid (Si(OH)4), and is carried by the plant's vascular system to the cell walls, cell lumen, and intercellular spaces. Depending on the plant taxa and soil condition, absorbed silica can range from 0.1% to 10% of the plant's total dry weight. When deposited, the silica replicates the structure of the cells, providing structural support to the plant. Phytoliths strengthen the plant against abiotic stressors such as salt runoff, metal toxicity, and extreme temperatures. Phytoliths can also protect the plant against biotic threats such as insects and fungal diseases.
Functions
There is still debate in the scientific community as to why plants form phytoliths, and whether silica should be considered an essential nutrient for plants. Studies that have grown plants in silica-free environments have typically found that plants lacking silica in the environment do not grow well. For example, the stems of certain plants will collapse when grown in soil lacking silica. In many cases, phytoliths appear to lend structure and support to the plant, much like the spicules in sponges and leather corals. Phytoliths may also provide plants with protection. These rigid silica structures help to make plants more difficult to consume and digest, lending the plant's tissues a grainy or prickly texture. Phytoliths also appear to provide physiologic benefits. Experimental studies h |
https://en.wikipedia.org/wiki/AMOLF | AMOLF is a research institute and part of the institutes organization of the Dutch Research Council (NWO). AMOLF carries out fundamental research on the physics and design principles of natural and man-made complex matter. AMOLF uses these insights to create novel functional materials and find new solutions to societal challenges in renewable energy, green ICT and healthcare. AMOLF is located at the Amsterdam Science Park.
AMOLF used to be part of the Dutch Foundation for Fundamental Research on Matter (FOM). On 31 December 2016 FOM integrated in NWO.
History
The institute was established in 1949 by the government as the FOM Laboratory for Mass Spectrography. In 1960, it was renamed to Laboratory for Mass Separation, and in 1966 it was reorganized into a research institute and renamed FOM Institute for Atomic and Molecular Physics (AMOLF).
The original research goal was to demonstrate the separation of uranium isotopes by electromagnetic separation methods, a topic of great strategic importance after World War II. To reach this goal, a number of novel analytical instruments were developed, starting with mass-spectrometric tools. In 1953 AMOLF was the first European institute to successfully enrich uranium. Soon after, research on thermal diffusion in gases followed, as did ultracentrifuge concepts, cathode dispersion, excitation of gases by energetic ions and research on molecular beams. The gas ultracentrifuge developed at AMOLF provided a basis for the commercial enrichment of uranium at the now well-known company URENCO in Almelo.
Structure and organization
AMOLF functions as an incubator for Dutch science, both in terms of launching new research themes and in terms of training talented scientists. AMOLF is headed by its director Huib Bakker, who succeeded on 1 February 2016. The organization has 19 research groups headed by tenured or tenure-track group leaders. AMOLF employs about 130 researchers and 70 employees fo |
https://en.wikipedia.org/wiki/Nature%20Reviews%20Genetics | Nature Reviews Genetics is a monthly review journal published by Nature Portfolio. It was established in 2000 and covers the full breadth of modern genetics. The editor-in-chief is Linda Koch. The journal publishes review and perspective articles written by experts in the field subject to peer review and copy editing to provide authoritative coverage of topics. Each issue also contains Research Highlight articles – short summaries written by the editors that describe recent research papers.
According to the Journal Citation Reports, the journal has a 2021 impact factor of 59.943, ranking it 1st out of 175 journals in the category "Genetics & Heredity". |
https://en.wikipedia.org/wiki/C%20Traps%20and%20Pitfalls | C Traps and Pitfalls is a slim computer programming book by former AT&T Corporation researcher and programmer Andrew Koenig, its first edition still in print in 2017. It outlines the many ways in which beginners, and sometimes even quite experienced C programmers, can write poor, malfunctioning and dangerous source code.
It evolved from an earlier technical report, by the same name, published internally at Bell Labs. This, in turn, was inspired by a prior paper given by Koenig on "PL/I Traps and Pitfalls" at a SHARE conference in 1977. Koenig wrote that this title was inspired by a 1968 science fiction anthology by Robert Sheckley, "The People Trap and other Pitfalls, Snares, Devices and Delusions, as Well as Two Sniggles and a Contrivance". |
https://en.wikipedia.org/wiki/Society%20finch | Known as the Society finch in North America and the Bengali finch or Bengalese finch elsewhere, Lonchura striata domestica is a domesticated finch not found in nature. It became a popular cage and trade bird after appearing in European zoos in the 1860s, where it was imported from Japan. There have been many theories of the origin of its domestication, and it is now known that domestication took place primarily in Japan. Coloration and behavior were modified through centuries of selection in Asia, then later in Europe and North America.
Another aspect of the Bengali finch that evolved throughout the centuries is song production. Extensive research has been done and continues to be done on the different ways Bengali finch songs are produced, how they are processed in the brain, what characteristics of the songs are preferred by females, and how their songs compare to the also commonly studied zebra finch.
Evolution and origin
Although the English-language literature on aviculture referred to these birds as the Bengali finch, the German aviculturist Karl Russ in 1871 called them Japanese (or mew, an old word for gull, possibly related to Chinese , a pigeon breed named in France and introduced to Germany around the same time for their resemblance to gulls). The birds are members of the estrildid finch family, and most authorities consider them a domestic form of the white-rumped munia (known in aviculture as the striated finch), most likely derived from the subspecies Lonchura striata swinhoei, although some have suggested a hybrid origin.
Behavior
Bengali finches are well adapted to captivity and the company of humans. They breed well and are good foster parents for other finch-like birds.
While two males may not get along without other company, it has been found that the best "pairing" for fostering is two males; this works better than either two females or a male and female pairing. Two males will usually accept eggs or even partly grown young without any hesitation. |
https://en.wikipedia.org/wiki/J%C3%BAlio%20C%C3%A9sar%20de%20Mello%20e%20Souza | Júlio César de Mello e Souza (Rio de Janeiro, May 6, 1895 – Recife, June 18, 1974), was a Brazilian writer and mathematics teacher. He was well known in Brazil and abroad for his books on recreational mathematics, most of them published under the pen names of Malba Tahan and Breno de Alencar Bianco.
He wrote 69 novels and 51 books of mathematics and other subjects, with more than two million books sold by 1995. His most famous work, The Man Who Counted, saw its 54th printing in 2001.
Júlio César's most popular books, including The Man Who Counted, are collections of mathematical problems, puzzles, and curiosities embedded in tales inspired by the Arabian Nights. He thoroughly researched his subject matters — not only the mathematics, but also the history, geography, and culture of the Islamic Empire which was the backdrop and connecting thread of his books. Yet Júlio César's travels outside Brazil were limited to short visits to Buenos Aires, Montevideo, and Lisbon: he never set foot in the deserts and cities which he so vividly described in his books.
Júlio César was very critical of the educational methods used in Brazilian classrooms, especially for mathematics. "The mathematics teacher is a sadist," he claimed, "who loves to make everything as complicated as possible." In education, he was decades ahead of his time, and his proposals are still more praised than implemented today.
For his books, Júlio César received a prize from the prestigious Brazilian Literary Academy and was made a member of the Pernambuco Literary Academy. The Malba Tahan Institute was founded in 2004 at Queluz to preserve his legacy. The State Legislature of Rio de Janeiro designated his birthday, May 6, to be commemorated as Mathematician's Day.
Early life
Júlio César was born in Rio de Janeiro but spent most of his childhood in Queluz, a small rural town in the State of São Paulo. His father, João de Deus de Mello e Souza, was a civil servant with limited salary and eight (some |
https://en.wikipedia.org/wiki/1458%20%28number%29 | 1458 is the integer after 1457 and before 1459.
The maximum determinant of an 11 by 11 matrix of zeroes and ones is 1458.
1458 is one of three non-trivial numbers with the following property: when its base-10 digits are added together, the resulting sum, multiplied by its own reversal, yields the original number:
1 + 4 + 5 + 8 = 18
18 × 81 = 1458
The only other non-trivial numbers with this property are 81 and 1729; the trivial solutions are 1 and 0. This was proven by Masahiko Fujiwara. |
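Digit-sum fixed points like this can be verified by brute force; the short Python sketch below checks every number up to 2000 (the function name is illustrative):

```python
def digit_sum_times_reversal(n):
    """Sum the base-10 digits of n, then multiply that sum by its reversal."""
    s = sum(int(d) for d in str(n))
    return s * int(str(s)[::-1])

# Numbers below 2000 that reproduce themselves under this operation.
fixed_points = [n for n in range(2000) if digit_sum_times_reversal(n) == n]
print(fixed_points)  # [0, 1, 81, 1458, 1729]
```

For 1458 itself: 1 + 4 + 5 + 8 = 18, and 18 × 81 = 1458, so the search recovers exactly the numbers named in the text.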
https://en.wikipedia.org/wiki/Free%20and%20open-source%20graphics%20device%20driver | A free and open-source graphics device driver is a software stack which controls computer-graphics hardware and supports graphics-rendering application programming interfaces (APIs) and is released under a free and open-source software license. Graphics device drivers are written for specific hardware to work within a specific operating system kernel and to support a range of APIs used by applications to access the graphics hardware. They may also control output to the display if the display driver is part of the graphics hardware. Most free and open-source graphics device drivers are developed by the Mesa project. The driver is made up of a compiler, a rendering API, and software which manages access to the graphics hardware.
Drivers without freely (and legally) available source code are commonly known as binary drivers. Binary drivers used in the context of operating systems that are prone to ongoing development and change (such as Linux) create problems for end users and package maintainers. These problems, which affect system stability, security and performance, are the main reason for the independent development of free and open-source drivers. When no technical documentation is available, an understanding of the underlying hardware is often gained by clean-room reverse engineering. Based on this understanding, device drivers may be written and legally published under any software license.
In rare cases, a manufacturer's driver source code is available on the Internet without a free license. This means that the code can be studied and altered for personal use, but the altered (and usually the original) source code cannot be freely distributed. Solutions to bugs in the driver cannot be easily shared in the form of modified versions of the driver. Therefore, the utility of such drivers is significantly reduced in comparison to free and open-source drivers.
Problems with proprietary drivers
Software developer's view
There are objections to binary-only drive |
https://en.wikipedia.org/wiki/Springbok%20Radio | Springbok Radio (spelled Springbokradio in Afrikaans, ) was a South African nationwide radio station that operated from 1950 to 1986.
History
SABC's decision in December 1945 to develop a commercial service was constrained by post-war financial issues. After almost five years of investigation and after consulting Lord Reith of the BBC and the South African government, it decided to introduce commercial radio to supplement the SABC's public service English and Afrikaans networks and help solve the SABC's financial problems. The SABC would build the equipment and facilities and would place them at the disposal of advertisers and their agencies at cost for productions and allow them to make use of SABC's production staff.
On 1 May 1950, the first commercial radio station in South Africa, Springbok Radio, took to the air. Bilingual in English and Afrikaans, it broadcast from the Johannesburg Centre for 113 and a half hours a week. The service proved so popular with advertisers at its launch that commercial time had been booked well in advance. The service started at 6:43am with the music Vat Jou Goed en Trek, Ferreira. The first voice on air was that of Eric Egan, well remembered for his daily "Corny Crack" and catch phrase "I Love You".
Many drama programmes during the 1950s were imported from Australia, but as more funding became available, Springbok Radio produced almost all its programmes within South Africa through a network of independent production houses. By the end of 1950, some 30 per cent of Springbok Radio shows were produced by South African talent or material and independent productions were sold to sponsors. At the same time all air time had been sold or used and transmission time was extended. By the end of the 1950s, the revenue of Springbok Radio was £205,439, in 1961 it had grown to over two million rand (£1 million) and by 1970 had reached R 6.5 million.
By 1985, Springbok Radio was operating at a heavy loss. Stiff competition from television, d |
https://en.wikipedia.org/wiki/Dendroarchaeology | Dendroarchaeology is a term used for the study of vegetation remains, old buildings, artifacts, furniture, art and musical instruments using the techniques of dendrochronology (tree-ring dating). It refers to dendrochronological research of wood from the past regardless of its current physical context (in or above the soil). This form of dating is the most accurate and precise absolute dating method available to archaeologists, as the last ring that grew marks the earliest year in which the tree could have been incorporated into an archaeological structure.
Tree-ring dating is useful in that it can contribute to chronometric, environmental, and behavioral archaeological research.
The utility of tree-ring dating in an environmental sense is the most applicable of the three in today's world. Tree rings can be used to reconstruct numerous environmental variables such as temperature, precipitation, stream flow, drought severity, fire frequency and intensity, insect infestation, and atmospheric circulation patterns, among others.
History
At the beginning of the twentieth century, astronomer Andrew Ellicott Douglass first applied tree-ring dating to prehistoric North American artifacts. Through applying dendrochronology (tree-ring dating), Douglass hoped for more expansive climate studies. Douglass theorized that organic materials (trees and plant remains) could assist in visualizing past climates. Despite Dr. Douglass's contributions, archaeology as a discipline did not begin applying tree-ring dating until the 1970s, with Dr. Edward Cook and Dr. Gordon Jacoby. In 1929, American Southwestern archaeologists had charted discontinuous historic and prehistoric chronologies for the Chaco Canyon region. Tree-ring laboratory scientists from Columbia University were some of the first to apply tree-ring dating to the colonial period, specifically architectural timbers in the eastern United States. For agencies like the National Park Service and other historical societies, Dr. Jacoby and Cook began dat
https://en.wikipedia.org/wiki/Brocade%20Communications%20Systems | Brocade was an American technology company specializing in storage networking products, now a subsidiary of Broadcom Inc. The company is known for its Fibre Channel storage networking products and technology. Prior to the acquisition, the company expanded into adjacent markets including a wide range of IP/Ethernet hardware and software products. Offerings included routers and network switches for data center, campus and carrier environments, IP storage network fabrics; Network Functions Virtualization (NFV) and software-defined networking (SDN) markets such as a commercial edition of the OpenDaylight Project controller; and network management software that spans physical and virtual devices.
On November 2, 2016, Singapore-based chip maker Broadcom Limited announced it was buying Brocade for about $5.5 billion. As part of the acquisition, Broadcom divested all of the IP networking hardware and software-defined networking assets. Broadcom has since re-domesticated to the United States and is now known as Broadcom Inc.
History
Brocade was founded in August 1995, by Seth Neiman (a venture capitalist, a former executive from Sun Microsystems and a professional auto racer), Kumar Malavalli (a co-author of the Fibre Channel specification) and Paul R. Bonderson (a former executive from Intel Corporation and Sun). Neiman became the first CEO of the company. Brocade was incorporated on May 14, 1998, in Delaware.
The company's first product, SilkWorm, which was a Fibre Channel switch, was released in early 1997.
On May 25, 1999, the company went public at a split-adjusted price of $4.75. On initial public offering (IPO), the company offered 3,250,000 shares, with an additional 487,500 shares offered to the underwriters to cover over-allotments. The top three underwriters (based on number of shares) for Brocade's IPO were, in order, Morgan Stanley Dean Witter, BT Alex.Brown, and Dain Rauscher Wessels.
Brocade stock is traded in the National Market System of the NASDAQ GS |
https://en.wikipedia.org/wiki/Outside%20broadcasting | Outside broadcasting (OB) is the electronic field production (EFP) of television or radio programmes (typically to cover television news and sports television events) from a mobile remote broadcast television studio. Professional video camera and microphone signals come into the production truck for processing, recording and possibly transmission.
Some outside broadcasts use a mobile production control room (PCR) inside a production truck.
History
Outside radio broadcasts have been taking place since the early 1920s and television ones since the late 1920s. The first large-scale outside broadcast was the televising of the Coronation of George VI and Elizabeth in May 1937, done by the BBC's first Outside Broadcast truck, MCR 1 (short for Mobile Control Room).
After the Second World War, the first notable outside broadcast was of the 1948 Summer Olympics. The Coronation of Elizabeth II followed in 1953, with 21 cameras being used to cover the event.
In December 1963 instant replays were used for the first time. Director Tony Verna used the technique on the Army-Navy game which aired on CBS Sports on December 7, 1963.
The 1968 Summer Olympics was the first with competitions televised in colour. The 1972 Olympic Games were the first where all competitions were captured by outside broadcast cameras.
During the 1970s, ITV franchise holder Southern Television was unique in having an outside broadcast boat, named Southerner.
The wedding of Prince Charles and Lady Diana Spencer in July 1981 was the biggest outside broadcast at the time, with an estimated 750 million viewers.
New technology
In 2008, the first 3D outside broadcast took place with the transmission of a Calcutta Cup rugby match, but only to an audience of industry professionals who had been invited by BBC Sport.
In March 2010, the first public 3D outside broadcast took place with an NHL game between the New York Rangers and New York Islanders.
The first commercial ultra-high definition outside broad |
https://en.wikipedia.org/wiki/Photoheterotroph | Photoheterotrophs (Gk: photo = light, hetero = (an)other, troph = nourishment) are heterotrophic phototrophs—that is, they are organisms that use light for energy, but cannot use carbon dioxide as their sole carbon source. Consequently, they use organic compounds from the environment to satisfy their carbon requirements; these compounds include carbohydrates, fatty acids, and alcohols. Examples of photoheterotrophic organisms include purple non-sulfur bacteria, green non-sulfur bacteria, and heliobacteria. These microorganisms are ubiquitous in aquatic habitats, occupy unique niche-spaces, and contribute to global biogeochemical cycling. Recent research has also indicated that the oriental hornet and some aphids may be able to use light to supplement their energy supply.
Research
Studies have shown that mammalian mitochondria can also capture light and synthesize ATP when mixed with pheophorbide, a light-capturing metabolite of chlorophyll. Research demonstrated that the same metabolite, when fed to the worm Caenorhabditis elegans, leads to an increase in ATP synthesis upon light exposure, along with an increase in life span.
Furthermore, inoculation experiments suggest that mixotrophic Ochromonas danica (i.e., Golden algae)—and comparable eukaryotes—favor photoheterotrophy in oligotrophic (i.e., nutrient-limited) aquatic habitats. This preference may increase energy-use efficiency and growth by reducing investment in inorganic carbon fixation (e.g., production of autotrophic machineries such as RuBisCo and PSII).
Metabolism
Photoheterotrophs generate ATP using light, in one of two ways: they use a bacteriochlorophyll-based reaction center, or they use a bacteriorhodopsin. The chlorophyll-based mechanism is similar to that used in photosynthesis, where light excites the molecules in a reaction center and causes a flow of electrons through an electron transport chain (ETS). This flow of electrons through the proteins causes hydrogen ions to be pumped across a membrane |
https://en.wikipedia.org/wiki/Recursive%20data%20type | In computer programming languages, a recursive data type (also known as a recursively-defined, inductively-defined or inductive data type) is a data type for values that may contain other values of the same type. Data of recursive types are usually viewed as directed graphs.
An important application of recursion in computer science is in defining dynamic data structures such as Lists and Trees. Recursive data structures can dynamically grow to an arbitrarily large size in response to runtime requirements; in contrast, a static array's size requirements must be set at compile time.
Sometimes the term "inductive data type" is used for algebraic data types which are not necessarily recursive.
Example
An example is the list type, in Haskell:
data List a = Nil | Cons a (List a)
This indicates that a list of a's is either an empty list or a cons cell containing an 'a' (the "head" of the list) and another list (the "tail").
Another example is a similar singly linked type in Java:
class List<E> {
E value;
List<E> next;
}
This indicates that a non-empty list of type E contains a data member of type E, and a reference to another List object for the rest of the list (or a null reference to indicate that this is the end of the list).
Mutually recursive data types
Data types can also be defined by mutual recursion. The most important basic example of this is a tree, which can be defined mutually recursively in terms of a forest (a list of trees). Symbolically:
f: [t[1], ..., t[k]]
t: v f
A forest f consists of a list of trees, while a tree t consists of a pair of a value v and a forest f (its children). This definition is elegant and easy to work with abstractly (such as when proving theorems about properties of trees), as it expresses a tree in simple terms: a list of one type, and a pair of two types.
This mutually recursive definition can be converted to a singly recursive definition by inlining the definition of a forest:
t: v [t[1], ..., t[k]]
A tree t |
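The singly recursive tree definition above (a value v paired with a list of subtrees) can also be sketched in Python; this is an illustrative translation, not code from the article:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Tree:
    """t: v [t1, ..., tk], a value together with a list of subtrees."""
    value: str
    children: List["Tree"] = field(default_factory=list)

def size(t: Tree) -> int:
    """Count the nodes of a tree by structural recursion over its children."""
    return 1 + sum(size(c) for c in t.children)

t = Tree("root", [Tree("a"), Tree("b", [Tree("c")])])
print(size(t))  # 4
```

Functions over such a type naturally mirror its recursive shape: one case for the node, a recursive call for each child.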
https://en.wikipedia.org/wiki/Langmuir%E2%80%93Blodgett%20film | A Langmuir–Blodgett (LB) film is a nanostructured system formed when Langmuir films—or Langmuir monolayers (LM)—are transferred from the liquid-gas interface to solid supports during the vertical passage of the support through the monolayers. LB films can contain one or more monolayers of an organic material, deposited from the surface of a liquid onto a solid by immersing (or emersing) the solid substrate into (or from) the liquid. A monolayer is adsorbed homogeneously with each immersion or emersion step, so films of precisely controlled thickness can be formed: since the thickness of each monolayer is known, the total thickness of a Langmuir–Blodgett film is simply the monolayer thickness multiplied by the number of deposited layers.
The monolayers are assembled vertically and are usually composed either of amphiphilic molecules (see chemical polarity) with a hydrophilic head and a hydrophobic tail (example: fatty acids) or nowadays commonly of nanoparticles.
Langmuir–Blodgett films are named after Irving Langmuir and Katharine B. Blodgett, who invented this technique while working in Research and Development for General Electric Co.
Historical background
The path to the discovery of LB and LM films began with Benjamin Franklin in 1773, when he dropped about a teaspoon of oil onto a pond. Franklin noticed that the waves were calmed almost instantly and that the calming of the waves spread for about half an acre. What Franklin did not realize was that the oil had formed a monolayer on top of the pond surface. Over a century later, Lord Rayleigh quantified what Benjamin Franklin had seen. Knowing that the oil, oleic acid, had spread evenly over the water, Rayleigh calculated that the thickness of the film was 1.6 nm from the volume of oil dropped and the area of coverage.
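Rayleigh's estimate is simply thickness = volume / area. With rough stand-in figures (a teaspoon taken as about 5 mL and half an acre as about 2,000 m²; these numbers are assumptions for illustration, not values from the text), the same arithmetic lands in the nanometre range:

```python
volume_m3 = 5e-6    # assumed: roughly one teaspoon of oil
area_m2 = 2000.0    # assumed: roughly half an acre of coverage

# Monolayer thickness estimated as spilled volume divided by covered area.
thickness_m = volume_m3 / area_m2
print(f"{thickness_m * 1e9:.1f} nm")  # 2.5 nm, the same order as Rayleigh's 1.6 nm
```

That a kitchen-scale spill yields a molecular-scale thickness is exactly what identified the film as a monolayer.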
With the help of her kitchen sink, Agnes Pockels showed that area of films can be controlled with barriers. She added that surface tension varies with contamination of water. She used different oils to dedu |
https://en.wikipedia.org/wiki/Infinity%20focus | In optics and photography, infinity focus is the state where a lens or other optical system forms an image of an object an infinite distance away. This corresponds to the point of focus for parallel rays. The image is formed at the focal point of the lens.
In simple two lens systems such as a refractor telescope, the object at infinity forms an image at the focal point of the objective lens, which is subsequently magnified by the eyepiece. The magnification is equal to the focal length of the objective lens divided by the focal length of the eyepiece.
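The magnification relation above reduces to a single division; for example, with illustrative focal lengths of 1000 mm (objective) and 25 mm (eyepiece):

```python
def magnification(f_objective_mm: float, f_eyepiece_mm: float) -> float:
    """Angular magnification of a simple two-lens telescope focused at infinity."""
    return f_objective_mm / f_eyepiece_mm

print(magnification(1000, 25))  # 40.0
```

A longer objective or a shorter eyepiece therefore both increase magnification.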
In practice, not all photographic lenses are capable of achieving infinity focus by design. A lens used with an adapter for close-up focusing, for example, may not be able to focus to infinity. Failure of the human eye to achieve infinity focus is diagnosed as myopia.
All optics are subject to manufacturing tolerances; even with perfect manufacture, optical trains experience thermal expansion. Focus mechanisms must accommodate part variations; even custom-built systems may have some means of adjustment. For example, telescopes such as the Mars Orbiter Camera, which are nominally set to infinity, have thermal controls. Deviations from its operating temperature are actively compensated to prevent shifts of focus.
See also
Hyperfocal distance |
https://en.wikipedia.org/wiki/Network%20redirector | In DOS and Windows, a network redirector, or redirector, is an operating system driver that sends data to and receives data from a remote device. A network redirector provides mechanisms to locate, open, read, write, and delete files and submit print jobs.
It provides application services such as named pipes and MailSlots. When an application needs to send or receive data from a remote device, it sends a call to the redirector. The redirector provides the functionality of the presentation layer of the OSI model.
Network hosts communicate through the use of client software: shells, redirectors and requesters.
In Microsoft Networking, the network redirectors are implemented as Installable File System (IFS) drivers.
See also
Universal Naming Convention (UNC) |
https://en.wikipedia.org/wiki/Image-forming%20optical%20system | In optics, an image-forming optical system is a system capable of being used for imaging. The diameter of the aperture of the main objective is a common criterion for comparison among optical systems, such as large telescopes.
The two traditional optical systems are mirror-systems (catoptrics) and lens-systems (dioptrics). However, in the late twentieth century, optical fiber was introduced as a technology for transmitting images over long distances. Catoptrics and dioptrics have a focal point that concentrates light onto a specific point, while optical fiber allows the transfer of an image from one plane to another without the need for an optical focus.
Isaac Newton is reported to have designed what he called a catadioptrical phantasmagoria, which can be interpreted to mean an elaborate structure of both mirrors and lenses.
Catoptrics and optical fiber have no chromatic aberration, while dioptrics need to have this error corrected. Newton believed that such correction was impossible, because he thought the path of the light depended only on its color. In 1757 John Dollond was able to create an achromatised dioptric, which was the forerunner of the lenses used in all popular photographic equipment today.
Lower-energy X-rays are the highest-energy electromagnetic radiation that can be formed into an image, using a Wolter telescope. There are three types of Wolter telescopes. Near infrared is typically the longest wavelength that is handled optically, such as in some large telescopes. |
https://en.wikipedia.org/wiki/Prinny | are a fictional race of creatures primarily in Nippon Ichi's Disgaea series of role-playing games. First appearing in Disgaea: Hour of Darkness, they have appeared in all later titles by the company, as well as on various merchandise such as hats and plush toys. With a few notable exceptions, they are voiced by Junji Majima in Japanese releases and Grant George in the English releases from Disgaea: Hour of Darkness to Disgaea 4: A Promise Unforgotten. The Prinnies are regarded as the mascots for the Disgaea series and have received generally positive reception.
Concept and creation
The Prinny was created by Takehito Harada while he was trying to come up with a character whose thoughts players could not try to envision. He began by choosing to make it animal-based, eventually deciding on making something similar to a penguin. He was told that he could use this character as much as he wanted. While it initially started out as looking somewhat realistic, it eventually gained a doll-like form.
Appearances
A prinny is a small, usually blue, pouch-wearing penguin-like creature with disproportionately small bat wings, two peg legs where feet would normally be, and stitches at the top of the chest. When thrown, they explode on impact. A common trait of prinnies is their upbeat attitude and tendency to end their sentences with "-ssu". In the English translation, they frequently use "dood" as an interjection. Prinnies stand roughly tall, though the weight can vary. Prinnies attack with knives, bombs, and occasionally other weapons stored in their bags. While rarely mentioned in the game, prinnies have been known to dispense a beverage known as prinny juice, which according to a Nippon Ichi Software America interview is produced from "the most flavorful and delicious parts of our fresh, vine-grown Prinnies".
In Disgaea: Hour of Darkness, humans who have led a worthless life, such as thieves or murderers, or have committed a mortal sin such as suicide, have their souls sewn |
https://en.wikipedia.org/wiki/Catoptrics | Catoptrics (from katoptrikós, "specular", from katoptron "mirror") deals with the phenomena of reflected light and image-forming optical systems using mirrors. A catoptric system is also called a catopter (catoptre).
Ancient texts
Catoptrics is the title of two texts from ancient Greece:
The Pseudo-Euclidean Catoptrics. This book is attributed to Euclid, although the contents are a mixture of work dating from Euclid's time together with work which dates to the Roman period. It has been argued that the book may have been compiled by the 4th century mathematician Theon of Alexandria. The book covers the mathematical theory of mirrors, particularly the images formed by plane and spherical concave mirrors.
Hero's Catoptrics. Written by Hero of Alexandria, this work concerns the practical application of mirrors for visual effects. In the Middle Ages, this work was falsely ascribed to Ptolemy. It only survives in a Latin translation.
The Latin translation of Alhazen's (Ibn al-Haytham) main work, Book of Optics (Kitab al-Manazir), exerted a great influence on Western science: for example, on the work of Roger Bacon, who cites him by name. His research in catoptrics (the study of optical systems using mirrors) centred on spherical and parabolic mirrors and spherical aberration. He made the observation that the ratio between the angle of incidence and refraction does not remain constant, and investigated the magnifying power of a lens. His work on catoptrics also contains the problem known as "Alhazen's problem". Alhazen's work influenced Averroes' writings on optics, and his legacy was further advanced through the 'reforming' of his Optics by Persian scientist Kamal al-Din al-Farisi (d. ca. 1320) in the latter's Kitab Tanqih al-Manazir (The Revision of [Ibn al-Haytham's] Optics).
Catoptric telescopes
The first practical catoptric telescope (the "Newtonian reflector") was built by Isaac Newton as a solution to the problem of chromatic aberration exhibited in telescopes |
https://en.wikipedia.org/wiki/Subtribe | Subtribe is a taxonomic rank below tribe and above genus. The standard suffix for a subtribe is -ina (in animals) or -inae (in plants). The earliest use of the word dates from the 19th century. An example of a subtribe is Hyptidinae, a group of flowering plants that contains approximately 400 accepted species distributed in 19 genera. |
https://en.wikipedia.org/wiki/Looking%20Glass%20server | Looking Glass servers (LG servers) are servers on the Internet running one of a variety of publicly available Looking Glass software implementations. They are commonly deployed by autonomous systems (AS) to offer access to their routing infrastructure in order to facilitate debugging network issues. A Looking Glass server is accessed remotely for the purpose of viewing routing information. Essentially, the server acts as a limited, read-only portal to routers of whatever organization is running the LG server.
Typically, Looking Glass servers are run by autonomous systems like Internet service providers (ISPs), Network Service Providers (NSPs), and Internet exchange points (IXPs).
Implementation
Looking glasses are web scripts connected directly to routers' administrative interfaces, such as telnet and SSH. These scripts relay textual commands from the web to the router and print back the response. They are often implemented in Perl, PHP, and Python, and are publicly available on GitHub.
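The "limited, read-only portal" behaviour described above is usually enforced by whitelisting commands before relaying them to the router. Below is a minimal sketch of such a filter; the command patterns are hypothetical illustrations (real looking glasses vary by router vendor and software), and only the validation logic, not the relay itself, is shown:

```python
import re

# Hypothetical whitelist of read-only commands a looking glass might relay.
# Real deployments (and exact router CLI syntax) vary by vendor.
ALLOWED = [
    re.compile(r"^show ip bgp( \d{1,3}(\.\d{1,3}){3})?$"),
    re.compile(r"^ping \d{1,3}(\.\d{1,3}){3}$"),
    re.compile(r"^traceroute \d{1,3}(\.\d{1,3}){3}$"),
]

def is_permitted(command: str) -> bool:
    """Reject anything not matching the read-only whitelist, so the web
    frontend can never pass configuration commands to the router."""
    command = command.strip()
    return any(pat.match(command) for pat in ALLOWED)

print(is_permitted("show ip bgp 192.0.2.1"))          # read-only query: allowed
print(is_permitted("configure terminal"))              # config command: rejected
print(is_permitted("show ip bgp 192.0.2.1; reload"))   # injection attempt: rejected
```

Anchoring every pattern with `^`/`$` is the important design choice: it blocks command-injection suffixes, the class of flaw the security research below is concerned with.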
Security concerns
A 2014 paper demonstrated the potential security concerns of Looking Glass servers, noting that even an "attacker with very limited resources can exploit such flaws in operators' networks and gain access to core Internet infrastructure", resulting in anything from traffic disruption to global Border Gateway Protocol (BGP) route injection. This is partly because a looking glass server is "an often overlooked critical part of an operator infrastructure", sitting at the intersection of the public internet and "restricted admin consoles". As of 2014, most Looking Glass software was small and old, having last been updated in the early 2000s.
See also
Autonomous system (Internet)
Internet backbone |
https://en.wikipedia.org/wiki/Mains%20hum | Mains hum, electric hum, cycle hum, or power line hum is a sound associated with alternating current. Its fundamental frequency is usually double the local power-line frequency of 50/60 Hz, i.e. 100/120 Hz, and the sound often has heavy harmonic content above that. Because of the presence of mains current in mains-powered audio equipment, as well as ubiquitous AC electromagnetic fields from nearby appliances and wiring, 50/60 Hz electrical noise can get into audio systems and is heard as mains hum from their speakers. Mains hum may also be heard coming from powerful electric power grid equipment such as utility transformers, caused by mechanical vibrations induced by magnetostriction in the magnetic core. Onboard aircraft (or spacecraft) the hum heard is often higher pitched, due to the use of 400 Hz AC power in these settings, because 400 Hz transformers are much smaller and lighter.
Causes
Electric hum around transformers is caused by stray magnetic fields causing the enclosure and accessories to vibrate. Magnetostriction is a second source of vibration, in which the core iron changes shape minutely when exposed to magnetic fields. The intensity of the fields, and thus the "hum" intensity, is a function of the applied voltage. Because the magnetic flux density is strongest twice every electrical cycle, the fundamental "hum" frequency will be twice the electrical frequency. Additional harmonics above 100/120 Hz will be caused by the non-linear behavior of most common magnetic materials.
Around high-voltage power lines, hum may be produced by corona discharge.
In the realm of sound reinforcement (as in public address systems and loudspeakers), electric hum is often caused by induction. This hum is generated by oscillating electric currents induced in sensitive (high gain or high impedance) audio circuitry by the alternating electromagnetic fields emanating from nearby |
https://en.wikipedia.org/wiki/Quadrupole%20ion%20trap | In experimental physics, a quadrupole ion trap or Paul trap is a type of ion trap that uses dynamic electric fields to trap charged particles. They are also called radio frequency (RF) traps or Paul traps in honor of Wolfgang Paul, who invented the device and shared the Nobel Prize in Physics in 1989 for this work. It is used as a component of a mass spectrometer or a trapped-ion quantum computer.
Overview
A charged particle, such as an atomic or molecular ion, feels a force from an electric field. It is not possible to create a static configuration of electric fields that traps the charged particle in all three directions (this restriction is known as Earnshaw's theorem). It is possible, however, to create an average confining force in all three directions by use of electric fields that change in time. To do so, the confining and anti-confining directions are switched at a rate faster than it takes the particle to escape the trap. The traps are also called "radio frequency" traps because the switching rate is often at a radio frequency.
The quadrupole is the simplest electric field geometry used in such traps, though more complicated geometries are possible for specialized devices. The electric fields are generated from electric potentials on metal electrodes. A pure quadrupole is created from hyperbolic electrodes, though cylindrical electrodes are often used for ease of fabrication. Microfabricated ion traps exist where the electrodes lie in a plane with the trapping region above the plane. There are two main classes of traps, depending on whether the oscillating field provides confinement in three or two dimensions. In the two-dimension case (a so-called "linear RF trap"), confinement in the third direction is provided by static electric fields.
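The time-switching argument above can be made quantitative. Along one axis, motion in an oscillating quadrupole potential reduces to the dimensionless Mathieu equation x'' + (a − 2q cos 2τ)x = 0, where a and q encode the static and RF voltages. The following numerical sketch is illustrative only (the integrator, step size, and particular q values are choices of this example, not from the article): it shows bounded motion inside the lowest stability region and runaway growth outside it.

```python
import math

def mathieu_max_amplitude(a, q, dt=1e-3, steps=100_000):
    """Semi-implicit Euler integration of x'' + (a - 2 q cos 2t) x = 0,
    the dimensionless Mathieu equation for one axis of an RF trap.
    Returns the largest |x| reached from x(0) = 1, x'(0) = 0."""
    x, v, t = 1.0, 0.0, 0.0
    xmax = 1.0
    for _ in range(steps):
        acc = -(a - 2.0 * q * math.cos(2.0 * t)) * x
        v += acc * dt          # update velocity first (semi-implicit)
        x += v * dt
        t += dt
        xmax = max(xmax, abs(x))
        if xmax > 1e6:         # clearly unbounded; stop early
            break
    return xmax

stable = mathieu_max_amplitude(a=0.0, q=0.3)    # inside lowest stability region
unstable = mathieu_max_amplitude(a=0.0, q=2.0)  # outside it: amplitude blows up
print(stable, unstable)
```

For a = 0 the lowest stability region extends to roughly q ≈ 0.908, which is why the first trajectory stays bounded while the second diverges: the "average confining force" only exists for appropriate switching amplitudes and rates.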
Theory
The 3D trap itself generally consists of two hyperbolic metal electrodes with their foci facing each other and a hyperbolic ring electrode halfway between the other two electrodes. The ions are trapped in th |
https://en.wikipedia.org/wiki/Genetic%20redundancy | Genetic redundancy is a term typically used to describe situations where a given biochemical function is redundantly encoded by two or more genes. In these cases, mutations (or defects) in one of these genes will have a smaller effect on the fitness of the organism than expected from the genes’ function. Characteristic examples of genetic redundancy include (Enns, Kanaoka et al. 2005) and (Pearce, Senis et al. 2004). Many more examples are thoroughly discussed in (Kafri, Levy & Pilpel. 2006).
The main source of genetic redundancy is the process of gene duplication, which generates multiplicity in gene copy number. A second and less frequent source of genetic redundancy is convergent evolution, leading to genes that are close in function but unrelated in sequence (Galperin, Walker & Koonin 1998). Genetic redundancy is typically associated with signaling networks, in which many proteins act together to accomplish teleological functions. In contrast to expectations, genetic redundancy is not associated with gene duplications [Wagner, 2007], nor do redundant genes mutate faster than essential genes [Hurst 1999]. Therefore, genetic redundancy has classically aroused much debate in the context of evolutionary biology (Nowak et al., 1997; Kafri, Springer & Pilpel. 2009).
From an evolutionary standpoint, genes with overlapping functions imply minimal, if any, selective pressures acting on these genes. One therefore expects that the genes participating in such buffering of mutations will be subject to severe mutational drift diverging their functions and/or expression patterns with considerably high rates. Indeed it has been shown that the functional divergence of paralogous pairs in both yeast and human is an extremely rapid process. Taking these notions into account, the very existence of genetic buffering, and the functional redundancies required for it, presents a paradox in light of the evolutionary concepts. On one hand, for genetic buffering to take |
https://en.wikipedia.org/wiki/Surface%20charge | A surface charge is an electric charge present on a two-dimensional surface. These electric charges are constrained to this 2-D surface, and surface charge density, measured in coulombs per square meter (C·m−2), is used to describe the charge distribution on the surface. The electric potential is continuous across a surface charge, and the electric field is discontinuous but not infinite, unless the surface charge consists of a dipole layer. In comparison, the potential and electric field both diverge at any point charge or linear charge.
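The continuity statements above can be written as the standard electrostatic boundary conditions (a sketch in SI units, with the unit normal n̂ pointing from the "below" side to the "above" side of the surface):

```latex
V_{\text{above}} = V_{\text{below}}, \qquad
\left(\mathbf{E}_{\text{above}} - \mathbf{E}_{\text{below}}\right)\cdot\hat{\mathbf{n}}
  = \frac{\sigma}{\varepsilon_{0}}, \qquad
\mathbf{E}^{\parallel}_{\text{above}} = \mathbf{E}^{\parallel}_{\text{below}}.
```

That is, only the normal component of the field jumps, by σ/ε₀, while the tangential component and the potential itself pass through the charged surface continuously.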
In physics, at equilibrium, an ideal conductor has no charge on its interior; instead, the entirety of the charge of the conductor resides on the surface. However, this only applies to the ideal case of infinite electrical conductivity; the majority of the charge of an actual conductor resides within the skin depth of the conductor's surface. For dielectric materials, upon the application of an external electric field, the positive charges and negative charges in the material will slightly move in opposite directions, resulting in polarization density in the bulk body and bound charge at the surface.
In chemistry, there are many different processes which can lead to a surface being charged, including adsorption of ions, protonation or deprotonation, and, as discussed above, the application of an external electric field. Surface charge emits an electric field, which causes particle repulsion and attraction, affecting many colloidal properties.
Surface charge practically always appears on the particle surface when it is placed into a fluid. Most fluids contain ions, positive (cations) and negative (anions). These ions interact with the object surface. This interaction might lead to the adsorption of some of them onto the surface. If the number of adsorbed cations exceeds the number of adsorbed anions, the surface would have a net positive electric charge.
Dissociation of the surface chemical group is another possible mech |
https://en.wikipedia.org/wiki/Factored%20language%20model | The factored language model (FLM) is an extension of a conventional language model introduced by Jeff Bilmes and Katrin Kirchhoff in 2003. In an FLM, each word is viewed as a vector of k factors: w_t = {f_t^1, ..., f_t^k}. An FLM provides the probabilistic model P(f | f_1, ..., f_N), where the prediction of a factor f is based on N parents {f_1, ..., f_N}. For example, if w represents a word token and t represents a part-of-speech tag for English, the expression P(w_t | w_{t-2}, w_{t-1}, t_{t-1}) gives a model for predicting the current word token based on a traditional N-gram model as well as the part-of-speech tag of the previous word.
A major advantage of factored language models is that they allow users to specify linguistic knowledge, such as the relationship between word tokens and part-of-speech tags in English, or morphological information (stems, roots, etc.) in Arabic.
As with N-gram models, smoothing techniques are necessary in parameter estimation. In particular, generalized back-off is used in training an FLM. |
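As a toy illustration of the factored setup, the sketch below estimates p(word | previous word, previous tag) from counts and backs off along one fixed parent-dropping order when the full context is unseen. The corpus, the tags, and the single backoff path are invented for illustration; real FLMs use generalized back-off over many parent-dropping paths, with proper discounting rather than raw maximum-likelihood counts.

```python
from collections import defaultdict

# Toy corpus of (word, POS-tag) factor pairs; illustrative, not from the article.
corpus = [("the", "DET"), ("cat", "NOUN"), ("sat", "VERB"),
          ("the", "DET"), ("dog", "NOUN"), ("ran", "VERB")]

full_ctx = defaultdict(int)   # counts of (prev_word, prev_tag, word)
tag_ctx = defaultdict(int)    # counts of (prev_tag, word)
unigram = defaultdict(int)
ctx2 = defaultdict(int)       # context totals for the full context
ctx1 = defaultdict(int)       # context totals for the tag-only context
total = 0

for (pw, pt), (w, _tag) in zip(corpus, corpus[1:]):
    full_ctx[(pw, pt, w)] += 1; ctx2[(pw, pt)] += 1
    tag_ctx[(pt, w)] += 1;      ctx1[pt] += 1
    unigram[w] += 1;            total += 1

def p(word, prev_word, prev_tag):
    """Back off along one fixed parent order: drop prev_word first,
    then prev_tag (a single path of what generalized back-off allows)."""
    if full_ctx[(prev_word, prev_tag, word)]:
        return full_ctx[(prev_word, prev_tag, word)] / ctx2[(prev_word, prev_tag)]
    if tag_ctx[(prev_tag, word)]:
        return tag_ctx[(prev_tag, word)] / ctx1[prev_tag]
    return unigram[word] / total

print(p("cat", "the", "DET"))  # seen in the full (word, tag) context
print(p("dog", "a", "DET"))    # unseen word context: backs off to the tag
```

The second query shows the factored payoff: the word bigram ("a", "dog") was never seen, but the tag context DET still yields a useful estimate.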
https://en.wikipedia.org/wiki/Chief%20Moccanooga | Chief Moccanooga was the former athletic mascot for the University of Tennessee at Chattanooga, until 1996, when the university abandoned the mascot as potentially offensive at the request of the Chattanooga InterTribal Association. Chief Moccanooga was replaced with a mockingbird, the state bird of Tennessee, and the nickname for Chattanooga athletics was changed from 'Moccasins' to simply 'Mocs'.
Chattanooga's decision to remove Chief Moccanooga as mascot was similar to the actions of several other athletic franchises. In 1986, Chief Noc-A-Homa, the former drum-thumping mascot of Major League Baseball's Atlanta Braves, was removed. The Cleveland Indians continued to employ a potentially offensive Native American mascot, Chief Wahoo, until 2018. |
https://en.wikipedia.org/wiki/Stirling%20numbers%20of%20the%20second%20kind | In mathematics, particularly in combinatorics, a Stirling number of the second kind (or Stirling partition number) is the number of ways to partition a set of n objects into k non-empty subsets, and is denoted by S(n, k) or, in brace notation, {n \brace k}. Stirling numbers of the second kind occur in the field of mathematics called combinatorics and the study of partitions. They are named after James Stirling.
The Stirling numbers of the first and second kind can be understood as inverses of one another when viewed as triangular matrices. This article is devoted to specifics of Stirling numbers of the second kind. Identities linking the two kinds appear in the article on Stirling numbers.
Definition
The Stirling numbers of the second kind, written S(n, k) or {n \brace k} or with other notations, count the number of ways to partition a set of n labelled objects into k nonempty unlabelled subsets. Equivalently, they count the number of different equivalence relations with precisely k equivalence classes that can be defined on an n-element set. In fact, there is a bijection between the set of partitions and the set of equivalence relations on a given set. Obviously,

{n \brace n} = 1 for n ≥ 0, and {n \brace 1} = 1 for n ≥ 1,

as the only way to partition an n-element set into n parts is to put each element of the set into its own part, and the only way to partition a nonempty set into one part is to put all of the elements in the same part. Unlike Stirling numbers of the first kind, they can be calculated using a one-sum formula:

{n \brace k} = (1/k!) Σ_{j=0}^{k} (−1)^j C(k, j) (k − j)^n.

The Stirling numbers of the second kind may also be characterized as the numbers that arise when one expresses powers of an indeterminate x in terms of the falling factorials

(x)_n = x(x − 1)(x − 2)⋯(x − n + 1).

(In particular, (x)_0 = 1 because it is an empty product.) In other words,

x^n = Σ_{k=0}^{n} {n \brace k} (x)_k.
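A short computational sketch (not from the article) that computes these numbers by the standard recurrence S(n, k) = S(n−1, k−1) + k·S(n−1, k) and cross-checks the result against the inclusion-exclusion one-sum formula mentioned above:

```python
from functools import lru_cache
from math import comb, factorial

@lru_cache(maxsize=None)
def stirling2(n, k):
    """S(n, k): partitions of an n-set into k nonempty blocks,
    via S(n, k) = S(n-1, k-1) + k * S(n-1, k)."""
    if n == k:
        return 1                  # includes S(0, 0) = 1
    if k == 0 or k > n:
        return 0
    return stirling2(n - 1, k - 1) + k * stirling2(n - 1, k)

def stirling2_closed(n, k):
    """The inclusion-exclusion 'one-sum' formula for S(n, k)."""
    return sum((-1) ** j * comb(k, j) * (k - j) ** n
               for j in range(k + 1)) // factorial(k)

# The two definitions agree on a small table:
for n in range(8):
    for k in range(n + 1):
        assert stirling2(n, k) == stirling2_closed(n, k)
print(stirling2(4, 2))  # 7 ways to split a 4-set into 2 blocks
```

The recurrence reads combinatorially: element n either forms a new block (S(n−1, k−1) ways) or joins one of the k existing blocks (k·S(n−1, k) ways).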
Notation
Various notations have been used for Stirling numbers of the second kind. The brace notation was used by Imanuel Marx and Antonio Salmeri in 1962 for variants of these numbers.<ref>Antonio Salmeri, Introduzione alla teoria dei coefficienti fattoriali, Giornale di Matema |
https://en.wikipedia.org/wiki/Stirling%20numbers%20of%20the%20first%20kind | In mathematics, especially in combinatorics, Stirling numbers of the first kind arise in the study of permutations. In particular, the Stirling numbers of the first kind count permutations according to their number of cycles (counting fixed points as cycles of length one).
The Stirling numbers of the first and second kind can be understood as inverses of one another when viewed as triangular matrices. This article is devoted to specifics of Stirling numbers of the first kind. Identities linking the two kinds appear in the article on Stirling numbers.
Definitions
Stirling numbers of the first kind are the coefficients s(n, k) in the expansion of the falling factorial

(x)_n = x(x − 1)⋯(x − n + 1)

into powers of the variable x:

(x)_n = Σ_{k=0}^{n} s(n, k) x^k.

For example, (x)_3 = x(x − 1)(x − 2) = x^3 − 3x^2 + 2x, leading to the values s(3, 3) = 1, s(3, 2) = −3, and s(3, 1) = 2.
Subsequently, it was discovered that the absolute values |s(n, k)| of these numbers are equal to the number of permutations of certain kinds. These absolute values, which are known as unsigned Stirling numbers of the first kind, are often denoted c(n, k) or [n \atop k]. They may be defined directly to be the number of permutations of n elements with k disjoint cycles. For example, of the 3! = 6 permutations of three elements, there is one permutation with three cycles (the identity permutation, given in one-line notation by 123 or in cycle notation by (1)(2)(3)), three permutations with two cycles ((1)(23), (2)(13), and (3)(12)), and two permutations with one cycle ((123) and (132)). Thus c(3, 3) = 1, c(3, 2) = 3 and c(3, 1) = 2. These can be seen to agree with the previous calculation of s(n, k) for n = 3.
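The unsigned numbers satisfy the standard recurrence c(n, k) = c(n−1, k−1) + (n−1)·c(n−1, k); the brief sketch below (the recurrence is standard, the code itself is illustrative) reproduces the n = 3 cycle counts quoted above:

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling1_unsigned(n, k):
    """c(n, k): permutations of n elements with exactly k cycles,
    via c(n, k) = c(n-1, k-1) + (n-1) * c(n-1, k)."""
    if n == k:
        return 1                  # includes c(0, 0) = 1
    if k == 0 or k > n:
        return 0
    return (stirling1_unsigned(n - 1, k - 1)
            + (n - 1) * stirling1_unsigned(n - 1, k))

# The n = 3 values discussed in the text: 2, 3, and 1 permutations
# with one, two, and three cycles respectively.
print([stirling1_unsigned(3, k) for k in (1, 2, 3)])  # [2, 3, 1]

# Sanity check: summing over k counts all n! permutations.
assert sum(stirling1_unsigned(5, k) for k in range(6)) == math.factorial(5)
```

The recurrence reads combinatorially: element n is either a fixed point (a new cycle, c(n−1, k−1) ways) or is inserted after any of the n−1 existing elements in their cycles ((n−1)·c(n−1, k) ways).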
It was observed by Alfréd Rényi that the unsigned Stirling number c(n, k) also counts the number of permutations of size n with k left-to-right maxima.
The unsigned Stirling numbers may also be defined algebraically, as the coefficients of the rising factorial:

x^(n) = x(x + 1)⋯(x + n − 1) = Σ_{k=0}^{n} c(n, k) x^k.
The notations used on this page for Stirling numbers are not universal, and may conflict with notations in other sources. (The square bracket notation is also common notation for the Gaussian coefficients.)
Definition by permutation
can be defined as the number of permutations on elem |
https://en.wikipedia.org/wiki/Stack%20search | Stack search (also known as Stack decoding algorithm) is a search algorithm similar to beam search. It can be used to explore tree-structured search spaces and is often employed in Natural language processing applications, such as parsing of natural languages, or for decoding of error correcting codes where the technique goes under the name of sequential decoding.
Stack search keeps a list of the best n candidates seen so far. These candidates are incomplete solutions to the search problem, e.g. partial parse trees. It then iteratively expands the best partial solution, putting all resulting partial solutions onto the stack and then trimming the resulting list of partial solutions to the top n candidates, until a real solution (i.e. a complete parse tree) has been found.
Stack search is not guaranteed to find the optimal solution to the search problem. The quality of the result depends on the quality of the search heuristic. |
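The expand-and-trim loop described above can be sketched as follows, on an invented toy problem (the scoring heuristic and the target string are illustrative assumptions, not drawn from any real parser or decoder):

```python
import heapq

def stack_search(start, expand, is_goal, score, beam_width=3):
    """Keep the `beam_width` best partial solutions; repeatedly expand
    the best one and re-trim, until a complete solution surfaces.
    `expand` maps a partial solution to its successors; `score` is the
    heuristic (higher = better)."""
    stack = [start]
    while stack:
        stack.sort(key=score, reverse=True)
        best = stack.pop(0)
        if is_goal(best):
            return best
        stack.extend(expand(best))
        stack = heapq.nlargest(beam_width, stack, key=score)
    return None

# Toy problem: build the string "abc" one letter at a time.
target = "abc"
result = stack_search(
    start="",
    expand=lambda s: [s + c for c in "abc"] if len(s) < len(target) else [],
    is_goal=lambda s: s == target,
    # Reward matching prefix positions; tiny length penalty breaks ties.
    score=lambda s: sum(1 for a, b in zip(s, target) if a == b) - len(s) * 0.01,
)
print(result)  # "abc"
```

Note how the trim to `beam_width` is exactly what makes the method incomplete: with a misleading heuristic, the partial solution leading to the optimum can be discarded before it is ever expanded.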
https://en.wikipedia.org/wiki/Pound%E2%80%93Rebka%20experiment | The Pound–Rebka experiment monitored frequency shifts in gamma rays as they rose and fell in the gravitational field of the Earth. The experiment tested Einstein's 1907 and 1911 predictions, based on the equivalence principle, that photons would gain energy when descending a gravitational potential, and would lose energy when rising through a gravitational potential. It was proposed by Robert Pound and his graduate student Glen A. Rebka Jr. in 1959, and was the last of the classical tests of general relativity to be verified. The measurement of gravitational redshift and blueshift by this experiment validated the prediction of the equivalence principle that clocks should be measured as running at different rates in different places of a gravitational field. It is considered to be the experiment that ushered in an era of precision tests of general relativity.
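The fractional frequency shift at stake here is tiny; a quick order-of-magnitude check (using the roughly 22.5 m height of the Harvard tower used in the experiment, with standard values of g and c; the arithmetic is a sketch, not a quoted result) runs:

```latex
\frac{\Delta\nu}{\nu} = \frac{gh}{c^{2}}
  = \frac{(9.81\ \mathrm{m\,s^{-2}})(22.5\ \mathrm{m})}
         {(2.998\times 10^{8}\ \mathrm{m\,s^{-1}})^{2}}
  \approx 2.5\times 10^{-15}.
```

Detecting a shift of a few parts in 10^15 is what made the Mössbauer-effect technique essential and the experiment a landmark in precision measurement.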
Background
Equivalence principle argument predicting gravitational red- and blueshift
In the decade preceding Einstein's publication of the definitive version of his theory of general relativity, he anticipated several of the results of his final theory with heuristic arguments, not all of which were to prove to be correct.
To show that the equivalence principle implies that light is Doppler-shifted in a gravitational field, Einstein considered a light source S₂ separated along the z-axis by a distance h above a receiver S₁ in a homogeneous gravitational field having a force per unit mass of g. A continuous beam of electromagnetic energy with frequency ν is emitted by S₂ towards S₁. According to the equivalence principle, this system is equivalent to a gravitation-free system which moves with uniform acceleration g in the direction of the positive z-axis, with S₂ separated by a constant distance h from S₁.
In the accelerated system, light emitted from S₂ takes (to a first approximation) a time h/c to arrive at S₁. But in this time, the velocity of S₁ will have increased by v = gh/c from its velocity when the light was emitted. The frequency of lig
https://en.wikipedia.org/wiki/List%20of%20computer%20algebra%20systems | The following tables provide a comparison of computer algebra systems (CAS). A CAS is a package comprising a set of algorithms for performing symbolic manipulations on algebraic objects, a language to implement them, and an environment in which to use the language. A CAS may include a user interface and graphics capability; and to be effective may require a large library of algorithms, efficient data structures and a fast kernel.
General
These computer algebra systems are sometimes combined with "front end" programs that provide a better user interface, such as the general-purpose GNU TeXmacs.
Functionality
Below is a summary of significantly developed symbolic functionality in each of the systems.
Those which do not "edit equations" may have a GUI, plotting, ASCII graphic formulae and math font printing. The ability to generate plaintext files is also a sought-after feature because it allows a work to be understood by people who do not have a computer algebra system installed.
Operating system support
The software can run under their respective operating systems natively without emulation. Some systems must be compiled first using an appropriate compiler for the source language and target platform. For some platforms, only older releases of the software may be available.
Graphing calculators
Some graphing calculators have CAS features.
See also
:Category:Computer algebra systems
Comparison of numerical-analysis software
Comparison of statistical packages
List of information graphics software
List of numerical-analysis software
List of numerical libraries
List of statistical software
Mathematical software
Web-based simulation |
https://en.wikipedia.org/wiki/Lambda%20lifting | Lambda lifting is a meta-process that restructures a computer program so that functions are defined independently of each other in a global scope. An individual "lift" transforms a local function into a global function. It is a two-step process, consisting of:
Eliminating free variables in the function by adding parameters.
Moving functions from a restricted scope to broader or global scope.
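The two steps can be sketched in a few lines of illustrative Python, with the free variable `n` turned into an explicit parameter and the function moved to global scope (the function names here are hypothetical examples):

```python
# Before lifting: `add_n` is local and closes over the free variable `n`.
def make_adders_before(n):
    def add_n(x):          # free variable: n
        return x + n
    return [add_n(i) for i in range(3)]

# After lifting: step 1 turned the free variable into a parameter,
# step 2 moved the function to global scope.
def add_n_lifted(n, x):    # no free variables
    return x + n

def make_adders_after(n):
    # Every call site must be adjusted to pass the extra argument.
    return [add_n_lifted(n, i) for i in range(3)]

print(make_adders_before(10))  # [10, 11, 12]
print(make_adders_after(10))   # [10, 11, 12]
```

The adjusted call sites in `make_adders_after` are the visible cost of lifting, and the point of contrast with closure conversion discussed below.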
The term "lambda lifting" was first introduced by Thomas Johnsson around 1982 and was historically considered as a mechanism for implementing functional programming languages. It is used in conjunction with other techniques in some modern compilers.
Lambda lifting is not the same as closure conversion. It requires all call sites to be adjusted (adding extra arguments to calls) and does not introduce a closure for the lifted lambda expression. In contrast, closure conversion does not require call sites to be adjusted but does introduce a closure for the lambda expression mapping free variables to values.
The technique may be used on individual functions, in code refactoring, to make a function usable outside the scope in which it was written. Lambda lifts may also be repeated, in order to transform the program. Repeated lifts may be used to convert a program written in lambda calculus into a set of recursive functions, without lambdas. This demonstrates the equivalence of programs written in lambda calculus and programs written as functions. However it does not demonstrate the soundness of lambda calculus for deduction, as the eta reduction used in lambda lifting is the step that introduces cardinality problems into the lambda calculus, because it removes the value from the variable, without first checking that there is only one value that satisfies the conditions on the variable (see Curry's paradox).
Lambda lifting is expensive in processing time for the compiler. An efficient implementation of lambda lifting is O(n²) in processing time for the compiler.
In the untyped |
https://en.wikipedia.org/wiki/Zeng%20Liansong | Zeng Liansong (; 17 December 1917–19 October 1999) was a Chinese supply chain manager and former secret agent of the Chinese Communist Party. He designed the National Flag of the People's Republic of China and previously served as deputy manager of Shanghai City Daily Necessities Company.
Early life and education
Zeng Liansong was born in Rui'an, Zhejiang. He studied in Rui'an County Primary School (now Rui'an City Experimental Primary School) and Rui'an High School in his youth.
In 1936, Zeng was admitted to the Department of Economics of the National Central University. He later joined the Anti-Japanese and National Salvation Federation at the university and devoted himself to the Chinese Communist Revolution. He joined the Chinese Communist Party (CCP) in May 1938, engaged in underground activities, and served as secretary of the underground student party branch of the CCP at the National Central University.
Career
After his graduation in 1940, he served as an underground worker (secret agent) for the CCP. In 1949, he worked as a secretary at the Shanghai Modern Economic News Agency (上海现代经济通讯社), a secret economic news stronghold and intelligence agency led by the Shanghai Underground Party of the CCP (中国共产党上海地下党). In May 1949, the People's Liberation Army gained control of Shanghai, and Shanghai Modern Economic News Agency completed its historical mission and was disbanded. Soon after, Zeng Liansong saw the solicitation notice for the national flag of the People's Republic of China and devoted himself to the design work.
In mid-August 1949, Zeng Liansong submitted the pattern drawing to the preparatory meeting of the new Chinese People's Political Consultative Conference (CPPCC). It depicted a field of Chinese red with four gold stars around a larger star in the canton. The larger star contained the hammer and sickle symbol of communism. Zeng's design was very similar to the design that the nation adopted, the only difference being the removal of the hamme |
https://en.wikipedia.org/wiki/Supersonic%20fracture | Supersonic fractures are fractures where the fracture propagation velocity is higher than the speed of sound in the material. This phenomenon was first discovered by scientists from the Max Planck Institute for Metals Research in Stuttgart (Markus J. Buehler and Huajian Gao) and IBM Almaden Research Center in San Jose, California (Farid F. Abraham).
The issues of intersonic and supersonic fracture have become a frontier of dynamic fracture mechanics. The work of Burridge initiated the exploration of intersonic crack growth, in which the crack-tip velocity v lies between the shear wave speed c_s and the longitudinal wave speed c_l.
Supersonic fracture was a phenomenon totally unexplained by the classical theories of fracture. Molecular dynamics simulations by the groups around Abraham and Gao have shown the existence of intersonic mode I and supersonic mode II cracks. This motivated a continuum mechanics analysis of supersonic mode III cracks by Yang. Recent progress in the theoretical understanding of hyperelasticity in dynamic fracture has shown that supersonic crack propagation can only be understood by introducing a new length scale, called χ, which governs the process of energy transport near a crack tip. The crack dynamics is completely dominated by material properties inside a zone surrounding the crack tip with characteristic size equal to χ. When the material inside this characteristic zone is stiffened due to hyperelastic properties, cracks propagate faster than the longitudinal wave speed. The research group of Gao has used this concept to simulate the Broberg problem of crack propagation inside a stiff strip embedded in a soft elastic matrix. These simulations confirmed the existence of an energy characteristic length. This study also had implications for dynamic crack propagation in composite materials. If the characteristic size of the composite microstructure is larger than the energy characteristic length χ, models that homogenize the materials into an e
https://en.wikipedia.org/wiki/Frontotemporal%20lobar%20degeneration | Frontotemporal lobar degeneration (FTLD) is a pathological process that occurs in frontotemporal dementia. It is characterized by atrophy in the frontal lobe and temporal lobe of the brain, with sparing of the parietal and occipital lobes.
Common proteinopathies that are found in FTLD include the accumulation of tau proteins and TAR DNA-binding protein 43 (TDP-43). Mutations in the C9orf72 gene have been established as a major genetic contribution of FTLD, although defects in the granulin (GRN) and microtubule-associated proteins (MAPs) are also associated with it.
Classification
There are 3 main histological subtypes found at post-mortem:
FTLD-tau is characterised by tau positive inclusion bodies often referred to as Pick-bodies. Examples of FTLD-tau include; Pick's disease, corticobasal degeneration, progressive supranuclear palsy.
FTLD-TDP (or FTLD-U) is characterised by ubiquitin- and TDP-43-positive, tau-negative, FUS-negative inclusion bodies. The pathological histology of this subtype is so diverse that it is subdivided into four subtypes based on the detailed histological findings:
Type A presents with many small neurites and neuronal cytoplasmic inclusion bodies in the upper (superficial) cortical layers. Bar-like neuronal intranuclear inclusions can also be seen, though they are fewer in number.
Type B presents with many neuronal and glial cytoplasmic inclusions in both the upper (superficial) and lower (deep) cortical layers, and lower motor neurons. However, neuronal intranuclear inclusions are rare or absent. This is often associated with ALS and C9ORF72 mutations (see next section).
Type C presents many long neuritic profiles found in the superficial cortical laminae, very few or no neuronal cytoplasmic inclusions, neuronal intranuclear inclusions or glial cytoplasmic inclusions. This is often associated with semantic dementia.
Type D presents with many neuronal intranuclear inclusions and dystrophic neurites, and an unusual absence of inclusions in the granul |
https://en.wikipedia.org/wiki/Characteristic%20energy%20length%20scale | The characteristic energy length scale describes the size of the region from which energy flows to a rapidly moving crack. If material properties change within the characteristic energy length scale, local wave speeds can dominate crack dynamics. This can lead to supersonic fracture.
https://en.wikipedia.org/wiki/Shark%20repellent | A shark repellent is any method of driving sharks away from an area. Shark repellents are a category of animal repellents. Shark repellent technologies include magnetic shark repellents, electropositive shark repellents, electrical repellents, and semiochemicals. Shark repellents can be used to protect people from sharks by driving the sharks away from areas where they are likely to kill human beings. In other applications, they can be used to keep sharks away from areas where they may be a danger to themselves due to human activity. In this case, the shark repellent serves as a shark conservation method. There are some naturally occurring shark repellents; modern artificial shark repellents date to at least the 1940s, with the United States Navy using them in the Pacific Ocean theater of World War II.
Natural repellents
It has traditionally been believed that sharks are repelled by the smell of a dead shark; however, modern research has had mixed results.
The Pardachirus marmoratus fish (finless sole, Red Sea Moses sole) repels sharks through its secretions. The best-understood factor is pardaxin, acting as an irritant to the sharks' gills, but other chemicals have been identified as contributing to the repellent effect.
In 2017, the US Navy announced that it was developing a synthetic analog of hagfish slime with potential application as a shark repellent.
History
Some of the earliest research on shark repellents took place during the Second World War, when military services sought to minimize the risk to stranded aviators and sailors in the water. Research has continued to the present, with notable researchers including Americans Eugenie Clark and later Samuel H. Gruber, who has conducted tests at the Bimini Sharklab in Bimini, and the Japanese scientist Kazuo Tachibana. Future celebrity chef Julia Child developed shark repellent while working for the Office of Strategic Services.
Initial work, which was based on historical research and studies at the time, focused |
https://en.wikipedia.org/wiki/Superheavy%20element | Superheavy elements, also known as transactinide elements, transactinides, or super-heavy elements, are the chemical elements with atomic number greater than 103. The superheavy elements are those beyond the actinides in the periodic table; the last actinide is lawrencium (atomic number 103). By definition, superheavy elements are also transuranium elements, i.e., having atomic numbers greater than that of uranium (92). Depending on the definition of group 3 adopted by authors, lawrencium may also be included to complete the 6d series.
Glenn T. Seaborg first proposed the actinide concept, which led to the acceptance of the actinide series. He also proposed a transactinide series ranging from element 104 to 121 and a superactinide series approximately spanning elements 122 to 153 (although more recent work suggests the end of the superactinide series to occur at element 157 instead). The transactinide seaborgium was named in his honor.
Superheavy elements are radioactive and have only been obtained synthetically in laboratories. No macroscopic sample of any of these elements has ever been produced. Superheavy elements are all named after physicists and chemists or important locations involved in the synthesis of the elements.
IUPAC defines an element to exist if its lifetime is longer than 10⁻¹⁴ seconds, which is the time it takes for the atom to form an electron cloud.
The known superheavy elements form part of the 6d and 7p series in the periodic table. Except for rutherfordium and dubnium (and lawrencium if it is included), even the longest-lasting isotopes of superheavy elements have half-lives of minutes or less. The element naming controversy involved elements 102–109. Some of these elements thus used systematic names for many years after their discovery was confirmed. (Usually the systematic names are replaced with permanent names proposed by the discoverers relatively shortly after a discovery has been confirmed.)
Introduction
Synthesis of superheavy nu |
https://en.wikipedia.org/wiki/Universally%20measurable%20set | In mathematics, a subset A of a Polish space X is universally measurable if it is measurable with respect to every complete probability measure on X that measures all Borel subsets of X. In particular, a universally measurable set of reals is necessarily Lebesgue measurable (see below).
Every analytic set is universally measurable. It follows from projective determinacy, which in turn follows from sufficient large cardinals, that every projective set is universally measurable.
Finiteness condition
The condition that the measure be a probability measure (that is, that the measure of the whole space itself be 1) is less restrictive than it may appear. For example, Lebesgue measure on the reals is not a probability measure, yet every universally measurable set is Lebesgue measurable. To see this, divide the real line into countably many intervals of length 1; say, N0 = [0,1), N1 = [1,2), N2 = [−1,0), N3 = [2,3), N4 = [−2,−1), and so on. Now letting μ be Lebesgue measure, define a new measure ν by ν(A) = Σ_{i=0}^∞ μ(A ∩ N_i) / 2^(i+1).
It is then easy to check that ν is a probability measure on the reals, and a set is ν-measurable if and only if it is Lebesgue measurable. More generally, a universally measurable set must be measurable with respect to every sigma-finite measure that measures all Borel sets.
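With each interval N_i having Lebesgue measure 1, the standard geometric weighting (an assumption of this sketch; any summable positive weights totalling 1 work equally well) makes ν a probability measure:

```latex
\nu(A) = \sum_{i=0}^{\infty} \frac{\mu(A \cap N_i)}{2^{\,i+1}},
\qquad
\nu(\mathbb{R}) = \sum_{i=0}^{\infty} \frac{\mu(N_i)}{2^{\,i+1}}
                = \sum_{i=0}^{\infty} \frac{1}{2^{\,i+1}} = 1 .
```

Only countable additivity of μ across the pieces N_i is needed for this check.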
Example contrasting with Lebesgue measurability
Suppose A is a subset of Cantor space 2^ω; that is, A is a set of infinite sequences of zeroes and ones. By putting a binary point before such a sequence, the sequence can be viewed as a real number between 0 and 1 (inclusive), with some unimportant ambiguity. Thus we can think of A as a subset of the interval [0,1], and evaluate its Lebesgue measure, if that is defined. That value is sometimes called the coin-flipping measure of A, because it is the probability of producing a sequence of heads and tails that is an element of A upon flipping a fair coin infinitely many times.
Now it follows from the axiom of choice that there are some such A without a well-defined Lebesgue measure (or coin-flipping m |
https://en.wikipedia.org/wiki/Distribution%20ensemble | In cryptography, a distribution ensemble or probability ensemble is a family of distributions or random variables X = {X_i}_{i ∈ I} where I is a (countable) index set, and each X_i is a random variable, or probability distribution. Often I = ℕ and it is required that each X_n have a certain property for n sufficiently large.
For example, a uniform ensemble U = {U_n}_{n ∈ ℕ} is a distribution ensemble where each U_n is uniformly distributed over strings of length n. In fact, many applications of probability ensembles implicitly assume that the probability spaces for the random variables all coincide in this way, so every probability ensemble is also a stochastic process.
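A uniform ensemble is straightforward to sample. The sketch below (function name is illustrative, not from the source) draws one sample of U_n as a uniformly random binary string of length n:

```python
import secrets

def sample_uniform(n: int) -> str:
    """Draw one sample of U_n: a uniformly random binary string of length n."""
    return ''.join(secrets.choice('01') for _ in range(n))

s = sample_uniform(16)  # one 16-bit sample of U_16
```

Each index n gives a different distribution over a different sample space, which is what makes the family an ensemble rather than a single random variable.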
See also
Provable security
Statistically close
Pseudorandom ensemble
Computational indistinguishability |
https://en.wikipedia.org/wiki/Pseudorandom%20ensemble | In cryptography, a pseudorandom ensemble is a family of variables meeting the following criteria:
Let U = {U_n}_{n ∈ ℕ} be a uniform ensemble and X = {X_n}_{n ∈ ℕ} be an ensemble. The ensemble X is called pseudorandom if U and X are indistinguishable in polynomial time. |
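Indistinguishability can be made concrete by estimating a distinguisher's advantage empirically. The toy sketch below (all names illustrative; this is a sampling experiment, not a formal polynomial-time argument) compares uniform bits against a deliberately biased ensemble, which a simple majority test separates easily:

```python
import random

def advantage(sample_u, sample_x, distinguisher, trials=2000, n=16):
    # Empirical estimate of |Pr[D(U_n)=1] - Pr[D(X_n)=1]| over `trials` samples each.
    hits_u = sum(distinguisher(sample_u(n)) for _ in range(trials))
    hits_x = sum(distinguisher(sample_x(n)) for _ in range(trials))
    return abs(hits_u - hits_x) / trials

rng = random.Random(0)
uniform = lambda n: [rng.randint(0, 1) for _ in range(n)]
biased = lambda n: [1 if rng.random() < 0.9 else 0 for _ in range(n)]  # clearly not pseudorandom
majority = lambda bits: sum(bits) > len(bits) // 2

adv = advantage(uniform, biased, majority)  # large advantage: the ensembles are distinguishable
```

For a pseudorandom ensemble, every efficient distinguisher's advantage must be negligible in n; here the heavy bias makes the advantage large.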
https://en.wikipedia.org/wiki/Bohm%20diffusion | The diffusion of plasma across a magnetic field was conjectured to follow the Bohm diffusion scaling, as indicated by early plasma experiments on very lossy machines. This predicted that the rate of diffusion was linear with temperature and inversely linear with the strength of the confining magnetic field.
The rate predicted by Bohm diffusion is much higher than the rate predicted by classical diffusion, which develops from a random walk within the plasma. The classical model scaled inversely with the square of the magnetic field. If the classical model is correct, small increases in the field lead to much longer confinement times. If the Bohm model is correct, magnetically confined fusion would not be practical.
Early fusion energy machines appeared to behave according to Bohm's model, and by the 1960s there was a significant stagnation within the field. The introduction of the tokamak in 1968 was the first evidence that the Bohm model did not hold for all machines. Bohm predicts rates that are too fast for these machines, and classical too slow; studying these machines has led to the neoclassical diffusion concept.
Description
Bohm diffusion is characterized by a diffusion coefficient equal to
D_B = (1/16) k_B T / (e B),
where B is the magnetic field strength, T is the electron gas temperature, e is the elementary charge, and k_B is the Boltzmann constant.
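The coefficient is simple to evaluate. Since k_B·T/e is just the temperature expressed in electronvolts, the sketch below (function name is illustrative) takes T in eV and B in tesla:

```python
def bohm_diffusion_coefficient(T_eV: float, B: float) -> float:
    """Bohm diffusion coefficient D_B = (1/16) * k_B*T / (e*B), in m^2/s.

    T_eV: electron temperature in electronvolts (i.e. k_B*T/e in volts).
    B:    magnetic field strength in tesla.
    """
    return T_eV / (16.0 * B)

# e.g. a 100 eV plasma in a 1 T field gives D_B = 100/16 = 6.25 m^2/s
```

The 1/B scaling (rather than the classical 1/B²) is exactly why Bohm-like losses were so discouraging for early magnetic confinement.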
History
It was first observed in 1949 by David Bohm, E. H. S. Burhop, and Harrie Massey while studying magnetic arcs for use in isotope separation. It has since been observed that many other plasmas follow this law. Fortunately there are exceptions where the diffusion rate is lower, otherwise there would be no hope of achieving practical fusion energy. In Bohm's original work he notes that the fraction 1/16 is not exact; in particular "the exact value of [the diffusion coefficient] is uncertain within a factor of 2 or 3." Lyman Spitzer considered this fraction as a factor related to plasma instability.
Approximate derivation
Gene |
https://en.wikipedia.org/wiki/Ibn%20Sahl%20%28mathematician%29 | Ibn Sahl (full name: Abū Saʿd al-ʿAlāʾ ibn Sahl ; c. 940–1000) was a Persian mathematician and physicist of the Islamic Golden Age, associated with the Buyid court of Baghdad.
Nothing in his name indicates his country of origin.
He is known to have written an optical treatise around 984. The text of this treatise was reconstructed by Roshdi Rashed (edited 1993) from two manuscripts: Damascus, al-Ẓāhirīya MS 4871, 3 fols., and Tehran, Millī MS 867, 51 fols.
The Tehran manuscript is much longer, but it is badly damaged, and the Damascus manuscript contains a section missing entirely from the Tehran manuscript.
The Damascus manuscript has the title Fī al-'āla al-muḥriqa, "On the burning instruments"; the Tehran manuscript has a title added in a later hand, Kitāb al-harrāqāt, "The book of burners".
Ibn Sahl is the first Muslim scholar known to have studied Ptolemy's Optics, and as such is an important precursor to the Book of Optics by Ibn Al-Haytham (Alhazen), written some thirty years later.
Ibn Sahl dealt with the optical properties of curved mirrors and lenses and has been described as the discoverer of the law of refraction (Snell's law).
Ibn Sahl uses this law to derive lens shapes that focus light with no geometric aberrations, known as anaclastic lenses.
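The law of refraction Ibn Sahl employed states that the ratio of the sines of the incidence and refraction angles is a constant of the two media. A numerical sketch (function name and the 1.5 "crystal" index are illustrative assumptions):

```python
import math

def refraction_angle(theta1_deg: float, n1: float, n2: float) -> float:
    """Refraction angle from Snell's (Ibn Sahl's) law: n1*sin(t1) = n2*sin(t2)."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    return math.degrees(math.asin(s))

# A ray entering a denser medium (n = 1.5) at 30 degrees bends toward the normal.
t2 = refraction_angle(30.0, 1.0, 1.5)
```

Whatever the incidence angle, sin(t1)/sin(t2) comes out to the same constant n2/n1, which is precisely the ratio central to Ibn Sahl's lens constructions.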
In the remaining parts of the treatise, Ibn Sahl dealt with parabolic mirrors, ellipsoidal mirrors, biconvex lenses, and techniques for drawing hyperbolic arcs.
Ibn Sahl designed convex lenses that focus parallel light rays, which can cause an object to burn at a specific distance. A biconvex lens has the ability to focus rays from an infinitely distant source to a specific point. Ibn Sahl made many contributions to optics; he also wrote a treatise about the celestial sphere. A constant ratio of sines is the main focus of his study, and it allows for a better understanding of refracting lenses. Ibn Sahl did an experiment where a piece of crystal was used to propagate a ray of light through the crystal w |
https://en.wikipedia.org/wiki/Acentric%20factor | The acentric factor is a conceptual number introduced by Kenneth Pitzer in 1955, proven to be useful in the description of fluids. It has become a standard for the phase characterization of single, pure components, along with other state description parameters such as molecular weight, critical temperature, critical pressure, and critical volume (or critical compressibility).
Pitzer defined ω from the relationship
ω = −log₁₀(p_r^sat) − 1, evaluated at T_r = 0.7,
where
p_r^sat = p^sat/p_c is the reduced saturation vapor pressure and
T_r = T/T_c is the reduced temperature.
The acentric factor is said to be a measure of the non-sphericity (acentricity) of molecules. As it increases, the vapor curve is "pulled" down, resulting in higher boiling points. For many monatomic fluids, the reduced saturation vapor pressure at T_r = 0.7 is close to 0.1, which leads to ω ≈ 0. In many cases, T_r = 0.7 lies above the boiling temperature of liquids at atmospheric pressure.
Values of ω can be determined for any fluid from accurate experimental vapor pressure data. The definition of ω gives values which are close to zero for the noble gases argon, krypton, and xenon. ω is also very close to zero for molecules which are nearly spherical. Values of ω < −1 correspond to reduced vapor pressures above 1 (i.e., above the critical pressure), and are non-physical.
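The definition translates directly into code. The sketch below (names are illustrative) computes ω from a measured saturation pressure at T = 0.7·T_c; a "simple" fluid whose reduced vapor pressure there is exactly 0.1 gets ω = 0:

```python
import math

def acentric_factor(p_sat_at_0_7Tc: float, p_c: float) -> float:
    """omega = -log10(p_r^sat) - 1, with p_r^sat evaluated at T_r = 0.7."""
    p_r = p_sat_at_0_7Tc / p_c
    return -math.log10(p_r) - 1.0

# Argon-like simple fluid: p_r^sat(T_r = 0.7) = 0.1  ->  omega = 0
omega_simple = acentric_factor(0.1, 1.0)
```

Note also that p_r^sat = 1 would give ω = −1, the boundary beyond which the value becomes non-physical.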
The acentric factor can be predicted analytically from some equations of state. For example, it can be easily shown from the above definition that a van der Waals fluid has an acentric factor of about −0.302024, which if applied to a real system would indicate a small, ultra-spherical molecule.
Values of some common gases
See also
Equation of state
Reduced pressure
Reduced temperature |
https://en.wikipedia.org/wiki/Lutein | Lutein (; from Latin luteus meaning "yellow") is a xanthophyll and one of 600 known naturally occurring carotenoids. Lutein is synthesized only by plants, and like other xanthophylls is found in high quantities in green leafy vegetables such as spinach, kale and yellow carrots. In green plants, xanthophylls act to modulate light energy and serve as non-photochemical quenching agents to deal with triplet chlorophyll, an excited form of chlorophyll which is overproduced at very high light levels during photosynthesis. See xanthophyll cycle for this topic.
Animals obtain lutein by ingesting plants. In the human retina, lutein is absorbed from blood specifically into the macula lutea, although its precise role in the body is unknown. Lutein is also found in egg yolks and animal fats.
Lutein is isomeric with zeaxanthin, differing only in the placement of one double bond. Lutein and zeaxanthin can be interconverted in the body through an intermediate called meso-zeaxanthin. The principal natural stereoisomer of lutein is (3R,3′R,6′R)-beta,epsilon-carotene-3,3′-diol. Lutein is a lipophilic molecule and is generally insoluble in water. The presence of the long chromophore of conjugated double bonds (the polyene chain) provides the distinctive light-absorbing properties. The polyene chain is susceptible to oxidative degradation by light or heat and is chemically unstable in acids.
Lutein is present in plants as fatty-acid esters, with one or two fatty acids bound to the two hydroxyl groups. For this reason, saponification (de-esterification) of lutein esters to yield free lutein may yield lutein in a molar ratio with the saponifying fatty acid of anywhere from 1:1 to 1:2.
As a pigment
This xanthophyll, like its sister compound zeaxanthin, has primarily been used in food and supplement manufacturing as a colorant due to its yellow-red color. Lutein absorbs blue light and therefore appears yellow at low concentrations and orange-red at high concentrations.
Many songbirds (like gold |
https://en.wikipedia.org/wiki/Psychological%20adaptation | A psychological adaptation is a functional, cognitive or behavioral trait that benefits an organism in its environment. Psychological adaptations fall under the scope of evolved psychological mechanisms (EPMs); however, EPMs refer to a less restricted set. Psychological adaptations include only the functional traits that increase the fitness of an organism, while EPMs refer to any psychological mechanism that developed through the processes of evolution. These additional EPMs are the by-product traits of a species’ evolutionary development (see spandrels), as well as the vestigial traits that no longer benefit the species’ fitness. It can be difficult to tell whether a trait is vestigial or not, so some literature is more lenient and refers to vestigial traits as adaptations, even though they may no longer have adaptive functionality. For example, some have claimed that xenophobic attitudes and behaviors appear to have certain EPM influences relating to disease aversion; however, in many environments these behaviors will have a detrimental effect on a person's fitness. The principles of psychological adaptation rely on Darwin's theory of evolution and are important to the fields of evolutionary psychology, biology, and cognitive science.
Darwinian theory
Charles Darwin proposed his theory of evolution in On the Origin of Species (1859). His theory dictates that adaptations are traits that arise from the selective pressures a species faces in its environment. Adaptations must benefit either an organism's chance of survival or reproduction to be considered adaptive, and are then passed down to the next generation through this process of natural selection. Psychological adaptations are those adaptive traits that we consider cognitive or behavioral. These can include conscious social strategies, subconscious emotional responses (guilt, fear, etc.), or the most innate instincts. Evolutionary psychologists consider a number of factors in what determines a psychological a |
https://en.wikipedia.org/wiki/CESU-8 | The Compatibility Encoding Scheme for UTF-16: 8-Bit (CESU-8) is a variant of UTF-8 that is described in Unicode Technical Report #26. A Unicode code point from the Basic Multilingual Plane (BMP), i.e. a code point in the range U+0000 to U+FFFF, is encoded in the same way as in UTF-8. A Unicode supplementary character, i.e. a code point in the range U+10000 to U+10FFFF, is first represented as a surrogate pair, like in UTF-16, and then each surrogate code point is encoded in UTF-8. Therefore, CESU-8 needs six bytes (3 bytes per surrogate) for each Unicode supplementary character while UTF-8 needs only four. Though not specified in the technical report, unpaired surrogates are also encoded as 3 bytes each, and CESU-8 is exactly the same as applying an older UCS-2 to UTF-8 converter to UTF-16 data.
The encoding of Unicode non-BMP characters works out to 11101101 1010yyyy 10xxxxxx 11101101 1011xxxx 10xxxxxx (yyyy represents the top five bits of the character minus one). The byte values 0xF0 through 0xF4 will not appear in CESU-8, as they start the 4-byte encodings used by UTF-8.
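The surrogate-pair scheme described above can be sketched as a small encoder (an illustrative implementation, not a reference one):

```python
def cesu8_encode(text: str) -> bytes:
    """Encode a string as CESU-8: BMP code points as in UTF-8, supplementary
    code points as UTF-8-encoded UTF-16 surrogate pairs (3 bytes per surrogate)."""
    out = bytearray()
    for ch in text:
        cp = ord(ch)
        if cp < 0x10000:                       # BMP: identical to UTF-8
            out += ch.encode('utf-8')
        else:                                  # supplementary: split into surrogates
            cp -= 0x10000
            for sur in (0xD800 | (cp >> 10), 0xDC00 | (cp & 0x3FF)):
                out += bytes([0xE0 | (sur >> 12),
                              0x80 | ((sur >> 6) & 0x3F),
                              0x80 | (sur & 0x3F)])
    return bytes(out)

# U+10400 becomes the 6-byte sequence ED A0 81 ED B0 80 (UTF-8 needs only 4 bytes)
```

Each surrogate lands in the U+D800–U+DFFF range, whose 3-byte UTF-8 form always starts with 0xED, which is why 0xF0 through 0xF4 never appear in CESU-8 output.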
CESU-8 is not an official part of the Unicode Standard, because Unicode Technical Reports are informative documents only. It should be used exclusively for internal processing and never for external data exchange.
Supporting CESU-8 in HTML documents is prohibited by the W3C and WHATWG HTML standards, as it would present a cross-site scripting vulnerability.
Java's Modified UTF-8 is CESU-8 with a special overlong encoding of the NUL character (U+0000) as the two-byte sequence C0 80.
The Oracle database uses CESU-8 for its "UTF8" character set. Standard UTF-8 can be obtained using the character set "AL32UTF8" (since Oracle version 9.0).
Examples |