source | text
|---|---|
https://en.wikipedia.org/wiki/Joseph%20Engelberger | Joseph Frederick Engelberger (July 26, 1925 – December 1, 2015) was an American physicist, engineer and entrepreneur. Licensing the original patent awarded to inventor George Devol, Engelberger developed the first industrial robot in the United States, the Unimate, in the 1950s. Later, he worked as an entrepreneur and vocal advocate of robotic technology beyond the manufacturing plant in a variety of fields, including service industries, health care, and space exploration.
Biography
Early life and education
Joseph Frederick Engelberger was born on July 26, 1925, in Brooklyn, New York. He grew up in Connecticut during the Great Depression, but later returned to New York City for his college education.
Engelberger received his B.S. in physics in 1946 and his M.S. in electrical engineering in 1949, both from Columbia University. He worked as an engineer with Manning, Maxwell and Moore; in 1956, at a Westport cocktail party, he met inventor George Devol, two years after Devol had designed and patented a rudimentary industrial robotic arm. However, Manning, Maxwell and Moore was sold and Engelberger's division was closed that year.
Unimation
Finding himself jobless but with a business partner and an idea, Engelberger co-founded Unimation with Devol, creating the world's first robotics company. In 1957, he also founded Consolidated Controls Corporation. As president of Unimation, Engelberger collaborated with Devol to engineer and produce an industrial robot under the brand name Unimate. The first Unimate robotic arm was installed at a General Motors Plant in Ewing Township, New Jersey, in 1961.
The introduction of robotics to the manufacturing process effectively transformed the automotive industry, with Chrysler and the Ford Motor Company soon following General Motors' lead and installing Unimates in their manufacturing facilities. The rapid adoption of the technology also provided Unimation with a working business model: after selling the first Unimate at a $35,000 loss, |
https://en.wikipedia.org/wiki/Grothendieck%27s%20relative%20point%20of%20view | Grothendieck's relative point of view is a heuristic applied in certain abstract mathematical situations, with a rough meaning of taking for consideration families of 'objects' explicitly depending on parameters, as the basic field of study, rather than a single such object. It is named after Alexander Grothendieck, who made extensive use of it in treating foundational aspects of algebraic geometry. Outside that field, it has been influential particularly on category theory and categorical logic.
In the usual formulation, the point of view treats, not objects X of a given category C, but morphisms
f: X → S
where S is a fixed object. This idea is made formal in the idea of the slice category of objects of C 'above' S. To move from one slice to another requires a base change; from a technical point of view base change becomes a major issue for the whole approach (see for example Beck–Chevalley conditions).
A base change 'along' a given morphism
g: T → S
is typically given by the fiber product, producing an object over T from one over S. The 'fiber' terminology is significant: the underlying heuristic is that X over S is a family of fibers, one for each 'point' of S; the fiber product is then the corresponding family over T, whose fiber at each point of T is the fiber of X at that point's image in S. This set-theoretic language is certainly too naïve to fit the required context of algebraic geometry. It combines, though, with the use of the Yoneda lemma to replace the 'point' idea with that of treating an object, such as S, as 'as good as' the representable functor it sets up.
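As a worked illustration of base change (a naive, set-level sketch added here, not taken from the article), the fiber product of f: X → S along g: T → S fits into a commutative square:
% Naive, set-level picture of base change along g; the scheme-theoretic fiber product is
% defined by the universal property of this square rather than by the set description.
\[
\begin{array}{ccc}
X \times_S T & \longrightarrow & X \\
\downarrow & & \downarrow \scriptstyle f \\
T & \xrightarrow{\,g\,} & S
\end{array}
\qquad
X \times_S T = \{(x,t) \in X \times T \mid f(x) = g(t)\}.
\]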
The Grothendieck–Riemann–Roch theorem from about 1956 is usually cited as the key moment for the introduction of this circle of ideas. The more classical types of Riemann–Roch theorem are recovered in the case where S is a single point (i.e. the final object in the working category C). Using other S is a way to have versions of theorems 'with parameters', i.e. allowing for continuous variation, for |
https://en.wikipedia.org/wiki/Thabit%20number | In number theory, a Thabit number, Thâbit ibn Qurra number, or 321 number is an integer of the form 3·2^n − 1 for a non-negative integer n.
The first few Thabit numbers are:
2, 5, 11, 23, 47, 95, 191, 383, 767, 1535, 3071, 6143, 12287, 24575, 49151, 98303, 196607, 393215, 786431, 1572863, ...
The 9th century mathematician, physician, astronomer and translator Thābit ibn Qurra is credited as the first to study these numbers and their relation to amicable numbers.
Properties
The binary representation of the Thabit number 3·2^n − 1 is n+2 digits long, consisting of "10" followed by n 1s.
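As a quick illustration (a minimal sketch, not part of the article), the following Python snippet generates the first few Thabit numbers and checks the binary-representation property just stated:

# Generate the first few Thabit numbers 3*2**n - 1 and verify that each
# binary representation is "10" followed by n ones (n+2 digits in total).
for n in range(10):
    t = 3 * 2**n - 1
    assert bin(t)[2:] == "10" + "1" * n  # e.g. n = 2: 11 -> "1011"
    print(n, t, bin(t)[2:])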
The first few Thabit numbers that are prime (Thabit primes or 321 primes):
2, 5, 11, 23, 47, 191, 383, 6143, 786431, 51539607551, 824633720831, ...
There are 67 known prime Thabit numbers. Their n values are:
0, 1, 2, 3, 4, 6, 7, 11, 18, 34, 38, 43, 55, 64, 76, 94, 103, 143, 206, 216, 306, 324, 391, 458, 470, 827, 1274, 3276, 4204, 5134, 7559, 12676, 14898, 18123, 18819, 25690, 26459, 41628, 51387, 71783, 80330, 85687, 88171, 97063, 123630, 155930, 164987, 234760, 414840, 584995, 702038, 727699, 992700, 1201046, 1232255, 2312734, 3136255, 4235414, 6090515, 11484018, 11731850, 11895718, 16819291, 17748034, 18196595, 18924988, 20928756, ...
The primes for 234760 ≤ n ≤ 3136255 were found by the distributed computing project 321 search.
In 2008, PrimeGrid took over the search for Thabit primes. It is still searching and has already found all currently known Thabit primes with n ≥ 4235414. It is also searching for primes of the form 3·2^n + 1; such primes are called Thabit primes of the second kind or 321 primes of the second kind.
The first few Thabit numbers of the second kind are:
4, 7, 13, 25, 49, 97, 193, 385, 769, 1537, 3073, 6145, 12289, 24577, 49153, 98305, 196609, 393217, 786433, 1572865, ...
The first few Thabit primes of the second kind are:
7, 13, 97, 193, 769, 12289, 786433, 3221225473, 206158430209, 6597069766657, 221360928884514619393, ...
Their n values are:
1, 2, |
https://en.wikipedia.org/wiki/Equivalent%20carbon%20content | The equivalent carbon content concept is used on ferrous materials, typically steel and cast iron, to determine various properties of the alloy when more than just carbon is used as an alloyant, which is typical. The idea is to convert the percentage of alloying elements other than carbon to the equivalent carbon percentage, because the iron-carbon phases are better understood than other iron-alloy phases. Most commonly this concept is used in welding, but it is also used when heat treating and casting cast iron.
Steel
In welding, equivalent carbon content (CE) is used to understand how the different alloying elements affect the hardness of the steel being welded. This is directly related to hydrogen-induced cold cracking, the most common weld defect for steel, so CE is most commonly used to determine weldability. Higher concentrations of carbon and of other alloying elements such as manganese, chromium, silicon, molybdenum, vanadium, copper, and nickel tend to increase hardness and decrease weldability. Each of these elements influences the hardness and weldability of the steel to a different degree, however, making a method of comparison necessary to judge the difference in hardness between two alloys made of different alloying elements. There are two commonly used formulas for calculating the equivalent carbon content: one from the American Welding Society (AWS), recommended for structural steels, and one based on the International Institute of Welding (IIW).
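The formulas themselves are not reproduced in this excerpt. As an illustrative sketch only, the commonly cited IIW expression is shown below in Python; the coefficients should be checked against the relevant standard, and the function name and example composition are invented for illustration.

def carbon_equivalent_iiw(c, mn, cr, mo, v, ni, cu):
    """Equivalent carbon content per the commonly cited IIW formula (assumed here):
    CE = C + Mn/6 + (Cr + Mo + V)/5 + (Ni + Cu)/15, all inputs in weight percent."""
    return c + mn / 6 + (cr + mo + v) / 5 + (ni + cu) / 15

# Hypothetical structural-steel composition (weight %), for illustration only.
ce = carbon_equivalent_iiw(c=0.18, mn=1.40, cr=0.05, mo=0.02, v=0.01, ni=0.05, cu=0.10)
print(f"CE = {ce:.3f}")  # per the AWS guidance below, values above about 0.40 suggest HAZ cracking risk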
The AWS states that for an equivalent carbon content above 0.40% there is a potential for cracking in the heat-affected zone (HAZ) on flame cut edges and welds. However, structural engineering standards rarely use CE, but rather limit the maximum percentage of certain alloying elements. This practice started before the CE concept existed, so it simply continues to be used. This has led to issues because certain high strength steels are now being used that have a CE |
https://en.wikipedia.org/wiki/Convex%20polytope | A convex polytope is a special case of a polytope, having the additional property that it is also a convex set contained in the n-dimensional Euclidean space ℝⁿ. Most texts use the term "polytope" for a bounded convex polytope, and the word "polyhedron" for the more general, possibly unbounded object. Others (including this article) allow polytopes to be unbounded. The terms "bounded/unbounded convex polytope" will be used below whenever the boundedness is critical to the discussed issue. Yet other texts identify a convex polytope with its boundary.
Convex polytopes play an important role both in various branches of mathematics and in applied areas, most notably in linear programming.
In the influential textbooks of Grünbaum and Ziegler on the subject, as well as in many other texts in discrete geometry, convex polytopes are often simply called "polytopes". Grünbaum points out that this is solely to avoid the endless repetition of the word "convex", and that the discussion should throughout be understood as applying only to the convex variety (p. 51).
A polytope is called full-dimensional if it is an n-dimensional object in ℝⁿ.
Examples
Many examples of bounded convex polytopes can be found in the article "polyhedron".
In the 2-dimensional case the full-dimensional examples are a half-plane, a strip between two parallel lines, an angle shape (the intersection of two non-parallel half-planes), a shape defined by a convex polygonal chain with two rays attached to its ends, and a convex polygon.
Special cases of an unbounded convex polytope are a slab between two parallel hyperplanes, a wedge defined by two non-parallel half-spaces, a polyhedral cylinder (infinite prism), and a polyhedral cone (infinite cone) defined by three or more half-spaces passing through a common point.
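As a concrete worked example (illustrative only, not from the article), the polyhedral cone mentioned last can be written explicitly as an intersection of half-spaces whose bounding hyperplanes pass through a common point, here the origin:
% Three half-spaces through (0,0,0); C is an unbounded, full-dimensional convex polytope (an infinite cone).
\[
C = \{(x, y, z) \in \mathbb{R}^3 : x \ge 0,\ y \ge 0,\ z - x - y \ge 0\}.
\]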
Definitions
A convex polytope may be defined in a number of ways, depending on what is more suitable for the problem at hand. Grünbaum's definition is in terms of a convex set of points in spac |
https://en.wikipedia.org/wiki/Quantum%20fluid | A quantum fluid refers to any system that exhibits quantum mechanical effects at the macroscopic level such as superfluids, superconductors, ultracold atoms, etc. Typically, quantum fluids arise in situations where both quantum mechanical effects and quantum statistical effects are significant.
Most matter is either solid or gaseous (at low densities) near absolute zero. However, for the cases of helium-4 and its isotope helium-3, there is a pressure range where they can remain liquid down to absolute zero because the amplitude of the quantum fluctuations experienced by the helium atoms is larger than the inter-atomic distances.
In the case of solid quantum fluids, only a fraction of the electrons or protons behave like a “fluid”. One prominent example is superconductivity, where quasi-particles consisting of pairs of electrons bound together by a phonon-mediated interaction act as bosons, which are then capable of collapsing into the ground state to establish a supercurrent with a resistivity near zero.
Derivation
Quantum mechanical effects become significant for physics in the range of the de Broglie wavelength. For condensed matter, this is when the de Broglie wavelength of a particle is greater than the spacing between the particles in the lattice that comprises the matter.
The de Broglie wavelength associated with a massive particle is
λ = h / p,
where h is the Planck constant and p is the particle's momentum. The momentum can be found from the kinetic theory of gases, where the average kinetic energy satisfies
p² / (2m) = (3/2) k_B T,
with m the particle mass and k_B the Boltzmann constant. Here, the temperature can be found as
T = p² / (3 m k_B).
Of course, we can replace the momentum here with the momentum derived from the de Broglie wavelength, p = h / λ, like so:
T = h² / (3 m k_B λ²).
Hence, we can say that quantum fluids will manifest at approximate temperature regions where λ ≳ d, where d is the lattice spacing (or inter-particle spacing). Mathematically, this is stated like so:
T ≲ h² / (3 m k_B d²).
It is easy to see how the above definition relates to the particle density, n. We can write
T ≲ h² n^(2/3) / (3 m k_B),
as for a three-dimensional lattice d ≈ n^(−1/3).
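To make the scale concrete, a small Python sketch (not part of the article) evaluates the crossover temperature T ≈ h²/(3·m·k_B·d²) for helium-4; the inter-atomic spacing of about 0.36 nm is an assumed, illustrative value.

import math

h = 6.626e-34                 # Planck constant, J·s
k_B = 1.381e-23               # Boltzmann constant, J/K
m_he4 = 4.0026 * 1.6605e-27   # mass of a helium-4 atom, kg
d = 3.6e-10                   # assumed inter-atomic spacing, m (illustrative)

# Crossover temperature below which the de Broglie wavelength exceeds the spacing.
T = h**2 / (3 * m_he4 * k_B * d**2)
print(f"T ≈ {T:.1f} K")  # ≈ 12 K with these numbers: the right order of magnitude for liquid helium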
The above temperature limit has different meaning depending on the quantum st |
https://en.wikipedia.org/wiki/Hamiltonian%20vector%20field | In mathematics and physics, a Hamiltonian vector field on a symplectic manifold is a vector field defined for any energy function or Hamiltonian. Named after the physicist and mathematician Sir William Rowan Hamilton, a Hamiltonian vector field is a geometric manifestation of Hamilton's equations in classical mechanics. The integral curves of a Hamiltonian vector field represent solutions to the equations of motion in the Hamiltonian form. The diffeomorphisms of a symplectic manifold arising from the flow of a Hamiltonian vector field are known as canonical transformations in physics and (Hamiltonian) symplectomorphisms in mathematics.
Hamiltonian vector fields can be defined more generally on an arbitrary Poisson manifold. The Lie bracket of two Hamiltonian vector fields corresponding to functions f and g on the manifold is itself a Hamiltonian vector field, with the Hamiltonian given by the
Poisson bracket of f and g.
Definition
Suppose that (M, ω) is a symplectic manifold. Since the symplectic form ω is nondegenerate, it sets up a fiberwise-linear isomorphism
TM → T*M, v ↦ ω(v, ·),
between the tangent bundle TM and the cotangent bundle T*M, with the inverse
T*M → TM.
Therefore, one-forms on a symplectic manifold may be identified with vector fields, and every differentiable function H : M → ℝ determines a unique vector field X_H, called the Hamiltonian vector field with the Hamiltonian H, by defining for every vector field Y on M,
dH(Y) = ω(X_H, Y).
Note: Some authors define the Hamiltonian vector field with the opposite sign. One has to be mindful of varying conventions in physical and mathematical literature.
Examples
Suppose that M is a 2n-dimensional symplectic manifold. Then locally, one may choose canonical coordinates (q_1, …, q_n, p_1, …, p_n) on M, in which the symplectic form is expressed as:
ω = Σ_i dq_i ∧ dp_i,
where d denotes the exterior derivative and ∧ denotes the exterior product. Then the Hamiltonian vector field with Hamiltonian H takes the form:
X_H = (∂H/∂p_i, −∂H/∂q_i) = Ω dH,
where Ω is a 2n × 2n square matrix
Ω = [ 0 I_n ; −I_n 0 ],
and dH is the column vector of partial derivatives (∂H/∂q_i, ∂H/∂p_i).
The matrix Ω is frequently denoted with J.
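A minimal computational sketch (illustrative, not from the article): for the harmonic-oscillator Hamiltonian H = (p² + q²)/2 with one degree of freedom, the components X_H = (∂H/∂p, −∂H/∂q) can be computed symbolically, here with sympy.

import sympy as sp

q, p = sp.symbols("q p", real=True)
H = (p**2 + q**2) / 2  # harmonic-oscillator Hamiltonian (illustrative choice)

# Components of the Hamiltonian vector field in canonical coordinates:
# dq/dt = dH/dp, dp/dt = -dH/dq (Hamilton's equations).
X_H = (sp.diff(H, p), -sp.diff(H, q))
print(X_H)  # (p, -q): circular flow in the (q, p) plane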
Suppose that M = ℝ^(2n) is the 2n-dimens |
https://en.wikipedia.org/wiki/Reproductive%20biology | Reproductive biology includes both sexual and asexual reproduction.
Reproductive biology includes a wide number of fields:
Reproductive systems
Endocrinology
Sexual development (Puberty)
Sexual maturity
Reproduction
Fertility
Human reproductive biology
Endocrinology
Human reproductive biology is primarily controlled through hormones, which send signals to the human reproductive structures to influence growth and maturation. These hormones are secreted by endocrine glands, and spread to different tissues in the human body. In humans, the pituitary gland synthesizes hormones used to control the activity of endocrine glands.
Reproductive systems
Internal and external organs are included in the reproductive system. There are two reproductive systems, the male and the female, each containing different organs from the other. These systems work together in order to produce offspring.
Female reproductive system
The female reproductive system includes the structures involved in ovulation, fertilization, development of an embryo, and birth.
These structures include:
Ovaries
Oviducts
Uterus
Vagina
Mammary Glands
Estrogen is one of the sexual reproductive hormones that aid in the sexual reproductive system of the female.
Male reproductive system
The male reproductive system includes testes, rete testis, efferent ductules, epididymis, sex accessory glands, sex accessory ducts and external genitalia.
Testosterone, an androgen, although present in both males and females, is relatively more abundant in males. Testosterone serves as one of the major sexual reproductive hormones in the male reproductive system. However, the enzyme aromatase is present in the testes and is capable of synthesizing estrogens from androgens. Estrogens are present in high concentrations in luminal fluids of the male reproductive tract. Androgen and estrogen receptors are abundant in epithelial cells of the male reproductive tract.
Animal reproductive biology
Animal reproduction oc |
https://en.wikipedia.org/wiki/WLIW%20%28TV%29 | WLIW (channel 21) is a secondary PBS member television station licensed to Garden City, New York, United States, serving the New York City television market. It is owned by The WNET Group alongside the area's primary PBS member, Newark, New Jersey–licensed WNET (channel 13); two Class A stations which share spectrum with WNET, WNDT-CD (channel 14) and WMBQ-CD (channel 46); and WLIW-FM (88.3) in Southampton. Through an outsourcing agreement, The WNET Group also operates New Jersey's PBS state network NJ PBS and the website NJ Spotlight.
WLIW and WNET share studios at One Worldwide Plaza in Midtown Manhattan with an auxiliary street-level studio in the Lincoln Center complex on Manhattan's Upper West Side. WLIW's transmitter is located at One World Trade Center; the station also maintains a production studio at its former transmitter site in Plainview, New York. WLIW's multiplex is New York's high-power ATSC 3.0 (NextGen TV) television station and also broadcasts WMBQ-CD.
WLIW was established in 1969 as the first television station on Long Island. Originally operated on a tight budget, the station had no permanent studio facilities for nearly a decade. In the 1980s and 1990s, increasing cable television coverage led to the expansion of WLIW into a regional service that was the smaller competitor to WNET, the nation's largest public TV station, and the station increased its own programming efforts. However, some critics felt that this shift deemphasized the station's Long Island identity. In 2003, WLIW and WNET merged, completing an 18-month process. As part of the WNET Group, WLIW maintains a separate vice president and general manager, Diane Masciale, who is in charge of the entire group's locally oriented television production.
History
Early history
The Nassau County Board of Supervisors voted on February 14, 1968, to provide funding to set up an educational television station on Long Island, thereby also accessing matching funds from the New York state governme |
https://en.wikipedia.org/wiki/Inner%20loop | In computer programs, an important form of control flow is the loop which causes a block of code to be executed more than once. A common idiom is to have a loop nested inside another loop, with the contained loop being commonly referred to as the inner loop.
Background
Two main types of loop exist, and they can be nested within each other to any depth required. The two types are the for loop and the while loop. The two are slightly different but may be interchanged. Research has shown that the performance of a loop that contains an inner loop differs from that of a loop without an inner loop. Indeed, even the performance of two loops with different types of inner loop, where one is a for loop and the other a while loop, differs.
It was observed that more computations are performed per unit time when an inner for loop is involved than otherwise. This implies that, given the same number of computations to perform, the version with an inner for loop will finish faster than the one without it. This is a machine- or platform-independent loop optimization, and it was observed across the several programming languages and compilers or interpreters tested. The case of a while loop as the inner loop performed badly, in some cases even slower than a loop without any inner loop. The two examples below, written in Python, present a while loop with an inner for loop and a while loop without any inner loop. Although both while loops terminate after the same overall number of tickets has been examined, the first example will finish faster because of the inner for loop. The variable innermax is a fraction of the max_ticket_no variable in the first example.
def find_jackpot_with_inner_for_loop(max_ticket_no, innermax, jackpot_no):
    # Wrapper added so the fragment is runnable (name illustrative); the inner for loop
    # scans a block of innermax tickets per outer iteration.
    ticket_no = 0
    while ticket_no * innermax < max_ticket_no:
        for j in range(0, innermax):
            if (ticket_no * innermax + j) == jackpot_no:
                return
        ticket_no += 1
def find_jackpot_without_inner_loop(max_ticket_no, jackpot_no):
    # Wrapper added so the fragment is runnable (name illustrative); the plain while loop
    # checks one ticket per iteration.
    ticket_no = 0
    while ticket_no < max_ticket_no:
        if ticket_no == jackpot_no:
            return
        ticket_no += 1 |
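A hedged usage sketch (not part of the original text): the two fragments above, wrapped as functions, can be compared with Python's timeit module. The parameter values below are arbitrary illustrations, with the jackpot placed out of range so both loops run to completion.

import timeit

max_ticket_no, innermax, jackpot_no = 10_000_000, 1_000, 10_000_001  # jackpot never found: full scan

t_nested = timeit.timeit(lambda: find_jackpot_with_inner_for_loop(max_ticket_no, innermax, jackpot_no), number=3)
t_flat = timeit.timeit(lambda: find_jackpot_without_inner_loop(max_ticket_no, jackpot_no), number=3)
print(f"with inner for loop: {t_nested:.2f} s, without: {t_flat:.2f} s")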
https://en.wikipedia.org/wiki/Marine%20geology | Marine geology or geological oceanography is the study of the history and structure of the ocean floor. It involves geophysical, geochemical, sedimentological and paleontological investigations of the ocean floor and coastal zone. Marine geology has strong ties to geophysics and to physical oceanography.
Marine geological studies were of extreme importance in providing the critical evidence for sea floor spreading and plate tectonics in the years following World War II. The deep ocean floor is the last essentially unexplored frontier and detailed mapping in support of both military (submarine) objectives and economic (petroleum and metal mining) objectives drives the research.
Overview
The Ring of Fire around the Pacific Ocean with its attendant intense volcanism and seismic activity poses a major threat for disastrous earthquakes, tsunamis and volcanic eruptions. Any early warning systems for these disastrous events will require a more detailed understanding of marine geology of coastal and island arc environments.
The study of littoral and deep sea sedimentation and the precipitation and dissolution rates of calcium carbonate in various marine environments has important implications for global climate change.
The discovery and continued study of mid-ocean rift zone volcanism and hydrothermal vents, first in the Red Sea and later along the East Pacific Rise and the Mid-Atlantic Ridge systems were and continue to be important areas of marine geological research. The extremophile organisms discovered living within and adjacent to those hydrothermal systems have had a pronounced impact on our understanding of life on Earth and potentially the origin of life within such an environment.
Oceanic trenches are hemispheric-scale long but narrow topographic depressions of the sea floor. They also are the deepest parts of the ocean floor.
Mariana Trench
The Mariana Trench (or Marianas Trench) is the deepest known submarine trench, and the deepest location in the Eart |
https://en.wikipedia.org/wiki/139%20%28number%29 | 139 (one hundred [and] thirty-nine) is the natural number following 138 and preceding 140.
In mathematics
139 is the 34th prime number. It is a twin prime with 137. Because 141 is a semiprime, 139 is a Chen prime. 139 is the smallest prime before a prime gap of length 10.
This number is the sum of five consecutive prime numbers (19 + 23 + 29 + 31 + 37).
It is the smallest factor of 64079, which is the smallest Lucas number with prime index that is not prime. It is also the smallest prime factor of one more than the product of the first nine terms of the Euclid–Mullin sequence, which makes 139 the tenth term of that sequence.
139 is a happy number and a strictly non-palindromic number.
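The arithmetic facts above are straightforward to verify; the following Python sketch (added for illustration, using sympy) checks them:

from sympy import isprime, prime, nextprime, factorint

assert prime(34) == 139                    # 139 is the 34th prime
assert isprime(137)                        # twin prime pair (137, 139)
assert sum(factorint(141).values()) == 2   # 141 is a semiprime, so 139 is a Chen prime
assert nextprime(139) - 139 == 10          # a prime gap of length 10 follows 139
assert 19 + 23 + 29 + 31 + 37 == 139       # sum of five consecutive primes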
In the military
RUM-139 VL-ASROC is a United States Navy ASROC anti-submarine missile
was a United States Navy Admirable-class minesweeper during World War II
was a United States Navy Haskell-class attack transport during World War II
was a United States Navy destroyer during World War II
was a United States Navy transport ship during World War I and World War II
was a tanker loaned to the Soviet Union during World War II, then returned to the United States in 1944
was a United States Navy cargo ship during World War II
was a United States Navy Des Moines-class heavy cruiser following World War II
was a United States Navy Wickes-class destroyer during World War II
In transportation
British Rail Class 139 is the TOPS classification assigned to the lightweight railcars by West Midlands Trains on the Stourbridge Town Branch Line
Fiat M139 platform is the next-generation premium rear wheel drive automobile platform from Fiat
London Buses route 139 is a Transport for London contracted bus route in London
In other fields
139 is also:
The year AD 139 or 139 BC
139 AH is a year in the Islamic calendar that corresponds to 756 – 757 CE.
139 Juewa is a large and dark main belt asteroid discovered in 1874
The atomic number of untriennium, an unsynthesized chemical element
Gull Lake No. 139 is a rural municipality in Saska |
https://en.wikipedia.org/wiki/146%20%28number%29 | 146 (one hundred [and] forty-six) is the natural number following 145 and preceding 147.
In mathematics
146 is an octahedral number, the number of spheres that can be packed into a regular octahedron with six spheres along each edge. For an octahedron with seven spheres along each edge, the number of spheres on the surface of the octahedron is again 146. It is also possible to arrange 146 disks in the plane into an irregular octagon with six disks on each side, making 146 an octo number.
There is no integer with exactly 146 coprimes less than it, so 146 is a nontotient. It is also never the difference between an integer and the total of coprimes below it, so it is a noncototient. And it is not the sum of proper divisors of any number, making it an untouchable number.
There are 146 connected partially ordered sets with four labeled elements.
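Two of the facts above are easy to check numerically; the following Python sketch (added for illustration, using sympy's totient) does so:

from sympy import totient

n = 6
assert n * (2 * n**2 + 1) // 3 == 146   # octahedral number O_6 = n(2n^2 + 1)/3

# Nontotient check: no integer m in this search range has exactly 146 coprimes below it
# (146 is in fact a nontotient).
assert all(totient(m) != 146 for m in range(1, 22_000))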
See also
146 (disambiguation) |
https://en.wikipedia.org/wiki/Envenomation | Envenomation is the process by which venom is injected by the bite or sting of a venomous animal.
Many kinds of animals, including mammals (e.g., the northern short-tailed shrew, Blarina brevicauda), reptiles (e.g., the king cobra), spiders (e.g., black widows), insects (e.g., wasps), and fish (e.g., stone fish) employ venom for hunting and for self-defense.
In particular, snakebite envenoming is considered a neglected tropical disease resulting in >100,000 deaths and maiming >400,000 people per year.
Mechanisms
Some venoms are applied externally, especially to sensitive tissues such as the eyes, but most venoms are administered by piercing the skin of the victim. Venom in the saliva of the Gila monster and some other reptiles enters prey through bites of grooved teeth. More commonly animals have specialized organs such as hollow teeth (fangs) and tubular stingers that penetrate the prey's skin, whereupon muscles attached to the attacker's venom reservoir squirt venom deep within the victim's body tissue. For example, the fangs of venomous snakes are connected to a venom gland by means of a duct. Death may occur as a result of bites or stings. The rate of envenoming is described as the likelihood of venom successfully entering a system upon bite or sting.
Mechanisms of snake envenomation
Snakes administer venom to their target by piercing the target's skin with specialized organs known as fangs. Snakebites can be broken into four stages; strike launch, fang erection, fang penetration, and fang withdrawal. Snakes have a venom gland connected to a duct and subsequent fangs. The fangs have hollow tubes with grooved sides that allow venom to flow within them. During snake bites, the fangs penetrate the skin of the target and the fang sheath, a soft tissue organ surrounding the fangs, is retracted. The fang sheath retraction causes an increase in internal pressures. This pressure differential initiates venom flow in the venom delivery system. Larger snakes have been |
https://en.wikipedia.org/wiki/Gajim | Gajim is an instant messaging client for the XMPP protocol which uses the GTK toolkit. The name Gajim is a recursive acronym for Gajim's a jabber instant messenger. Gajim runs on Linux, BSD, macOS, and Microsoft Windows. Released under the GPL-3.0-only license, Gajim is free software. A 2009 round-up of similar software on Tom's Hardware found version 0.12.1 "the lightest and fastest jabber IM client".
Features
Gajim aims to be an easy to use and fully-featured XMPP client. Gajim uses GTK (PyGObject) as GUI library, which makes it cross-platform compatible. Some of its features:
Group chat support
Emojis, Avatars, File transfer
Systray icon, Spell checking
TLS, OpenPGP and end-to-end encryption support (OpenPGP not available under Windows until version 0.15),
Transport Registration support
Service Discovery including Nodes
Wikipedia, dictionary and search engine lookup
Multiple accounts support
D-Bus Capabilities
XML Console
Jingle voice and video support (using the "python-farstream" library, no support in Windows yet)
OMEMO encryption
HTTP file upload
Gajim is available in Basque, Bulgarian, Chinese, Croatian, Czech, English, Esperanto, French, German, Italian, Norwegian (Bokmål), Polish, Russian, Spanish, Slovak, Swedish, Ukrainian and others.
Third-party plugins
Gajim supports various third-party plugins (official list).
See also
Comparison of instant messaging clients |
https://en.wikipedia.org/wiki/Natural%20density | In number theory, natural density, also referred to as asymptotic density or arithmetic density, is one method to measure how "large" a subset of the set of natural numbers is. It relies chiefly on the probability of encountering members of the desired subset when combing through the interval [1, n] as n grows large.
Intuitively, it is thought that there are more positive integers than perfect squares, since every perfect square is already positive, and many other positive integers exist besides. However, the set of positive integers is not in fact larger than the set of perfect squares: both sets are infinite and countable and can therefore be put in one-to-one correspondence. Nevertheless, if one goes through the natural numbers, the squares become increasingly scarce. The notion of natural density makes this intuition precise for many, but not all, subsets of the naturals (see Schnirelmann density, which is similar to natural density but defined for all subsets of the natural numbers).
If an integer is randomly selected from the interval [1, n], then the probability that it belongs to A is the ratio of the number of elements of A in [1, n] to the total number of elements in [1, n]. If this probability tends to some limit as n tends to infinity, then this limit is referred to as the asymptotic density of A. This notion can be understood as a kind of probability of choosing a number from the set A. Indeed, the asymptotic density (as well as some other types of densities) is studied in probabilistic number theory.
Definition
A subset A of positive integers has natural density α if the proportion of elements of A among all natural numbers from 1 to n converges to α as n tends to infinity.
More explicitly, if one defines for any natural number n the counting function a(n) as the number of elements of A less than or equal to n, then the natural density of A being exactly α means that
a(n)/n → α as n → ∞.
It follows from the definition that if a set A has natural density α then 0 ≤ α ≤ 1.
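As a quick numerical illustration (not from the article), the counting-function definition can be applied to the set of perfect squares, whose natural density is 0:

import math

def counting_function(n):
    # a(n): number of perfect squares less than or equal to n
    return math.isqrt(n)

for n in (10**2, 10**4, 10**6, 10**8):
    print(n, counting_function(n) / n)  # the ratio a(n)/n tends to 0 as n grows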
Upper and lower asymptotic density
Let A be a subset of the set of nat |
https://en.wikipedia.org/wiki/Dysgenics | Dysgenics (also known as cacogenics) is the decrease in prevalence of traits deemed to be either socially desirable or well adapted to their environment due to selective pressure disfavoring the reproduction of those traits.
The adjective "dysgenic" is the antonym of "eugenic". In 1915 the term was used by David Starr Jordan to describe the supposed deleterious effects of modern warfare on group-level genetic fitness because of its tendency to kill physically healthy men while preserving the disabled at home. Similar concerns had been raised by early eugenicists and social Darwinists during the 19th century, and continued to play a role in scientific and public policy debates throughout the 20th century. More recent concerns about supposed dysgenic effects in human populations have been advanced by the controversial psychologist Richard Lynn, notably in his 1996 book Dysgenics: Genetic Deterioration in Modern Populations, which argued that a reduction in selection pressures and decreased infant mortality since the Industrial Revolution have resulted in an increased propagation of deleterious traits and genetic disorders.
Despite these concerns, genetic studies have shown no evidence for dysgenic effects in human populations.
In fiction
Cyril M. Kornbluth's 1951 short story "The Marching Morons" is an example of dysgenic fiction, describing a man who accidentally ends up in the distant future and discovers that dysgenics has resulted in mass stupidity. Mike Judge's 2006 film Idiocracy has the same premise, with the main character the subject of a military hibernation experiment that goes awry, taking him 500 years into the future. While in "The Marching Morons", civilization is kept afloat by a small group of dedicated geniuses, in Idiocracy, voluntary childlessness among high-IQ couples leaves only automated systems to fill that role.
See also
Devolution (biology)
Flynn effect
Heritability of IQ
List of congenital disorders
List of biological development diso |
https://en.wikipedia.org/wiki/Teen%20Age%20Message | The Teen Age Message (TAM) was a series of interstellar radio transmissions sent from the Yevpatoria Planetary Radar to six solar-type stars during August–September 2001. The structure of the TAM was suggested by Alexander Zaitsev, Chief Scientist at Russia's Institute of Radio-engineering and Electronics. The message's content and target stars were selected by a group of teens from four Russian cities, who collaborated in person and via the Internet. Each transmission comprised three sections: a sounding, a live theremin concert, and digital data including images and text. TAM was humanity's fourth Active SETI broadcast and the first musical interstellar radio message.
Overview
Zaitsev's proposal for a musical message – the "First Theremin Concert for Extraterrestrials" – was submitted to the Arecibo Observatory in July 2000. It was rejected amid concerns over the dangers posed by advertising the presence of humanity to unknown and possibly highly advanced civilizations. After another unsuccessful attempt to garner support, the project was backed by the Yevpatoria RT-70 radio telescope with funding from the Education Department of Moscow. Unlike the previous digital-only messages Arecibo-1974 and Cosmic Call 1, TAM had a three-part structure, each containing different forms of information. Such structure was suggested by Alexander Zaitsev, and was intended to make the message easier to detect and interpret. The three elements of each transmission were:
A coherent sounding signal with slow Doppler wavelength tuning to imitate transmission from the Sun's center. This signal was transmitted in order to help extraterrestrials detect the TAM and diagnose the radio propagation effect of the interstellar medium.
Analog sound output from a theremin. This electric musical instrument produces a quasi-sinusoidal signal which is easily extracted from background noise. There were seven musical compositions in the "First Theremin Concert for Extraterrestrials".
Binary digit |
https://en.wikipedia.org/wiki/RS-449 | The RS-449 specification, also known as EIA-449 or TIA-449, defines the functional and mechanical characteristics of the interface between data terminal equipment, typically a computer, and data communications equipment, typically a modem or terminal server. The full title of the standard is EIA-449 General Purpose 37-Position and 9-Position Interface for Data Terminal Equipment and Data Circuit-Terminating Equipment Employing Serial Binary Data Interchange.
449 was part of an effort to replace RS-232C, offering much higher performance and longer cable lengths while using the same DB-25 connectors. This was initially split into two closely related efforts, RS-422 and RS-423. As feature creep set in, the number of required pins began to grow beyond what a DB-25 could handle, and the RS-449 effort started to define a new connector.
449 emerged as an unwieldy system using a large DC-37 connector along with a separate DE-9 connector if the 422 protocol was used. The resulting cable mess was already dismissed as hopeless before the standard was even finalized. The effort was eventually abandoned in favor of RS-530, which used a single DB-25 connector.
Background
During the late 1970s, the EIA began developing two new serial data standards to replace RS-232. RS-232 had a number of issues that limited its performance and practicality. Among these was the relatively large voltages used for signalling, +5 and -5V for mark and space. To supply these, a +12V power supply was typically required, which made it somewhat difficult to implement in a market that was rapidly being dominated by +5/0V transistor-transistor logic (TTL) circuitry and even lower-voltage CMOS implementations. These high voltages and unbalanced communications also resulted in relatively short cable lengths, nominally set to a maximum of , although in practice they could be somewhat longer if running at slower speeds.
The reason for the large voltages was due to ground voltages. RS-232 included both a pr |
https://en.wikipedia.org/wiki/Services%20computing | Services Computing has become a cross-discipline that covers the science and technology of bridging the gap between business services and IT services. The underlying technology suite includes Web services and service-oriented architecture (SOA), cloud computing, business consulting methodology and utilities, business process modeling, transformation and integration. This scope of Services Computing covers the whole life-cycle of service provision that includes business componentization, services modeling, services creation, services realization, services annotation, services deployment, services discovery, services composition, services delivery, service-to-service collaboration, services monitoring, services optimization, as well as services management. The goal of Services Computing is to enable IT services and computing technology to perform business services more efficiently and effectively. |
https://en.wikipedia.org/wiki/Pyroelectric%20fusion | Pyroelectric fusion refers to the technique of using pyroelectric crystals to generate high strength electrostatic fields to accelerate deuterium ions (tritium might also be used someday) into a metal hydride target also containing deuterium (or tritium) with sufficient kinetic energy to cause these ions to undergo nuclear fusion. It was reported in April 2005 by a team at UCLA. The scientists used a pyroelectric crystal heated from −34 to 7 °C (−29 to 45 °F), combined with a tungsten needle to produce an electric field of about 25 gigavolts per meter to ionize and accelerate deuterium nuclei into an erbium deuteride target. Though the energy of the deuterium ions generated by the crystal has not been directly measured, the authors used 100 keV (a temperature of about 10⁹ K) as an estimate in their modeling. At these energy levels, two deuterium nuclei can fuse to produce a helium-3 nucleus, a 2.45 MeV neutron and bremsstrahlung. Although it makes a useful neutron generator, the apparatus is not intended for power generation since it requires far more energy than it produces.
History
The process of light ion acceleration using electrostatic fields and deuterium ions to produce fusion in solid deuterated targets was first demonstrated by Cockcroft and Walton in 1932 (see Cockcroft–Walton generator). That process is used in miniaturized versions of their original accelerator, in the form of small sealed tube neutron generators, for petroleum exploration.
The process of pyroelectricity has been known from ancient times. The first use of a pyroelectric field to accelerate deuterons was in a 1997 experiment conducted by Drs. V.D. Dougar Jabon, G.V. Fedorovich, and N.V. Samsonenko. This group was the first to utilize a lithium tantalate (LiTaO3) pyroelectric crystal in fusion experiments.
The novel idea with the pyroelectric approach to fusion is in its application of the pyroelectric effect to generate accelerating electric fields. This is done by heating the crystal from − |
https://en.wikipedia.org/wiki/LaCie | LaCie (English: "The Company") is an American-French computer hardware company specializing in external hard drives, RAID arrays, optical drives, flash drives, and computer monitors. The company markets several lines of hard drives with capacities of up to many terabytes of data, with a choice of interfaces (FireWire 400, FireWire 800, eSATA, USB 2.0, USB 3.0, Thunderbolt, and Ethernet). LaCie also has a series of mobile bus-powered hard drives.
LaCie's computer display product line is targeted specifically to graphics professionals, with an emphasis on color matching.
Company history
LaCie began life as two separate computer storage companies: in 1989 as électronique d2 in Paris, France, and in 1987 as LaCie in Tigard, Oregon (later Portland, Oregon), U.S.
In 1995, électronique d2 acquired La Cie, and later adopted the name 'LaCie' for all of its operations. At the early founding stages of both companies, both focused their businesses on IT storage solutions, based on the SCSI interface standard for connecting external devices to computers. SCSI was adopted by Apple Computer as its main peripheral interface standard and the market for both LaCie and d2 became closely, but not exclusively, associated with the Macintosh platform.
In Europe, the French company électronique d2 was founded in 1989 by Pierre Fournier and Philippe Spruch, working from their apartment in the 14th arrondissement of Paris. d2's main activity was assembling hard drives in external SCSI casings and selling them as peripheral devices.
By 1990, the company had outgrown its small beginnings and moved to new 900 square meter premises in rue Watt, also in Paris. By this stage, designing casings was no longer sufficient for d2 to maintain a competitive edge, and so the company began to develop its own products and invest in R&D. d2 began to open subsidiaries around Europe, the first in London in 1991, followed by offices in Brussels and Copenhagen. The company began to expand its bus |
https://en.wikipedia.org/wiki/Slide%20attack | The slide attack is a form of cryptanalysis designed to deal with the prevailing idea that even weak ciphers can become very strong by increasing the number of rounds, which can ward off a differential attack. The slide attack works in such a way as to make the number of rounds in a cipher irrelevant. Rather than looking at the data-randomizing aspects of the block cipher, the slide attack works by analyzing the key schedule and exploiting weaknesses in it to break the cipher. The most common one is the keys repeating in a cyclic manner.
The attack was first described by David Wagner and Alex Biryukov. Bruce Schneier first suggested the term slide attack to them, and they used it in their 1999 paper describing the attack.
The only requirement for a slide attack to work on a cipher is that the cipher can be broken down into multiple rounds of an identical F function. This probably means that it has a cyclic key schedule. The F function must be vulnerable to a known-plaintext attack. The slide attack is closely related to the related-key attack.
The idea of the slide attack has roots in a paper published by Edna Grossman and Bryant Tuckerman in an IBM Technical Report in 1977. Grossman and Tuckerman demonstrated the attack on a weak block cipher named New Data Seal (NDS). The attack relied on the fact that the cipher has identical subkeys in each round, so the cipher had a cyclic key schedule with a cycle of only one key, which makes it an early version of the slide attack. A summary of the report, including a description of the NDS block cipher and the attack, is given in Cipher Systems (Beker & Piper, 1982).
The actual attack
First, we introduce some notation. In this section, assume the cipher takes n-bit blocks and has a key schedule using K1, …, Km as keys of any length.
The slide attack works by breaking the cipher up into identical permutation
functions, F. This F function may consist of more than one round
of the cipher; it is defined by the key-schedule. For example, i |
https://en.wikipedia.org/wiki/Charles%20Parsons%20%28philosopher%29 | Charles Dacre Parsons (born April 13, 1933) is an American philosopher best known for his work in the philosophy of mathematics and the study of the philosophy of Immanuel Kant. He is professor emeritus at Harvard University.
Life and career
Parsons is a son of the famous Harvard sociologist Talcott Parsons. He earned his Ph.D. in philosophy at Harvard University in 1961, under the direction of Burton Dreben and Willard Van Orman Quine. He taught for many years at Columbia University before moving to Harvard University in 1989. He retired in 2005 as the Edgar Pierce professor of philosophy, a position formerly held by Quine.
He is an elected Fellow of the American Academy of Arts and Sciences and the Norwegian Academy of Science and Letters.
Among his former doctoral students are Michael Levin, James Higginbotham, Peter Ludlow, Gila Sher, Øystein Linnebo, Richard Tieszen, and Mark van Atten.
In 2017, Parsons held the Gödel Lecture titled Gödel and the universe of sets.
Philosophical work
In addition to his work in logic and the philosophy of mathematics, Parsons was an editor, with Solomon Feferman and others, of the posthumous works of Kurt Gödel. He has also written on historical figures, especially Immanuel Kant, Gottlob Frege, Kurt Gödel, and Willard Van Orman Quine.
Works
Books
1983. Mathematics in Philosophy: Selected Essays. Ithaca, N.Y.: Cornell Univ. Press.
2008. Mathematical Thought and its Objects. Cambridge Univ. Press.
2012. From Kant to Husserl: Selected Essays. Cambridge, Massachusetts, and London: Harvard Univ. Press.
2014a. Philosophy of Mathematics in the Twentieth Century: Selected Essays. Cambridge, Massachusetts, and London: Harvard Univ. Press.
Selected articles
1987. "Developing Arithmetic in Set Theory without infinity: Some Historical Remarks". History and Philosophy of Logic, vol. 8, pp. 201–213.
1990a. "The Uniqueness of the Natural Numbers". Iyyun, vol. 39, pp. 13–44. ISSN 0021-3306.
1990b. "The Structuralist |
https://en.wikipedia.org/wiki/Diamond-like%20carbon | Diamond-like carbon (DLC) is a class of amorphous carbon material that displays some of the typical properties of diamond. DLC is usually applied as coatings to other materials that could benefit from such properties.
DLC exists in seven different forms. All seven contain significant amounts of sp³ hybridized carbon atoms. The reason that there are different types is that even diamond can be found in two crystalline polytypes. The more common one uses a cubic lattice, while the less common one, lonsdaleite, has a hexagonal lattice. By mixing these polytypes at the nanoscale, DLC coatings can be made that at the same time are amorphous, flexible, and yet purely sp³ bonded "diamond". The hardest, strongest, and slickest is tetrahedral amorphous carbon (ta-C). Ta-C can be considered to be the "pure" form of DLC, since it consists almost entirely of sp³ bonded carbon atoms. Fillers such as hydrogen, graphitic sp² carbon, and metals are used in the other 6 forms to reduce production expenses or to impart other desirable properties.
The various forms of DLC can be applied to almost any material that is compatible with a vacuum environment.
History
In 2006, the market for outsourced DLC coatings was estimated as about €30,000,000 in the European Union.
In 2011, researchers at Stanford University announced a super-hard amorphous diamond created under conditions of ultrahigh pressure. The material lacks the crystalline structure of diamond but has the light weight characteristic of carbon.
In 2021, Chinese researchers announced AM-III, a super-hard, fullerene-based form of amorphous carbon. It is also a semiconductor with a bandgap range of 1.5 to 2.2 eV. The material demonstrated a hardness of 113 GPa on a Vickers hardness test, versus diamond's rating of around 70 to 100 GPa. It was hard enough to scratch the surface of a diamond.
Distinction from natural and synthetic diamond
Naturally occurring diamond is almost always found in the crystalline form with a purely cubic orientati |
https://en.wikipedia.org/wiki/Hilbert%27s%20eleventh%20problem | Hilbert's eleventh problem is one of David Hilbert's list of open mathematical problems posed at the Second International Congress of Mathematicians in Paris in 1900. A furthering of the theory of quadratic forms, he stated the problem as follows:
Our present knowledge of the theory of quadratic number fields puts us in a position to attack successfully the theory of quadratic forms with any number of variables and with any algebraic numerical coefficients. This leads in particular to the interesting problem: to solve a given quadratic equation with algebraic numerical coefficients in any number of variables by integral or fractional numbers belonging to the algebraic realm of rationality determined by the coefficients.
As stated by Kaplansky, "The 11th Problem is simply this: classify quadratic forms over algebraic number fields." This is exactly what Minkowski did for quadratic forms with fractional coefficients. A quadratic form (not a quadratic equation) is any polynomial in which the variables in each term have total degree exactly two. The general form of such a form in two variables is ax² + bxy + cy². (All coefficients must be whole numbers.)
A given quadratic form is said to represent a natural number if substituting specific numbers for the variables gives the number. Gauss and those who followed found that if we change variables in certain ways, the new quadratic form represented the same natural numbers as the old, but in a different, more easily interpreted form. He used this theory of equivalent quadratic forms to prove number theory results. Lagrange, for example, had shown that any natural number can be expressed as the sum of four squares. Gauss proved this using his theory of equivalence relations by showing that the quadratic form x² + y² + z² + w² represents all natural numbers. As mentioned earlier, Minkowski created and proved a similar theory for quadratic forms that had fractions as coefficients. Hilbert's eleventh problem asks for a similar theory. That is, a mode of classificati |
https://en.wikipedia.org/wiki/Specific%20rotation | In chemistry, specific rotation ([α]) is a property of a chiral chemical compound. It is defined as the change in orientation of monochromatic plane-polarized light, per unit distance–concentration product, as the light passes through a sample of a compound in solution. Compounds which rotate the plane of polarization of a beam of plane polarized light clockwise are said to be dextrorotary, and correspond with positive specific rotation values, while compounds which rotate the plane of polarization of plane polarized light counterclockwise are said to be levorotary, and correspond with negative values. If a compound is able to rotate the plane of polarization of plane-polarized light, it is said to be “optically active”.
Specific rotation is an intensive property, distinguishing it from the more general phenomenon of optical rotation. As such, the observed rotation (α) of a sample of a compound can be used to quantify the enantiomeric excess of that compound, provided that the specific rotation ([α]) for the enantiopure compound is known. The variance of specific rotation with wavelength—a phenomenon known as optical rotatory dispersion—can be used to find the absolute configuration of a molecule. The concentration of bulk sugar solutions is sometimes determined by comparison of the observed optical rotation with the known specific rotation.
Definition
The CRC Handbook of Chemistry and Physics defines specific rotation as:
For an optically active substance, defined by [α]^θ_λ = α/(γ·l), where α is the angle through which plane polarized light is rotated by a solution of mass concentration γ and path length l. Here θ is the Celsius temperature and λ the wavelength of the light at which the measurement is carried out.
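As a worked example (with invented, illustrative values), the defining relation can be rearranged to obtain a specific rotation from an observed rotation:

def specific_rotation(alpha_deg, conc_g_per_ml, path_dm):
    # [alpha] = alpha / (gamma * l), with concentration in g/mL and path length in dm
    return alpha_deg / (conc_g_per_ml * path_dm)

# Hypothetical measurement: +1.20 deg observed for a 0.010 g/mL solution in a 1 dm cell.
print(specific_rotation(1.20, 0.010, 1.0))  # +120 deg·mL·g^-1·dm^-1 (dextrorotatory)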
Values for specific rotation are reported in units of deg·mL·g−1·dm−1, which are typically shortened to just degrees, wherein the other components of the unit are tacitly assumed. These values should always be accompanied by information about the temperat |
https://en.wikipedia.org/wiki/Enantiomeric%20excess | In stereochemistry, enantiomeric excess (ee) is a measurement of purity used for chiral substances. It reflects the degree to which a sample contains one enantiomer in greater amounts than the other. A racemic mixture has an ee of 0%, while a single completely pure enantiomer has an ee of 100%. A sample with 70% of one enantiomer and 30% of the other has an ee of 40% (70% − 30%).
Definition
Enantiomeric excess is defined as the absolute difference between the mole fractions of the two enantiomers:
ee = |F_R − F_S|,
where F_R and F_S are the mole fractions of the two enantiomers (F_R + F_S = 1).
In practice, it is most often expressed as a percent enantiomeric excess, ee (%) = |F_R − F_S| × 100.
The enantiomeric excess can be determined in another way if we know the amount of each enantiomer produced. If one knows the moles R and S of each enantiomer produced, then:
ee (%) = |R − S| / (R + S) × 100.
Enantiomeric excess is used as one of the indicators of the success of an asymmetric synthesis. For mixtures of diastereomers, there are analogous definitions and uses for diastereomeric excess and percent diastereomeric excess.
As an example, a sample with 70 % of the R isomer and 30 % of the S isomer will have a percent enantiomeric excess of 40. This can also be thought of as a mixture of 40 % pure R with 60 % of a racemic mixture (which contributes 30 % R and 30 % S to the overall composition).
If given the enantiomeric excess ee of a mixture, the fraction of the main isomer, say R, can be determined using F_R = (1 + ee)/2, and the lesser isomer F_S = (1 − ee)/2.
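A small Python sketch (added for illustration) of the relations above, reusing the 70:30 example:

def enantiomeric_excess(moles_r, moles_s):
    # ee as a fraction: |R - S| / (R + S)
    return abs(moles_r - moles_s) / (moles_r + moles_s)

ee = enantiomeric_excess(moles_r=0.70, moles_s=0.30)
major_fraction = (1 + ee) / 2   # fraction of the main isomer
minor_fraction = (1 - ee) / 2   # fraction of the lesser isomer
print(ee, major_fraction, minor_fraction)  # 0.4, 0.7, 0.3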
A non-racemic mixture of two enantiomers will have a net optical rotation. It is possible to determine the specific rotation of the mixture and, with knowledge of the specific rotation of the pure enantiomer, the optical purity can be determined.
optical purity (%) = ([α]obs / [α]max) · 100
Ideally, the contribution of each component of the mixture to the total optical rotation is directly proportional to its mole fraction, and as a result the numerical value of the optical purity is identical to the enantiomeric excess. This has led to informal use the two terms as interchangeable, especially because optical purity |
https://en.wikipedia.org/wiki/Mid-ocean%20ridge | A mid-ocean ridge (MOR) is a seafloor mountain system formed by plate tectonics. It typically has a depth of about and rises about above the deepest portion of an ocean basin. This feature is where seafloor spreading takes place along a divergent plate boundary. The rate of seafloor spreading determines the morphology of the crest of the mid-ocean ridge and its width in an ocean basin.
The production of new seafloor and oceanic lithosphere results from mantle upwelling in response to plate separation. The melt rises as magma at the linear weakness between the separating plates, and emerges as lava, creating new oceanic crust and lithosphere upon cooling.
The first discovered mid-ocean ridge was the Mid-Atlantic Ridge, which is a spreading center that bisects the North and South Atlantic basins; hence the origin of the name 'mid-ocean ridge'. Most oceanic spreading centers are not in the middle of their hosting ocean basin but, regardless, are traditionally called mid-ocean ridges. Mid-ocean ridges around the globe are linked by plate tectonic boundaries and the trace of the ridges across the ocean floor appears similar to the seam of a baseball. The mid-ocean ridge system thus is the longest mountain range on Earth, reaching about .
Global system
The mid-ocean ridges of the world are connected and form the Ocean Ridge, a single global mid-oceanic ridge system that is part of every ocean, making it the longest mountain range in the world. The continuous mountain range is long (several times longer than the Andes, the longest continental mountain range), and the total length of the oceanic ridge system is long.
Description
Morphology
At the spreading center on a mid-ocean ridge, the depth of the seafloor is approximately . On the ridge flanks, the depth of the seafloor (or the height of a location on a mid-ocean ridge above a base-level) is correlated with its age (age of the lithosphere where depth is measured). The depth-age relation can be modeled by th |
https://en.wikipedia.org/wiki/Fusion%20gene | A fusion gene is a hybrid gene formed from two previously independent genes. It can occur as a result of translocation, interstitial deletion, or chromosomal inversion. Fusion genes have been found to be prevalent in all main types of human neoplasia. The identification of these fusion genes play a prominent role in being a diagnostic and prognostic marker.
History
The first fusion gene was described in cancer cells in the early 1980s. The finding was based on the discovery in 1960 by Peter Nowell and David Hungerford in Philadelphia of a small abnormal marker chromosome in patients with chronic myeloid leukemia—the first consistent chromosome abnormality detected in a human malignancy, later designated the Philadelphia chromosome. In 1973, Janet Rowley in Chicago showed that the Philadelphia chromosome had originated through a translocation between chromosomes 9 and 22, and not through a simple deletion of chromosome 22 as was previously thought. Several investigators in the early 1980s showed that the Philadelphia chromosome translocation led to the formation of a new BCR::ABL1 fusion gene, composed of the 3' part of the ABL1 gene in the breakpoint on chromosome 9 and the 5' part of a gene called BCR in the breakpoint in chromosome 22. In 1985 it was clearly established that the fusion gene on chromosome 22 produced an abnormal chimeric BCR::ABL1 protein with the capacity to induce chronic myeloid leukemia.
Oncogenes
It has been known for 30 years that the corresponding gene fusion plays an important role in tumorigenesis. Fusion genes can contribute to tumor formation because they can produce abnormal proteins that are much more active than those of non-fusion genes. Often, fusion genes are oncogenes that cause cancer; these include BCR-ABL, TEL-AML1 (ALL with t(12;21)), AML1-ETO (M2 AML with t(8;21)), and TMPRSS2-ERG with an interstitial deletion on chromosome 21, often occurring in prostate cancer.
In the case of TMPRSS2-ERG, by disrupting androgen receptor (AR) |
https://en.wikipedia.org/wiki/Male | Male (symbol: ♂) is the sex of an organism that produces the gamete (sex cell) known as sperm, which fuses with the larger female gamete, or ovum, in the process of fertilization.
A male organism cannot reproduce sexually without access to at least one ovum from a female, but some organisms can reproduce both sexually and asexually. Most male mammals, including male humans, have a Y chromosome, which codes for the production of larger amounts of testosterone to develop male reproductive organs.
In humans, the word male can also be used to refer to gender, in the social sense of gender role or gender identity. The use of "male" in regard to sex and gender has been subject to discussion.
Overview
The existence of separate sexes has evolved independently at different times and in different lineages, an example of convergent evolution. The repeated pattern is sexual reproduction in isogamous species with two or more mating types with gametes of identical form and behavior (but different at the molecular level) to anisogamous species with gametes of male and female types to oogamous species in which the female gamete is very much larger than the male and has no ability to move. There is a good argument that this pattern was driven by the physical constraints on the mechanisms by which two gametes get together as required for sexual reproduction.
Accordingly, sex is defined across species by the type of gametes produced (i.e.: spermatozoa vs. ova) and differences between males and females in one lineage are not always predictive of differences in another.
Male/female dimorphism between organisms or reproductive organs of different sexes is not limited to animals; male gametes are produced by chytrids, diatoms and land plants, among others. In land plants, female and male designate not only the female and male gamete-producing organisms and structures but also the structures of the sporophytes that give rise to male and female plants.
Evolution
The evolution of ani |
https://en.wikipedia.org/wiki/Thermodynamic%20versus%20kinetic%20reaction%20control | Thermodynamic reaction control or kinetic reaction control in a chemical reaction can decide the composition in a reaction product mixture when competing pathways lead to different products and the reaction conditions influence the selectivity or stereoselectivity. The distinction is relevant when product A forms faster than product B because the activation energy for product A is lower than that for product B, yet product B is more stable. In such a case A is the kinetic product and is favoured under kinetic control and B is the thermodynamic product and is favoured under thermodynamic control.
The conditions of the reaction, such as temperature, pressure, or solvent, affect which reaction pathway may be favored: either the kinetically controlled or the thermodynamically controlled one. Note this is only true if the activation energy of the two pathways differ, with one pathway having a lower Ea (energy of activation) than the other.
Prevalence of thermodynamic or kinetic control determines the final composition of the product when these competing reaction pathways lead to different products. The reaction conditions as mentioned above influence the selectivity of the reaction - i.e., which pathway is taken.
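As an illustration of the distinction, the minimal Python sketch below compares the product ratio predicted under each regime: under kinetic control the ratio follows the difference in activation energies (assuming equal Arrhenius pre-exponential factors), while under thermodynamic control it follows the difference in the products' Gibbs free energies. All numerical values are hypothetical.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def kinetic_ratio(Ea_A, Ea_B, T):
    """Ratio [A]/[B] under kinetic control, assuming equal Arrhenius
    pre-exponential factors, so the ratio of rate constants is
    exp(-(Ea_A - Ea_B)/RT)."""
    return math.exp(-(Ea_A - Ea_B) / (R * T))

def thermodynamic_ratio(G_A, G_B, T):
    """Ratio [A]/[B] at equilibrium (thermodynamic control),
    given the Gibbs free energies of the two products."""
    return math.exp(-(G_A - G_B) / (R * T))

# Hypothetical numbers (J/mol): A forms faster (lower activation energy)
# but B is more stable (lower free energy).
Ea_A, Ea_B = 50e3, 60e3
G_A, G_B = -10e3, -20e3

for T in (200.0, 400.0):
    print(f"T={T:.0f} K  kinetic [A]/[B] = {kinetic_ratio(Ea_A, Ea_B, T):.1f}  "
          f"thermodynamic [A]/[B] = {thermodynamic_ratio(G_A, G_B, T):.3f}")
```

With these assumed numbers, A dominates the mixture under kinetic control while B dominates at equilibrium, and the preference weakens as temperature rises.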
Asymmetric synthesis is a field in which the distinction between kinetic and thermodynamic control is especially important. Because pairs of enantiomers have, for all intents and purposes, the same Gibbs free energy, thermodynamic control will produce a racemic mixture by necessity. Thus, any catalytic reaction that provides product with nonzero enantiomeric excess is under at least partial kinetic control. (In many stoichiometric asymmetric transformations, the enantiomeric products are actually formed as a complex with the chirality source before the workup stage of the reaction, technically making the reaction a diastereoselective one. Although such reactions are still usually kinetically controlled, thermodynamic control is at least possible, in prin |
https://en.wikipedia.org/wiki/Cointegration | Cointegration is a statistical property of a collection of time series variables. First, all of the series must be integrated of order d (see Order of integration). Next, if a linear combination of this collection is integrated of order less than d, then the collection is said to be co-integrated. Formally, if (X,Y,Z) are each integrated of order d, and there exist coefficients a,b,c such that aX + bY + cZ is integrated of order less than d, then X, Y, and Z are cointegrated. Cointegration has become an important property in contemporary time series analysis. Time series often have trends—either deterministic or stochastic. In an influential paper, Charles Nelson and Charles Plosser (1982) provided statistical evidence that many US macroeconomic time series (like GNP, wages, employment, etc.) have stochastic trends.
Introduction
If two or more series are individually integrated (in the time series sense) but some linear combination of them has a lower order of integration, then the series are said to be cointegrated. A common example is where the individual series are first-order integrated (I(1)) but some (cointegrating) vector of coefficients exists to form a stationary linear combination of them. For instance, a stock market index and the price of its associated futures contract move through time, each roughly following a random walk. Testing the hypothesis that there is a statistically significant connection between the futures price and the spot price could now be done by testing for the existence of a cointegrated combination of the two series.
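As a concrete illustration, the following sketch (not from the article) simulates two I(1) series that share a common stochastic trend and applies the Engle–Granger cointegration test from statsmodels; the series construction and parameters are invented for the example.

```python
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(0)
n = 500
trend = np.cumsum(rng.normal(size=n))      # common random walk (stochastic trend)
x = trend + rng.normal(scale=0.5, size=n)  # I(1): shared trend plus stationary noise
y = 2.0 * trend + rng.normal(scale=0.5, size=n)

t_stat, p_value, _ = coint(y, x)           # Engle-Granger style test
print(f"t-statistic = {t_stat:.2f}, p-value = {p_value:.4f}")
# A small p-value rejects "no cointegration", as expected by construction,
# since y - 2x is stationary noise.
```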
History
The first to introduce and analyse the concept of spurious—or nonsense—regression was Udny Yule in 1926.
Before the 1980s, many economists used linear regressions on non-stationary time series data, which Nobel laureate Clive Granger and Paul Newbold showed to be a dangerous approach that could produce spurious correlation, since standard detrending techniques can result in data that are still non-stationary. Granger's 19 |
https://en.wikipedia.org/wiki/DNA%20fragmentation | DNA fragmentation is the separation or breaking of DNA strands into pieces. It can be done intentionally by laboratory personnel or by cells, or can occur spontaneously. Spontaneous or accidental DNA fragmentation is fragmentation that gradually accumulates in a cell. It can be measured by e.g. the Comet assay or by the TUNEL assay.
Its main unit of measurement is the DNA Fragmentation Index (DFI). A DFI of 20% or more significantly reduces the success rates after ICSI.
DNA fragmentation was first documented by Williamson in 1970 when he observed discrete oligomeric fragments occurring during cell death in primary neonatal liver cultures. He described the cytoplasmic DNA isolated from mouse liver cells after culture as characterized by DNA fragments with a molecular weight consisting of multiples of 135 kDa. This finding was consistent with the hypothesis that these DNA fragments were a specific degradation product of nuclear DNA.
Intentional
DNA fragmentation is often necessary prior to library construction or subcloning for DNA sequences. A variety of methods involving the mechanical breakage of DNA have been employed where DNA is fragmented by laboratory personnel. Such methods include sonication, needle shear, nebulisation, point-sink shearing and passage through a pressure cell.
Restriction digest is the intentional laboratory breaking of DNA strands. It is an enzyme-based treatment used in biotechnology to cut DNA into smaller strands in order to study fragment length differences among individuals or for gene cloning. This method fragments DNA either by the simultaneous cleavage of both strands, or by generation of nicks on each strand of dsDNA to produce dsDNA breaks.
Acoustic shearing is the transmission of high-frequency acoustic energy waves delivered to a DNA library. The transducer is bowl shaped so that the waves converge at the target of interest.
Nebulization forces DNA through a small hole in a nebulizer unit, which results in the formation |
https://en.wikipedia.org/wiki/Girdling | Girdling, also called ring-barking, is the circumferential removal or injury of the bark (consisting of cork cambium or "phellogen", phloem, cambium and sometimes also the xylem) of a branch or trunk of a woody plant. Girdling prevents the tree from sending nutrients from its foliage to its roots, resulting in the death of the tree over time, and can also prevent flow of nutrients in the other direction depending on how much of the xylem is removed. A branch completely girdled will fail and when the main trunk of a tree is girdled, the entire tree will die, if it cannot regrow from above to bridge the wound. Human practices of girdling include forestry, horticulture, and vandalism. Foresters use the practice of girdling to thin forests. Extensive cankers caused by certain fungi, bacteria or viruses can girdle a trunk or limb. Animals such as rodents will girdle trees by feeding on outer bark, often during winter under snow. Girdling can also be caused by herbivorous mammals feeding on plant bark and by birds and insects, both of which can effectively girdle a tree by boring rows of adjacent holes.
Orchardists use girdling as a cultural technique to yield larger fruit or to set fruit. In viniculture (grape cultivation) the technique is also called cincturing.
Forestry and horticulture
Like all vascular plants, trees use two vascular tissues for transportation of water and nutrients: the xylem (also known as the wood) and the phloem (the innermost layer of the bark). Girdling results in the removal of the phloem, and death occurs from the inability of the leaves to transport sugars (primarily sucrose) to the roots. In this process, the xylem is left untouched, and the tree can usually still temporarily transport water and minerals from the roots to the leaves. Trees normally sprout shoots below the wound; if not, the roots die. Death occurs when the roots can no longer produce ATP and transport nutrients upwards through the xylem. The formation of new shoots |
https://en.wikipedia.org/wiki/Electron%20crystallography | Electron crystallography is a method to determine the arrangement of atoms in solids using a transmission electron microscope (TEM). It can involve the use of high-resolution transmission electron microscopy images, electron diffraction patterns including convergent-beam electron diffraction or combinations of these. It has been successful in determining some bulk structures, and also surface structures. Two related methods are low-energy electron diffraction which has solved the structure of many surfaces, and reflection high-energy electron diffraction which is used to monitor surfaces often during growth.
Comparison with X-ray crystallography
It can complement X-ray crystallography for studies of very small crystals (<0.1 micrometers), whether inorganic or organic, and of proteins, such as membrane proteins, that cannot easily form the large 3-dimensional crystals required for that process. Protein structures are usually determined from either 2-dimensional crystals (sheets or helices), polyhedrons such as viral capsids, or dispersed individual proteins. Electrons can be used in these situations, whereas X-rays cannot, because electrons interact more strongly with atoms than X-rays do. Thus, X-rays will travel through a thin 2-dimensional crystal without diffracting significantly, whereas electrons can be used to form an image. Conversely, the strong interaction between electrons and protons makes thick (e.g. 3-dimensional > 1 micrometer) crystals impervious to electrons, which only penetrate short distances.
One of the main difficulties in X-ray crystallography is determining phases in the diffraction pattern. Because of the complexity of X-ray lenses, it is difficult to form an image of the crystal being diffracted, and hence phase information is lost. Fortunately, electron microscopes can resolve atomic structure in real space and the crystallographic structure factor phase information can be experimentally determined from an image's Fourier transform. The Fourier |
https://en.wikipedia.org/wiki/Clock%20skew | Clock skew (sometimes called timing skew) is a phenomenon in synchronous digital circuit systems (such as computer systems) in which the same sourced clock signal arrives at different components at different times due to gate or, in more advanced semiconductor technology, wire signal propagation delay. The instantaneous difference between the readings of any two clocks is called their skew.
The operation of most digital circuits is synchronized by a periodic signal known as a "clock" that dictates the sequence and pacing of the devices on the circuit. This clock is distributed from a single source to all the memory elements of the circuit, which for example could be registers or flip-flops. In a circuit using edge-triggered registers, when the clock edge or tick arrives at a register, the register transfers the register input to the register output, and these new output values flow through combinational logic to provide the values at register inputs for the next clock tick.
Ideally, the input to each memory element reaches its final value in time for the next clock tick so that the behavior of the whole circuit can be predicted exactly. The maximum speed at which a system can run must account for the variance that occurs between the various elements of a circuit due to differences in physical composition, temperature, and path length.
In a synchronous circuit, two registers, or flip-flops, are said to be "sequentially adjacent" if a logic path connects them. Given two sequentially adjacent registers Ri and Rj with clock arrival times at the source and destination register clock pins equal to TCi and TCj respectively, clock skew can be defined as: Tskew(i,j) = TCi − TCj.
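To make the definition concrete, here is a small illustrative sketch (not from the article) that computes the skew between two sequentially adjacent registers and checks the usual setup and hold conditions by comparing launch and capture edges directly; the timing parameters and numbers are all hypothetical, in nanoseconds.

```python
def check_path(t_clk, t_ci, t_cj, t_cq, t_logic_min, t_logic_max,
               t_setup, t_hold):
    """Setup/hold check for a register pair Ri -> Rj.

    t_ci, t_cj: clock arrival times at source and destination pins.
    Skew is taken as TCi - TCj, matching the definition above.
    """
    skew = t_ci - t_cj
    arrive_earliest = t_ci + t_cq + t_logic_min   # fastest data arrival at Rj
    arrive_latest = t_ci + t_cq + t_logic_max     # slowest data arrival at Rj
    setup_ok = arrive_latest <= t_cj + t_clk - t_setup   # before the next capture edge
    hold_ok = arrive_earliest >= t_cj + t_hold           # after the current capture edge
    return skew, setup_ok, hold_ok

print(check_path(t_clk=2.0, t_ci=0.0, t_cj=0.25, t_cq=0.2,
                 t_logic_min=0.3, t_logic_max=1.5, t_setup=0.15, t_hold=0.1))
# -> (-0.25, True, True): negative skew here relaxes setup but tightens hold.
```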
In circuit design
Clock skew can be caused by many different things, such as wire-interconnect length, temperature variations, variation in intermediate devices, capacitive coupling, material imperfections, and differences in input capacitance on the clock inputs of devices using the clock. As the clock rate of a ci |
https://en.wikipedia.org/wiki/Diffusion%20capacitance | Diffusion Capacitance is the capacitance that happens due to transport of charge carriers between two terminals of a device, for example, the diffusion of carriers from anode to cathode in a forward biased diode or from emitter to base in a forward-biased junction of a transistor. In a semiconductor device with a current flowing through it (for example, an ongoing transport of charge by diffusion) at a particular moment there is necessarily some charge in the process of transit through the device. If the applied voltage changes to a different value and the current changes to a different value, a different amount of charge will be in transit in the new circumstances. The change in the amount of transiting charge divided by the change in the voltage causing it is the diffusion capacitance. The adjective "diffusion" is used because the original use of this term was for junction diodes, where the charge transport was via the diffusion mechanism. See Fick's laws of diffusion.
To implement this notion quantitatively, at a particular moment in time let the voltage across the device be V. Now assume that the voltage changes with time slowly enough that at each moment the current is the same as the DC current that would flow at that voltage, say I = I(V) (the quasistatic approximation). Suppose further that the time to cross the device is the forward transit time τF. In this case the amount of charge in transit through the device at this particular moment, denoted Q, is given by
Q = I(V) τF.
Consequently, the corresponding diffusion capacitance Cdiff is
Cdiff = dQ/dV = τF dI(V)/dV.
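As a numerical illustration of the quasistatic result above, the sketch below assumes an ideal diode law I(V) = IS (exp(V/VT) − 1); the diode law, the saturation current, and the transit time are outside assumptions, not values from the text.

```python
import math

I_S = 1e-12      # saturation current, A (assumed)
V_T = 0.02585    # thermal voltage at ~300 K, V
tau_F = 1e-9     # forward transit time, s (assumed)

def diode_current(v):
    """Ideal diode law I(V) = I_S * (exp(V/V_T) - 1)."""
    return I_S * (math.exp(v / V_T) - 1.0)

def diffusion_capacitance(v):
    """C_diff = tau_F * dI/dV for the assumed diode law."""
    dI_dV = I_S * math.exp(v / V_T) / V_T
    return tau_F * dI_dV

v = 0.65
print(f"I = {diode_current(v):.3e} A, C_diff = {diffusion_capacitance(v):.3e} F")
# For a forward-biased diode dI/dV is roughly I/V_T, so C_diff ~ tau_F * I / V_T.
```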
In the event the quasi-static approximation does not hold, that is, for very fast voltage changes occurring in times shorter than the transit time , the equations governing time-dependent transport in the device must be solved to find the charge in transit, for example the Boltzmann equation. That problem is a subject of continuing research under the topic of non-quasistatic effects. See Liu
, and Gildenblat et al.
Notes |
https://en.wikipedia.org/wiki/Stag%20film | A stag film (also blue movie or smoker) is a type of pornographic film produced secretly in the first two-thirds of the 20th century. Typically, stag films had certain traits. They were brief in duration (about 12 minutes at most), were silent, depicted hardcore pornography and were produced clandestinely due to censorship laws. Stag films were screened for all-male audiences in fraternities or similar locations; observers offered a raucous collective response to the film, exchanging sexual banter and achieving sexual arousal. Stag films were often screened in brothels.
Film historians describe stag films as a primitive form of cinema because they were produced by anonymous and amateur artists. Today, many of these films have been archived by the Kinsey Institute; however, most stag films are in a state of decay and have no copyright, credits, or acknowledged authorship. The stag film era ended due to the beginnings of the sexual revolution in the 1960s in combination with the new home movie technologies of the post-World War I decades, such as 16 mm, 8 mm, and Super 8 film. Scholars at the Kinsey Institute believe there were approximately 2,000 films produced between 1915 and 1968.
American stag cinema in general has received scholarly attention first in the mid-seventies by mainstream scholars, such as in Di Lauro and Gerald Rabkin's Dirty Movies (1976), and more recently by feminist and gay cultural historians, such as in Linda Williams' Hard Core: Power Pleasure, and the "Frenzy of the Visible" (1999) and Thomas Waugh's Homosociality in the Classical American Stag Film: Off-Screen, On-screen (2001).
History
Before the abolition of film censorship in the United States and a general acceptance of the production of pornography, porn was an underground phenomenon. Public presentation of such films were itinerant and were via secret exhibitions in brothels or smoker houses; or the films might be rented for private usage. Stag films were an entirely clandestine p |
https://en.wikipedia.org/wiki/Bremermann%27s%20limit | Bremermann's limit, named after Hans-Joachim Bremermann, is a limit on the maximum rate of computation that can be achieved in a self-contained system in the material universe. It is derived from Einstein's mass-energy equivalency and the Heisenberg uncertainty principle, and is c^2/h ≈ 1.3563925 × 10^50 bits per second per kilogram.
This value establishes an asymptotic bound on adversarial resources when designing cryptographic algorithms, as it can be used to determine the minimum size of encryption keys or hash values required to create an algorithm that could never be cracked by a brute-force search. For example, a computer with the mass of the entire Earth operating at Bremermann's limit could perform approximately 10^75 mathematical computations per second. If one assumes that a cryptographic key can be tested with only one operation, then a typical 128-bit key could be cracked in under 10^−36 seconds. However, a 256-bit key (which is already in use in some systems) would take about two minutes to crack. Using a 512-bit key would increase the cracking time to approaching 10^72 years, without increasing the time for encryption by more than a constant factor (depending on the encryption algorithms used).
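As a rough check of the figures quoted above, the following sketch multiplies the limit by an assumed Earth mass and divides key-space sizes by the resulting rate; the Earth-mass value and the one-test-per-operation assumption are outside inputs.

```python
BREMERMANN = 1.3563925e50      # bits / s / kg, from the value quoted above
EARTH_MASS = 5.972e24          # kg (assumed)

ops_per_second = BREMERMANN * EARTH_MASS
print(f"ops/s ~ {ops_per_second:.2e}")          # roughly 1e75

for bits in (128, 256, 512):
    seconds = 2**bits / ops_per_second          # exhaustive search of the keyspace
    years = seconds / (365.25 * 24 * 3600)
    print(f"{bits}-bit keyspace: {seconds:.2e} s  (~{years:.2e} years)")
```

The output reproduces the orders of magnitude in the text: well under 10^−36 s for 128-bit keys, a couple of minutes for 256-bit keys, and on the order of 10^72 years for 512-bit keys.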
The limit has been further analysed in later literature as the maximum rate at which a system with a given energy spread can evolve into an orthogonal, and hence distinguishable, state.
In particular, Margolus and Levitin have shown that a quantum system with average energy E takes at least time h/(4E) to evolve into an orthogonal state.
However, it has been shown that access to quantum memory in principle allows computational algorithms that require arbitrarily small amount of energy/time per one elementary computation step.
See also
Margolus–Levitin theorem
Landauer's principle
Bekenstein bound
Kolmogorov complexity
Transcomputational problem
Limits of computation
Ultrafinitism |
https://en.wikipedia.org/wiki/Cartesian%20tensor | In geometry and linear algebra, a Cartesian tensor uses an orthonormal basis to represent a tensor in a Euclidean space in the form of components. Converting a tensor's components from one such basis to another is done through an orthogonal transformation.
The most familiar coordinate systems are the two-dimensional and three-dimensional Cartesian coordinate systems. Cartesian tensors may be used with any Euclidean space, or more technically, any finite-dimensional vector space over the field of real numbers that has an inner product.
Use of Cartesian tensors occurs in physics and engineering, such as with the Cauchy stress tensor and the moment of inertia tensor in rigid body dynamics. Sometimes general curvilinear coordinates are convenient, as in high-deformation continuum mechanics, or even necessary, as in general relativity. While orthonormal bases may be found for some such coordinate systems (e.g. tangent to spherical coordinates), Cartesian tensors may provide considerable simplification for applications in which rotations of rectilinear coordinate axes suffice. The transformation is a passive transformation, since the coordinates are changed and not the physical system.
Cartesian basis and related terminology
Vectors in three dimensions
In 3D Euclidean space, R^3, the standard basis is e_x, e_y, e_z. Each basis vector points along the x-, y-, and z-axes, and the vectors are all unit vectors (or normalized), so the basis is orthonormal.
Throughout, when referring to Cartesian coordinates in three dimensions, a right-handed system is assumed and this is much more common than a left-handed system in practice, see orientation (vector space) for details.
For Cartesian tensors of order 1, a Cartesian vector a can be written algebraically as a linear combination of the basis vectors e_x, e_y, e_z:
a = a_x e_x + a_y e_y + a_z e_z,
where the coordinates of the vector with respect to the Cartesian basis are denoted a_x, a_y, a_z. It is common and helpful to display the basis vectors as column vectors
when we have a coor |
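A short numerical sketch (not from the text) of how order-1 and order-2 Cartesian tensor components transform under an orthogonal matrix R, in the form v' = R v and T' = R T R^T; the rotation angle and component values are arbitrary, and whether R or its transpose appears depends on the active/passive convention chosen.

```python
import numpy as np

theta = np.pi / 6                      # 30-degree rotation about the z-axis
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

v = np.array([1.0, 2.0, 3.0])          # vector components in the original basis
T = np.diag([2.0, 1.0, 1.0])           # a symmetric second-order tensor

v_new = R @ v                          # order-1 transformation rule
T_new = R @ T @ R.T                    # order-2 transformation rule

# Scalars built from the components are invariant under the orthogonal change:
print(np.allclose(np.linalg.norm(v), np.linalg.norm(v_new)))   # True
print(np.isclose(np.trace(T), np.trace(T_new)))                # True
```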
https://en.wikipedia.org/wiki/Silverman%E2%80%93Toeplitz%20theorem | In mathematics, the Silverman–Toeplitz theorem, first proved by Otto Toeplitz, is a result in summability theory characterizing matrix summability methods that are regular. A regular matrix summability method is a matrix transformation of a convergent sequence which preserves the limit.
An infinite matrix (a_{n,k}) with complex-valued entries defines a regular summability method if and only if it satisfies all of the following properties:
- lim_{n→∞} a_{n,k} = 0 for every fixed k (each column tends to zero);
- lim_{n→∞} Σ_k a_{n,k} = 1 (the row sums tend to one);
- sup_n Σ_k |a_{n,k}| < ∞ (the absolute row sums are uniformly bounded).
An example is Cesaro summation, a matrix summability method with |
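As a quick numerical illustration, the sketch below applies the standard Cesàro (arithmetic-mean) matrix, with entries a_{n,k} = 1/(n+1) for k ≤ n and 0 otherwise, to a convergent sequence and confirms that the limit is preserved; the test sequence is an arbitrary choice.

```python
import numpy as np

N = 2000
x = 1.0 + 1.0 / np.arange(1, N + 1)          # a sequence converging to 1
# Row n of the Cesaro matrix applied to x is just the n-th partial average:
cesaro = np.cumsum(x) / np.arange(1, N + 1)

print(x[-1], cesaro[-1])   # both approach 1 as N grows, as regularity requires
```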
https://en.wikipedia.org/wiki/Orifice%20plate | An orifice plate is a device used for measuring flow rate, for reducing pressure or for restricting flow (in the latter two cases it is often called a ).
Description
An orifice plate is a thin plate with a hole in it, which is usually placed in a pipe. When a fluid (whether liquid or gaseous) passes through the orifice, its pressure builds up slightly upstream of the orifice but as the fluid is forced to converge to pass through the hole, the velocity increases and the fluid pressure decreases. A little downstream of the orifice the flow reaches its point of maximum convergence, the vena contracta (see drawing to the right) where the velocity reaches its maximum and the pressure reaches its minimum. Beyond that, the flow expands, the velocity falls and the pressure increases. By measuring the difference in fluid pressure across tappings upstream and downstream of the plate, the flow rate can be obtained from Bernoulli's equation using coefficients established from extensive research.
In general, the mass flow rate measured in kg/s across an orifice can be described as
The overall pressure loss in the pipe due to an orifice plate is lower than the measured pressure, typically by a factor of .
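A hedged sketch of how such a flow-rate calculation typically looks, using a commonly cited single-phase orifice equation of the form standardized in ISO 5167; the discharge coefficient, expansibility factor, geometry, and fluid properties below are hypothetical, and the article's own formula is not reproduced here.

```python
import math

def orifice_mass_flow(C, eps, d, D, dp, rho):
    """Mass flow (kg/s) from a standard orifice-plate equation:
    m_dot = C / sqrt(1 - beta^4) * eps * (pi/4) * d^2 * sqrt(2 * dp * rho),
    with beta = d / D (orifice-to-pipe diameter ratio)."""
    beta = d / D
    area = math.pi / 4.0 * d**2
    return C / math.sqrt(1.0 - beta**4) * eps * area * math.sqrt(2.0 * dp * rho)

# Water (rho ~ 1000 kg/m^3) through a 50 mm orifice in a 100 mm pipe with a
# 10 kPa differential pressure; C ~ 0.61 and eps = 1 are typical liquid values.
m_dot = orifice_mass_flow(C=0.61, eps=1.0, d=0.05, D=0.10, dp=10e3, rho=1000.0)
print(f"mass flow ~ {m_dot:.2f} kg/s")
```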
Application
Orifice plates are most commonly used to measure flow rates in pipes, when the fluid is single-phase (rather than being a mixture of gases and liquids, or of liquids and solids) and well-mixed, the flow is continuous rather than pulsating, the fluid occupies the entire pipe (precluding silt or trapped gas), the flow profile is even and well-developed and the fluid and flow rate meet certain other conditions. Under these circumstances and when the orifice plate is constructed and installed according to appropriate standards, the flow rate can easily be determined using published formulae based on substantial research and published in industry, national and international standards.
An orifice plate is called a calibrated orifice if it has been calibrated wit |
https://en.wikipedia.org/wiki/Triangle%20of%20U | The triangle of U ( ) is a theory about the evolution and relationships among the six most commonly known members of the plant genus Brassica. The theory states that the genomes of three ancestral diploid species of Brassica combined to create three common tetraploid vegetables and oilseed crop species. It has since been confirmed by studies of DNA and proteins.
The theory is summarized by a triangular diagram that shows the three ancestral genomes, denoted by AA, BB, and CC, at the corners of the triangle, and the three derived ones, denoted by AABB, AACC, and BBCC, along its sides.
The theory was first published in 1935 by Woo Jang-choon, a Korean-Japanese botanist (writing under the Japanized name "U Nagaharu"). Woo made synthetic hybrids between the diploid and tetraploid species and examined how the chromosomes paired in the resulting triploids.
U's theory
The six species are
The code in the "Chr.count" column specifies the total number of chromosomes in each somatic cell, and how it relates to the number of chromosomes in each full genome set (which is also the number found in the pollen or ovule), and the number of chromosomes in each component genome. For example, each somatic cell of the tetraploid species Brassica napus, with letter tags AACC and count "2n=4x=38", contains two copies of the A genome, each with 10 chromosomes, and two copies of the C genome, each with 9 chromosomes, which is 38 chromosomes in total. That is two full genome sets (one A and one C), hence "2n=38" which means "n=19" (the number of chromosomes in each gamete). It is also four component genomes (two A and two C), hence "4x=38".
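A small script can reproduce the chromosome arithmetic described above. The haploid counts used here (A = 10, B = 8, C = 9, the usual values for Brassica rapa, B. nigra, and B. oleracea) are stated as outside assumptions rather than taken from the excerpt.

```python
GENOME_N = {"A": 10, "B": 8, "C": 9}   # haploid chromosome count per genome (assumed)

def somatic_count(genome_formula):
    """Total chromosomes per somatic cell, e.g. 'AACC' -> 2*10 + 2*9 = 38."""
    return sum(GENOME_N[g] for g in genome_formula)

for formula in ("AA", "BB", "CC", "AABB", "AACC", "BBCC"):
    total = somatic_count(formula)
    print(f"{formula}: 2n = {total}, gamete n = {total // 2}")
# AACC gives 2n = 38 and n = 19, matching the Brassica napus example above.
```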
The three diploid species exist in nature, but can easily interbreed because they are closely related. This interspecific breeding allowed for the creation of three new species of tetraploid Brassica. (Critics, however, consider the geological separation too large.) These are said to be allotetraploid (containing four genomes from two or more different |
https://en.wikipedia.org/wiki/Neutron%20probe | A neutron probe is a device used to measure the quantity of water present in soil.
A typical neutron probe contains a pellet of americium-241 and beryllium. The alpha particles emitted by the decay of the americium collide with the light beryllium nuclei, producing fast neutrons. When these fast neutrons collide with hydrogen nuclei present in the soil being studied, they lose much of their energy. The detection of slow neutrons returning to the probe allows an estimate of the amount of hydrogen present. Since water contains two atoms of hydrogen per molecule, this therefore gives a measure of soil moisture.
See also
Frequency domain sensor
Time-domain reflectometer
Neutron detection |
https://en.wikipedia.org/wiki/Self-healing | Self-healing refers to the process of recovery (generally from psychological disturbances, trauma, etc.), motivated by and directed by the patient, guided often only by instinct. Such a process encounters mixed fortunes due to its amateur nature, although self-motivation is a major asset. The value of self-healing lies in its ability to be tailored to the unique experience and requirements of the individual. The process can be helped and accelerated with introspection techniques such as Meditation.
The different meanings of self-healing
Self-healing is the ultimate phase of Gestalt Therapy.
Self-healing may refer to automatic, homeostatic processes of the body that are controlled by physiological mechanisms inherent in the organism. Disorders of the spirit and the absence of faith can be self-healed.
In a figurative sense, self-healing properties can be ascribed to systems or processes which by nature or design tend to correct any disturbances brought into them, such as the regeneration of the skin after a cut or scrape, or of an entire limb: the injured party (the living body) repairs the damaged part by itself.
Beyond the innate restorative capacities of the physical body, there are many factors of psychological nature that can influence self-healing. Hippocrates, considered by many to be the father of medical treatment, observed: "The physician must be ready, not only to do his duty himself, but also to secure the co-operation of the patient, of the attendants and of externals."
Self-healing may also be achieved through deliberately applied psychological mechanisms. These approaches may improve the psychological and physical conditions of a person. Research confirms that this can be achieved through numerous mechanisms, including relaxation, breathing exercises, fitness exercises, imagery, Meditation, Yoga, qigong, t'ai chi, biofeedback, and various forms of psychotherapy, among other approaches.
Varieties of mechanisms for self-healing have been proposed, |
https://en.wikipedia.org/wiki/Identity%20document%20forgery | Identity document forgery is the process by which identity documents issued by governing bodies are copied and/or modified by persons not authorized to create such documents or engage in such modifications, for the purpose of deceiving those who would view the documents about the identity or status of the bearer. The term also encompasses the activity of acquiring identity documents from legitimate bodies by falsifying the required supporting documentation in order to create the desired identity.
Identity documents differ from other credentials in that they are intended to be usable by only the person holding the card. Unlike other credentials, they may be used to restrict the activities of the holder as well as to expand them.
Documents that have been forged in this way include driver's licenses (historically forged or altered as an attempt to conceal the fact that persons desiring to purchase alcohol are under the legal drinking age); birth certificates and Social Security cards (likely used in identity theft schemes or to defraud the government); and passports (used to evade restrictions on entry into a particular country). At the beginning of 2010, there were 11 million stolen or lost passports listed in the global database of Interpol.
Such falsified documents can be used for identity theft, age deception, illegal immigration, organized crime, and espionage.
Use scenarios, forgery techniques and security countermeasures
A distinction needs to be made between the different uses of an identity document. In some cases, the fake ID may only have to pass a cursory inspection, such as flashing a plastic ID card for a security guard. At the other extreme, a document may have to resist scrutiny by a trained document examiner, who may be equipped with technical tools for verifying biometrics and reading hidden security features within the card. To make forgery more difficult, most modern IDs contain numerous security features that require specialised and expensive |
https://en.wikipedia.org/wiki/The%20Mind%20of%20an%20Ape | The Mind of an Ape is a 1983 book by David Premack and his wife Ann James Premack. The authors argue that it is possible to teach language to (non-human) great apes. They write: "We now know that someone who comprehends speech must know language, even if he or she cannot produce it."
The authors
David Premack, emeritus professor of psychology at the University of Pennsylvania, and Ann James Premack, a science writer, began teaching language to apes in 1964. Premack started his work at the Yerkes Laboratories of Primate Biology in Orange Park, Florida, a program at the University of Florida, continued it at the University of Missouri, then at the University of California, Santa Barbara and the University of Pennsylvania.
The apes
The subjects of the program, nine chimpanzees, were reared in a laboratory environment specifically designed to stimulate their intellect, as animals raised otherwise fail to thrive. This was in contrast to the traditional psychology lab where the animals are caged and remain in solitude. Sarah, born in 1959, demonstrated use of an invented language. Gussie failed to learn any words. Elizabeth and Peony were trained in the language. Walnut, a late arrival, also was trained in the language, but failed to learn any words. Jessie, Sadie, Bert, and Luvie, 1975 controls, were not trained in the language, but demonstrated pointing.
Language suitable for an ape
The language designed by Premack for an ape was not verbal; Premack's chimpanzee program differed from that of a separate research program in which other chimpanzees were raised in a human family in parallel with human babies, and taught words. Eventually, the chimpanzees might get to a two-year-old human's list of words, but no further. Vicki was eventually trained to speak four words. The experiments with those chimpanzees did not demonstrate the existence of the faculties shown by Sarah discussed below, in her command of a language, for example. In other experiments, other chimpanzee |
https://en.wikipedia.org/wiki/Sarah%20%28chimpanzee%29 | Sarah (full name Sarah Anne) (August 1959 – July 2019) was an enculturated research chimpanzee whose cognitive skills were documented in the 1983 book The Mind of an Ape, by David Premack and Ann James Premack. Sarah was one of nine chimpanzees in David Premack's psychology laboratory in Pennsylvania. Sarah was born in Africa in 1959. She first worked in Missouri, then in Santa Barbara, and then Pennsylvania. She first was exposed to language token training in 1967.
Sarah was the subject, along with three other chimpanzees which were exposed to language token training. One of the chimpanzees failed to learn a single word, but Sarah, Elizabeth, and Peony were able to parse and also produce streams of tokens which obeyed a grammar.
She used a special board with plastic symbols to correctly parse various syntactic expressions including if-then-else. Sarah was even able to recognize colors and connect them with matching objects, a talent that Premack noted doubting that a pigeon could have.
When the Premacks decided they no longer wanted to work with chimpanzees in 1987, Sarah was sent to Sarah Boysen's Chimp Center at the Ohio State University, where she lived and worked with other enculturated chimpanzees: Kermit, Darrell, Bobby, Sheba, Keeli, Ivy, Harper, and Emma. In February 2006, the Chimp Center was closed and OSU sent the chimps to a private animal collection in Texas; they were subsequently transferred to another chimpanzee sanctuary, Chimp Haven, in Louisiana, where Sarah was well known for her blanket nesting techniques. Her favorite treat was M&M's.
Sarah died in July 2019, just before her 60th birthday.
See also
List of individual apes |
https://en.wikipedia.org/wiki/Max%20Planck%20Institute%20of%20Biochemistry | The Max Planck Institute of Biochemistry (MPIB) is a research institute of the Max Planck Society located in Martinsried, a suburb of Munich. The institute was founded in 1973 by the merger of three formerly independent institutes: the Max Planck Institute of Biochemistry, the Max Planck Institute of Protein and Leather Research (founded 1954 in Regensburg), and the Max Planck Institute of Cell Chemistry (founded 1956 in Munich).
With 800 employees, currently in seven research departments and about 26 research groups, the MPIB is one of the largest biologically and medically oriented institutes of the Max Planck Society.
Departments
There are seven departments currently in the institute.
Cellular Biochemistry (Franz-Ulrich Hartl)
Cellular and Molecular Biophysics (Petra Schwille)
Molecular Machines and Signaling (Brenda Schulman)
Molecular Medicine (Reinhard Fässler)
Molecular Structural Biology (Wolfgang Baumeister)
Proteomics and Signal Transduction (Matthias Mann)
Structural Cell Biology (Elena Conti)
Research groups
There are 26 research groups currently based at the MPIB, including 3 emeritus research groups:
Molecular Structural Biology (Wolfgang Baumeister / Cryo-Electron Tomography, Electron Microscopical Structure Research, Protein and Cell Structure, Protein Degradation)
Molecular Mechanisms of DNA Repair (Christian Biertümpfel / Structural Biology, DNA Repair, DNA Replication, DNA Recombination, Protein-DNA-Interactions)
Systems Biology of Membrane Trafficking (Georg Borner / Proteomic Microscope, Membrane Trafficking, AP-4 Mediated Protein Transport, Quantitative Mass Spectrometry, Dynamic Organellar Maps)
Structural Cell Biology (Elena Conti / Structural Studies, RNA Transport, RNA Surveillance, RNA Degradation)
Computational Systems Biochemistry (Jürgen Cox / Systems Biology, Proteomics, Mass Spectrometry, Bioinformatics)
Structure and Dynamics of Molecular Machines (Karl Duderstadt / DNA Replication Dynamics, Structural Biology, Single-Mole |
https://en.wikipedia.org/wiki/InnoDB | InnoDB is a storage engine for the database management systems MySQL and MariaDB. With the release of MySQL 5.5.5 in 2010, it replaced MyISAM as MySQL's default table type. It provides the standard ACID-compliant transaction features, along with foreign key support (Declarative Referential Integrity). It is included as standard in most binaries distributed by MySQL AB, the exception being some OEM versions.
Description
InnoDB became a product of Oracle Corporation after its acquisition of the Finland-based company Innobase in October 2005. The software is dual licensed; it is distributed under the GNU General Public License, but can also be licensed to parties wishing to combine InnoDB in proprietary software.
InnoDB supports:
Both SQL and XA transactions
Tablespaces
Foreign keys
Full text search indexes, since MySQL 5.6 (February 2013) and MariaDB 10.0
Spatial operations, following the OpenGIS standard
Virtual columns, in MariaDB
See also
Comparison of MySQL database engines |
https://en.wikipedia.org/wiki/George%20Huntington | George Huntington (April 9, 1850 – March 3, 1916) was an American physician who contributed a classic clinical description of the disease that bears his name—Huntington's disease.
Huntington described this condition in the first of only two scientific papers he ever wrote. He wrote this paper when he was 22, a year after receiving his medical degree from Columbia University in New York. He first read the paper before the Meigs and Mason Academy of Medicine in Middleport, Ohio on February 15, 1872, and then published it in the Medical and Surgical Reporter of Philadelphia on April 13, 1872.
Huntington's father and grandfather, George Lee Huntington (1811–1881) and Abel Huntington (1778–1858), were also physicians in the same family practice. Their longitudinal observations combined with his own were invaluable in precisely describing this hereditary disease in multiple generations of a family in East Hampton on Long Island.
In a 1908 review, the eminent physician William Osler said of this paper: "In the history of medicine, there are few instances in which a disease has been more accurately, more graphically or more briefly described."
In 1874 George Huntington returned to Dutchess County, New York, to practice medicine. He joined a number of medical associations and started working for the Matteawan General Hospital. In 1908 the scientific journal Neurograph dedicated a special edition to him.
George Huntington should not be confused with George Sumner Huntington (1861–1927), the anatomist (both men attended the College of Physicians and Surgeons of Columbia University).
Biography
Huntington's father and grandfather, George Lee Huntington (1811–1881) and Abel Huntington (1778–1858), were also physicians. Their family had lived in Long Island since 1797. That same year his grandfather, Dr. Abel Huntington (1778-1858), opened his general practice in East Hampton, on the Atlantic coast. He married Frances Lee in the same year he opened his practice. His son, Georg |
https://en.wikipedia.org/wiki/Male%20reproductive%20system | The male reproductive system consists of a number of sex organs that play a role in the process of human reproduction. These organs are located on the outside of the body, and within the pelvis.
The main male sex organs are the penis and the scrotum which contains the testicles that produce semen and sperm, which, as part of sexual intercourse, fertilize an ovum in the female's body; the fertilized ovum (zygote) develops into a fetus, which is later born as an infant.
The corresponding system in females is the female reproductive system.
External genital organs
Penis
The penis is an intromittent organ with a long shaft, an enlarged bulbous-shaped tip called the glans and its foreskin for protection. Inside the penis is the urethra, which is used to ejaculate semen and to excrete urine. Both substances exit through the meatus.
When the male becomes sexually aroused, the penis becomes erect and ready for sexual activity. Erection occurs because sinuses within the erectile tissue of the penis become filled with blood. The arteries of the penis are dilated while the veins are compressed so that blood flows into the erectile tissue under pressure. The penis is supplied by the pudendal artery.
Scrotum
The scrotum is a sac of skin that hangs behind the penis. It holds and protects the testicles. It also contains numerous nerves and blood vessels. During times of lower temperatures, the cremaster muscle contracts and pulls the scrotum closer to the body, while the dartos muscle gives it a wrinkled appearance; when the temperature increases, the cremaster and dartos muscles relax to bring down the scrotum away from the body and remove the wrinkles respectively.
The scrotum remains connected with the abdomen or pelvic cavity through the inguinal canal. (The spermatic cord, formed from spermatic artery, vein and nerve bound together with connective tissue passes into the testis through inguinal canal.)
Internal genital organs
Testicles
The testicles have two |
https://en.wikipedia.org/wiki/The%20Swallow%27s%20Tail | The Swallow's Tail — Series of Catastrophes () was Salvador Dalí's last painting. It was completed in May 1983, as the final part of a series based on the mathematical catastrophe theory of René Thom.
Thom suggested that in four-dimensional phenomena, there are seven possible equilibrium surfaces, and therefore seven possible discontinuities, or "elementary catastrophes": fold, cusp, swallowtail, butterfly, hyperbolic umbilic, elliptic umbilic, and parabolic umbilic. "The shape of Dalí's Swallow's Tail is taken directly from Thom's four-dimensional graph of the same title, combined with a second catastrophe graph, the s-curve that Thom dubbed, 'the cusp'. Thom's model is presented alongside the elegant curves of a cello and the instrument's f-holes, which, especially as they lack the small pointed side-cuts of a traditional f-hole, equally connote the mathematical symbol for an integral in calculus: ∫."
In his 1979 speech, Gala, Velázquez and the Golden Fleece, presented upon his 1979 induction into the prestigious Académie des Beaux-Arts of the Institut de France, Dalí described Thom's theory of catastrophes as "the most beautiful aesthetic theory in the world". He also recollected his first and only meeting with René Thom, at which Thom purportedly told Dalí that he was studying tectonic plates; this provoked Dalí to question Thom about the railway station at Perpignan, France (near the Spanish border), which the artist had declared in the 1960s to be the center of the universe.
Thom reportedly replied, "I can assure you that Spain pivoted precisely — not in the area of — but exactly there where the Railway Station in Perpignan stands today". Dalí was immediately enraptured by Thom's statement, influencing his painting Topological Abduction of Europe — Homage to René Thom, the lower left corner of which features an equation closely linked to the "swallow's tail": an illustration of the graph, and the term queue d'aronde. The seismic fracture that transver |
https://en.wikipedia.org/wiki/Medion | Medion AG is a German consumer electronics company, and a subsidiary of Chinese multinational technology company Lenovo. The company operates in Europe, Turkey, Asia-Pacific, United States and Australia regions. The company's main products are computers and notebooks, but also smartphones, tablet computers, digital cameras, TVs, refrigerators, toasters, and fitness equipment.
Products
Medion products in Australia and the United States are available exclusively at Aldi and Super Billing Computers, with some products (such as DVD players) branded as Tevion (Aldi's own brand). Some of Medion's formal laptops were sold in North America at Best Buy stores and were sold in Canada at Future Shop as Cicero Computers.
In the United Kingdom, Medion products, including laptop and desktop computers, have been sold by Aldi, Sainsbury's, Somerfield, Woolworths, and Tesco, as well as being sold direct through Medion's own Web site and various other online retailers.
Medion launched Aldi Talk in Germany and MEDIONMobile as ALDIMobile in Australia in an agreement with Aldi Stores. Medion Australia Pty Limited remains as the owner of ALDIMobile.
In China Medion products are sold under the Lenovo brand, but not all Lenovo branded products are Medion products.
In Germany, Medion has launched a Cloud Gaming service in partnership with Gamestream in April 2020.
Sponsorship
Medion sponsored Sahara Force India through its driver Adrian Sutil in Formula One from 2007 to 2011, until Sutil left the team. In 2013 Sutil returned to Sahara Force India, and Medion returned as a sponsor. Sutil and Medion left the sport at the end of 2013.
Medion brands
Other brands used on Medion products:
Cybercom
Cybermaxx
Life
Lifetec
Micromaxx
Essenitel b
Ordissimo
ERAZER
PEAQ
QUIGG
These Medion products can be recognized by the serial number starting with "MD" or "LT".
Acquisition
On 1 June 2011, the Chinese multinational Lenovo Group (LNVGY) announced plans to acquire Medion AG. Since Au |
https://en.wikipedia.org/wiki/Davisson%E2%80%93Germer%20experiment | The Davisson–Germer experiment was a 1923-27 experiment by Clinton Davisson and Lester Germer at Western Electric (later Bell Labs), in which electrons, scattered by the surface of a crystal of nickel metal, displayed a diffraction pattern. This confirmed the hypothesis, advanced by Louis de Broglie in 1924, of wave-particle duality, and also the wave mechanics approach of the Schrödinger equation. It was an experimental milestone in the creation of quantum mechanics.
History and overview
According to Maxwell's equations in the late 19th century, light was thought to consist of waves of electromagnetic fields and matter was thought to consist of localized particles. However, this was challenged in Albert Einstein's 1905 paper on the photoelectric effect, which described light as discrete and localized quanta of energy (now called photons), which won him the Nobel Prize in Physics in 1921. In 1924 Louis de Broglie presented his thesis concerning the wave–particle duality theory, which proposed the idea that all matter displays the wave–particle duality of photons. According to de Broglie, for all matter and for radiation alike, the energy of the particle was related to the frequency of its associated wave by the Planck relation:
E = hν, where ν is the frequency. And the momentum p of the particle was related to its wavelength λ by what is now known as the de Broglie relation:
λ = h/p,
where h is Planck's constant.
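A quick numerical check of the de Broglie relation for the slow electrons used in this kind of experiment; the 54 eV kinetic energy is an illustrative choice (close to the accelerating voltage Davisson and Germer used), not a value taken from the text above.

```python
import math

h = 6.62607015e-34       # Planck's constant, J*s
m_e = 9.1093837015e-31   # electron mass, kg
eV = 1.602176634e-19     # joules per electronvolt

def de_broglie_wavelength(kinetic_energy_eV):
    """Non-relativistic electron: p = sqrt(2 m E), lambda = h / p."""
    p = math.sqrt(2.0 * m_e * kinetic_energy_eV * eV)
    return h / p

print(f"54 eV electron: lambda ~ {de_broglie_wavelength(54.0) * 1e9:.3f} nm")
# ~0.167 nm, comparable to the atomic spacing in a nickel crystal, which is
# why diffraction from the crystal surface is observable.
```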
An important contribution to the Davisson–Germer experiment was made by Walter M. Elsasser in Göttingen in the 1920s, who remarked that the wave-like nature of matter might be investigated by electron scattering experiments on crystalline solids, just as the wave-like nature of X-rays had been confirmed through X-ray scattering experiments on crystalline solids.
This suggestion of Elsasser was then communicated by his senior colleague (and later Nobel Prize recipient) Max Born to physicists in England. When the Davisson and Germer experiment was performed, the results of the experiment were |
https://en.wikipedia.org/wiki/Subclass%20%28set%20theory%29 | In set theory and its applications throughout mathematics, a subclass is a class contained in some other class in the same way that a subset is a set contained in some other set.
That is, given classes A and B, A is a subclass of B if and only if every member of A is also a member of B.
If A and B are sets, then of course A is also a subset of B.
In fact, when using a definition of classes that requires them to be first-order definable, it is enough that B be a set; the axiom of specification essentially says that A must then also be a set.
As with subsets, the empty set is a subclass of every class, and any class is a subclass of itself. But additionally, every class is a subclass of the class of all sets. Accordingly, the subclass relation makes the collection of all classes into a Boolean lattice, which the subset relation does not do for the collection of all sets. Instead, the collection of all sets is an ideal in the collection of all classes. (Of course, the collection of all classes is something larger than even a class!) |
https://en.wikipedia.org/wiki/Colatitude | In a spherical coordinate system, a colatitude is the complementary angle of a given latitude, i.e. the difference between a right angle and the latitude. Here Southern latitudes are defined to be negative, and as a result the colatitude is a non-negative quantity, ranging from zero at the North pole to 180° at the South pole.
The colatitude corresponds to the conventional 3D polar angle in spherical coordinates, as opposed to the latitude as used in cartography.
Examples
Latitude and colatitude sum up to 90°.
Astronomical use
The colatitude is most useful in astronomy because it refers to the zenith distance of the celestial poles. For example, at latitude 42°N, Polaris (approximately on the North celestial pole) has an altitude of 42°, so the distance from the zenith (overhead point) to Polaris is 90° − 42° = 48°.
Adding the declination of a star to the observer's colatitude gives the maximum altitude of that star (its angle from the horizon at culmination or upper transit). For example, if Alpha Centauri is seen with an altitude of 72° north (108° south) and its declination is known (60°S), then it can be determined that the observer's colatitude is (i.e. their latitude is ).
Stars whose declinations exceed the observer's colatitude are called circumpolar because they will never set as seen from that latitude. If an object's declination is further south on the celestial sphere than the value of the colatitude, then it will never be seen from that location. For example, Alpha Centauri will always be visible at night from Perth, Western Australia because the colatitude is 58°, and 60° is greater than 58°; on the other hand, the star will never rise in Juneau because its declination of −60° is less than −32° (the negation of Juneau's colatitude). Additionally, colatitude is used as part of the Schwarzschild metric in general relativity.
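The visibility rules described above can be expressed in a few lines; the sketch below uses signed latitudes and declinations (south negative), and the Perth and Juneau latitudes are approximate outside values.

```python
def colatitude(latitude_deg):
    """Complement of the (unsigned) latitude."""
    return 90.0 - abs(latitude_deg)

def circumpolar(declination_deg, latitude_deg):
    """Never sets: |declination| exceeds the colatitude, on the observer's side."""
    same_hemisphere = (declination_deg >= 0) == (latitude_deg >= 0)
    return same_hemisphere and abs(declination_deg) > colatitude(latitude_deg)

def never_visible(declination_deg, latitude_deg):
    """Never rises: |declination| exceeds the colatitude, on the opposite side."""
    opposite = (declination_deg >= 0) != (latitude_deg >= 0)
    return opposite and abs(declination_deg) > colatitude(latitude_deg)

alpha_cen_dec = -60.0
print(circumpolar(alpha_cen_dec, -32.0))     # Perth, ~32 deg S: True
print(never_visible(alpha_cen_dec, 58.3))    # Juneau, ~58 deg N: True
```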
https://en.wikipedia.org/wiki/Prismatic%20surface | A prismatic surface is a surface generated by all the lines that are parallel to a given line and intersect a broken line that is not in the same plane as the given line. The broken line is the directrix of the surface; the parallel lines are its generators (or elements). If the broken line is closed (i.e., a closed polygon), then the surface is a closed prismatic surface.
With regards to crystallography, a prismatic surface is a single face of a prismatic form, which is an open form consisting of three, four, or six identical faces related by a symmetry operator.
See also
Prism |
https://en.wikipedia.org/wiki/Amyl%20acetate | Amyl acetate (pentyl acetate) is an organic compound and an ester with the chemical formula CH3COO[CH2]4CH3 and the molecular weight 130.19g/mol. It is colorless and has a scent similar to bananas and apples. The compound is the condensation product of acetic acid and 1-pentanol. However, esters formed from other pentanol isomers (amyl alcohols), or mixtures of pentanols, are often referred to as amyl acetate. The symptoms of exposure to amyl acetate in humans are dermatitis, central nervous system depression, narcosis and irritation to the eyes and nose.
Uses
Amyl acetate is a solvent for paints, lacquers, and liquid bandages; and a flavorant. It also fuels the Hefner lamp and fermentative productions of penicillin.
See also
Isoamyl acetate, also known as banana oil.
Esters, organic molecules with the same functional groups |
https://en.wikipedia.org/wiki/Charlotte%20Bach | Charlotte Bach (born Karoly Hajdu; 1920–1981) was a Hungarian-British impostor and fringe evolutionary theorist. Her alternative theory of evolution acquired a cult following among prominent writers and scientists in London during the 1970s, who remained ignorant of her original identity until after her death.
Early life
Hajdu was born near Budapest in 1920. In the wake of World War II she moved to England and in 1948 began to use the name Baron Carl Hajdu. In 1956 she collected money for Hungarian freedom fighters resisting the then Soviet occupation. The People, which specialised in lurid exposés, alleged that Hajdu had pocketed the proceeds. She was found guilty of fraud in 1957, and was forced into bankruptcy.
Hajdu then adopted the persona of writer and society hypnotherapist Michel Karoly. She rented an apartment in Mayfair. She began to acquire a following in polite society, and wrote an advice column in a mass-selling magazine of the day. Karoly continued in this role until 1965, when both her wife and stepson suddenly died. She was declared bankrupt again, and in 1966 was sentenced to two months in gaol for acquiring a loan whilst bankrupt.
By her own account, the sudden deaths had precipitated a profound identity crisis.
Lectures
In 1968, Hajdu adopted the new identity of Dr Charlotte Bach, a supposed former lecturer at Budapest's Eötvös Loránd University, whose alumni included the philosophers Michael and Karl Polanyi and the mathematician John von Neumann.
'Dr Bach' would openly tell lecture audiences how at first, putting ads in the windows of local newsagents, she had worked as a dominatrix. This experience had provided invaluable research data, she said, for the purpose of compiling a dictionary of psychology. These researches had in turn led to the all-embracing theory. She lived as Charlotte Bach until she died in 1981.
Human ethology
In 1971 Bach unveiled her transformative, as she would insist, theory of evolution. Among other things, she a |
https://en.wikipedia.org/wiki/Percent-encoding | URL encoding, officially known as percent-encoding, is a method to encode arbitrary data in a Uniform Resource Identifier (URI) using only the limited US-ASCII characters legal within a URI. Although it is known as URL encoding, it is also used more generally within the main Uniform Resource Identifier (URI) set, which includes both Uniform Resource Locator (URL) and Uniform Resource Name (URN). As such, it is also used in the preparation of data of the application/x-www-form-urlencoded media type, as is often used in the submission of HTML form data in HTTP requests.
Percent-encoding in a URI
Types of URI characters
The characters allowed in a URI are either reserved or unreserved (or a percent character as part of a percent-encoding). Reserved characters are those characters that sometimes have special meaning. For example, forward slash characters are used to separate different parts of a URL (or more generally, a URI). Unreserved characters have no such meanings. Using percent-encoding, reserved characters are represented using special character sequences. The sets of reserved and unreserved characters and the circumstances under which certain reserved characters have special meaning have changed slightly with each revision of specifications that govern URIs and URI schemes.
Other characters in a URI must be percent-encoded.
Reserved characters
When a character from the reserved set (a "reserved character") has a special meaning (a "reserved purpose") in a certain context, and a URI scheme says that it is necessary to use that character for some other purpose, then the character must be percent-encoded. Percent-encoding a reserved character involves converting the character to its corresponding byte value in ASCII and then representing that value as a pair of hexadecimal digits (if there is a single hex digit, a leading zero is added). The digits, preceded by a percent sign (%) as an escape character, are then used in the URI in place of the reserved chara |
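To make the rule above concrete, here is a minimal Python sketch (not tied to any particular URI library) that percent-encodes every byte outside the RFC 3986 unreserved set; real applications would normally use a standard-library helper such as Python's urllib.parse.quote instead.

```python
def percent_encode(text: str, safe: str = "") -> str:
    """Percent-encode text: keep unreserved characters (and any in `safe`),
    replace every other byte with a %XX escape (two uppercase hex digits)."""
    unreserved = set(
        "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
        "abcdefghijklmnopqrstuvwxyz"
        "0123456789-._~"
    )
    out = []
    for byte in text.encode("utf-8"):      # encode per byte, not per character
        ch = chr(byte)
        if ch in unreserved or ch in safe:
            out.append(ch)
        else:
            out.append(f"%{byte:02X}")     # e.g. '/' -> '%2F', ' ' -> '%20'
    return "".join(out)

print(percent_encode("a/b c?d"))           # a%2Fb%20c%3Fd
```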
https://en.wikipedia.org/wiki/Intermittent%20rhythmic%20delta%20activity | Intermittent rhythmic delta activity (IRDA) is a type of brain wave abnormality found in electroencephalograms (EEG).
Types
It can be classified based on the area of brain it originates from:
frontal (FIRDA)
occipital (OIRDA)
temporal (TIRDA)
It can also be
Unilateral
Bilateral
Cause
It can have a number of different causes, some benign or unknown, but it is also commonly associated with lesions, tumors, and encephalopathies. An association with periventricular white matter disease and cortical atrophy has been documented, and IRDA is more likely to show up during acute metabolic derangements such as uremia and hyperglycemia.
Diagnosis |
https://en.wikipedia.org/wiki/RANAP | In telecommunications networks, RANAP (Radio Access Network Application Part) is a protocol specified by 3GPP in TS 25.413
and used in UMTS for signaling between the Core Network, which can be a MSC or SGSN, and the UTRAN. RANAP is carried over Iu-interface.
The RANAP signalling protocol resides in the control plane of the radio network layer of the Iu interface in the UMTS (Universal Mobile Telecommunication System) protocol stack. The Iu interface is the interface between the RNC (Radio Network Controller) and the CN (Core Network). Note that for Iu-PS transport, RANAP is carried over SCTP when an IP-based interface is used.
RANAP handles signaling for the Iu-PS (between the RNC and the 3G SGSN) and the Iu-CS (between the RNC and the 3G MSC). It also provides the signaling channel to transparently pass messages between the User Equipment (UE) and the CN.
In LTE, RANAP has been replaced by S1AP.
In SA (standalone) installations of 5G, S1AP will be replaced by NGAP.
Functionality
Over the Iu interface, RANAP is used to:
- Facilitate general UTRAN procedures from the core network such as paging
- Separate each User Equipment (UE) on protocol level for mobile-specific signaling management
- Transfer transparently non-access signaling
- Request and manage various types of UTRAN radio access bearers
- Perform the Serving Radio Network Subsystem (SRNS) relocation
See also
GPRS Tunnelling Protocol (GTP) |
https://en.wikipedia.org/wiki/DOM%20Inspector | DOM Inspector (DOMi) is a web developer tool created by Joe Hewitt and was originally included in Mozilla Application Suite as well as versions of Mozilla Firefox prior to Firefox 3. It is now included in Firefox, and SeaMonkey. Its main purpose is to inspect and edit the Document Object Model (DOM) tree of HTML and XML-based documents.
A DOM node can be selected from the tree structure, or by clicking on the browser chrome. As well as the DOM tree viewer, other viewers are also available, including Box Model, XBL Bindings, CSS Rules, Style Sheets, Computed Style, JavaScript Object, as well as a number of viewers for document and application accessibility. By default, the DOM Inspector highlights a newly selected non-attribute node with a red flashing border.
Similar tools exist in other browsers, e.g., Opera's Dragonfly, Safari's Web Inspector, the Internet Explorer Developer Toolbar, and Google Chrome's Developer Tools.
See also
Firebug, another web development extension more recently created by Joe Hewitt
External links
DOM Inspector extension for Firefox
DOM Inspector at Mozilla Developer Center
XPather - A DOMi extension that adds XPath support
Software testing tools
Firefox
Firefox extensions merged to Firefox |
https://en.wikipedia.org/wiki/Gerald%20Weinberg | Gerald Marvin Weinberg (October 27, 1933 – August 7, 2018) was an American computer scientist, author and teacher of the psychology and anthropology of computer software development. His most well-known books are The Psychology of Computer Programming and Introduction to General Systems Thinking.
Biography
Gerald Weinberg was born and raised in Chicago. He attended Omaha Central High School in Omaha, Nebraska. In 1963 he received a PhD in Communication Sciences from the University of Michigan.
Weinberg started working in the computing business at IBM in 1956, in the Federal Systems Division in Washington, where, as Manager of Operating Systems Development, he participated in Project Mercury (1959–1963), which aimed to put a human in orbit around the Earth. In 1960 he published one of his first papers. From 1969 he was a consultant and Principal at Weinberg & Weinberg, where he conducted workshops such as the AYE Conference, the Problem Solving Leadership workshop (from 1974), and workshops about the Fieldstone Method. Weinberg was also an author at Dorset House Publishing from 1970, a consultant at Microsoft from 1988, and moderator of the Shape Forum from 1993.
Weinberg was a visiting professor at the University of Nebraska-Lincoln, Binghamton University, and Columbia University. He was a member of the Society for General Systems Research from the late 1950s. He was also a founding member of the IEEE Transactions on Software Engineering, a member of the Southwest Writers and the Oregon Writers Network, and a keynote speaker at many software development conferences.
He won the J.-D. Warnier Prize for Excellence in Information Sciences in 1993, the Stevens Award for Contributions to Software Engineering in 2000, the Software Test Professionals' first annual Luminary Award in 2010, and the European Testing Excellence Award at the EuroSTAR Conference in 2013.
Weinberg died on August 7, 2018.
Work
His most well-known books are The Psychology |
https://en.wikipedia.org/wiki/Beta-dual%20space | In functional analysis and related areas of mathematics, the beta-dual or -dual is a certain linear subspace of the algebraic dual of a sequence space.
Definition
Given a sequence space X, the β-dual of X is defined as X^β := { y = (y_k) : Σ_k x_k y_k converges for every x = (x_k) ∈ X }.
If X is an FK-space, then each y in X^β defines a continuous linear form on X, given by f_y(x) = Σ_k x_k y_k.
Examples
Properties
The beta-dual of an FK-space is a linear subspace of the continuous dual of . If is an FK-AK space then the beta dual is linear isomorphic to the continuous dual.
Functional analysis |
https://en.wikipedia.org/wiki/Bijection%2C%20injection%20and%20surjection | In mathematics, injections, surjections, and bijections are classes of functions distinguished by the manner in which arguments (input expressions from the domain) and images (output expressions from the codomain) are related or mapped to each other.
A function maps elements from its domain to elements in its codomain. Given a function :
The function is injective, or one-to-one, if each element of the codomain is mapped to by at most one element of the domain, or equivalently, if distinct elements of the domain map to distinct elements in the codomain. An injective function is also called an injection. Notationally:
or, equivalently (using logical transposition),
The function is surjective, or onto, if each element of the codomain is mapped to by at least one element of the domain. That is, the image and the codomain of the function are equal. A surjective function is a surjection. Notationally:
The function is bijective (one-to-one and onto, one-to-one correspondence, or invertible) if each element of the codomain is mapped to by exactly one element of the domain. That is, the function is both injective and surjective. A bijective function is also called a bijection. That is, combining the definitions of injective and surjective,
where means "there exists exactly one ".
In any case (for any function), the following holds:
An injective function need not be surjective (not all elements of the codomain may be associated with arguments), and a surjective function need not be injective (some images may be associated with more than one argument). The four possible combinations of injective and surjective features are illustrated in the adjacent diagrams.
Injection
A function is injective (one-to-one) if each possible element of the codomain is mapped to by at most one argument. Equivalently, a function is injective if it maps distinct arguments to distinct images. An injective function is an injection. The formal definition is the following.
The function is i |
https://en.wikipedia.org/wiki/Saturation%20arithmetic | Saturation arithmetic is a version of arithmetic in which all operations, such as addition and multiplication, are limited to a fixed range between a minimum and maximum value.
If the result of an operation is greater than the maximum, it is set ("clamped") to the maximum; if it is below the minimum, it is clamped to the minimum. The name comes from how the value becomes "saturated" once it reaches the extreme values; further additions to a maximum or subtractions from a minimum will not change the result.
For example, if the valid range of values is from −100 to 100, the following saturating arithmetic operations produce the following values:
60 + 30 → 90.
60 + 43 → 100. (not the expected 103.)
(60 + 43) − (75 + 25) → 0. (not the expected 3.) (100 − 100 → 0.)
10 × 11 → 100. (not the expected 110.)
99 × 99 → 100. (not the expected 9801.)
30 × (5 − 1) → 100. (not the expected 120.) (30 × 4 → 100.)
(30 × 5) − (30 × 1) → 70. (not the expected 120. not the previous 100.) (100 − 30 → 70.)
Here is another example for saturating subtraction when the valid range is from 0 to 100 instead:
30 - 60 → 0. (not the expected -30.)
As can be seen from these examples, familiar properties like associativity and distributivity may fail in saturation arithmetic. This makes it unpleasant to deal with in abstract mathematics, but it has an important role to play in digital hardware and algorithms where values have maximum and minimum representable ranges.
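A minimal Python sketch of saturating operations over the −100…100 range used above (clamping is assumed to apply to the result of every individual operation, as in the worked examples):

```python
LO, HI = -100, 100

def clamp(value: int) -> int:
    """Clamp a result into the representable range [LO, HI]."""
    return max(LO, min(HI, value))

def sat_add(a: int, b: int) -> int:
    return clamp(a + b)

def sat_sub(a: int, b: int) -> int:
    return clamp(a - b)

def sat_mul(a: int, b: int) -> int:
    return clamp(a * b)

# Reproducing two examples from the list above: because saturation is applied
# after every intermediate step, distributivity fails.
print(sat_mul(30, 5 - 1))                       # 100, not 120
print(sat_sub(sat_mul(30, 5), sat_mul(30, 1)))  # 70  (100 - 30)
```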
Modern use
Typically, general-purpose microprocessors do not implement integer arithmetic operations using saturation arithmetic; instead, they use the easier-to-implement modular arithmetic, in which values exceeding the maximum value "wrap around" to the minimum value, like the hours on a clock passing from 12 to 1. In hardware, modular arithmetic with a minimum of zero and a maximum of r^n − 1, where r is the radix, can be implemented by simply discarding all but the lowest n digits. For binary hardware, which the vast majority of |
https://en.wikipedia.org/wiki/Iterative%20closest%20point | Iterative closest point (ICP) is an algorithm employed to minimize the difference between two clouds of points. ICP is often used to reconstruct 2D or 3D surfaces from different scans, to localize robots and achieve optimal path planning (especially when wheel odometry is unreliable due to slippery terrain), to co-register bone models, etc.
Overview
The Iterative Closest Point algorithm keeps one point cloud, the reference or target, fixed, while transforming the other, the source, to best match the reference. The transformation (combination of translation and rotation) is iteratively estimated in order to minimize an error metric, typically the sum of squared differences between the coordinates of the matched pairs. ICP is one of the widely used algorithms in aligning three dimensional models given an initial guess of the rigid transformation required.
The ICP algorithm was first introduced by Chen and Medioni, and Besl and McKay.
The Iterative Closest Point algorithm contrasts with the Kabsch algorithm and other solutions to the orthogonal Procrustes problem in that the Kabsch algorithm requires correspondence between point sets as an input, whereas Iterative Closest Point treats correspondence as a variable to be estimated.
Inputs: reference and source point clouds, initial estimation of the transformation to align the source to the reference (optional), criteria for stopping the iterations.
Output: refined transformation.
Essentially, the algorithm steps are:
For each point (from the whole set of vertices usually referred to as dense or a selection of pairs of vertices from each model) in the source point cloud, match the closest point in the reference point cloud (or a selected set).
Estimate the combination of rotation and translation using a root mean square point to point distance metric minimization technique which will best align each source point to its match found in the previous step. This step may also involve weighting points and rejecting out |
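The truncated steps above can be illustrated with a bare-bones point-to-point sketch in Python/NumPy (one iteration only, brute-force matching, no weighting or outlier rejection; the SVD-based transform estimate shown is one common choice, not necessarily the variant of Besl and McKay in every detail):

```python
import numpy as np

def icp_iteration(source: np.ndarray, target: np.ndarray):
    """One point-to-point ICP iteration: match each source point to its
    nearest target point, then estimate the rigid transform (R, t) that
    minimizes the sum of squared distances between the matched pairs.
    source and target are (N, 3) and (M, 3) arrays of points."""
    # 1. Correspondence: brute-force nearest neighbour in the target cloud.
    d2 = ((source[:, None, :] - target[None, :, :]) ** 2).sum(axis=-1)
    matched = target[d2.argmin(axis=1)]

    # 2. Transform estimation via the SVD of the cross-covariance matrix.
    src_c, tgt_c = source.mean(axis=0), matched.mean(axis=0)
    H = (source - src_c).T @ (matched - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Usage sketch: apply (R, t) to the source cloud and repeat until the mean
# squared matching error stops improving or an iteration limit is reached.
```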
https://en.wikipedia.org/wiki/Akilia | Akilia Island is an island in southwestern Greenland, about 22 kilometers south of Nuuk. Akilia is the location of a rock formation that has been proposed to contain the oldest known sedimentary rocks on Earth, and perhaps the oldest evidence of life on Earth.
Geology
The rocks in question are part of a metamorphosed supracrustal sequence located at the south-western tip of the island. The sequence has been dated as no younger than 3.85 billion years old - that is, in the Hadean eon - based on the age of an igneous band that cuts the rock. The supracrustal sequence contains layers rich in iron and silica, which are variously interpreted as banded iron formation, chemical sediments from submarine hot springs, or hydrothermal vein deposits. Carbon in the rock, present as graphite, shows low levels of carbon-13, which may suggest an origin as isotopically light organic matter derived from living organisms.
However, this interpretation is complicated because of high-grade metamorphism that affected the Akilia rocks after their formation. The sedimentary origin, age and the carbon content of the rocks have been questioned.
If the Akilia rocks do show evidence of life by 3.85 Ga, it would challenge models which suggest that Earth would not be hospitable to life at this time.
See also
List of islands of Greenland
Origin of life |
https://en.wikipedia.org/wiki/Community%20forests%20in%20England | England's community forests are afforestation-based regeneration projects which were established in the early 1990s. Each of them is a partnership between the Forestry Commission and the Countryside Agency, which are agencies of the British government, and the relevant local councils.
Most of the designated areas are close to large cities and contain large amounts of brownfield, underused and derelict land. When the forests were created the average forest cover in the designated areas was 6.9%, and the target is to increase this to 30% over about 30 years. As most of the land is in private ownership the schemes rely mainly on providing landowners with incentives to plant trees. However the forests contain areas of publicly accessible open land, and increasing public access is one of the objectives.
The table below lists the community forests. As some of them straddle county boundaries they are listed by region and town or city.
See also |
https://en.wikipedia.org/wiki/Pseudoextinction | Pseudoextinction (or phyletic extinction) of a species occurs when all members of the species are extinct, but members of a daughter species remain alive. The term pseudoextinction refers to the evolution of a species into a new form, with the resultant disappearance of the ancestral form. Pseudoextinction results in the relationship between ancestor and descendant still existing even though the ancestor species no longer exists.
The classic example is that of the non-avian dinosaurs. While the non-avian dinosaurs of the Mesozoic died out, their descendants, birds, live on today. Many other families of bird-like dinosaurs also died out as the heirs of the dinosaurs continued to evolve, but because birds continue to thrive in the world today their ancestors are only pseudoextinct.
Overview
From a taxonomic perspective, pseudoextinction is "within an evolutionary lineage, the disappearance of one taxon caused by the appearance of the next." The pseudoextinction of a species can be arbitrary, simply resulting from a change in the naming of a species as it evolves from its ancestral form to its descendant form. Taxonomic pseudoextinction has to do with the disappearance of taxa that are categorized together by taxonomists. As they are just grouped together, their extinction is not reflected through lineage; therefore, unlike evolutionary pseudoextinction, taxonomic pseudoextinction does not alter the evolution of daughter species. From an evolutionary perspective, pseudoextinction entails the loss of a species as a result of the creation of a new one. As the primordial species evolves into its daughter species, either by anagenesis or cladogenesis, the ancestral species can be subject to extinction. Throughout the process of evolution, a taxon can disappear; in this case, pseudoextinction is considered an evolutionary event.
From a genetic perspective, pseudoextinction is the "disappearance of a taxon by virtue of its being evolved by anagenesis into another taxon. |
https://en.wikipedia.org/wiki/Motorola%2068881 | The Motorola 68881 and Motorola 68882 are floating-point units (FPUs) used in some computer systems in conjunction with Motorola's 32-bit 68020 or 68030 microprocessors. These coprocessors are external chips, designed before floating point math became standard on CPUs. The Motorola 68881 was introduced in 1984. The 68882 is a higher performance version produced later.
Overview
The 68020 and 68030 CPUs were designed with the separate 68881 chip in mind. Their instruction sets reserved the "F-line" instructions – that is, all opcodes beginning with the hexadecimal digit "F" could either be forwarded to an external coprocessor or be used as "traps" which would throw an exception, handing control to the computer's operating system. If an FPU is not present in the system, the OS would then either call an FPU emulator to execute the instruction's equivalent using 68020 integer-based software code, return an error to the program, terminate the program, or crash and require a reboot.
Architecture
The 68881 has eight 80-bit data registers (a 64-bit mantissa plus a sign bit, and a 15-bit signed exponent). It allows seven different modes of numeric representation, including single-precision floating point, double-precision floating point, extended-precision floating point, integers as 8-, 16- and 32-bit quantities and a floating-point Binary-coded decimal format. The binary floating point formats are as defined by the IEEE 754 floating-point standard. It was designed specifically for floating-point math and is not a general-purpose CPU. For example, when an instruction requires any address calculations, the main CPU handles them before the 68881 takes control.
The CPU/FPU pair are designed such that both can run at the same time. When the CPU encounters a 68881 instruction, it hands the FPU all operands needed for that instruction, and then the FPU releases the CPU to go on and execute the next instruction.
68882
The 68882 is an improved version of the 68881, with b |
https://en.wikipedia.org/wiki/4B5B | In telecommunication, 4B5B is a form of data communications line code. 4B5B maps groups of 4 bits of data onto groups of 5 bits for transmission. These 5-bit words are pre-determined in a dictionary and they are chosen to ensure that there will be sufficient transitions in the line state to produce a self-clocking signal. A collateral effect of the code is that 25% more bits are needed to send the same information.
An alternative to using 4B5B coding is to use a scrambler. Some systems use scramblers in conjunction with 4B5B coding to assure DC balance and improve electromagnetic compatibility.
Depending on the standard or specification of interest, there may be several 5-bit output codes left unused. The presence of any of the unused codes in the data stream can be used as an indication that there is a fault somewhere in the link. Therefore, the unused codes can be used to detect errors in the data stream.
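As an illustration, the sketch below encodes bytes with the 4-bit-to-5-bit data-symbol table commonly quoted for the FDDI flavour of 4B5B; the mapping is reproduced from memory here, so treat it as indicative and check the relevant specification before relying on it.

```python
# 4-bit nibble -> 5-bit code group (FDDI-style data symbols; illustrative only).
FOUR_B_FIVE_B = {
    0x0: "11110", 0x1: "01001", 0x2: "10100", 0x3: "10101",
    0x4: "01010", 0x5: "01011", 0x6: "01110", 0x7: "01111",
    0x8: "10010", 0x9: "10011", 0xA: "10110", 0xB: "10111",
    0xC: "11010", 0xD: "11011", 0xE: "11100", 0xF: "11101",
}

def encode_4b5b(data: bytes) -> str:
    """Encode each byte as two 5-bit code groups, high nibble first."""
    groups = []
    for byte in data:
        groups.append(FOUR_B_FIVE_B[byte >> 4])
        groups.append(FOUR_B_FIVE_B[byte & 0x0F])
    return "".join(groups)

# A byte expands from 8 to 10 transmitted bits -- the 25% overhead noted above.
print(encode_4b5b(b"\x00"))   # 1111011110
```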
Applications
4B5B was popularized by Fiber Distributed Data Interface (FDDI) in the mid-1980s. It was adopted for digital audio transmission by MADI in 1989, and by Fast Ethernet in 1995.
The name 4B5B is generally taken to mean the FDDI version. Other 4-to-5-bit codes have been used for magnetic recording and are known as group coded recording (GCR), but those are (0,2) run-length limited codes, with at most two consecutive zeros. 4B5B allows up to three consecutive zeros (a (0,3) RLL code), providing a greater variety of control codes.
On optical fiber, the 4B5B output is NRZI-encoded. FDDI over copper (CDDI) uses MLT-3 encoding instead, as does 100BASE-TX Fast Ethernet.
The 4B5B encoding is also used for USB Power Delivery communication, where it is sent over the USB-C CC pin (further encoded using biphase mark code) or the USB-A/B power lines (further encoded using frequency-shift keying).
Clocking
4B5B codes are designed to produce at least two transitions per 5 bits of output code regardless of input data. The transitions provide necessary trans |
https://en.wikipedia.org/wiki/Plastic%20wrap | Plastic wrap, cling film, Saran wrap, cling wrap, Glad wrap or food wrap is a thin plastic film typically used for sealing food items in containers to keep them fresh over a longer period of time. Plastic wrap, typically sold on rolls in boxes with a cutting edge, clings to many smooth surfaces and can thus remain tight over the opening of a container without adhesive. Common plastic wrap is roughly 0.0005 inches (12.7 μm) thick. The trend has been to produce thinner plastic wrap, particularly for household use (where very little stretch is needed), so now the majority of brands on shelves around the world are 8, 9 or 10 μm thick.
Materials used
Plastic wrap was initially created from polyvinyl chloride (PVC), which remains the most common component globally. PVC has an acceptably-low permeability to water vapor and oxygen, helping to preserve the freshness of food. There are concerns about the transfer of plasticizers from PVC into food. Pliofilm was made of various kinds of rubber chloride. Used in the middle of the 20th century, it could be heat-sealed.
A common, cheaper alternative to PVC is low-density polyethylene (LDPE). It is less adhesive than PVC, but this can be remedied by adding linear low-density polyethylene (LLDPE), which also increases the film's tensile strength.
In the US and Japan, plastic wrap is sometimes produced using polyvinylidene chloride (PVdC), though some brands, such as Saran wrap, have switched to other formulations due to environmental concerns.
Food use
Purpose
The most important role plastic wrap plays in food packaging is protection and preservation. Plastic wrap can prevent food from perishing, extend its shelf-life, and maintain the quality of food. Plastic wrap generally provides protection for food from three aspects: chemical (gases, moisture, and light), biological (microorganisms, insects and animals), and physical (mechanical damage). In addition to food protection and preservation, plastic wrap can also reduce food |
https://en.wikipedia.org/wiki/Eddy%20current%20brake | An eddy current brake, also known as an induction brake, Faraday brake, electric brake or electric retarder, is a device used to slow or stop a moving object by generating eddy currents and thus dissipating its kinetic energy as heat. Unlike friction brakes, where the drag force that stops the moving object is provided by friction between two surfaces pressed together, the drag force in an eddy current brake is an electromagnetic force between a magnet and a nearby conductive object in relative motion, due to eddy currents induced in the conductor through electromagnetic induction.
A conductive surface moving past a stationary magnet develops circular electric currents called eddy currents induced in it by the magnetic field, as described by Faraday's law of induction. By Lenz's law, the circulating currents create their own magnetic field that opposes the field of the magnet. Thus the moving conductor experiences a drag force from the magnet that opposes its motion, proportional to its velocity. The kinetic energy of the moving object is dissipated as heat generated by the current flowing through the electrical resistance of the conductor.
In an eddy current brake the magnetic field may be created by a permanent magnet or an electromagnet. With an electromagnet system, the braking force can be turned on and off (or varied) by varying the electric current in the electromagnet windings. Another advantage is that since the brake does not work by friction, there are no brake shoe surfaces to wear, eliminating replacement as with friction brakes. A disadvantage is that since the braking force is proportional to the relative velocity of the brake, the brake has no holding force when the moving object is stationary, as provided by static friction in a friction brake, hence in vehicles it must be supplemented by a friction brake.
In some cases, energy in the form of momentum stored within a motor or other machine is used to energize any electromagnets involved. The |
https://en.wikipedia.org/wiki/IEEE%20802.21 | The IEEE 802.21 refers to Media Independent Handoff (MIH) and is an IEEE standard published in 2008. The standard supports algorithms enabling seamless handover between wired and wireless networks of the same type as well as handover between different wired and wireless network types, also called media-independent handover (MIH) or vertical handover. The vertical handover was first introduced by Mark Stemm and Randy Katz at UC Berkeley. The standard provides information to allow handing over to and from wired 802.3 networks to wireless 802.11, 802.15, 802.16, 3GPP and 3GPP2 networks through different handover mechanisms.
The IEEE 802.21 working group started work in March 2004. More than 30 companies have joined the working group. The group produced a first draft of the standard including the protocol definition in May 2005. The standard was published in January 2009.
Reasons for 802.21
Cellular networks and 802.11 networks employ handover mechanisms for handover within the same network type (aka horizontal handover). Mobile IP provides handover mechanisms for handover across subnets of different types of networks, but can be slow in the process. Current 802 standards do not support handover between different types of networks. They also do not provide triggers or other services to accelerate mobile IP-based handovers. Moreover, existing 802 standards provide mechanisms for detecting and selecting network access points, but do not allow for detection and selection of network access points in a way that is independent of the network type.
Some of the expectations
Allow roaming between 802.11 networks and 3G cellular networks.
Allow users to engage in ad hoc teleconferencing.
Apply to both wired and wireless networks, likely the same list as IEEE P1905 specifies to cooperate in software-defined networking (see also OpenFlow)
Allow for use by multiple vendors and users.
Compatibility and conformance with other IEEE 802 standards especially 802.11u unknown user |
https://en.wikipedia.org/wiki/Internet%20studies | Internet studies is an interdisciplinary field studying the social, psychological, political, technical, cultural and other dimensions of the Internet and associated information and communication technologies. The human aspects of the Internet are a subject of focus in this field. While that may be facilitated by the underlying technology of the Internet, the focus of study is often less on the technology itself than on the social circumstances that technology creates or influences.
While studies of the Internet are now widespread across academic disciplines, there is a growing collaboration among these investigations. In recent years, Internet studies have become institutionalized as courses of study at several institutions of higher learning. Cognates are found in departments of a number of other names, including departments of "Internet and Society", "virtual society", "digital culture", "new media" or "convergent media", various "iSchools", or programs like "Media in Transition" at MIT. On the research side, Internet studies intersects with studies of cyberculture, human–computer interaction, and science and technology studies.
Internet and society is a research field that addresses the interrelationship of Internet and society, i.e. how society has changed the Internet and how the Internet has changed society.
The topic of social issues relating to Internet has become notable since the rise of the World Wide Web, which can be observed from the fact that journals and newspapers run many stories on topics such as cyberlove, cyberhate, Web 2.0, cybercrime, cyberpolitics, Internet economy, etc.
As most of the scientific monographs that have considered Internet and society in their book titles are social theoretical in nature, Internet and society can be considered as a primarily social theoretical research approach of Internet studies.
Topics of study
In recent years, Internet studies have become institutionalized as courses of study, and even separate departme |
https://en.wikipedia.org/wiki/PEPA | Performance Evaluation Process Algebra (PEPA) is a stochastic process algebra designed for modelling computer and communication systems introduced by Jane Hillston in the 1990s. The language extends classical process algebras such as Milner's CCS and Hoare's CSP by introducing probabilistic branching and timing of transitions.
Rates are drawn from the exponential distribution and PEPA models are finite-state and so give rise to a stochastic process, specifically a continuous-time Markov process (CTMC). Thus the language can be used to study quantitative properties of models of computer and communication systems such as throughput, utilisation and response time as well as qualitative properties such as freedom from deadlock. The language is formally defined using a structured operational semantics in the style invented by Gordon Plotkin.
As with most process algebras, PEPA is a parsimonious language. It has only four combinators, prefix, choice, co-operation and hiding. Prefix is the basic building block of a sequential component: the process (a, r).P performs activity a at rate r before evolving to behave as component P. Choice sets up a competition between two possible alternatives: in the process (a, r).P + (b, s).Q either a wins the race (and the process subsequently behaves as P) or b wins the race (and the process subsequently behaves as Q).
The co-operation operator requires the two "co-operands" to join for those activities which are specified in the co-operation set: in the process P < a, b> Q the processes P and Q must co-operate on activities a and b, but any other activities may be performed independently. The reversed compound agent theorem gives a set of sufficient conditions for a co-operation to have a product form stationary distribution.
Finally, the process P/{a} hides the activity a from view (and prevents other processes from joining with it).
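The race semantics of choice can be made concrete with a small simulation (an illustrative sketch, not a PEPA tool): in (a, r).P + (b, s).Q each activity samples an exponentially distributed delay, the shorter delay wins, and over many runs activity a wins with probability r / (r + s).

```python
import random

def race(r: float, s: float, trials: int = 100_000) -> float:
    """Simulate the PEPA choice (a, r).P + (b, s).Q as a race between two
    exponentially distributed delays; return the observed frequency of a winning."""
    a_wins = sum(
        random.expovariate(r) < random.expovariate(s) for _ in range(trials)
    )
    return a_wins / trials

# With rates r = 2.0 and s = 1.0, activity a should win about 2/3 of the time.
print(race(2.0, 1.0))
```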
Syntax
Given a set of action names, the set of PEPA processes is defined by the following BNF |
https://en.wikipedia.org/wiki/B%20Reactor | The B Reactor at the Hanford Site, near Richland, Washington, was the first large-scale nuclear reactor ever built. The project was a key part of the Manhattan Project, the United States nuclear weapons development program during World War II. Its purpose was to convert natural (not isotopically enriched) uranium metal into plutonium-239 by neutron activation, as plutonium is simpler to chemically separate from spent fuel assemblies, for use in nuclear weapons, than it is to isotopically enrich uranium into weapon-grade material. The B reactor was fueled with metallic natural uranium, graphite moderated, and water-cooled. It has been designated a U.S. National Historic Landmark since August 19, 2008 and in July 2011 the National Park Service recommended that the B Reactor be included in the Manhattan Project National Historical Park commemorating the Manhattan Project. Visitors can take a tour of the reactor by advance reservation.
Design and construction
The reactor was designed and built by E. I. du Pont de Nemours and Company based on experimental designs tested by Enrico Fermi at the University of Chicago, and tests from the X-10 Graphite Reactor at Oak Ridge National Laboratory. It was designed to operate at 250 megawatts (thermal).
The purpose of the reactor was to breed plutonium from natural (not isotopically enriched) uranium metal, as uranium enrichment was a difficult process, while plutonium is relatively simple to process chemically. The Y12 uranium enrichment plant required 14,700 tons of silver for its enrichment calutrons, as well as 22,000 employees and more electrical power than most entire states required at the time. Reactor B required only a few dozen employees and fewer exotic materials, which were also required in far smaller amounts. The largest part of the reactor was 1200 tons of graphite used as a moderator, and its power consumption was vastly smaller, requiring only enough electricity to run the cooling pumps.
The reactor occupies a f |
https://en.wikipedia.org/wiki/Handel%20%28warning%20system%29 | Handel was the code-name for the UK's national attack warning system in the Cold War. It consisted of a small console with two microphones, lights and gauges. The reason behind this was to provide a back-up if anything failed.
If an enemy airstrike was detected, a key on the left-hand side of the console would be turned and two lights would come on. Then the operator would press and hold down a red button and give the message:
The message would be sent to the police by the telephone system used for the speaking clock, who would in turn activate the air attack sirens using the local telephone lines. The rationale was to tackle two problems at once, as it reduced running costs (it would most likely be used only once in its working life, though it was regularly tested) and the telephone lines were continually tested for readiness by sharing infrastructure with a public service. This meant a fault could be detected in time to give a warning.
A Handel warning console can be seen at the Imperial War Museum in London among their Cold War exhibits, alongside the warning apparatus used by Kent Police (which was located at Maidstone police station to activate the sirens).
See also
BIKINI state
Four-minute warning
National Emergency Alarm Repeater |
https://en.wikipedia.org/wiki/Phase%20synchronization | Phase synchronization is the process by which two or more cyclic signals tend to oscillate with a repeating sequence of relative phase angles.
Phase synchronisation is usually applied to two waveforms of the same frequency with identical phase angles in each cycle. However, it can be applied if there is an integer relationship of frequency, such that the cyclic signals share a repeating sequence of phase angles over consecutive cycles. These integer relationships are called Arnold tongues, which follow from bifurcation of the circle map.
One example of phase synchronization of multiple oscillators can be seen in the behavior of Southeast Asian fireflies. At dusk, the flies begin to flash periodically with random phases and a Gaussian distribution of native frequencies. As night falls, the flies, sensitive to one another's behavior, begin to synchronize their flashing. After some time all the fireflies within a given tree (or even larger area) will begin to flash simultaneously in a burst.
Thinking of the fireflies as biological oscillators, we can define the phase to be 0° during the flash and ±180° exactly halfway to the next flash. Thus, when they begin to flash in unison, they synchronize in phase.
One way to keep a local oscillator "phase synchronized" with a remote transmitter uses a phase-locked loop.
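The firefly picture can be mimicked with a toy Kuramoto-style simulation in Python (a sketch only; the coupling strength and frequency spread are made-up values). Oscillators with Gaussian-distributed native frequencies pull on one another's phases and, for sufficiently strong coupling, end up oscillating nearly in unison.

```python
import numpy as np

def kuramoto_order(n=100, coupling=2.0, dt=0.01, steps=5000, seed=0):
    """Integrate a mean-field Kuramoto model and return the final order
    parameter r in [0, 1]; r close to 1 indicates phase synchronization."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(loc=1.0, scale=0.1, size=n)   # native frequencies
    theta = rng.uniform(0.0, 2.0 * np.pi, size=n)    # random initial phases
    for _ in range(steps):
        z = np.exp(1j * theta).mean()                # complex order parameter
        r, psi = np.abs(z), np.angle(z)
        # mean-field form of d(theta_i)/dt = w_i + (K/N) * sum_j sin(theta_j - theta_i)
        theta += dt * (omega + coupling * r * np.sin(psi - theta))
    return np.abs(np.exp(1j * theta).mean())

print(kuramoto_order())   # close to 1.0 for this coupling strength
```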
See also
Algebraic connectivity
Coherence (physics)
Kuramoto model
Synchronization (alternating current) |
https://en.wikipedia.org/wiki/Superhelix | A superhelix is a molecular structure in which a helix is itself coiled into a helix. This is significant to both proteins and genetic material, such as overwound circular DNA.
The earliest significant reference in molecular biology is from 1971, by F. B. Fuller:
A geometric invariant of a space curve, the writhing number, is defined and studied. For the central curve of a twisted cord the writhing number measures the extent to which coiling of the central curve has relieved local twisting of the cord. This study originated in response to questions that arise in the study of supercoiled double-stranded DNA rings.
About the writhing number, mathematician W. F. Pohl says:
It is well known that the writhing number is a standard measure of the global geometry of a closed space curve.
Contrary to intuition, a topological property, the linking number, arises from the geometric properties twist and writhe according to the following relationship:
Lk = T + W,
where Lk is the linking number, W is the writhe and T is the twist of the coil.
The linking number refers to the number of times that one strand wraps around the other. In DNA this property does not change and can only be modified by specialized enzymes called topoisomerases.
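As a worked example with made-up numbers: because the linking number of a covalently closed circle is fixed, any local untwisting must be taken up as writhe,

```latex
Lk = T + W, \qquad \text{e.g. } Lk = 25,\; T = 23 \;\Longrightarrow\; W = Lk - T = 2 .
```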
See also
DNA supercoil (superhelical DNA)
Knot theory |
https://en.wikipedia.org/wiki/Philip%20Stott | Philip Stott (born England, 1945) is a professor emeritus of biogeography at the School of Oriental and African Studies, University of London, and a former editor (1987–2004) of the Journal of Biogeography.
Background
In the early 1970s, Stott and his wife, a historian and biographer, lived in Thailand and he was carrying out field research at Kalasin. He has two daughters.
He has written academic papers and books on chalk grassland, on the vegetation and archaeology of Thailand (and on the rest of southeast Asia), on ecology and biogeography (e.g. his textbook 'Historical Plant Geography'), on fire ecology, on lichens and mosses, on tropical rain forest, and on the construction of environmental knowledge.
He was chairman of The Anglo-Thai Society (2003–2007), UK. He is no longer a member of the Scientific Alliance because he deems it important to be academically independent of all organisations, industry, and green groups.
Media
He writes for the press, especially for The Times, and broadcasts regularly on BBC radio and television on subjects including biogeography, extinction, climatology, and ecology.
He hosted a number of websites, including a pro-biotech site supporting genetically modified foods, another countering 'ecohype' and later 'envirospin', and one based on Bruno Latour's 'A Parliament of Things'.
Stott was often on Talksport with James Whale talking about Global Warming in his regular evening show. He appeared on "The Great Global Warming Swindle" on Channel 4 and presented several issues of 'Home Planet' on BBC Radio 4 (2009-2011).
In June 2008 he was a guest on Private Passions, the biographical music discussion programme on BBC Radio 3.
Published works
Royal Siamese Maps: War and Trade in Nineteenth Century Thailand (River Books and Thames & Hudson: 2005), for H.R.H. Princess Sirindhorn (with Santanee Phasuk)
Global environmental change (Blackwell Science: largely on climate change) (with Dr. Peter Moore and Professor Bill Chaloner)
Political ecolo |
https://en.wikipedia.org/wiki/VSAN | A virtual storage area network (virtual SAN, VSAN or vSAN) is a logical representation of a physical storage area network (SAN). A VSAN abstracts the storage-related operations from the physical storage layer, and provides shared storage access to the applications and virtual machines by combining the servers' local storage over a network into a single or multiple storage pools.
The use of VSANs allows the isolation of traffic within specific portions of the network. If a problem occurs in one VSAN, that problem can be handled with a minimum of disruption to the rest of the network. VSANs can also be configured separately and independently.
Technology
Operation
A VSAN operates as a dedicated piece of software responsible for storage access, and depending on the vendor, can run either as a virtual storage appliance (VSA), a storage controller that runs inside an isolated virtual machine (VM) or as an ordinary user-mode application, such as StarWind Virtual SAN, or DataCore SANsymphony. Alternatively it can be implemented as a kernel-mode loadable module, such as VMware vSAN, or Microsoft Storage Spaces Direct (S2D). A VSAN can be tied to a specific hypervisor, known as hypervisor-dedicated, or it can allow different hypervisors, known as hypervisor-agnostic.
Different vendors have different requirements for the minimum number of nodes that participate in a resilient VSAN cluster. The minimum requirement is to have at least 2 for high availability.
All-flash versus hybrid VSAN
Data center operators can deploy VSANs in an all-flash environment or a hybrid configuration, where flash is only used at the caching layer, and traditional spinning disk storage is used everywhere else. All-flash VSANs are higher performing, but as of 2019 were more expensive than hybrid networks.
Protocols
For sharing storage over a network, VSAN utilizes protocols including Fibre Channel (FC), Internet Small Computer Systems Interface (iSCSI), Server Message Block (SMB), and Network |
https://en.wikipedia.org/wiki/Imperforate%20hymen | An imperforate hymen is a congenital disorder where a hymen without an opening completely obstructs the vagina. It is caused by a failure of the hymen to perforate during fetal development. It is most often diagnosed in adolescent girls when menstrual blood accumulates in the vagina and sometimes also in the uterus. It is treated by surgical incision of the hymen.
Signs and symptoms
Affected newborns may present with acute urinary retention. In adolescent females, the most common symptoms of an imperforate hymen are cyclic pelvic pain and amenorrhea; other symptoms associated with hematocolpos include urinary retention, constipation, low back pain, nausea, and diarrhea. Other vaginal anomalies can have similar symptoms to an imperforate hymen. Vaginal atresia and a transverse vaginal septum require differentiation. A strong urge to defecate has been observed in a few women.
Complications
If untreated or unrecognized before puberty, an imperforate hymen can lead to peritonitis or endometriosis due to retrograde bleeding. Additionally, it can lead to mucometrocolpos (dilatation of the vaginal canal and uterus due to mucus buildup) or hematometrocolpos (dilatation due to buildup of menstrual fluid). Mucometrocolpos and hematocolpos can in turn cause urinary retention, constipation, and urinary tract infection.
Pathophysiology
An imperforate hymen is formed during fetal development when the sinovaginal bulbs fail to canalize with the rest of the vagina. Although some instances of familial occurrence have been reported, the condition's occurrence is mostly sporadic, and no genetic markers or mutations have been linked to its cause.
Diagnosis
An imperforate hymen is most often diagnosed in adolescent girls after the age of menarche with otherwise normal development. In adolescent girls of menarcheal age, the typical presentation of the condition is amenorrhea and cyclic pelvic pain, indicative of hematocolpos secondary to vaginal obstruction. An imperforate hymen is u |
https://en.wikipedia.org/wiki/Graphics%20device%20interface | A graphics device interface is a subsystem that most operating systems use for representing graphical objects and transmitting them to output devices such as monitors and printers. In most cases, the graphics device interface is only able to draw 2D graphics and simple 3D graphics; in order to make use of more advanced graphics while keeping performance, an API such as DirectX or OpenGL needs to be installed.
In Microsoft Windows, the GDI functionality resides in gdi.exe on 16-bit Windows, and gdi32.dll on 32-bit Windows.
Operating system technology |
https://en.wikipedia.org/wiki/Motilin | Motilin is a 22-amino acid polypeptide hormone in the motilin family that, in humans, is encoded by the MLN gene.
Motilin is secreted by endocrine Mo cells (also referred to as M cells, which are not the same as the M cells, or microfold cells, found in Peyer's patches) that are numerous in crypts of the small intestine, especially in the duodenum and jejunum. It is released into the general circulation in humans at about 100-min intervals during the inter-digestive state and is the most important factor in controlling the inter-digestive migrating contractions; and it also stimulates endogenous release of the endocrine pancreas. Based on amino acid sequence, motilin is unrelated to other hormones. Because of its ability to stimulate gastric activity, it was named "motilin." Apart from in humans, the motilin receptor has been identified in the gastrointestinal tracts of pigs, rats, cows, and cats, and in the central nervous system of rabbits.
Discovery
Motilin was discovered by J.C. Brown when he introduced alkaline solution into duodena of dogs, which caused strong gastric contractions. Brown et al. predicted that alkali could either release stimulus to activate motor activity or prevent the secretion of inhibitory hormone. They isolated a polypeptide as a by-product from purification of secretin on carboxymethyl cellulose. They named this polypeptide "Motilin."
Structure
Motilin has 22 amino acids and molecular weight of 2698 Daltons. In extract from human gut and plasma, there are two basic forms of motilin. The first molecular form is the polypeptide of 22 amino acids. The second form, on the other hand, is larger and contains the same 22 amino acids as the first form but includes an additional carboxyl-terminus end.
The sequences of amino acids of motilin is: Phe-Val-Pro-Ile-Phe-Thr-Tyr-Gly-Glu-Leu-Gln-Arg-Met-Gln-Glu-Lys-Glu-Arg-Asn-Lys-Gly-Gln.
The structure and dynamics of the gastrointestinal peptide hormone motilin have been studied in the presence o |
https://en.wikipedia.org/wiki/WeatherStar | WeatherStar (sometimes rendered Weather Star or WeatherSTAR; "STAR" being an acronym for Satellite Transponder Addressable Receiver) is the technology used by American cable and satellite television network The Weather Channel (TWC) to generate its local forecast segments—branded as Local on the 8s (LOT8s) since 2002 and previously from 1996 to 1998—on cable and IPTV systems nationwide. The hardware takes the form of a computerized unit installed at a cable system's headend. It receives, generates, and inserts local forecasts and other weather information, including weather advisories and warnings, into TWC's national programming.
Overview
The primary purpose of WeatherStar units is to disseminate weather information for local forecast segments on The Weather Channel. The forecast and observation data – which is compiled from local offices of the National Weather Service (NWS), the Storm Prediction Center (SPC), and The Weather Channel (which began producing in-house forecasts in 2002, replacing the NWS-sourced zone forecasts that were utilized for the STAR's descriptive, regional and extended forecast products) – is received from the vertical blanking interval of the TWC video feed and from data transmitted via satellite; the localized data is then sent to the unit that inserts the data and accompanying programmed graphics over the TWC feed. The WeatherStar systems are typically programmed to cue the local forecast segments and Lower Display Line (LDL) at given times. The units are programmed to feature customized segments known as "flavors," pre-determined segment lengths for each local forecast segment, varying by the time of broadcast, accommodating the inclusion or exclusion of certain products from a segment's product list. (Until the Local on the 8s segments adopted a uniform length, the extended forecast was the only product regularly included in each flavor.) Flavor lengths previously varied commonly between 30 seconds and two minutes, with some running as |
https://en.wikipedia.org/wiki/Differential-algebraic%20system%20of%20equations | In electrical engineering, a differential-algebraic system of equations (DAE) is a system of equations that either contains differential equations and algebraic equations, or is equivalent to such a system.
In mathematics these are examples of differential algebraic varieties and correspond to ideals in differential polynomial rings (see the article on differential algebra for the algebraic setup).
We can write these differential equations for a dependent vector of variables x in one independent variable t as F(dx/dt, x, t) = 0.
When considering these symbols as functions of a real variable (as is the case in applications in electrical engineering or control theory) we look at x as a vector of dependent variables, and the system has as many equations as components of x, each considered as a function of dx/dt, x and t.
They are distinct from ordinary differential equation (ODE) systems in that a DAE is not completely solvable for the derivatives of all components of the function x because these may not all appear (i.e. some equations are algebraic); technically the distinction between an implicit ODE system [that may be rendered explicit] and a DAE system is that the Jacobian matrix ∂F/∂(dx/dt) is a singular matrix for a DAE system. This distinction between ODEs and DAEs is made because DAEs have different characteristics and are generally more difficult to solve.
In practical terms, the distinction between DAEs and ODEs is often that the solution of a DAE system depends on the derivatives of the input signal and not just the signal itself as in the case of ODEs; this issue is commonly encountered in nonlinear systems with hysteresis, such as the Schmitt trigger.
This difference is more clearly visible if the system may be rewritten so that instead of x we consider a pair (x, y) of vectors of dependent variables and the DAE has the form
dx/dt = f(x, y, t),  0 = g(x, y, t), where x(t) collects the differential variables and y(t) the algebraic variables.
A DAE system of this form is called semi-explicit. Every solution of the second half g of the equation defines a unique direction for x via the first half f of the equations, while the dir |
https://en.wikipedia.org/wiki/Nassi%E2%80%93Shneiderman%20diagram | A Nassi–Shneiderman diagram (NSD) in computer programming is a graphical design representation for structured programming. This type of diagram was developed in 1972 by Isaac Nassi and Ben Shneiderman who were both graduate students at Stony Brook University. These diagrams are also called structograms, as they show a program's structures.
Overview
Following a top-down design, the problem at hand is reduced into smaller and smaller subproblems, until only simple statements and control flow constructs remain. Nassi–Shneiderman diagrams reflect this top-down decomposition in a straightforward way, using nested boxes to represent subproblems. Consistent with the philosophy of structured programming, Nassi–Shneiderman diagrams have no representation for a GOTO statement.
Nassi–Shneiderman diagrams are only rarely used for formal programming. Their abstraction level is close to structured program code and modifications require the whole diagram to be redrawn, but graphic editors removed that limitation. They clarify algorithms and high-level designs, which make them useful in teaching. They were included in Microsoft Visio and dozens of other software tools, such as the German EasyCODE.
In Germany, Nassi–Shneiderman diagrams were standardised in 1985 as DIN 66261. They are still used in German introductions to programming, for example Böttcher and Kneißl's introduction to C, Baeumle-Courth and Schmidt's introduction to C and Kirch's introduction to C#.
Nassi–Shneiderman diagrams can also be used in technical writing.
Diagrams
Process blocks: the process block represents the simplest of steps and requires no analysis. When a process block is encountered, the action inside the block is performed and we move onto the next block.
Branching blocks: there are two types of branching blocks. First is the simple True/False or Yes/No branching block which offers the program two paths to take depending on whether or not a condition has been fulfilled. These blocks can be u |
https://en.wikipedia.org/wiki/Fuel%20element%20failure | A fuel element failure is a rupture in a nuclear reactor's fuel cladding that allows the nuclear fuel or fission products, either in the form of dissolved radioisotopes or hot particles, to enter the reactor coolant or storage water.
The de facto standard nuclear fuel is uranium dioxide or a mixed uranium/plutonium dioxide. This has a higher melting point than the actinide metals. Uranium dioxide resists corrosion in water and provides a stable matrix for many of the fission products; however, to prevent fission products (such as the noble gases) from leaving the uranium dioxide matrix and entering the coolant, the pellets of fuel are normally encased in tubes of a corrosion-resistant metal alloy (normally Zircaloy for water-cooled reactors).
Those elements are then assembled into bundles to allow good handling and cooling. As the fuel fissions, the radioactive fission products are also contained by the cladding, and the entire fuel element can then be disposed of as nuclear waste when the reactor is refueled.
If, however, the cladding is damaged, those fission products (which are not immobile in the uranium dioxide matrix) can enter the reactor coolant or storage water and can be carried out of the core, into the rest of the primary cooling circuit, increasing contamination levels there.
In the EU, some work has been done in which fuel is overheated in a special research reactor named PHEBUS. During these experiments the emissions of radioactivity from the fuel are measured and afterwards the fuel is subjected to Post Irradiation Examination to discover more about what happened to it. |
https://en.wikipedia.org/wiki/Arakan%20forest%20turtle | The Arakan forest turtle (Heosemys depressa) is a critically endangered turtle species native to the Arakan Hills in western Myanmar and the bordering Chittagong Hill Tracts in Bangladesh. The Arakan forest turtle is a semiterrestrial turtle, meaning it can survive in aquatic as well as terrestrial habitats, but adults prefer living in terrestrial habitats.
Taxonomy
Geoëmyda depressa was the scientific name proposed by Anderson in 1875 who described a zoological specimen collected in Arakan.
Characteristics
The Arakan forest turtle has 18 plastral annuli, a carapace length of and weighs .
Distribution and habitat
In 2009, the Arakan forest turtle was discovered in Rakhine Yoma Elephant Range in Myanmar. The scientific team also labeled the area as a good prospective place to focus conservation efforts for the turtle, despite the fact that locals do occasionally hunt and eat them. Even with those activities, this protected area is difficult to access and lacks any human settlement, making any human interference with the turtle merely opportunistic. No large-scale commercial project hunts the turtle, nor would there be a demand for one, since the turtle is too difficult to find compared to the little profit there is for doing so. Furthermore, the area even has a low risk of being exploited for natural resources.
In 2015, a potential population was discovered during a non-governmental organization's citizen science project, suggesting a population of the Arakan turtle may reside in the Chittagong Hill Tracts.
Behaviour and ecology
The Arakan forest turtle is active at night and increases its activity during the early wet season. Local hunters found eggs in June and July when skinning female specimens, possibly revealing the reproductive system and cycle of the species.
It remains dormant the majority of the time and hides in leaves and debris. It is an omnivore, feeding on both animals and plants. Although it is considered a relatively reserved animal for |
https://en.wikipedia.org/wiki/Epicondylitis | Epicondylitis is the inflammation of an epicondyle or of adjacent tissues. Epicondyles are on the medial and lateral aspects of the elbow, consisting of the two bony prominences at the distal end of the humerus. These bony projections serve as the attachment point for the forearm musculature. Inflammation to the tendons and muscles at these attachment points can lead to medial and/or lateral epicondylitis. This can occur through a range of factors that overuse the muscles that attach to the epicondyles, such as sports or job-related duties that increase the workload of the forearm musculature and place stress on the elbow. Lateral epicondylitis is also known as “Tennis Elbow” due to its sports related association to tennis athletes, while medial epicondylitis is often referred to as “golfer's elbow.”
Risk factors
In a cross-sectional population-based study of the working population, psychological distress and bending and straightening of the elbow joint for >1 h per day were found to be risk factors associated with epicondylitis.
Another study revealed the following potential risk factors among the working population:
Force and repetitive motions (handling tools >1 kg, handling loads >20 kg at least 10 times/day, repetitive movements >2 h/day) were found to be associated with the occurrence of lateral epicondylitis.
Low job control and low social support were also found to be associated with lateral epicondylitis.
Exposures to force (handling loads >5 kg, handling loads >20 kg at least 10 times/day, high hand-grip forces >1 h/day), repetitiveness (repetitive movements >2 h/day) and vibration (working with vibrating tools >2 h/day) were associated with medial epicondylitis.
In addition to repetitive activities, obesity and smoking have been implicated as independent risk factors.
Symptoms
Tenderness to palpation at the medial or lateral epicondyle
Pain or difficulty with wrist flexion or extension
Diminished grip strength
Pain or burning se |
https://en.wikipedia.org/wiki/Vadose%20zone | The vadose zone, also termed the unsaturated zone, is the part of Earth between the land surface and the top of the phreatic zone, the position at which the groundwater (the water in the soil's pores) is at atmospheric pressure ("vadose" is from the Latin word for "shallow"). Hence, the vadose zone extends from the top of the ground surface to the water table.
Water in the vadose zone has a pressure head less than atmospheric pressure, and is retained by a combination of adhesion (funiculary groundwater) and capillary action (capillary groundwater). If the vadose zone envelops soil, the water contained therein is termed soil moisture. In fine-grained soils, capillary action can cause the pores of the soil to be fully saturated above the water table at a pressure less than atmospheric. The vadose zone does not include the area that is still saturated above the water table, often referred to as the capillary fringe.
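As a rough illustration of the capillary retention described above, the classical capillary-rise (Jurin) relation for an idealized cylindrical pore of radius r estimates the height h to which water is held above the water table; the pore geometry and symbols here are illustrative assumptions, since real soil pores are irregular and this is only an order-of-magnitude guide:
\[
h = \frac{2\gamma \cos\theta_c}{\rho g r},
\]
where γ is the surface tension of water, θ_c the contact angle, ρ the water density and g the gravitational acceleration. For water (γ ≈ 0.072 N/m, ρ ≈ 1000 kg/m³) in a pore of radius 10 μm with θ_c ≈ 0, this gives h ≈ 1.5 m, which is why fine-grained soils can remain saturated well above the water table (the capillary fringe).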
Movement of water within the vadose zone is studied within soil physics and hydrology, particularly hydrogeology, and is of importance to agriculture, contaminant transport, and flood control. The Richards equation, which is based in part on Darcy's law, is often used to describe the flow of water mathematically. Groundwater recharge, an important process that refills aquifers, generally occurs through the vadose zone from precipitation.
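For concreteness, a minimal sketch of the one-dimensional (vertical) form of the Richards equation is given below, with z the elevation taken positive upward, θ the volumetric water content, h the pressure head and K(h) the unsaturated hydraulic conductivity; sign and variable conventions vary between texts:
\[
q = -K(h)\left(\frac{\partial h}{\partial z} + 1\right), \qquad
\frac{\partial \theta}{\partial t} = -\frac{\partial q}{\partial z}
  = \frac{\partial}{\partial z}\left[K(h)\left(\frac{\partial h}{\partial z} + 1\right)\right].
\]
The first relation is the Darcy–Buckingham extension of Darcy's law to unsaturated flow; combining it with conservation of mass yields the Richards equation.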
In hydrology
The vadose zone is the unsaturated portion of the subsurface that lies above the groundwater table. The soil and rock in the vadose zone are not fully saturated with water; that is, the pores within them contain air as well as water. The portion of the vadose zone that is inhabited by soil microorganisms, fungi and plant roots may sometimes be called the soil carbon sponge.
In some places, the vadose zone is absent, as is common where there are lakes and marshes, and in some places, it is hundreds of meters thick, as is common in arid regions.
Unlike the aquifers of the underlying water |
https://en.wikipedia.org/wiki/Nakayama%27s%20lemma | In mathematics, more specifically abstract algebra and commutative algebra, Nakayama's lemma — also known as the Krull–Azumaya theorem — governs the interaction between the Jacobson radical of a ring (typically a commutative ring) and its finitely generated modules. Informally, the lemma immediately gives a precise sense in which finitely generated modules over a commutative ring behave like vector spaces over a field. It is an important tool in algebraic geometry, because it allows local data on algebraic varieties, in the form of modules over local rings, to be studied pointwise as vector spaces over the residue field of the ring.
The lemma is named after the Japanese mathematician Tadashi Nakayama, who introduced it in its present form, although it was first discovered in the special case of ideals in a commutative ring by Wolfgang Krull and then in general by Goro Azumaya (1951). In the commutative case, the lemma is a simple consequence of a generalized form of the Cayley–Hamilton theorem, an observation made by Michael Atiyah (1969). The special case of the noncommutative version of the lemma for right ideals appears in Nathan Jacobson (1945), and so the noncommutative Nakayama lemma is sometimes known as the Jacobson–Azumaya theorem. The latter has various applications in the theory of Jacobson radicals.
Statement
Let R be a commutative ring with identity 1. The following is Nakayama's lemma:
Statement 1: Let I be an ideal in R, and M a finitely generated module over R. If IM = M, then there exists an r in R with r ≡ 1 (mod I) such that rM = 0.
This is proven below. A useful way to remember Nakayama's lemma is through the following alternative formulation:
Statement 2: Let I be an ideal in R, and M a finitely generated module over R. If IM = M, then there exists an a in I such that am = m for all m in M.
Proof: Take a = 1 − r, where r is as in Statement 1.
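A minimal sketch of the standard "determinant trick" behind Statement 1 (the generating set m_1, …, m_n of M and the matrix A are notation introduced here for illustration, not taken from the article): since IM = M, each generator can be expressed with coefficients in I, and the adjugate identity turns this into an annihilator congruent to 1 modulo I:
\[
m_i = \sum_{j=1}^{n} a_{ij} m_j \ \ (a_{ij} \in I)
\;\Longrightarrow\; (\mathrm{Id} - A)\,\mathbf{m} = 0, \qquad A = (a_{ij}),\ \mathbf{m} = (m_1, \dots, m_n)^{T},
\]
\[
\operatorname{adj}(\mathrm{Id} - A)\,(\mathrm{Id} - A) = \det(\mathrm{Id} - A)\,\mathrm{Id}
\;\Longrightarrow\; r\,m_j = 0 \ \text{for all } j, \qquad r := \det(\mathrm{Id} - A).
\]
Expanding the determinant shows that r differs from 1 only by terms lying in I, so r ≡ 1 (mod I) and rM = 0, as required.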
The following corollary is also known as Nakayama's lemma, and it is in this form that it most often appears.
Statement 3: If M is a finitely generated mod |
https://en.wikipedia.org/wiki/Oceanographic%20Museum%20of%20Monaco | The Oceanographic Museum (Musée océanographique) is a museum of marine sciences in Monaco-Ville, Monaco.
This building is part of the Institut océanographique, which is committed to sharing its knowledge of the oceans.
History
The Oceanographic Museum was inaugurated in 1910 by Monaco's modernist reformer, Prince Albert I. To the celebrations he invited not just high officials and celebrities but also the world's leading oceanographers of the day, in order to develop the concept of a future Mediterranean Commission dedicated to oceanography, now called the Mediterranean Science Commission.
Jacques-Yves Cousteau was director from 1957 to 1988. The Museum celebrated its centenary in March 2010, after extensive renovations.
Overview
The museum is home to exhibitions and collections of various species of sea fauna (starfish, seahorses, turtles, jellyfish, crabs, lobsters, rays, sharks, sea urchins, sea cucumbers, eels, cuttlefish, etc.). The museum's holdings also include a great variety of sea-related objects, including model ships, sea animal skeletons, tools and weapons, as well as a collection of material culture and ritual objects made from, or incorporating, materials such as pearls, molluscs and nacre.
On the first floor, A Sailor’s Career showcases the work of Prince Albert I. It includes the laboratory from L’Hirondelle, the first of Prince Albert's research yachts. Observations made there led to an understanding of the phenomenon of anaphylaxis, for which Dr Charles Richet received the Nobel Prize in Physiology or Medicine in 1913.
An aquarium in the basement of the museum presents a wide array of flora and fauna. Four thousand species of fish and over 200 families of invertebrates can be seen. The aquarium also features a presentation of Mediterranean and tropical marine ecosystems.
Numerous artists, such as Damien Hirst and Philippe Pasqua, have displayed their artworks in the museum.
Architecture
This monumental example of highly charged Baroque Revival architecture has a |
https://en.wikipedia.org/wiki/Ion%20beam-assisted%20deposition | Ion beam assisted deposition (IBAD or IAD; not to be confused with ion beam induced deposition, IBID) is a materials engineering technique that combines ion implantation with simultaneous sputtering or another physical vapor deposition technique. Besides providing independent control of parameters such as ion energy, temperature and arrival rate of atomic species during deposition, this technique is especially useful for creating a gradual transition between the substrate material and the deposited film, and for depositing films with less built-in strain than is possible by other techniques. These two properties can result in films with a much more durable bond to the substrate. Experience has shown that some metastable compounds, such as cubic boron nitride (c-BN), can only be formed in thin films when bombarded with energetic ions during the deposition process.
See also
Ion beam deposition
Physical vapor deposition
Ion plating |
https://en.wikipedia.org/wiki/Titanium%20nitride | Titanium nitride (TiN; sometimes known as tinite) is an extremely hard ceramic material, often used as a physical vapor deposition (PVD) coating on titanium alloys, steel, carbide, and aluminium components to improve the substrate's surface properties.
Applied as a thin coating, TiN is used to harden and protect cutting and sliding surfaces, for decorative purposes (for its golden appearance), and as a non-toxic exterior for medical implants. In most applications a coating of less than is applied.
Characteristics
TiN has a Vickers hardness of 1800–2100, a hardness of 31 ± 4 GPa, a modulus of elasticity of 550 ± 50 GPa, a thermal expansion coefficient of 9.35 × 10⁻⁶ K⁻¹, and a superconducting transition temperature of 5.6 K.
TiN will oxidize at 800 °C in a normal atmosphere. TiN has a brown color, and appears gold when applied as a coating. It is chemically stable at 20 °C, according to laboratory tests, but can be slowly attacked by concentrated acid solutions with rising temperatures.
Depending on the substrate material and surface finish, TiN will have a coefficient of friction ranging from 0.4 to 0.9 against another TiN surface (non-lubricated). The typical TiN formation has a crystal structure of NaCl-type with a roughly 1:1 stoichiometry; TiNx compounds with x ranging from 0.6 to 1.2 are, however, thermodynamically stable.
TiN becomes superconducting at cryogenic temperatures, with critical temperature up to 6.0 K for single crystals. Superconductivity in thin-film TiN has been studied extensively, with the superconducting properties strongly varying depending on sample preparation, up to complete suppression of superconductivity at a superconductor-insulator transition. A thin film of TiN was chilled to near absolute zero, converting it into the first known superinsulator, with resistance suddenly increasing by a factor of 100,000.
Natural occurrence
Osbornite is a very rare natural form of titanium nitride, found almost exclusively in meteorites.
Uses
A well |