https://en.wikipedia.org/wiki/Antiferromagnetism | In materials that exhibit antiferromagnetism, the magnetic moments of atoms or molecules, usually related to the spins of electrons, align in a regular pattern with neighboring spins (on different sublattices) pointing in opposite directions. This is, like ferromagnetism and ferrimagnetism, a manifestation of ordered magnetism. The phenomenon of antiferromagnetism was first introduced by Lev Landau in 1933.
Generally, antiferromagnetic order may exist at sufficiently low temperatures, but vanishes at and above the Néel temperature – named after Louis Néel, who had first identified this type of magnetic ordering. Above the Néel temperature, the material is typically paramagnetic.
Measurement
When no external field is applied, the antiferromagnetic structure corresponds to a vanishing total magnetization. In an external magnetic field, a kind of ferrimagnetic behavior may be displayed in the antiferromagnetic phase, with the absolute value of one of the sublattice magnetizations differing from that of the other sublattice, resulting in a nonzero net magnetization. Although the net magnetization should be zero at a temperature of absolute zero, the effect of spin canting often causes a small net magnetization to develop, as seen for example in hematite.
The magnetic susceptibility of an antiferromagnetic material typically shows a maximum at the Néel temperature. In contrast, at the transition from the ferromagnetic to the paramagnetic phase the susceptibility will diverge. In the antiferromagnetic case, a divergence is observed in the staggered susceptibility.
Various microscopic (exchange) interactions between the magnetic moments or spins may lead to antiferromagnetic structures. In the simplest case, one may consider an Ising model on a bipartite lattice, e.g. the simple cubic lattice, with couplings between spins at nearest neighbor sites. Depending on the sign of that interaction, ferromagnetic or antiferromagnetic order will result. Geometrical frustrat |
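The dependence on the coupling sign can be made concrete with a brute-force sketch (an illustrative toy model, not from the source): for a small open-boundary square lattice with energy H = J Σ_<ij> s_i s_j, a positive J (the antiferromagnetic sign in this convention) makes the two checkerboard (Néel) patterns the ground states.

```python
from itertools import product

def energy(spins, n, J):
    """Nearest-neighbor Ising energy H = J * sum_<ij> s_i s_j (open boundaries).
    With this sign convention, J > 0 favors anti-aligned neighbors."""
    e = 0.0
    for i in range(n):
        for j in range(n):
            if i + 1 < n:
                e += J * spins[i][j] * spins[i + 1][j]
            if j + 1 < n:
                e += J * spins[i][j] * spins[i][j + 1]
    return e

def ground_states(n, J):
    """Enumerate all 2^(n*n) configurations and keep the minimizers."""
    best, states = None, []
    for bits in product([-1, 1], repeat=n * n):
        s = [bits[i * n:(i + 1) * n] for i in range(n)]
        e = energy(s, n, J)
        if best is None or e < best:
            best, states = e, [s]
        elif e == best:
            states.append(s)
    return best, states

# Antiferromagnetic coupling (J > 0 in this convention):
e_afm, gs_afm = ground_states(3, J=1.0)
# The ground states are exactly the checkerboard (Néel) pattern and its flip.
checker = [[(-1) ** (i + j) for j in range(3)] for i in range(3)]
print(any(all(row == tuple(crow) for row, crow in zip(s, checker)) for s in gs_afm))
```

On the bipartite 3×3 lattice every one of the 12 bonds can be anti-aligned simultaneously, so the minimum energy is −12J and only the two Néel configurations achieve it; on a frustrated (non-bipartite) lattice this would fail.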
https://en.wikipedia.org/wiki/Ecotope | Ecotopes are the smallest ecologically distinct landscape features in a landscape mapping and classification system. As such, they represent relatively homogeneous, spatially explicit landscape functional units that are useful for stratifying landscapes into ecologically distinct features for the measurement and mapping of landscape structure, function and change.
Like ecosystems, ecotopes are identified using flexible criteria, in the case of ecotopes, by criteria defined within a specific ecological mapping and classification system. Just as ecosystems are defined by the interaction of biotic and abiotic components, ecotope classification should stratify landscapes based on a combination of both biotic and abiotic factors, including vegetation, soils, hydrology, and other factors. Other parameters that must be considered in the classification of ecotopes include their period of stability (such as the number of years that a feature might persist), and their spatial scale (minimum mapping unit).
The first definition of ecotope was made by Thorvald Sørensen in 1936. Arthur Tansley picked this definition up in 1939 and elaborated it. He stated that an ecotope is "the particular portion, [...], of the physical world that forms a home for the organisms which inhabit it". In 1945 Carl Troll first applied the term to landscape ecology, defining an ecotope as "the smallest spatial object or component of a geographical landscape". Other academics clarified this to suggest that an ecotope is ecologically homogeneous and is the smallest ecological land unit that is relevant.
The term "patch" was used in place of the term "ecotope" by Forman and Godron (1986), who defined a patch as "a nonlinear surface area differing in appearance from its surroundings". However, by definition, ecotopes must be identified using a full suite of ecosystem characteristics: patches are a more general type of spatial unit than ecotopes.
In ecology an ecotope has also been defined as "The species relation |
https://en.wikipedia.org/wiki/Juan%20Mari%20Arzak | Juan Mari Arzak Arratibel (born July 31, 1942) is a Spanish chef, the owner and chef for Arzak restaurant. He is considered to be one of the great masters of New Basque cuisine. He describes his cooking as "signature cuisine, Basque cuisine that's evolutionary, investigatory, and avant-garde."
Personal life
Arzak was an only child born to Juan Ramon Arzak and Francisca Arratibel in San Sebastián, Spain. He spent much of his childhood in his grandparents' restaurant, which his parents later took over. Juan Mari Arzak's father died in 1951, after which his mother continued to run the restaurant until he took it over. Juan Mari Arzak has two daughters, Marta and Elena, with Maite Espina.
Professional life
Arzak said that his interest in cooking began at birth, and that in his childhood he would help in his family's restaurant. However, his real interest in cuisine did not begin until his time at a hotel management school in Madrid. After Arzak's mandatory time in the military, he returned to his family's restaurant and began training as a chef, during which time he was responsible for roasting meat. Since he took over the restaurant, it has garnered much praise, receiving 3 Michelin stars in 1989. In 2008 Arzak received the "Universal Basque" award for "adapting gastronomy, one of the most important traditions of the Basque Country, to the new times and making of it one of the most innovative of the world". He trained his daughter Elena Arzak (born 1969), who has won the "Veuve Clicquot World's Best Female Chef" award at the "World's 50 Best Restaurant Awards", to take over the restaurant.
External links
Restaurant Arzak website
Notes
1942 births
Living people
Molecular gastronomy
Spanish chefs
People from San Sebastián
Head chefs of Michelin starred restaurants
Basque cuisine |
https://en.wikipedia.org/wiki/List%20of%20Microsoft%20Windows%20versions | Microsoft Windows is a computer operating system developed by Microsoft. It was first launched in 1985 as a graphical operating system built on MS-DOS. The initial version was followed by several subsequent releases, and by the early 1990s, the Windows line had split into two separate lines of releases: Windows 9x for consumers and Windows NT for businesses and enterprises. In the following years, several further variants of Windows would be released: Windows CE in 1996 for embedded systems; Pocket PC in 2000 (renamed to Windows Mobile in 2003 and Windows Phone in 2010) for personal digital assistants and, later, smartphones; Windows Holographic in 2016 for AR/VR headsets; and several other editions.
Personal computer versions
A "personal computer" version of Windows is considered to be a version that end-users or OEMs can install on personal computers, including desktop computers, laptops, and workstations.
The first five versions of Windows – Windows 1.0, Windows 2.0, Windows 2.1, Windows 3.0, and Windows 3.1 – were all based on MS-DOS, and were aimed at both consumers and businesses. However, Windows 3.1 had two separate successors, splitting the Windows line in two: the consumer-focused "Windows 9x" line, consisting of Windows 95, Windows 98, and Windows Me; and the professional Windows NT line, comprising Windows NT 3.1, Windows NT 3.5, Windows NT 3.51, Windows NT 4.0, and Windows 2000. These two lines were reunited into a single line with the NT-based Windows XP; this Windows release succeeded both Windows Me and Windows 2000 and had separate editions for consumer and professional use. Since Windows XP, multiple further versions of Windows have been released, the most recent of which is Windows 11.
Mobile versions
Mobile versions refer to versions of Windows that can run on smartphones or personal digital assistants.
Server versions
High-performance computing (HPC) servers
Windows Essential Business Server
Windows Home Server
Windows MultiPoint Server
Wi |
https://en.wikipedia.org/wiki/Cathodoluminescence | Cathodoluminescence is an optical and electromagnetic phenomenon in which electrons impacting on a luminescent material, such as a phosphor, cause the emission of photons which may have wavelengths in the visible spectrum. A familiar example is the generation of light by an electron beam scanning the phosphor-coated inner surface of the screen of a television that uses a cathode ray tube. Cathodoluminescence is the inverse of the photoelectric effect, in which electron emission is induced by irradiation with photons.
Origin
Luminescence in a semiconductor results when an electron in the conduction band recombines with a hole in the valence band. The energy difference (band gap) of this transition can be emitted in the form of a photon. The energy (color) of the photon, and the probability that a photon and not a phonon will be emitted, depend on the material, its purity, and the presence of defects. First, the electron has to be excited from the valence band into the conduction band. In cathodoluminescence, this occurs as the result of a high-energy electron beam impinging on a semiconductor. However, these primary electrons carry far too much energy to directly excite electrons. Instead, the inelastic scattering of the primary electrons in the crystal leads to the emission of secondary electrons, Auger electrons and X-rays, which in turn can scatter as well. Such a cascade of scattering events leads to up to 10^3 secondary electrons per incident electron. These secondary electrons can excite valence electrons into the conduction band when they have a kinetic energy of about three times the band gap energy of the material. From there the electron recombines with a hole in the valence band and creates a photon. The excess energy is transferred to phonons and thus heats the lattice. One of the advantages of excitation with an electron beam is that the band gap energy of materials that are investigated is not limited by the energy of the incident light as in the case of |
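The cascade can be turned into a back-of-the-envelope estimate: if creating one electron-hole pair costs roughly three band gap energies (the rest going into phonons), an incident electron of energy E generates on the order of E / (3 E_gap) pairs. The snippet below applies this rule of thumb to a hypothetical 10 keV beam on GaAs (band gap about 1.42 eV); the material and beam energy are illustrative choices, not from the source.

```python
def eh_pairs_per_electron(beam_energy_ev, band_gap_ev, factor=3.0):
    """Rough estimate of electron-hole pairs generated per incident electron,
    using the rule of thumb that each pair costs about `factor` times the
    band gap energy (the remainder is lost to phonons)."""
    return beam_energy_ev / (factor * band_gap_ev)

# A 10 keV beam on GaAs (band gap ~1.42 eV) yields on the order of 10^3 pairs:
print(round(eh_pairs_per_electron(10_000, 1.42)))
```

This matches the "up to 10^3 secondary electrons per incident electron" scale quoted above for keV-range beams on eV-range band gaps.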
https://en.wikipedia.org/wiki/Basal%20angiosperms | The basal angiosperms are the flowering plants which diverged from the lineage leading to most flowering plants. In particular, the most basal angiosperms were called the ANITA grade, which is made up of Amborella (a single species of shrub from New Caledonia), Nymphaeales (water lilies, together with some other aquatic plants) and Austrobaileyales (woody aromatic plants including star anise).
ANITA stands for Amborella, Nymphaeales, Illiciales, Trimeniaceae, and Austrobaileya. Some authors have shortened this to ANA-grade for the three orders, Amborellales, Nymphaeales, and Austrobaileyales, since the order Illiciales was reduced to the family Illiciaceae and placed, along with the family Trimeniaceae, within the Austrobaileyales.
The basal angiosperms are only a few hundred species, compared with hundreds of thousands of species of eudicots, monocots, and magnoliids. They diverged from the ancestral angiosperm lineage before the five groups comprising the mesangiosperms diverged from each other.
Phylogeny
Amborella, Nymphaeales and Austrobaileyales, in that order, are basal to all other angiosperms.
Older terms
Paleodicots (sometimes spelled "palaeodicots") is an informal name used by botanists (Spichiger & Savolainen 1997, Leitch et al. 1998) to refer to angiosperms which are not monocots or eudicots.
The paleodicots correspond to Magnoliidae sensu Cronquist 1981 (minus Ranunculales and Papaverales) and to Magnoliidae sensu Takhtajan 1980 (Spichiger & Savolainen 1997). Some of the paleodicots share apparently plesiomorphic characters with monocots, e.g., scattered vascular bundles, trimerous flowers, and non-tricolpate pollen.
The "paleodicots" are not a monophyletic group and the term has not been widely adopted. The APG II system does not recognize a group called "paleodicots" but assigns these early-diverging dicots to several orders and unplaced families: Amborellaceae, Nymphaeaceae (including Cabombaceae), Austrobaileyales, Ceratophyllales (not incl |
https://en.wikipedia.org/wiki/HelpNDoc | HelpNDoc is a Windows-based help authoring tool published by the French company IBE Software.
Features
HelpNDoc allows the writer to create a single source text which it then converts to a number of target formats such as:
CHM (HTML Help)
PDF
RTF
DocX
Qt Help
HTML
EPUB (including Amazon Kindle compatible E-books)
Markdown
HelpNDoc integrates a WYSIWYG editor which aims to look like popular word processing software such as Microsoft Word or OpenOffice.org Writer. HelpNDoc has the ability to include variables and external files. It also has the ability to generate code for the C++, Delphi, Fortran, Pascal and Visual Basic programming languages for integration of the generated CHM help files with the application being developed. As of version 4 HelpNDoc comes with a project analyzer that can track the document’s layout, provide statistics and identify potential problems such as broken links or problems with media items. HelpNDoc can also be used to publish mobile websites, using the table of contents as a navigation menu, the content of the topics with navigation buttons, the keyword index menu, a built-in search engine, and mobile-specific user interface elements.
Licensing terms
HelpNDoc's licensing model offers a free version of the program (Personal Edition) for personal use and three paid Editions for commercial use:
Standard
Professional
Ultimate
The free version includes an advertisement at the bottom of each generated documentation page, while the Professional / Ultimate Editions do not. The Standard Edition removes those ads from the CHM- and HTML-generated documentation.
The Ultimate Edition was first released with version 8.0 of the software, adding the ability to "Encrypt and sign Word and PDF documents".
History
1.0 was released to the public in December 2004 and only provided CHM and HTML documentation generation.
2.0.0.25 was released on May 16, 2009. Other 2.x versions followed |
https://en.wikipedia.org/wiki/Data%20transfer%20object | In the field of programming a data transfer object (DTO) is an object that carries data between processes. The motivation for its use is that communication between processes is usually done resorting to remote interfaces (e.g., web services), where each call is an expensive operation. Because the majority of the cost of each call is related to the round-trip time between the client and the server, one way of reducing the number of calls is to use an object (the DTO) that aggregates the data that would have been transferred by the several calls, but that is served by one call only.
The difference between data transfer objects and business objects or data access objects is that a DTO does not have any behavior except for storage, retrieval, serialization and deserialization of its own data (mutators, accessors, serializers and parsers). In other words,
DTOs are simple objects that should not contain any business logic but may contain serialization and deserialization mechanisms for transferring data over the wire.
This pattern is often incorrectly used outside of remote interfaces. This has triggered a response from its author where he reiterates that the whole purpose of DTOs is to shift data in expensive remote calls.
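A minimal sketch of the pattern in Python (the customer fields, call names, and serialization format are illustrative assumptions, not from the source): three fine-grained remote calls are replaced by one coarse call returning a DTO that holds only data plus (de)serialization.

```python
from dataclasses import dataclass

# Hypothetical fine-grained remote interface, each call paying a round trip:
#   get_name(customer_id)   -> str
#   get_email(customer_id)  -> str
#   get_orders(customer_id) -> list of order ids

@dataclass(frozen=True)
class CustomerDTO:
    """Aggregates the data of several fine-grained calls into one object.
    No behavior beyond storage and (de)serialization of its own data."""
    name: str
    email: str
    order_ids: list

    def to_dict(self):
        # Serialization for the wire (e.g. to be dumped as JSON).
        return {"name": self.name, "email": self.email, "order_ids": self.order_ids}

    @staticmethod
    def from_dict(d):
        # Deserialization on the receiving side.
        return CustomerDTO(d["name"], d["email"], d["order_ids"])

# One coarse-grained call returns the DTO; the client then reads name,
# email and orders locally, without further round trips.
dto = CustomerDTO("Ada", "ada@example.com", ["o-1", "o-2"])
assert CustomerDTO.from_dict(dto.to_dict()) == dto
```

Note the DTO carries no business logic; any validation or pricing rules would live in the business objects on either side of the wire.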
Terminology
A "Value Object" is not a DTO. The two terms have been conflated by the Sun/Java community in the past. |
https://en.wikipedia.org/wiki/Quantum%20state%20space | In physics, a quantum state space is an abstract space in which different "positions" represent, not literal locations, but rather quantum states of some physical system. It is the quantum analog of the phase space of classical mechanics.
Relative to Hilbert space
In quantum mechanics a state space is a complex Hilbert space in which each unit vector represents a different state that could come out of a measurement. The number of dimensions in this Hilbert space depends on the system we choose to describe. Any state vector in this space can be written as a linear combination of unit vectors. Having a nonzero component along multiple dimensions is called a superposition. In the formalism of quantum mechanics these state vectors are often written using Dirac's compact bra–ket notation.
Examples
The spin state of a silver atom in the Stern–Gerlach experiment can be represented in a two-state space. The spin can be aligned with a measuring apparatus (arbitrarily called 'up') or oppositely ('down'). In Dirac's notation these two states can be written as |up⟩ and |down⟩. The space of a two-spin system has four states: |up,up⟩, |up,down⟩, |down,up⟩, and |down,down⟩.
The spin state is a discrete degree of freedom; quantum state spaces can also have continuous degrees of freedom. For example, a particle in one space dimension has one degree of freedom ranging from −∞ to +∞. In Dirac notation, the states in this space might be written as |x⟩ or |ψ⟩.
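The discrete two-state case can be sketched numerically with NumPy (the column-vector encoding of the basis states is a standard convention, not from the source): unit basis vectors, a superposition with nonzero components along both directions, and the four-dimensional space of two spins built as a tensor (Kronecker) product.

```python
import numpy as np

# Basis states of a single spin-1/2 ("up" and "down") as unit column vectors:
up = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)

# A superposition: nonzero components along both basis directions,
# still a unit vector in the Hilbert space.
psi = (up + down) / np.sqrt(2)
assert np.isclose(np.vdot(psi, psi), 1.0)

# Two-spin system: the state space is the tensor product, 2 x 2 = 4 states.
basis_two_spin = [np.kron(a, b) for a in (up, down) for b in (up, down)]
print(len(basis_two_spin), basis_two_spin[0].shape)
```

Measurement probabilities are the squared magnitudes of the components, e.g. |⟨up|psi⟩|² = 1/2 for the superposition above.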
Relative to 3D space
Even in the early days of quantum mechanics, the state space (or configurations, as they were called at first) was understood to be essential for understanding simple QM problems. In 1929, Nevill Mott showed that the "tendency to picture the wave as existing in ordinary three dimensional space, whereas we are really dealing with wave functions in multispace" makes analysis of simple interaction problems more difficult. Mott analyzes α-particle emission in a cloud chamber. The emission process is isotropic, a spherical wave in QM, but the tracks observed are linear.
|
https://en.wikipedia.org/wiki/Two%20Generals%27%20Problem | In computing, the Two Generals' Problem is a thought experiment meant to illustrate the pitfalls and design challenges of attempting to coordinate an action by communicating over an unreliable link. In the experiment, two generals are only able to communicate with one another by sending a messenger through enemy territory. The experiment asks how they might reach an agreement on the time to launch an attack, while knowing that any messenger they send could be captured.
The Two Generals' Problem appears often as an introduction to the more general Byzantine Generals problem in introductory classes about computer networking (particularly with regard to the Transmission Control Protocol, where it shows that TCP can't guarantee state consistency between endpoints and why this is the case), though it applies to any type of two-party communication where failures of communication are possible. A key concept in epistemic logic, this problem highlights the importance of common knowledge. Some authors also refer to this as the Two Generals' Paradox, the Two Armies Problem, or the Coordinated Attack Problem. The Two Generals' Problem was the first computer communication problem to be proved to be unsolvable. An important consequence of this proof is that generalizations like the Byzantine Generals problem are also unsolvable in the face of arbitrary communication failures, thus providing a base of realistic expectations for any distributed consistency protocols.
Definition
Two armies, each led by a different general, are preparing to attack a fortified city. The armies are encamped near the city, each on its own hill. A valley separates the two hills, and the only way for the two generals to communicate is by sending messengers through the valley. Unfortunately, the valley is occupied by the city's defenders and there's a chance that any given messenger sent through the valley will be captured.
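The core difficulty can be illustrated with a small Monte Carlo sketch (the capture probability and trial counts are arbitrary choices, not from the source): however many confirmation messages a protocol exchanges, the sender of the final message can never learn whether it arrived, so the probability that the exchange completes is never 1.

```python
import random

def run_protocol(n_messages, p_capture, rng):
    """The generals exchange n_messages messengers in turn; a captured
    messenger silently ends the exchange. Returns how many arrived."""
    delivered = 0
    for _ in range(n_messages):
        if rng.random() < p_capture:
            break
        delivered += 1
    return delivered

rng = random.Random(42)
failure_rate = {}
for n in (1, 2, 5, 20):
    trials = 10_000
    fails = sum(run_protocol(n, 0.10, rng) < n for _ in range(trials))
    failure_rate[n] = fails / trials

# Adding more confirmations never drives the residual failure
# probability to zero; with independent losses it even grows with n.
print(failure_rate)
```

This is only an illustration of the residual uncertainty; the impossibility proof itself is a deterministic argument showing no finite protocol can guarantee common knowledge over a lossy link.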
While the two generals have agreed that they will attack, they haven't |
https://en.wikipedia.org/wiki/Range%20state | Range state is a term generally used in zoogeography and conservation biology to refer to any nation that exercises jurisdiction over any part of a range which a particular species, taxon or biotope inhabits, or crosses or overflies at any time on its normal migration route. The term is often expanded to also include, particularly in international waters, any nation with vessels flying their flag that engage in exploitation (e.g. hunting, fishing, capturing) of that species. Countries in which a species occurs only as a vagrant or ‘accidental’ visitor outside of its normal range or migration route are not usually considered range states.
Because governmental conservation policy is often formulated on a national scale, and because in most countries, both governmental and private conservation organisations are also organised at the national level, the range state concept is often used by international conservation organizations in formulating their conservation and campaigning policy.
An example of one such organization is the Convention on the Conservation of Migratory Species of Wild Animals (CMS, or the “Bonn Convention”). It is a multilateral treaty focusing on the conservation of critically endangered and threatened migratory species, their habitats and their migration routes. Because such habitats and/or migration routes may span national boundaries, conservation efforts are less likely to succeed without the cooperation, participation, and coordination of each of the range states.
External links
Bonn Convention (CMS) — Text of Convention Agreement
Bonn Convention (CMS): List of Range States for Critically Endangered Migratory Species |
https://en.wikipedia.org/wiki/Komornik%E2%80%93Loreti%20constant | In the mathematical theory of non-standard positional numeral systems, the Komornik–Loreti constant is a mathematical constant that represents the smallest base q for which the number 1 has a unique representation, called its q-development. The constant is named after Vilmos Komornik and Paola Loreti, who defined it in 1998.
Definition
Given a real number q > 1, the series
x = Σ_{n=0}^∞ a_n q^(−n)
is called the q-expansion, or β-expansion, of the positive real number x if, for all n ≥ 0, 0 ≤ a_n ≤ ⌊q⌋, where ⌊q⌋ denotes the floor function and q need not be an integer. Any real number x such that 0 ≤ x ≤ q⌊q⌋/(q − 1) has such an expansion, as can be found using the greedy algorithm.
The special case of x = 1, a_0 = 0, and a_n ∈ {0, 1} is sometimes called a q-development. q = 2 gives the only 2-development. However, for almost all 1 < q < 2, there are an infinite number of different q-developments. Even more surprisingly though, there exist exceptional q ∈ (1, 2) for which there exists only a single q-development. Furthermore, there is a smallest number 1 < q < 2 known as the Komornik–Loreti constant for which there exists a unique q-development.
Value
The Komornik–Loreti constant is the value q such that
Σ_{n=1}^∞ t_n q^(−n) = 1,
where t_n is the Thue–Morse sequence, i.e., t_n is the parity of the number of 1's in the binary representation of n. It has approximate value q ≈ 1.787231650…
The constant is also the unique positive real root of
This constant is transcendental.
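The defining equation above lends itself to a direct numerical check (a sketch: each term t_n/q^n is decreasing in q on (1, 2], so the sum is decreasing and bisection applies; the truncation at 200 terms and the bracketing interval are arbitrary choices).

```python
def thue_morse(n):
    """t_n = parity of the number of 1s in the binary expansion of n."""
    return bin(n).count("1") % 2

def f(q, terms=200):
    """f(q) = sum_{n>=1} t_n / q^n  -  1; the Komornik-Loreti constant
    is the root of f in (1, 2)."""
    return sum(thue_morse(n) / q ** n for n in range(1, terms)) - 1

# f is decreasing in q, positive at 1.5 and negative at 2, so bisect:
lo, hi = 1.5, 2.0
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) > 0:
        lo = mid
    else:
        hi = mid

print(round((lo + hi) / 2, 9))  # approximately 1.787231650
```

At q = 2 the sum is the Prouhet–Thue–Morse constant (about 0.4124), well below 1, which is why the root lies strictly inside (1, 2).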
See also
Euler–Mascheroni constant
Fibonacci word
Golay–Rudin–Shapiro sequence
Prouhet–Thue–Morse constant |
https://en.wikipedia.org/wiki/Bilbao%20Crystallographic%20Server | Bilbao Crystallographic Server is an open access website offering online crystallographic database and programs aimed at analyzing, calculating and visualizing problems of structural and mathematical crystallography, solid state physics and structural chemistry. Initiated in 1997 by the Materials Laboratory of the Department of Condensed Matter Physics at the University of the Basque Country, Bilbao, Spain, the Bilbao Crystallographic Server is developed and maintained by academics.
Information on contents and an overview of tools hosted
Focusing on crystallographic data and applications of the group theory in solid state physics, the server is built on a core of databases and contains different shells.
Space Groups Retrieval Tools
The set of databases includes data from International Tables for Crystallography, Vol. A: Space-Group Symmetry, and the data of maximal subgroups of space groups as listed in International Tables for Crystallography, Vol. A1: Symmetry Relations between Space Groups. A k-vector database with Brillouin zone figures and classification tables of the k-vectors for space groups is also available via the KVEC tool.
Magnetic Space Groups
In 2011, Magnetic Space Groups data compiled from the works of H.T. Stokes & B.J. Campbell and of D. Litvin (general positions/symmetry operations and Wyckoff positions for different settings, along with systematic absence rules) were incorporated into the server, and a new shell was dedicated to the related tools (MGENPOS, MWYCKPOS, MAGNEXT).
Group-Subgroup Relations of Space Groups
This shell contains applications which are essential for problems involving group-subgroup relations between space groups. Given the space group types of G and H and their index, the program SUBGROUPGRAPH provides graphs of maximal subgroups for a group-subgroup pair G > H, all the different subgroups H and their distribution into conjugacy classes. The Wyckoff position splitting rules for a group-subgroup pair are cal |
https://en.wikipedia.org/wiki/Martingale%20representation%20theorem | In probability theory, the martingale representation theorem states that a random variable that is measurable with respect to the filtration generated by a Brownian motion can be written in terms of an Itô integral with respect to this Brownian motion.
The theorem only asserts the existence of the representation and does not help to find it explicitly; it is possible in many cases to determine the form of the representation using Malliavin calculus.
Similar theorems also exist for martingales on filtrations induced by jump processes, for example, by Markov chains.
Statement
Let B_t be a Brownian motion on a standard filtered probability space (Ω, F, F_t, P) and let F_t^B be the augmented filtration generated by B. If X is a square integrable random variable measurable with respect to F_∞^B, then there exists a predictable process C_t which is adapted with respect to F_t^B such that
X = E(X) + ∫_0^∞ C_s dB_s.
Consequently,
E(X | F_t) = E(X) + ∫_0^t C_s dB_s.
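For one concrete case the representation can be checked by simulation: with X = B_1², Itô's formula gives B_T² = T + ∫_0^T 2B_s dB_s, so the predictable integrand is C_s = 2B_s and E(X) = 1. A minimal Monte Carlo sketch (the step and path counts are arbitrary choices):

```python
import math
import random

def simulate(T=1.0, steps=2000, rng=None):
    """One Brownian path; returns (X, representation) for X = B_T^2,
    approximating the Ito integral with the left-endpoint (predictable) sum."""
    dt = T / steps
    B, integral = 0.0, 0.0
    for _ in range(steps):
        dB = rng.gauss(0.0, math.sqrt(dt))
        integral += 2.0 * B * dB   # integrand C_s = 2 B_s, evaluated at the left endpoint
        B += dB
    return B * B, T + integral     # X  vs  E(X) + int_0^T C_s dB_s

rng = random.Random(0)
errs = [abs(x - rep) for x, rep in (simulate(rng=rng) for _ in range(200))]
mean_err = sum(errs) / len(errs)
print(mean_err)  # small discretization error, shrinking as steps grow
```

The discrepancy is pure discretization error (the sum of squared increments fluctuating around T), so it vanishes as the step count grows, path by path.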
Application in finance
The martingale representation theorem can be used to establish the existence
of a hedging strategy.
Suppose that M_t is a Q-martingale process whose volatility σ_t is always non-zero.
Then, if N_t is any other Q-martingale, there exists an F-previsible process φ_t, unique up to sets of measure 0, such that ∫_0^T φ_t² σ_t² dt < ∞ with probability one, and N can be written as:
N_t = N_0 + ∫_0^t φ_s dM_s.
The replicating strategy is defined to be:
hold φ_t units of the stock at the time t, and
hold ψ_t = N_t − φ_t Z_t units of the bond,
where Z_t is the stock price discounted by the bond price to time t and N_t is the expected payoff of the option at time t.
At the expiration day T, the value of the portfolio is:
and it is easy to check that the strategy is self-financing: the change in the value of the portfolio only depends on the change of the asset prices .
See also
Backward stochastic differential equation |
https://en.wikipedia.org/wiki/Sobhuza%20II | Sobhuza II (also known as Nkhotfotjeni or Mona; 22 July 1899 – 21 August 1982) was Ngwenyama (King) of Swaziland for 82 years and 254 days, the longest verifiable reign of any monarch in recorded history.
Sobhuza was born on 22 July 1899 at Zombodze Royal Residence, the son of Inkhosikati Lomawa Ndwandwe and King Ngwane V. When he was only four months old, his father died suddenly while dancing incwala. Sobhuza was chosen king soon after that and his grandmother Labotsibeni and his uncle Prince Malunge led the Swazi nation until his maturity in 1921. Sobhuza was acknowledged as King by the British in 1967, and Swaziland achieved independence in 1968. Sobhuza continued to reign until his death in 1982. He was succeeded by Mswati III, his young son with Inkhosikati Ntfombi Tfwala, who was crowned in 1986.
Early life and education
Ingwenyama Sobhuza was born in Zombodze on 22 July 1899. He ascended to the throne after the death of his father, Ngwane V, as King of Swaziland on 10 December 1899, when he was only four months old. He was educated at the Swazi National School, Zombodze, and at the Lovedale Institution in the Eastern Cape, South Africa, before assuming the Swazi throne as King at the age of twenty-two. His grandmother, Labotsibeni Mdluli, served as regent throughout his youth, formally transferring power to the Ngwenyama on 22 December 1921. Before assuming his royal duties, he studied anthropology in England.
Kingship
Sobhuza's direct reign would endure more than 60 years (1921–1982), during which he presided over Swaziland's independence from the United Kingdom in 1968, after which the British government recognised him as King of Swaziland (Eswatini). Early in his reign, Sobhuza sought to address the problem of land partition and deprivation instituted by the British authorities in 1907. He did so by first leading a delegation to London to meet with King George V and petition him to restore the lands to the Swazi people. He again took his case on t |
https://en.wikipedia.org/wiki/HIPPI | HIPPI, short for High Performance Parallel Interface, is a computer bus for the attachment of high speed storage devices to supercomputers, in a point-to-point link. It was popular in the late 1980s and into the mid-to-late 1990s, but has since been replaced by ever-faster standard interfaces like Fibre Channel and 10 Gigabit Ethernet.
The first HIPPI standard defined a 50-pair (100-wire) unidirectional twisted pair cable, running at 800 Mbit/s (100 MB/s) with maximum range limited to . A bidirectional connection therefore required two separate cables. Later standards included a 1600 Mbit/s (200 MB/s) mode running on Serial HIPPI fibre optic cable with a maximum range of .
HIPPI usage dwindled in the late 1990s. This was partly because Ultra3 SCSI offered rates of 320 MB/s and was available at almost any computer store at commodity prices. Meanwhile, Fibre Channel offered simple interconnect with both HIPPI and SCSI (it can run both protocols) and speeds of up to 400 MB/s on fibre and 100 MB/s on a single pair of twisted pair copper wires. Both of these systems have since been supplanted by even higher performance systems during the 2000s.
GSN - HIPPI-6400
In 1999, an effort to improve the speed resulted in HIPPI-6400, which was later renamed GSN (for Gigabyte System Network), but this saw little use due to competing standards and the high cost of GSN. GSN has a full-duplex bandwidth of 6400 Mbit/s or 800 MB/s in each direction. GSN was developed at Los Alamos National Laboratory and uses a parallel interface for higher speeds. GSN copper cables are limited to and fibre optic cables limited to 1 km.
HIPPI is the first “near-gigabit” (0.8 Gbit/s) (ANSI) standard for network data transmission. It was specifically designed for supercomputers and was never intended for mass market networks such as Ethernet. Many of the features developed for HIPPI were integrated into such technologies as InfiniBand. What is remarkable about HIPPI is that it came out when Ether |
https://en.wikipedia.org/wiki/Philosophi%C3%A6%20Naturalis%20Principia%20Mathematica | Philosophiæ Naturalis Principia Mathematica (English: The Mathematical Principles of Natural Philosophy), often referred to as simply the Principia, is a book by Isaac Newton that expounds Newton's laws of motion and his law of universal gravitation. The Principia is written in Latin and comprises three volumes, and was first published on 5 July 1687.
The Principia is considered one of the most important works in the history of science. The French mathematical physicist Alexis Clairaut assessed it in 1747: "The famous book of Mathematical Principles of Natural Philosophy marked the epoch of a great revolution in physics. The method followed by its illustrious author Sir Newton ... spread the light of mathematics on a science which up to then had remained in the darkness of conjectures and hypotheses." Joseph-Louis Lagrange described it as "the greatest production of a human mind".
A more recent assessment has been that while acceptance of Newton's laws was not immediate, by the end of the century after publication in 1687, "no one could deny that" (out of the Principia) "a science had emerged that, at least in certain respects, so far exceeded anything that had ever gone before that it stood alone as the ultimate exemplar of science generally".
The Principia forms the foundation of classical mechanics. Among other achievements, it explains Johannes Kepler's laws of planetary motion, which Kepler had first obtained empirically. In formulating his physical laws, Newton developed and used mathematical methods now included in the field of calculus, expressing them in the form of geometric propositions about "vanishingly small" shapes. In a revised conclusion to the Principia, Newton emphasized the empirical nature of the work with the expression Hypotheses non fingo ("I frame/feign no hypotheses").
After annotating and correcting his personal copy of the first edition, Newton published two further editions: one in 1713, with the errors of the 1687 edition corrected, and an improved version in 1726.
Contents
Expressed aim and topics covered
The Preface of the Principia
https://en.wikipedia.org/wiki/Friedrich%20Prym | Friedrich Emil Fritz Prym (28 September 1841, Düren – 15 December 1915, Bonn) was a German mathematician who introduced Prym varieties and Prym differentials.
Prym completed his Ph.D. at the University of Berlin in 1863 with a thesis written under the direction of Ernst Kummer and Martin Ohm. In 1867 he became a professor at the University of Würzburg, where he later served as Dean, and then as Rector in 1897–98.
https://en.wikipedia.org/wiki/K%20band%20%28infrared%29 | In infrared astronomy, the K band is an atmospheric transmission window centered on 2.2 μm (in the near-infrared 136 THz range). HgCdTe-based detectors are typically preferred for observing in this band.
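The quoted centre frequency follows directly from the wavelength via f = c / λ; a quick check (using the standard vacuum speed of light):

```python
# Convert the K-band centre wavelength to frequency: f = c / λ
c = 299_792_458.0        # speed of light in m/s
wavelength_m = 2.2e-6    # K-band centre, 2.2 μm
frequency_thz = c / wavelength_m / 1e12
# ≈ 136 THz, matching the figure quoted above
```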
See also
Absolute magnitude
UBV photometric system |
https://en.wikipedia.org/wiki/Internet%20Citizen%27s%20Band | Internet Citizen's Band (better known as ICB) is an early Internet chat program and its associated protocol. It was released in 1989.
ICB is typically served on port 7326.
History
The first version of ICB was a program called "Forumnet" or "fn", written by University of Kentucky IT staffer Sean Carrick Casey. It was widely used at the University of Kentucky, Georgia Tech, UC Davis, MIT, University of New Mexico, Stanford University, Mills College, UC Santa Cruz, and UC Berkeley. Fn, based on a MUD software program by Casey, established the protocol and clients.
Fn was used as a realtime communications channel after the 1989 Loma Prieta earthquake, as Internet access from hard-hit Santa Cruz returned to service before reliable phone service did. In March 1991 the University of Kentucky changed policy and shut down the fn server. Within two months a new server had been created from the client software by another fn user, John Atwood Devries, and was put online under the new name ICB. This new server code, unrelated to the original server except through the common client software source, was then used as the basis of many ICB servers to follow. From 1995 to 2000 the server code was heavily rewritten for stability and additional features by Jon Luini and Michel Hoche-Mong, and it remains available at the ICB.net web site.
ICB is still in operation with a dedicated user base. A variety of clients exist for all major operating systems.
Features
ICB features many standard chat program functions, including channels, private messages, and nickname registration. Most of the common clients support TCL scripting of commands and functions. Some clients (principally icbm) support scripting in Perl instead.
Limitations
ICB has never supported multi-server shared groups, so the number of simultaneous users has always been somewhat limited in comparison to more popular chat programs.
ICB does not support transferring files or multimedia via the chat program. However, the very restrict |
https://en.wikipedia.org/wiki/Amanita%20muscaria%20var.%20formosa | Amanita muscaria var. formosa, known as the yellow orange fly agaric, is a hallucinogenic and poisonous basidiomycete fungus of the genus Amanita. This variety, which can sometimes be distinguished from most other A. muscaria by its yellow cap, is a European taxon, although several North American field guides have referred A. muscaria var. guessowii to this name. American mycologist Harry D. Thiers described a yellow-capped taxon that he called var. formosa from the United States, but it is not the same as the European variety.
Amanita muscaria is native to temperate and boreal forest regions of the Northern Hemisphere. However, it has also been introduced to New Zealand, Australia, South America, and South Africa.
Biochemistry
As with other forms of Amanita muscaria, the formosa variety contains ibotenic acid and muscimol, two psychoactive constituents that can cause effects such as hallucinations, synaesthesia, euphoria, dysphoria and retrograde amnesia. At high doses, the effects of muscimol and ibotenic acid most closely resemble those of a Z-drug such as Ambien, rather than those of a classical psychedelic such as psilocybin.
Ibotenic acid is mostly broken down in the body into muscimol, but the ibotenic acid that remains is believed to cause the majority of the dysphoric effects of consuming A. muscaria mushrooms. Ibotenic acid is also a scientifically important neurotoxin used in laboratory research as a brain-lesioning agent in mice.
As with other wild-growing mushrooms, the ratio of ibotenic acid to muscimol depends on many external factors, including season, age, and habitat, so the percentages naturally vary from mushroom to mushroom.
Controversy
Recent DNA evidence suggests that Amanita muscaria var. formosa is genetically distinct from Amanita muscaria, and it may eventually be given its own species status. Amanita muscaria var. formosa has also been described as Amanita muscaria var. guessowii.
See also
List of Amanita species |
https://en.wikipedia.org/wiki/Head%20shadow | A head shadow (or acoustic shadow) is a region of reduced amplitude of a sound because it is obstructed by the head. It is an example of diffraction.
Sound may have to travel through and around the head in order to reach an ear. The obstruction caused by the head can account for attenuation (reduced amplitude) of overall intensity as well as cause a filtering effect. The filtering effects of head shadowing are an essential element of sound localisation—the brain weighs the relative amplitude, timbre, and phase of a sound heard by the two ears and uses the difference to interpret directional information.
The shadowed ear, the ear further from the sound source, receives sound slightly later (up to approximately 0.7 ms later) than the unshadowed ear, and the timbre, or frequency spectrum, of the shadowed sound wave is different because of the obstruction of the head.
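The ~0.7 ms upper bound can be reproduced with a simple spherical-head model. The sketch below uses Woodworth's formula; the head radius and speed of sound are illustrative assumptions, not values from the article:

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, speed_of_sound_ms=343.0):
    """Interaural time difference in seconds for a spherical head
    (Woodworth's formula): ITD = (r / c) * (theta + sin(theta)),
    where theta is the source azimuth (0 = straight ahead)."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound_ms) * (theta + math.sin(theta))

# A source directly to one side (90°) gives roughly 0.66 ms,
# consistent with the "up to approximately 0.7 ms" figure above.
```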
The head shadow causes particular difficulty in sound localisation in people suffering from unilateral hearing loss. It is a factor to consider when correcting hearing loss with directional hearing aids.
See also
Interaural intensity difference
Hearing |
https://en.wikipedia.org/wiki/List%20of%20microprocessors | This is a list of microprocessors.
Altera
Nios 16-bit (soft processor)
Nios II 32-bit (soft processor)
AMD
List of AMD K5 processors
List of AMD Athlon processors
List of AMD Athlon 64 processors
List of AMD Athlon XP processors
List of AMD Duron processors
List of AMD Opteron processors
List of AMD Sempron processors
List of AMD Turion processors
List of AMD Athlon X2 processors
List of AMD Phenom processors
List of AMD FX processors
List of AMD Ryzen processors
Apollo
PRISM
ARM
ARM
Atmel
AVR32
AVR
AT&T
Hobbit
Bell Labs
Bellmac 32
BLX IC Design Corporation
Godson/Loongson
Broadcom
XLS 200 series multicore processor
Centaur Technology/IDT
WinChip
Computer Cowboys
Sh-Boom
Cyrix
486, 5x86, 6x86
Data General
microNOVA mN601 and mN602
microECLIPSE
Centre for Development of Advanced Computing
VEGA Microprocessors
Digital Equipment Corporation
DEC T-11
DEC J-11
DEC V-11
MicroVAX 78032
CVAX
Rigel
Mariah
NVAX
Alpha 21064
Alpha 21164
Alpha 21264
Alpha 21364
StrongARM
DM&P Electronics
Vortex86
Emotion Engine by Sony & Toshiba
Emotion Engine
Elbrus
Elbrus 2K (VLIW design)
Electronic Arrays
Electronic Arrays 9002
EnSilica
eSI-RISC
Fairchild Semiconductor
9440
F8
Clipper
Freescale Semiconductor (formerly Motorola)
List of Freescale products
Fujitsu
FR
FR-V
SPARC64 V
Garrett AiResearch/American Microsystems
MP944
Google
Tensor processing unit
Harris Semiconductor
Harris RTX2000
Hewlett-Packard
Capricorn (microprocessor)
FOCUS 32-bit stack architecture
PA-7000 PA-RISC Version 1.0 (32-bit)
PA-7100 PA-RISC Version 1.1
PA-7100LC
PA-7150
PA-7200
PA-7300LC
PA-8000 PA-RISC Version 2.0 (64-bit)
PA-8200
PA-8500
PA-8600
PA-8700
PA-8800
PA-8900
Saturn Nibble CPU (4-bit)
Hitachi
SuperH SH-1/SH-2 etc.
Inmos
Transputer T2/T4/T8
IBM
1977 – OPD Mini Processor
1986 – IBM ROMP
2000 – Gekko processor
2005 – Xenon processor
2006 – Cell processor
2006 – Broadway processor
2012 – Espresso |
https://en.wikipedia.org/wiki/Genome%20in%20a%20Bottle | Genome in a Bottle is a consortium hosted by NIST and dedicated to characterization of benchmark human genomes. The NCBI is serving as the repository for the detailed information on samples, genotypes, raw sequencing reads and mapped reads, via a dedicated FTP site. |
https://en.wikipedia.org/wiki/Ovarian%20follicle%20activation | Ovarian follicle activation can be defined as primordial follicles in the ovary moving from a quiescent (inactive) to a growing phase. The primordial follicle in the ovary is what makes up the “pool” of follicles that will be induced to enter growth and developmental changes that change them into pre-ovulatory follicles, ready to be released during ovulation. The process of development from a primordial follicle to a pre-ovulatory follicle is called folliculogenesis.
Activation of the primordial follicle involves the following: a morphological change from flattened to cuboidal granulosa cells, proliferation of granulosa cells, formation of the protective zona pellucida layer, and growth of the oocyte.
It is widely understood that androgens act primarily on preantral follicles and that this activity is important for preantral follicle growth. Additionally, it is thought that androgens are involved in primordial follicle activation. However, the influence of androgens on primordial follicle recruitment and whether this response is primary or secondary is still uncertain.
Activation of Primordial Follicle Development
Primordial follicles are activated to grow into antral follicles. Communication between the oocytes and the surrounding somatic cells, such as the granulosa cells and the theca cells, is involved in the control of primordial follicle activation. Various activating signalling pathways are involved in the control of ovarian follicle activation, including the neurotrophins: nerve growth factor (NGF) and its tyrosine receptor kinase (NTRK1), together with neurotrophin 4 (NT4), brain-derived neurotrophic factor (BDNF) and their receptor NTRK2. Additional ligands, such as transforming growth factor-beta (TGF-B), growth differentiation factor 9 (GDF9) and bone morphogenic protein 15 (BMP15), also facilitate primordial follicle activation.
GDF9
Follicular activation rate is increased in experiments where recombinant GDF9 is added. Additionally, in vitro |
https://en.wikipedia.org/wiki/Alcohol%20consumption%20recommendations | Recommendations for consumption of the drug alcohol (also known formally as ethanol) vary from recommendations to be alcohol-free to daily or weekly drinking "safe limits" or maximum intakes. Many governmental agencies and organizations have issued guidelines. These recommendations concerning maximum intake are distinct from any legal restrictions, for example countries with drunk driving laws or countries that have prohibited alcohol.
General recommendations
These guidelines apply to men and to women who are neither pregnant nor breastfeeding.
Alcohol-free recommendations
The World Health Organization published a statement in The Lancet Public Health in April 2023 that "there is no safe amount that does not affect health".
The 2023 Nordic Nutrition Recommendations state: "Since no safe limit for alcohol consumption can be provided, the recommendation in NNR2023 is that everyone should avoid drinking alcohol."
The American Heart Association recommends that those who do not already consume alcoholic beverages should not start doing so because of the negative long-term effects of alcohol consumption.
The Canadian Centre on Substance Use and Addiction states "Not drinking has benefits, such as better health, and better sleep."
Alcohol intake recommendations by country
Some governments set the same recommendation for both sexes, while others give separate limits. The guidelines give drink amounts in a variety of formats, such as standard drinks, fluid ounces, or milliliters, but have been converted to grams of ethanol for ease of comparison.
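The conversion mentioned above is straightforward: multiply the drink volume by its alcohol-by-volume fraction and by the density of ethanol. A minimal sketch (the 0.789 g/ml density and the example serving are standard reference values, not figures taken from this article):

```python
ETHANOL_DENSITY_G_PER_ML = 0.789  # density of pure ethanol

def grams_of_ethanol(volume_ml, abv_percent):
    """Grams of pure ethanol in a drink of the given volume and ABV."""
    return volume_ml * (abv_percent / 100.0) * ETHANOL_DENSITY_G_PER_ML

# A 355 ml (12 US fl oz) beer at 5% ABV contains about 14 g of ethanol,
# the usual definition of one US standard drink.
```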
Overall, the daily limits range from 10–37 g per day for men and 10–16 g per day for women. Weekly limits range from 27–170 g/week for men and 27–140 g/week for women. The weekly limits are lower than seven times the daily limits, meaning intake on a particular day may be higher than one-seventh of the weekly amount, but consumption on the other days of the week should then be correspondingly lower. The limits for women are consistently lower than those for m
https://en.wikipedia.org/wiki/Ursula%20Hamenst%C3%A4dt | Ursula Hamenstädt (born 15 January 1961) is a German mathematician who works as a professor at the University of Bonn. Her primary research subject is differential geometry.
Education and career
Hamenstädt earned her PhD from the University of Bonn in 1986, under the supervision of Wilhelm Klingenberg.
Her dissertation, Zur Theorie der Carnot-Caratheodory Metriken und ihren Anwendungen [The theory of Carnot–Caratheodory metrics and their applications], concerned the theory of sub-Riemannian manifolds.
After completing her doctorate, she became a Miller Research Fellow at the University of California, Berkeley and then an assistant professor at the California Institute of Technology before returning to Bonn as a faculty member in 1990.
Honors
Hamenstädt was an invited speaker at the International Congress of Mathematicians in 2010. In 2012 she was elected to the German Academy of Sciences Leopoldina, and in the same year she became one of the inaugural fellows of the American Mathematical Society.
She was the Emmy Noether Lecturer of the German Mathematical Society in 2017.
Selected publications |
https://en.wikipedia.org/wiki/Autotroph | An autotroph is an organism that produces complex organic compounds (such as carbohydrates, fats, and proteins) using carbon from simple substances such as carbon dioxide, generally using energy from light (photosynthesis) or inorganic chemical reactions (chemosynthesis). They convert an abiotic source of energy (e.g. light) into energy stored in organic compounds, which can be used by other organisms (e.g. heterotrophs). Autotrophs do not need a living source of carbon or energy and are the producers in a food chain, such as plants on land or algae in water (in contrast to heterotrophs as consumers of autotrophs or other heterotrophs). Autotrophs can reduce carbon dioxide to make organic compounds for biosynthesis and as stored chemical fuel. Most autotrophs use water as the reducing agent, but some can use other hydrogen compounds such as hydrogen sulfide.
The primary producers can convert the energy in light (phototrophs and photoautotrophs) or the energy in inorganic chemical compounds (chemotrophs or chemolithotrophs) to build organic molecules, which usually accumulate in the form of biomass and are used as a carbon and energy source by other organisms (e.g. heterotrophs and mixotrophs). The photoautotrophs are the main primary producers, converting the energy of light into chemical energy through photosynthesis, ultimately building organic molecules from carbon dioxide, an inorganic carbon source. Examples of chemolithotrophs are some archaea and bacteria (unicellular organisms) that produce biomass from the oxidation of inorganic chemical compounds; these organisms, called chemoautotrophs, are frequently found in hydrothermal vents in the deep ocean. Primary producers are at the lowest trophic level, and are the reason why Earth sustains life to this day.
Most chemoautotrophs are lithotrophs, using inorganic electron donors such as hydrogen sulfide, hydrogen gas, elemental sulfur, ammonium and ferrous oxide as reducing agents and hyd |
https://en.wikipedia.org/wiki/Supercow%20%28dairy%29 | Supercow (also super cow or super-cow) is a term used in the dairy industry to denote lines or individual animals that have superior milk production: that is, which produce more milk per day, or in some cases produce more fat per gallon of milk.
Until recently, super-cows have been developed through selective breeding – either traditional breeding or, since the 1960s, artificial insemination. Now the term tends to be applied to cows that have been genetically altered or whose genome has been studied in order to improve breeding. By 2023, according to a news report, "[i]n many countries, including the United States, farmers breed clones with conventional animals to add desirable traits, such as high milk production or disease resistance, into the gene pool". In that year, Chinese scientists reported the cloning of three so-called "super-cows" with a milk productivity "nearly 1.7 times the amount of milk an average cow in the United States produced in 2021" and a plan for 1,000 such super-cows in the near term.
See also
Environmental impacts of animal agriculture
Genetically modified animal |
https://en.wikipedia.org/wiki/Central%20place%20foraging | Central place foraging (CPF) theory is an evolutionary ecology model for analyzing how an organism can maximize foraging rates while traveling through a patch (a discrete resource concentration), but maintains the key distinction of a forager traveling from a home base to a distant foraging location rather than simply passing through an area or travelling at random. CPF was initially developed to explain how red-winged blackbirds might maximize energy returns when traveling to and from a nest. The model has been further refined and used by anthropologists studying human behavioral ecology and archaeology.
Case studies
Central place foraging in non-human animals
Orians and Pearson (1979) found that red-winged blackbirds in eastern Washington State tend to capture a larger number of prey items per trip than the same species in Costa Rica, which brought back single, large insects. Foraging specialization by Costa Rican blackbirds was attributed to the increased search and handling costs of nocturnal foraging, whereas birds in eastern Washington forage diurnally for prey with lower search and handling costs. Studies with sea birds and seals have also found that load size tends to increase with foraging distance from the nest, as predicted by CPF. Other central place foragers, such as social insects, also show support for CPF theory. European honeybees increase their nectar load as travel time from the hive to nectar sites increases. Beavers have been found to preferentially collect larger-diameter trees as distance from their lodge increases.
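The load-size patterns above follow from rate maximization: with a diminishing-returns gain curve, a longer round trip favours staying longer in the patch and returning with a larger load. A numerical sketch (the logarithmic gain curve and time units are illustrative assumptions, not values from the studies cited):

```python
import math

def optimal_load_time(travel_time, gain=lambda t: math.log(1 + t)):
    """Find the in-patch loading time t that maximizes the delivery rate
    gain(t) / (t + travel_time), by a simple grid search."""
    best_t, best_rate = 0.0, 0.0
    for i in range(1, 10000):
        t = i * 0.01
        rate = gain(t) / (t + travel_time)
        if rate > best_rate:
            best_t, best_rate = t, rate
    return best_t

# Longer round trips predict longer loading (hence larger loads),
# as central place foraging theory expects:
# optimal_load_time(1.0) < optimal_load_time(10.0)
```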
Archaeological case study: acorns and mussels in California
To apply the central place foraging model to ethnographic and experimental archaeological data driven by middle range theory, Bettinger et al. (1997) simplify the Barlow and Metcalf (1996) central place model to explore the archaeological implications of acorn (Quercus kelloggii) and mussel (Mytilus californianus) procurement and processing. This m |
https://en.wikipedia.org/wiki/CUT%26Tag%20sequencing | CUT&Tag-sequencing, also known as cleavage under targets and tagmentation, is a method used to analyze protein interactions with DNA. CUT&Tag-sequencing combines antibody-targeted controlled cleavage by a protein A-Tn5 fusion with massively parallel DNA sequencing to identify the binding sites of DNA-associated proteins. It can be used to map global DNA binding sites precisely for any protein of interest. Currently, ChIP-Seq is the most common technique utilized to study protein–DNA relations, however, it suffers from a number of practical and economical limitations that CUT&RUN and CUT&Tag sequencing do not. CUT&Tag sequencing is an improvement over CUT&RUN because it does not require cells to be lysed or chromatin to be fractionated. CUT&RUN is not suitable for single-cell platforms so CUT&Tag is advantageous for these.
Uses
CUT&Tag-sequencing can be used to examine gene regulation or to analyze transcription factor and other chromatin-associated protein binding. Protein–DNA interactions regulate gene expression and are responsible for many biological processes and disease states. This epigenetic information is complementary to genotype and expression analysis. CUT&Tag is an alternative to the current standard of ChIP-seq. ChIP-seq suffers from limitations due to the cross-linking step in ChIP-seq protocols, which can promote epitope masking and generate false-positive binding sites. ChIP-seq also suffers from suboptimal signal-to-noise ratios and poor resolution. CUT&RUN-sequencing and CUT&Tag have the advantage of being simpler techniques with lower costs due to their high signal-to-noise ratio, requiring less sequencing depth.
Specific DNA sites in direct physical interaction with proteins such as transcription factors can be isolated by Protein-A (pA) conjugated Tn5 bound to a protein of interest. Tn5 mediated cleavage produces a library of target DNA sites bound to a protein of interest in situ. Sequencing of prepared DNA libraries and comparison to who |
https://en.wikipedia.org/wiki/Sheypoor | Sheypoor (; "bugle") is a classified ads platform website and mobile application, headquartered in Tehran, Iran.
Thousands of advertisements are added to the app every day.
See also
Divar (website) |
https://en.wikipedia.org/wiki/Categories%20for%20the%20Working%20Mathematician | Categories for the Working Mathematician (CWM) is a textbook in category theory written by American mathematician Saunders Mac Lane, who cofounded the subject together with Samuel Eilenberg. It was first published in 1971, and is based on his lectures on the subject given at the University of Chicago, the Australian National University, Bowdoin College, and Tulane University. It is widely regarded as the premier introduction to the subject.
Contents
The book has twelve chapters, which are:
Chapter I. Categories, Functors, and Natural Transformations.
Chapter II. Constructions on Categories.
Chapter III. Universals and Limits.
Chapter IV. Adjoints.
Chapter V. Limits.
Chapter VI. Monads and Algebras.
Chapter VII. Monoids.
Chapter VIII. Abelian Categories.
Chapter IX. Special Limits.
Chapter X. Kan Extensions.
Chapter XI. Symmetry and Braiding in Monoidal Categories.
Chapter XII. Structures in Categories.
Chapters XI and XII were added in the 1998 second edition, the first in view of its importance in string theory and quantum field theory, and the second to address higher-dimensional categories that have come into prominence.
Although it is the classic reference for category theory, some of the terminology is not standard. In particular, Mac Lane attempted to settle an ambiguity in usage for the terms epimorphism and monomorphism by introducing the terms epic and monic, but the distinction is not in common use. |
https://en.wikipedia.org/wiki/UKCA%20marking | UK Conformity Assessed (UKCA) marking is a conformity mark that indicates conformity with the applicable requirements for products sold within Great Britain.
Applicability of UKCA and CE marks
The UKCA marking became part of UK law at the end of the Brexit transition period, on 31 December 2020, with the coming into force of The Product Safety and Metrology etc. (Amendment etc.) (EU Exit) Regulations 2019. It has been mandatory since then, but the CE mark was to be accepted as an alternative until 1 January 2022. This deadline was extended to 31 December 2022, then to 1 January 2023, then to 31 December 2024; on 1 August 2023 the deadline was extended indefinitely, with the CE mark accepted for most goods as a valid alternative.
The scope and procedures of the UKCA scheme initially follow those for CE marking, but the Government has said that after 31 December 2020 the two schemes may diverge.
Initial guidance regarding UKCA marking was originally published by the Government of the United Kingdom in 2019 ahead of a potential no-deal Brexit but was subsequently withdrawn.
Characteristics of UKCA marking
The height of the UKCA marking must be at least 5 mm; it may be larger so long as the proportions are kept. The marking should be "easily visible, legible, and [from 1 January 2023] permanently attached to the goods".
Northern Ireland
The UKCA marking only applies to products placed on the market in Great Britain. In Northern Ireland, a part of the United Kingdom that remains aligned to the European Single Market due to the Northern Ireland Protocol, CE marking continues to be required. UK-resident bodies are no longer qualified to carry out CE mark conformity assessments for goods intended for the EU, but under the Northern Ireland Protocol they may do so for Northern Ireland. Where a UK body has carried out the assessment for goods intended for Northern Ireland, the product should display both the CE mark and a UKNI mark. However, goods intended for export to the |
https://en.wikipedia.org/wiki/Plant%20tolerance%20to%20herbivory | Tolerance is the ability of plants to mitigate the negative fitness effects caused by herbivory. It is one of the general plant defense strategies against herbivores, the other being resistance, which is the ability of plants to prevent damage (Strauss and Agrawal 1999). Plant defense strategies play important roles in the survival of plants as they are fed upon by many different types of herbivores, especially insects, which may impose negative fitness effects (Strauss and Zangerl 2002). Damage can occur in almost any part of the plants, including the roots, stems, leaves, flowers and seeds (Strauss and Zergerl 2002). In response to herbivory, plants have evolved a wide variety of defense mechanisms and although relatively less studied than resistance strategies, tolerance traits play a major role in plant defense (Strauss and Zergerl 2002, Rosenthal and Kotanen 1995).
Traits that confer tolerance are controlled genetically and therefore are heritable traits under selection (Strauss and Agrawal 1999). Many factors intrinsic to the plants, such as growth rate, storage capacity, photosynthetic rates and nutrient allocation and uptake, can affect the extent to which plants can tolerate damage (Rosenthal and Kotanen 1994). Extrinsic factors such as soil nutrition, carbon dioxide levels, light levels, water availability and competition also have an effect on tolerance (Rosenthal and Kotanen 1994).
History of the study of plant tolerance
Studies of tolerance to herbivory have historically been the focus of agricultural scientists (Painter 1958; Bardner and Fletcher 1974). Tolerance was initially classified as a form of resistance (Painter 1958). Agricultural studies on tolerance, however, are mainly concerned with the compensatory effect on the plants' yield rather than their fitness, since it is of economic interest to reduce crop losses due to herbivory by pests (Trumble 1993; Bardner and Fletcher 1974). One surprising discovery made about plant tolerance was th
https://en.wikipedia.org/wiki/Clifford%20Martin%20Will | Clifford Martin Will (born 1946) is a Canadian-born theoretical physicist noted for his contributions to general relativity.
Life and work
Will was born in Hamilton, Ontario. In 1968, he earned a B.Sc. from McMaster University. At Caltech, he studied under Kip Thorne, earning his Ph.D. in 1971. He has taught at the University of Chicago and Stanford University, and in 1981 joined the faculty of Washington University in St. Louis. In 2012, he moved to a faculty position at the University of Florida.
Will's theoretical work has centered on post-Newtonian expansions of approximate solutions to the Einstein field equation, a notoriously difficult area which forms the theoretical underpinnings essential for such achievements as the indirect verification by Russell Hulse and Joseph Taylor of the existence of gravitational radiation from observations of a binary pulsar.
Will's book reviewing experimental tests of general relativity is widely regarded as the essential resource for research in this area; his popular book on the same subject was listed by The New York Times as one of the 200 best books published in 1986.
Will was a Guggenheim Fellow for the academic year 1996–1997. From 2009 to 2018, Will was the editor-in-chief of IOP Publishing's journal Classical and Quantum Gravity.
Honors and awards
He was elected a Fellow of the American Physical Society in 1989 and elected to the National Academy of Sciences in 2007.
In 2019, Will received the Albert Einstein Medal, awarded each year since 1979 by the Albert Einstein Society in Bern, Switzerland, for his "important contributions to General Relativity, in particular including the Post-Newtonian expansions of approximate solutions of the Einstein field equations and their confrontation with experiments."
Bibliographic information
According to the NASA ADS database, the h-index of Professor Will is 57.
Selected works
(original publication date 1986) |
https://en.wikipedia.org/wiki/Primitive%20root%20modulo%20n | In modular arithmetic, a number g is a primitive root modulo n if every number coprime to n is congruent to a power of g modulo n. That is, g is a primitive root modulo n if for every integer a coprime to n, there is some integer k for which g^k ≡ a (mod n). Such a value k is called the index or discrete logarithm of a to the base g modulo n. So g is a primitive root modulo n if and only if g is a generator of the multiplicative group of integers modulo n.
Gauss defined primitive roots in Article 57 of the Disquisitiones Arithmeticae (1801), where he credited Euler with coining the term. In Article 56 he stated that Lambert and Euler knew of them, but he was the first to rigorously demonstrate that primitive roots exist for any prime n. In fact, the Disquisitiones contains two proofs: the one in Article 54 is a nonconstructive existence proof, while the proof in Article 55 is constructive.
Elementary example
The number 3 is a primitive root modulo 7 because
3^1 ≡ 3, 3^2 ≡ 2, 3^3 ≡ 6, 3^4 ≡ 4, 3^5 ≡ 5, 3^6 ≡ 1 (mod 7).
Here we see that the period of 3 modulo 7 is 6. The remainders in the period, which are 3, 2, 6, 4, 5, 1, form a rearrangement of all nonzero remainders modulo 7, implying that 3 is indeed a primitive root modulo 7. This derives from the fact that a sequence (g^k modulo n) always repeats after some value of k, since modulo n produces a finite number of values. If g is a primitive root modulo n and n is prime, then the period of repetition is n − 1. Permutations created in this way (and their circular shifts) have been shown to be Costas arrays.
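A direct way to test the definition is to check whether the powers of g sweep out every residue coprime to n. A brute-force sketch, fine for small n (real number-theory libraries instead factor φ(n) and test divisor orders):

```python
from math import gcd

def is_primitive_root(g, n):
    """True if the powers of g modulo n generate every residue coprime to n."""
    if n < 2 or gcd(g, n) != 1:
        return False
    coprime = {a for a in range(1, n) if gcd(a, n) == 1}
    powers, x = set(), 1
    for _ in range(len(coprime)):
        x = (x * g) % n
        powers.add(x)
    return powers == coprime

# 3 is a primitive root modulo 7, as shown above; 2 is not,
# since 2^3 ≡ 1 (mod 7) gives it period 3 rather than 6.
```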
Definition
If n is a positive integer, the integers from 1 to n − 1 that are coprime to n (or equivalently, the congruence classes coprime to n) form a group, with multiplication modulo n as the operation; it is denoted by ℤn×, and is called the group of units modulo n, or the group of primitive classes modulo n. As explained in the article multiplicative group of integers modulo n, this multiplicative group (ℤn×) is cyclic if and only if n is equal to 2, 4, p^k, or 2p^k where p^k is a power of an odd prime number. When (and only when)
https://en.wikipedia.org/wiki/Clobber | Clobber is an abstract strategy game invented in 2001 by combinatorial game theorists Michael H. Albert, J.P. Grossman and Richard Nowakowski. It has subsequently been studied by Elwyn Berlekamp and Erik Demaine among others. Since 2005, it has been one of the events in the Computer Olympiad.
Details
Clobber is a game for two players that takes an average of 15 minutes to play. It is suggested for ages 8 and up. It is typically played on a rectangular white-and-black checkerboard. Players take turns moving one of their own pieces onto an orthogonally adjacent opposing piece, removing it from the game. The winner of the game is the player who makes the last move (i.e. whose opponent cannot move).
To start the game, each of the squares on the checkerboard is occupied by a stone. White stones are placed on the white squares and black stones on the black squares. To move, the player must pick up one of his or her own stones and "clobber" an opponent's stone on an adjacent square, either horizontally or vertically. Once the opponent's stone is clobbered, it must then be removed from the board and replaced by the stone that was moved. The player who, on their turn, is unable to move, loses the game.
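The move rule above can be sketched as a move generator (a sketch assuming a hypothetical dictionary-based board representation, not any official implementation):

```python
def clobber_moves(board, player):
    """All legal moves for `player`: pick up one of your own stones and
    "clobber" an orthogonally adjacent opposing stone.
    `board` maps (row, col) -> 'W' or 'B'; empty squares are absent."""
    opponent = 'B' if player == 'W' else 'W'
    moves = []
    for (r, c), stone in board.items():
        if stone != player:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            target = (r + dr, c + dc)
            if board.get(target) == opponent:
                moves.append(((r, c), target))
    return moves

def apply_move(board, move):
    """The clobbered stone is removed and replaced by the moving stone."""
    src, dst = move
    new_board = dict(board)
    new_board[dst] = new_board.pop(src)
    return new_board

# 2x2 starting position: stones alternate like a checkerboard.
start = {(0, 0): 'W', (0, 1): 'B', (1, 0): 'B', (1, 1): 'W'}
print(len(clobber_moves(start, 'W')))  # 4
```

A player with no entry in `clobber_moves` has lost, matching the last-player-to-move-wins rule.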
Variants
In computational play (e.g., the Computer Olympiad), Clobber is generally played on a 10×10 board. There are also variations in the initial layout of the pieces. Another variant is Cannibal Clobber, where a stone may capture not only stones of the opponent but also other stones of its owner. An advantage of Cannibal Clobber over Clobber is that a player may not only win, but win by a non-trivial margin. Cannibal Clobber was proposed in the summer of 2003 by Ingo Althoefer. Another variant, also proposed by Ingo Althoefer in 2015, is San Jego: here the pieces are not clobbered, but stacked into towers. Each tower belongs to the player with the piece on its top. At the end, the player with the highest tower is declared the winner. Draws are possible. |
https://en.wikipedia.org/wiki/ISO/IEC%2019790 | ISO/IEC 19790 is an ISO/IEC standard for security requirements for cryptographic modules. It addresses a wide range of issues regarding their implementation, including specifications, interface definitions, authentication, operational and physical security, configuration management, testing, and life-cycle management. The first version of ISO/IEC 19790 was derived from the U.S. government computer security standard FIPS 140-2, Security Requirements for Cryptographic Modules.
The current version of the standard is ISO/IEC 19790:2012. This replaces a previous version, ISO/IEC 19790:2006, which is now obsolete.
Use of ISO/IEC 19790 is referenced in the U.S. government standard FIPS 140-3. As an ISO/IEC standard, access to it requires payment, typically on a per-user basis.
ISO/IEC 24759 is a related standard for the testing of cryptographic modules, the first version of which derived from NIST's Derived Test Requirements for FIPS PUB 140-2, Security Requirements for Cryptographic Modules. |
https://en.wikipedia.org/wiki/Interspinous%20ligament | The interspinous ligaments (interspinal ligaments) are thin, membranous ligaments that connect adjoining spinous processes of the vertebrae in the spine. They take the form of relatively weak sheets of fibrous tissue and are well developed only in the lumbar region.
They extend from the root to the apex of each spinous process. They meet the ligamenta flava anteriorly, and blend with the supraspinous ligament posteriorly at the apexes of the spinal processes. The function of the interspinous ligaments is to limit ventral flexion of the spine and sliding movement of the vertebrae.
The ligaments are narrow and elongated in the thoracic region. They are broader, thicker, and quadrilateral in form in the lumbar region. They are only slightly developed in the neck; in the neck, they are often considered part of the nuchal ligament. |
https://en.wikipedia.org/wiki/Cyanobacterial%20clock%20proteins | In molecular biology, the cyanobacterial clock proteins are the main circadian regulator in cyanobacteria. The cyanobacterial clock proteins comprise three proteins: KaiA, KaiB and KaiC. The kaiABC complex may act as a promoter-nonspecific transcription regulator that represses transcription, possibly by acting on the state of chromosome compaction. This complex is expressed from a KaiABC operon.
See also: bacterial circadian rhythms
In the complex, KaiA enhances the phosphorylation status of KaiC. In contrast, the presence of KaiB in the complex decreases the phosphorylation status of KaiC, suggesting that KaiB acts by antagonising the interaction between KaiA and KaiC. The activity of KaiA activates kaiBC expression, while KaiC represses it.
Also in the KaiC family is RadA/Sms, a highly conserved eubacterial protein that shares sequence similarity with both the RecA strand transferase and the Lon protease. The RadA/Sms family are probable ATP-dependent proteases involved in both DNA repair and degradation of proteins, peptides, and glycopeptides. They are classified as non-peptidase homologues and unassigned peptidases in MEROPS peptidase family S16 (Lon protease family, clan SJ). RadA/Sms is involved in recombination and recombinational repair, most likely involving the stabilisation or processing of branched DNA molecules or blocked replication forks, because of its genetic redundancy with RecG and RuvABC.
Structure
The overall fold of the KaiA monomer is that of a four-helix bundle, which forms a dimer in the known structure. KaiA functions as a homodimer. Each monomer is composed of three functional domains: the N-terminal amplitude-amplifier domain, the central period-adjuster domain and the C-terminal clock-oscillator domain. The N-terminal domain of KaiA, from cyanobacteria, acts as a pseudo-receiver domain, but lacks the conserved aspartyl residue required for phosphotransfer in response regulators. The C-terminal domain is responsible for dimer formation, bi |
https://en.wikipedia.org/wiki/Copyright%20alternatives | Various copyright alternatives, in the form of alternative compensation systems (ACS), have been proposed as ways to allow the widespread reproduction of digital copyrighted works while still paying the authors and copyright owners of those works. This article only discusses those proposals which involve some form of government intervention. Other models, such as the street performer protocol or voluntary collective licenses, could arguably be called "alternative compensation systems", although they are very different and generally less effective at solving the free rider problem.
The impetus for these proposals has come from the widespread use of peer-to-peer file sharing networks. A few authors argue that an ACS is simply the only practical response to the situation. But most ACS advocates go further, holding that P2P file sharing is in fact greatly beneficial, and that tax- or levy-funded systems are actually more desirable tools for paying artists than sales coupled with DRM copy prevention technologies.
Artistic freedom voucher
The artistic freedom voucher (AFV) proposal argues that the current copyright system, providing a state-enforced monopoly, leads to "enormous inefficiencies and creates substantial enforcement problems". Under the proposed AFV system, individuals would be allowed to contribute a refundable tax credit of approximately $100 to a "creative worker"; this contribution would act as a voucher that can only be used to support artistic or creative work.
Recipients of the AFV contribution would in turn be required to register with the government, in a similar fashion to religious or charitable institutions registering for tax-exempt status. The sole purpose of the registration would be to prevent fraud; it would involve no evaluation of the quality of the work being produced. Alongside registration, artists would also be ineligible for copyright protection for a set period of time (5 years, for example) as the work is contributed to the public domain and allowed |
https://en.wikipedia.org/wiki/Barnes%20G-function | In mathematics, the Barnes G-function G(z) is a function that is an extension of superfactorials to the complex numbers. It is related to the gamma function, the K-function and the Glaisher–Kinkelin constant, and was named after mathematician Ernest William Barnes. It can be written in terms of the double gamma function.
Formally, the Barnes G-function is defined in the following Weierstrass product form:

G(1+z) = (2\pi)^{z/2} \exp\left(-\frac{z + z^2(1+\gamma)}{2}\right) \prod_{k=1}^{\infty} \left\{ \left(1 + \frac{z}{k}\right)^{k} \exp\left(\frac{z^2}{2k} - z\right) \right\}

where \gamma is the Euler–Mascheroni constant, exp(x) = e^x is the exponential function, and \prod denotes multiplication (capital pi notation).
The integral representation, which may be deduced from the relation to the double gamma function, is
As an entire function, G is of order two, and of infinite type. This can be deduced from the asymptotic expansion given below.
Functional equation and integer arguments
The Barnes G-function satisfies the functional equation

G(z+1) = \Gamma(z)\, G(z)

with normalisation G(1) = 1. Note the similarity between the functional equation of the Barnes G-function and that of the Euler gamma function:

\Gamma(z+1) = z\, \Gamma(z)

The functional equation implies that G takes the following values at integer arguments:

G(n) = 0 for n = 0, -1, -2, \dots, and G(n) = \prod_{i=1}^{n-2} i! for n = 1, 2, \dots

(in particular, G(1) = G(2) = 1) and thus

G(n) = \frac{(\Gamma(n))^{n-1}}{K(n)}

where \Gamma denotes the gamma function and K denotes the K-function. The functional equation uniquely defines the Barnes G-function if the convexity condition

\frac{d^3}{dx^3} \log G(x) \ge 0 \quad \text{for } x > 0

is added. Additionally, the Barnes G-function satisfies a duplication formula.
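For integer arguments, the identities above can be checked with exact integer arithmetic; a minimal sketch (function names are illustrative):

```python
import math

def barnes_g_int(n):
    """G(n) for integer n >= 1, via G(n) = 1! * 2! * ... * (n-2)!."""
    result = 1
    for i in range(1, n - 1):
        result *= math.factorial(i)
    return result

def k_function_int(n):
    """K-function at integer n: K(n) = 1^1 * 2^2 * ... * (n-1)^(n-1)."""
    result = 1
    for i in range(1, n):
        result *= i ** i
    return result

# Check G(n) * K(n) = Gamma(n)^(n-1), using Gamma(n) = (n-1)!.
for n in range(1, 9):
    assert barnes_g_int(n) * k_function_int(n) == math.factorial(n - 1) ** (n - 1)

print(barnes_g_int(5))  # 1! * 2! * 3! = 12
```

Writing the identity in product form (rather than as a quotient) keeps everything in integers, so the check is exact.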
Characterisation
Similar to the Bohr–Mollerup theorem for the gamma function, for a constant , we have for
and for
as .
Value at 1/2
G\left(\tfrac{1}{2}\right) = 2^{1/24}\, e^{1/8}\, \pi^{-1/4}\, A^{-3/2}

where A is the Glaisher–Kinkelin constant.
Reflection formula 1.0
The difference equation for the G-function, in conjunction with the functional equation for the gamma function, can be used to obtain the following reflection formula for the Barnes G-function (originally proved by Hermann Kinkelin):

\log G(1-z) = \log G(1+z) - z \log 2\pi + \int_0^z \pi x \cot \pi x \, dx
The logtangent integral on the right-hand side can be evaluated in terms of the Clausen function (of order 2), as is shown below:

2\pi \log\left(\frac{G(1-z)}{G(1+z)}\right) = 2\pi z \log\left(\frac{\sin \pi z}{\pi}\right) + \operatorname{Cl}_2(2\pi z)
The proof of this result hinges on the follow |
https://en.wikipedia.org/wiki/Zuhur%20Habibullaev | Zuhur Habibullaev (1932 – April 8, 2013) was a Tajikistani artist. Born in Dushanbe, he graduated from Olimov State Art College in Dushanbe in 1953 and the Mukhina Higher Industrial Art School in St. Petersburg in 1959.
He was a People's Artist of the Republic of Tajikistan and a member of the Union of Artists of Tajikistan, and exhibited internationally from 1960. His works are held in museums and private collections in Tajikistan, Russia, Europe and Asia. |
https://en.wikipedia.org/wiki/Teixeira%20Mendes | Raimundo Teixeira Mendes (5 January 1855 – 28 June 1927) was a Brazilian philosopher and mathematician. He is credited with creating the national motto, "Order and Progress", as well as the national flag on which it appears.
Teixeira Mendes was born in Caxias, Maranhão.
Comtean Positivism
Teixeira Mendes was heavily influenced by Comtism and is classed as a "Humanity Apostle" by Brazil's Religion of Humanity, which is called "Igreja Positivista do Brasil" or, in English, the "Positivist Church of Brazil". He led the Positivist Church from 1903. For him, the Positivist viewpoint meant opposing most wars and believing in the eventual disappearance of nations. He also opposed Christian missionary work toward the indigenous Brazilians and instead favored a policy based on protection and gradual assimilation. He deemed their societies "fetishistic", but believed a gradual, non-coercive assimilation was the way to turn them into Positivists.
He died in Rio de Janeiro, aged 72. |
https://en.wikipedia.org/wiki/Polymeric%20liquid%20crystal | Polymeric liquid crystals are similar to the monomeric liquid crystals used in displays. Both have dielectric anisotropy, or the ability to change direction and absorb or transmit light depending on electric fields. Polymeric liquid crystals form long head-to-tail or side-chain polymers, which are woven into thick mats and therefore have high viscosities. The high viscosities allow polymeric liquid crystals to be used in complex structures, but make them harder to align, limiting their usefulness. The polymers align in microdomains facing all different directions, which ruins the optical effect. One solution is to mix in a small amount of photo-curing polymer, which can be hardened after being spin-coated onto a surface: the polymeric liquid crystal and photocurer are aligned in one direction, and the photocurer is then cured, "freezing" the polymer in that direction. |
https://en.wikipedia.org/wiki/Cosmic%20infrared%20background | Cosmic infrared background is infrared radiation caused by stellar dust.
History
Recognition of the cosmological importance of the darkness of the night sky (Olbers' paradox) and the first speculations on an extragalactic background light date back to the first half of the 19th century. Despite its importance, the first attempts to derive the value of the visual background due to galaxies were made only in the 1950s–60s, at that time based on the integrated starlight of these stellar systems. In the 1960s the absorption of starlight by dust was already taken into account, but without considering the re-emission of this absorbed energy in the infrared. At that time Jim Peebles pointed out that, in a Big Bang-created Universe, there must have been a cosmic infrared background (CIB) – different from the cosmic microwave background – that can account for the formation and evolution of stars and galaxies.
In order to produce today's metallicity, early galaxies must have been significantly more powerful than they are today. In the early CIB models the absorption of starlight was neglected, therefore in these models the CIB peaked between 1–10μm wavelengths. These early models have already shown correctly that the CIB was most probably fainter than its foregrounds, and so it was very difficult to observe. Later the discovery and observations of high luminosity infrared galaxies in the vicinity of the Milky Way showed, that the peak of the CIB is most likely at longer wavelengths (around 50μm), and its full power could be ~1−10% of that of the CMB.
As Martin Harwit emphasized, the CIB is very important in the understanding of some special astronomical objects, like quasars or ultraluminous infrared galaxies, which are very bright in the infrared. He also pointed out that the CIB causes a significant attenuation for very high energy electrons, protons and gamma-rays of the cosmic radiation through inverse Compton scattering, photopion and electron-positron pair production. |
https://en.wikipedia.org/wiki/Robert%20R.%20Williams | Robert Runnels Williams (February 16, 1886 – October 2, 1965) was an American chemist, known for being the first to chemically fully characterize and then synthesize thiamine (vitamin B1). He first isolated thiamine in 1933, and synthesized it in 1935, reporting this in 1936. Williams also provided the modern name "thiamine" from the molecule's sulfur atom, and it being a vitamin (a class ultimately named for the earlier-known amine of thiamine itself).
Among his awards were the Elliott Cresson Medal in 1940 and the Perkin Medal in 1947. He was elected to both the American Philosophical Society and the United States National Academy of Sciences. His brother was Roger J. Williams, another important chemist at the time and discoverer of Vitamin B5.
Life
He was born in Nellore, India to Baptist missionaries. He moved to the United States when he was ten. In the early 1900s, Williams studied at Ottawa University and eventually procured a master's degree at the University of Chicago in 1908. He then spent some time teaching in the Philippines. After returning to the United States, he worked for Bell Telephone Laboratories from 1915, until he retired in 1945.
A resident of Summit, New Jersey, Williams died there at the age of 79 on October 2, 1965.
Work
1933–34 – developed a way of isolating one-third of an ounce of thiamine from a ton of rice polishings.
1935 – worked out its molecular structure and named it "thiamine" from its sulfur atom and amino group.
1935 – synthesized thiamine (vitamin B1), reporting the work in 1936. |
https://en.wikipedia.org/wiki/KCNE2 | Potassium voltage-gated channel subfamily E member 2 (KCNE2), also known as MinK-related peptide 1 (MiRP1), is a protein that in humans is encoded by the KCNE2 gene on chromosome 21. MiRP1 is a voltage-gated potassium channel accessory subunit (beta subunit) associated with Long QT syndrome. It is ubiquitously expressed in many tissues and cell types. Because of this and its ability to regulate multiple different ion channels, KCNE2 exerts considerable influence on a number of cell types and tissues. Human KCNE2 is a member of the five-strong family of human KCNE genes. KCNE proteins contain a single membrane-spanning region, extracellular N-terminal and intracellular C-terminal. KCNE proteins have been widely studied for their roles in the heart and in genetic predisposition to inherited cardiac arrhythmias. The KCNE2 gene also contains one of 27 SNPs associated with increased risk of coronary artery disease. More recently, roles for KCNE proteins in a variety of non-cardiac tissues have also been explored.
Discovery
Steve Goldstein (then at Yale University) used a BLAST search strategy, focusing on KCNE1 sequence stretches known to be important for function, to identify related expressed sequence tags (ESTs) in the NCBI database. Using sequences from these ESTs, KCNE2, 3 and 4 were cloned.
Tissue distribution
KCNE2 protein is most readily detected in the choroid plexus epithelium, gastric parietal cells, and thyroid epithelial cells. KCNE2 is also expressed in atrial and ventricular cardiomyocytes, the pancreas, pituitary gland, and lung epithelium. In situ hybridization data suggest that KCNE2 transcript may also be expressed in various neuronal populations.
Structure
Gene
The KCNE2 gene resides on chromosome 21 at the band 21q22.11 and contains 2 exons. Since human KCNE2 is located ~79 kb from KCNE1 and in the opposite direction, KCNE2 is proposed to originate from a gene duplication event.
Protein
This protein belongs to the potassium channel KCNE fa |
https://en.wikipedia.org/wiki/Vincent%20Pilloni | Vincent Pilloni is a French mathematician, specializing in arithmetic geometry and the Langlands program.
Career
Pilloni studied at the École Normale Supérieure and received his doctorate in 2009 from Université Sorbonne Paris Nord with thesis advisor Jacques Tilouine and thesis Arithmétique des variétés de Siegel.
His research deals with, among other topics, the question of how the modularity theorem for elliptic curves over the rational numbers (which led to the proof of Fermat's Last Theorem) can be extended to abelian varieties. With Fabrizio Andreatta and Adrian Iovita, he worked on general modularity conjectures (following Fontaine-Mazur, Langlands, Clozel, and others).
Pilloni is a Chargé de recherche of CNRS at the École normale supérieure de Lyon (UMPA).
In 2018 he was an invited speaker, with Fabrizio Andreatta and Adrian Iovita, at the International Congress of Mathematicians in Rio de Janeiro. In 2018 Pilloni received the Prix Élie Cartan. In 2021 he was awarded the Fermat Prize.
Selected publications
with Benoit Stroh: Surconvergence, ramification et modularité, Astérisque, vol. 382, 2016, pp. 195–266. MR |
https://en.wikipedia.org/wiki/Polyknight | A polyknight is a plane geometric figure formed by selecting cells in a square lattice that could represent the path of a chess knight in which doubling back is allowed. It is a polyform with square cells which are not necessarily connected, comparable to the polyking. Alternatively, it can be interpreted as a connected subset of the vertices of a knight's graph, a graph formed by connecting pairs of lattice squares that are a knight's move apart.
Enumeration of polyknights
Free, one-sided, and fixed polyknights
Three common ways of distinguishing polyominoes for enumeration can also be extended to polyknights:
free polyknights are distinct when none is a rigid transformation (translation, rotation, reflection or glide reflection) of another (pieces that can be picked up and flipped over).
one-sided polyknights are distinct when none is a translation or rotation of another (pieces that cannot be flipped over).
fixed polyknights are distinct when none is a translation of another (pieces that can be neither flipped nor rotated).
The following table shows the numbers of polyknights of various types with n cells.
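Fixed polyknights can be enumerated by growing shapes one knight-connected cell at a time and deduplicating by translation; a minimal brute-force sketch (function names are illustrative, and the approach is feasible only for small n):

```python
KNIGHT_MOVES = [(1, 2), (2, 1), (2, -1), (1, -2),
                (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def normalize(cells):
    """Translate a set of cells so its bounding box starts at (0, 0)."""
    mx = min(x for x, y in cells)
    my = min(y for x, y in cells)
    return frozenset((x - mx, y - my) for x, y in cells)

def fixed_polyknights(n):
    """Count fixed n-cell polyknights: connected subsets of the knight's
    graph, considered distinct up to translation only."""
    shapes = {normalize({(0, 0)})}
    for _ in range(n - 1):
        grown = set()
        for shape in shapes:
            for x, y in shape:
                for dx, dy in KNIGHT_MOVES:
                    cell = (x + dx, y + dy)
                    if cell not in shape:
                        grown.add(normalize(shape | {cell}))
        shapes = grown
    return len(shapes)

print(fixed_polyknights(2))  # 4: the 8 knight vectors pair up under translation
```

Every connected (n+1)-cell shape contains a connected n-cell subshape, so growth from a single cell reaches all of them; the set-based deduplication tolerates the same shape being generated repeatedly. Counting free or one-sided polyknights would additionally deduplicate under rotations and (for free) reflections.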
Notes
Polyforms |
https://en.wikipedia.org/wiki/Ingar%20Roggen | Dich Ingar Emil Roggen (born 26 April 1934 in Valencia, Spain) is a Norwegian sociologist, and has been described as one of the European social informatics pioneers. His work focuses on the social aspects of virtual space, the social analysis of the Internet, the interaction between man and computer, and the implications of information technology use for communication in all fields of society. In 1996 he introduced the sociology of the World Wide Web (web sociology) as a web science, based on the principles of social informatics.
He earned the PhD (mag.art.) degree in sociology in 1970 with a dissertation on social time, where he developed a theory of Relative Social Time with tense logic as method. In 1974 he was appointed assistant professor in sociology at the University of Oslo, where he was employed until his retirement in 2004. Through a series of studies of health risk factors in Norwegian industries and social services in 1970-1990 he developed a system for detection, prediction and prevention of supermortality in workplaces.
From a global historical viewpoint Stein Bråten must be considered as the founder of the science of social informatics, which he originally called "socioinformatics" (Norwegian: sosioinformatikk). He defined it as the common field of psychology, sociology and informatics. (Stein Bråten: Dialogens vilkår i datasamfunnet. Universitetsforlaget 1983.) At the Department of Sociology, University of Oslo (UiO), Stein Bråten inspired a group of pioneers in this field, including Eivind Jahren, Arild Jansen and Ingar Roggen. When web sociology was introduced in 1996, the first World Wide Web virtual laboratory (known as Weblab at UiO) was established at the Department of Sociology, directed by Ingar Roggen in collaboration with Knut A. G. Hauge and Trond Enger. The Department offered degrees up to the master (Norwegian: Cand.Polit.) level in these fields until 2000. While Kristen Nygaard's Simula was the programming langua |
https://en.wikipedia.org/wiki/Fluopicolide | Fluopicolide is a fungicide used in agriculture to control diseases caused by oomycetes such as late blight of potato. It is classed as an acylpicolide and its chemical name is 2,6-dichloro-N-{[3-chloro-5-(trifluoromethyl)pyridin-2-yl]methyl}benzamide. The precise mode of action is not known, but it is thought to act by affecting spectrin-like proteins in the cytoskeleton of oomycetes. This mode of action differs from other available fungicides used to control oomycetes and it can inhibit the growth of strains that are resistant to phenylamides, strobilurin, dimethomorph and iprovalicarb. It has some systemic activity as it moves through the xylem towards the tips of stems, but does not get transported to the roots. It affects the motility of zoospores, the germination of cysts, the growth of the mycelium and sporulation. Bayer CropScience developed the compound and it was first released as a commercial product in 2006.
Uses
Fluopicolide has been shown to be effective at controlling Phytophthora infestans, Phytophthora capsici, Phytophthora porri, Plasmopara viticola, Peronospora parasitica, Peronospora tabacina, Peronospora sparsa, Pseudoperonospora cubensis and Bremia lactucae. As of 2007, it was only available commercially as a co-formulation with fosetyl-Al for use in vines (as Profiler) and as a co-formulation with propamocarb for use on potatoes and vegetables (as Infinito). Other products were in development.
Toxicity
The median lethal dose in rats is >5000 mg/kg meaning that fluopicolide has low acute toxicity. Tests in other mammals indicate that it does not cause skin sensitisation, cancer or developmental problems. |
https://en.wikipedia.org/wiki/Dimond%20ring | A Dimond ring or Dimond ring translator was an early type of computer memory, created in the early 1940s by T. L. Dimond at Bell Laboratories for Bell's #5 Crossbar Switch, a type of early telephone switch.
Structure
A Dimond ring translator consists of large-diameter magnetic ferrite toroidal rings with solenoid windings, through which writing and reading wires are threaded.
Uses
It was used in the #5 Crossbar Switch and TXE (prior to version 4) telephone exchanges.
See also
Core rope memory, a later development |
https://en.wikipedia.org/wiki/P.T.O.%20%28video%20game%29 | P.T.O. (Pacific Theater of Operations), released as Teitoku no Ketsudan in Japan, is a console strategy video game released by Koei. It was originally released for the PC-9801 in 1989 and has been ported to various platforms, such as the X68000, FM Towns, PC-8801 (1990), MSX2 (1991), Sega Genesis and the Super NES (all three in 1992). Players could assume one side of the Pacific Theater of Operations during World War II, acting as naval commander, organizing fleets, building new ships, appropriating supplies and fuel, and even engaging in diplomacy with other countries. The player can choose one of several World War Two battles to simulate, or could control the entire Pacific campaign from well before the Japanese attack on Pearl Harbor.
A sequel, P.T.O. II, was released by Koei in 1993. Teitoku no Ketsudan III was never released outside Japan, but a P.T.O. IV was released for the PlayStation 2 in the US and Europe.
Gameplay
Scenarios
Players must choose one of nine scenarios when starting a game. The first scenario (Negotiations Breakdown) is a long-term campaign that starts in mid-1941, before war is declared, in which the player must win the entire war. Victory can only be obtained by controlling all ports or eliminating all enemy ships. The other scenarios begin in the midst of a certain major Pacific conflict, where the goal is to capture or defend a certain port, or to sink or protect a number of enemy ships. If a scenario's goal is achieved, the player can continue with the full campaign.
Naval powers
The player has the option of playing as one of the two major World War II Pacific maritime powers: the United States for the Allies, or Japan for the Axis. Other countries begin as allies of the two nations as they were at whatever point in the war gameplay begins; over time, non-allied nations can be convinced to ally with the player's side after significant gifts and diplomacy. Nations can also break alliances over poor diplomacy, which can lead to the departure of ships loaned from their navy. For instance, |
https://en.wikipedia.org/wiki/118th%20meridian%20east | The meridian 118° east of Greenwich is a line of longitude that extends from the North Pole across the Arctic Ocean, Asia, the Indian Ocean, Australasia, the Southern Ocean, and Antarctica to the South Pole.
The 118th meridian east forms a great circle with the 62nd meridian west.
From Pole to Pole
Starting at the North Pole and heading south to the South Pole, the 118th meridian east passes through:
{| class="wikitable plainrowheaders"
! scope="col" width="130" | Co-ordinates
! scope="col" | Country, territory or sea
! scope="col" | Notes
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Arctic Ocean
| style="background:#b0e0e6;" |
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Laptev Sea
| style="background:#b0e0e6;" |
|-valign="top"
|
! scope="row" |
| Sakha Republic Irkutsk Oblast — from Zabaykalsky Krai — from
|-valign="top"
|
! scope="row" |
| Inner Mongolia
|-
|
! scope="row" |
|
|-valign="top"
|
! scope="row" |
| Inner Mongolia Hebei – for about 17 km from Inner Mongolia – for about 4 km from Hebei – from Tianjin – from
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Bohai Sea
| style="background:#b0e0e6;" |
|-valign="top"
|
! scope="row" |
| Shandong Jiangsu – from Anhui – from Jiangsu – from Anhui – from Jiangxi – from Fujian – from , passing just west of Xiamen (at )
|-valign="top"
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | South China Sea
| style="background:#b0e0e6;" | Passing just east of the disputed Scarborough Shoal (at )
|-
|
! scope="row" |
| Island of Palawan
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Sulu Sea
| style="background:#b0e0e6;" |
|-
|
! scope="row" |
| Sabah – island of Borneo
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Celebes Sea
| style="background:#b0e0e6;" |
|-
|
! scope="row" |
| East Kalimantan – isl |
https://en.wikipedia.org/wiki/Intersection%20%28geometry%29 | In geometry, an intersection is a point, line, or curve common to two or more objects (such as lines, curves, planes, and surfaces). The simplest case in Euclidean geometry is the line–line intersection between two distinct lines, which either is one point (sometimes called a vertex) or does not exist (if the lines are parallel). Other types of geometric intersection include:
Line–plane intersection
Line–sphere intersection
Intersection of a polyhedron with a line
Line segment intersection
Intersection curve
Determination of the intersection of flats – linear geometric objects embedded in a higher-dimensional space – is a simple task of linear algebra, namely the solution of a system of linear equations. In general the determination of an intersection leads to non-linear equations, which can be solved numerically, for example using Newton iteration. Intersection problems between a line and a conic section (circle, ellipse, parabola, etc.) or a quadric (sphere, cylinder, hyperboloid, etc.) lead to quadratic equations that can be easily solved. Intersections between quadrics lead to quartic equations that can be solved algebraically.
On a plane
Two lines
For the determination of the intersection point of two non-parallel lines

a_1 x + b_1 y = c_1, \qquad a_2 x + b_2 y = c_2,

one gets, from Cramer's rule or by substituting out a variable, the coordinates of the intersection point (x_s, y_s):

x_s = \frac{c_1 b_2 - c_2 b_1}{a_1 b_2 - a_2 b_1}, \qquad y_s = \frac{a_1 c_2 - a_2 c_1}{a_1 b_2 - a_2 b_1}.

(If a_1 b_2 - a_2 b_1 = 0 the lines are parallel and these formulas cannot be used because they involve dividing by 0.)
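The Cramer's-rule computation translates directly into code; a minimal sketch (the function name is illustrative):

```python
def line_intersection(a1, b1, c1, a2, b2, c2):
    """Intersection of the lines a1*x + b1*y = c1 and a2*x + b2*y = c2,
    computed via Cramer's rule; returns None for parallel lines."""
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None  # parallel lines: no unique intersection point
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return (x, y)

# The lines x - y = 0 and x + y = 2 meet at (1, 1).
print(line_intersection(1, -1, 0, 1, 1, 2))  # (1.0, 1.0)
```

The determinant test mirrors the parallel-lines caveat above: when a1*b2 - a2*b1 vanishes, the formulas would divide by zero.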
Two line segments
For two non-parallel line segments there is not necessarily an intersection point (see diagram), because the intersection point of the corresponding lines need not be contained in the line segments. In order to check the situation one uses parametric representations of the lines:
The line segments intersect only in a common point of the corresponding lines if the corresponding parameters s, t fulfill the condition 0 ≤ s, t ≤ 1.
The parameters s, t are the solution of the linear system
It can be solved for |
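Under the parametric setup described above, the computation can be sketched as follows (the function name and coordinate conventions are illustrative): solve the 2×2 system for the parameters s and t, then accept the point only if both lie in [0, 1].

```python
def segment_intersection(p1, p2, q1, q2):
    """Intersection point of segments p1-p2 and q1-q2, or None.
    Solves for parameters s, t with P(s) = p1 + s*(p2 - p1) and
    Q(t) = q1 + t*(q2 - q1), then checks 0 <= s, t <= 1."""
    (x1, y1), (x2, y2) = p1, p2
    (x3, y3), (x4, y4) = q1, q2
    det = (x2 - x1) * (y3 - y4) + (x4 - x3) * (y2 - y1)
    if det == 0:
        return None  # parallel (or degenerate) segments
    s = ((x3 - x1) * (y3 - y4) + (x4 - x3) * (y3 - y1)) / det
    t = ((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / det
    if 0 <= s <= 1 and 0 <= t <= 1:
        return (x1 + s * (x2 - x1), y1 + s * (y2 - y1))
    return None

# The diagonals of the square (0,0)-(2,2) and (0,2)-(2,0) cross at (1, 1).
print(segment_intersection((0, 0), (2, 2), (0, 2), (2, 0)))  # (1.0, 1.0)
```

The second segment below lies on a line the first would only meet when extended (s = 3 > 1), so `None` is returned even though the underlying lines intersect.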
https://en.wikipedia.org/wiki/Fisher%27s%20principle | Fisher's principle is an evolutionary model that explains why the sex ratio of most species that produce offspring through sexual reproduction is approximately 1:1 between males and females. A. W. F. Edwards has remarked that it is "probably the most celebrated argument in evolutionary biology".
Fisher's principle was outlined by Ronald Fisher in his 1930 book The Genetical Theory of Natural Selection (but has been incorrectly attributed as original to Fisher). Fisher couched his argument in terms of parental expenditure, and predicted that parental expenditure on both sexes should be equal. Sex ratios that are 1:1 are hence known as "Fisherian", and those that are not 1:1 are "non-Fisherian" or "extraordinary" and occur because they break the assumptions made in Fisher's model.
Basic explanation
W. D. Hamilton gave the following simple explanation in his 1967 paper on "Extraordinary sex ratios", given the condition that males and females cost equal amounts to produce:
Suppose male births are less common than female.
A newborn male then has better mating prospects than a newborn female, and therefore can expect to have more offspring.
Therefore parents genetically disposed to produce males tend to have more than average numbers of grandchildren born to them.
Therefore the genes for male-producing tendencies spread, and male births become more common.
As the 1:1 sex ratio is approached, the advantage associated with producing males dies away.
The same reasoning holds if females are substituted for males throughout. Therefore 1:1 is the equilibrium ratio.
In modern language, the 1:1 ratio is the evolutionarily stable strategy (ESS).
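Hamilton's reasoning can be illustrated with a toy calculation (a sketch; the function and the numbers are illustrative, not from the source). Since every offspring has exactly one mother and one father, total paternity equals total maternity, so scarce males each father proportionally more offspring:

```python
def expected_grandchildren(sons, daughters, males, females, brood=2.0):
    """Toy model: each female has `brood` offspring; each male therefore
    sires brood * females / males offspring on average."""
    per_male = brood * females / males
    return sons * per_male + daughters * brood

# Males scarce (50 males to 100 females): a son yields twice the
# grandchildren of a daughter, favouring male-producing parents.
print(expected_grandchildren(1, 0, 50, 100))  # 4.0
print(expected_grandchildren(0, 1, 50, 100))  # 2.0
# At a 1:1 ratio the advantage vanishes.
print(expected_grandchildren(1, 0, 100, 100) ==
      expected_grandchildren(0, 1, 100, 100))  # True
```

The equal payoff at 1:1 is exactly why neither male- nor female-biased production can invade the equilibrium, i.e. why 1:1 is the ESS.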
Parental investment
In Chapter 6, "Sexual Reproduction and Sexual Selection", Fisher gave the explanation described by Eric Charnov and James J. Bull as "characteristically terse" and "cryptic":
Development of the argument
Fisher's principle is an early example of a model in which genes for greater production of either sex become equalized in the populat |
https://en.wikipedia.org/wiki/Macacine%20gammaherpesvirus%2010 | Macacine gammaherpesvirus 10 (McHV-10) is a species of virus in the genus Lymphocryptovirus, subfamily Gammaherpesvirinae, family Herpesviridae, and order Herpesvirales. |
https://en.wikipedia.org/wiki/Streamlining%20theory | Genomic streamlining is a theory in evolutionary biology and microbial ecology that suggests that there is a reproductive benefit to prokaryotes having a smaller genome size with less non-coding DNA and fewer non-essential genes. There is a lot of variation in prokaryotic genome size, with the smallest free-living cell's genome being roughly ten times smaller than the largest prokaryote. Two of the bacterial taxa with the smallest genomes are Prochlorococcus and Pelagibacter ubique, both highly abundant marine bacteria commonly found in oligotrophic regions. Similar reduced genomes have been found in uncultured marine bacteria, suggesting that genomic streamlining is a common feature of bacterioplankton. This theory is typically used with reference to free-living organisms in oligotrophic environments.
Overview
Genome streamlining theory states that certain prokaryotic genomes tend to be small in size in comparison to other prokaryotes, and all eukaryotes, due to selection against the retention of non-coding DNA. The known advantages of small genome size include faster genome replication for cell division, fewer nutrient requirements, and easier co-regulation of multiple related genes, because gene density typically increases with decreased genome size. This means that an organism with a smaller genome is likely to be more successful, or have higher fitness, than one hindered by excessive amounts of unnecessary DNA, leading to selection for smaller genome sizes.
Some mechanisms that are thought to underlie genome streamlining include deletion bias and purifying selection. Deletion bias is the phenomenon in bacterial genomes where the rate of DNA loss is naturally higher than the rate of DNA acquisition. This is a passive process that simply results from the difference in these two rates. Purifying selection is the process by which extraneous genes are selected against, making organisms lacking this genetic material more successful by effectively reducing their |
https://en.wikipedia.org/wiki/E-Stewards | The e-Stewards Initiative is an electronics waste recycling standard created by the Basel Action Network.
The program and the organization that created it grew out of the concern that electronic waste generated in wealthy countries was being dismantled in poor countries, often by underage workers. The young workers were being exposed to toxic metals and working in unsafe conditions.
In 2009, BAN published the e-Stewards Standard for Responsible Recycling and Reuse of Electronic Equipment which set forth requirements for becoming a Certified e-Stewards Recycler—a program that "recognizes electronics recyclers that adhere to the most stringent environmentally and socially responsible practices when recovering hazardous electronic materials." Recyclers that were qualified under the older Pledge program had until 1 September 2011 to achieve certification to the Standard by an e-Stewards Accredited Certification Body accredited by ANAB (ANSI-ASQ National Accreditation Board).
Standard
The e-Stewards Standard for Responsible Recycling and Reuse of Electronic Equipment was developed by the Basel Action Network. It is an industry-specific environmental management system standard that is the basis for the e-Stewards Initiative. On 6 March 2012, BAN released an updated Version 2.0 for open public comment prior to its final adoption later in the spring of 2012.
The certification is available to all electronics recyclers and refurbishers. To achieve an e-Stewards certification, organizations are subject to an initial Stage I and Stage II audit. After passing such audits and being accepted by BAN, yearly surveillance audits take place to ensure organizations comply with the standard and have a registered ISO 14001 environmental management system in place, as well as meet numerous performance requirements, including assuring no export of hazardous electronic wastes to developing countries, no use of prison labor, and no dumping of toxic materials in municipal landfills. |
https://en.wikipedia.org/wiki/Academic%20detailing | Academic detailing is "university or non-commercial-based educational outreach." The process involves face-to-face education of prescribers by trained health care professionals, typically pharmacists, physicians, or nurses. The goal of academic detailing is to improve prescribing of targeted drugs to be consistent with medical evidence from randomized controlled trials, which ultimately improves patient care and can reduce health care costs. A key component of non-commercial or university-based academic detailing programs is that they (academic detailers/clinical educators, management, staff, program developers, etc.) do not have any financial links to the pharmaceutical industry.
Academic detailing has been studied for over 25 years and has been shown to be effective at improving prescribing of targeted medications by about 5% from baseline. Though it is primarily used to affect prescribing, it is also used to educate providers regarding other non-drug interventions, such as screening guidelines.
Organizations
Many academic detailing programs exist around the world. In the United States, university-based state programs exist in Vermont, Oregon and South Carolina. The nonprofit organization Alosa Health runs an academic detailing program in Pennsylvania, Massachusetts and Washington, DC called the Independent Drug Information Service (IDIS). The National Resource Center for Academic Detailing (NaRCAD), funded by the Agency for Healthcare Research and Quality (AHRQ) in 2010, was created to help organizations with limited resources to establish and improve their own programs and to create a network of academic detailing programs. The U.S. Department of Veterans Affairs Pharmacy Benefits Management pilot tested the National Academic Detailing Service in 2010 to enhance veterans' outcomes by empowering clinicians and promoting the use of evidence-based treatments delivered by clinical pharmacy specialists. After the pilot, in March 2015, the Interim Under Secret
https://en.wikipedia.org/wiki/Global%20cascades%20model | Global cascades models are a class of models aiming to model large and rare cascades that are triggered by exogenous perturbations which are relatively small compared with the size of the system. The phenomenon occurs ubiquitously in various systems, like information cascades in social systems, stock market crashes in economic systems, and cascading failure in physics infrastructure networks. The models capture some essential properties of such phenomenon.
Model description
To describe and understand global cascades, a network-based threshold model was proposed by Duncan J. Watts in 2002. The model is motivated by considering a population of individuals who must decide between two alternatives, and whose choices depend explicitly on other people's states or choices. The model assumes that an individual will adopt a new opinion (product or state) if a threshold fraction of his/her neighbors have adopted it, and otherwise keeps the original state. To initiate the model, the new opinion is randomly distributed among a small fraction of individuals in the network. If this fraction satisfies a particular condition, a large cascade can be triggered (see global cascades condition). A phase transition phenomenon has been observed: when the network of interpersonal influences is sparse, the size of the cascades exhibits a power-law distribution and the most highly connected nodes are critical in triggering cascades, whereas if the network is relatively dense, the distribution shows a bimodal form, in which nodes of average degree gain importance by serving as triggers.
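The threshold dynamics can be sketched as follows. This is an illustrative implementation on an Erdős–Rényi random graph; all parameter values and names here are our own choices, not values from Watts's paper:

```python
import random

def watts_cascade(n=1000, avg_degree=4.0, threshold=0.18, n_seeds=5, seed=3):
    rng = random.Random(seed)
    p = avg_degree / (n - 1)
    nbrs = [set() for _ in range(n)]
    for i in range(n):                      # Erdős–Rényi random graph G(n, p)
        for j in range(i + 1, n):
            if rng.random() < p:
                nbrs[i].add(j)
                nbrs[j].add(i)
    active = set(rng.sample(range(n), n_seeds))  # randomly seeded new opinion
    changed = True
    while changed:                          # iterate adoption to a fixed point
        changed = False
        for v in range(n):
            if v in active or not nbrs[v]:
                continue
            # Adopt when the fraction of adopting neighbors reaches the threshold.
            if len(nbrs[v] & active) / len(nbrs[v]) >= threshold:
                active.add(v)
                changed = True
    return len(active) / n                  # final cascade size as a fraction
```

With a uniform threshold, whether the tiny seed grows into a global cascade depends on the connectivity: sweeping `avg_degree` reveals the cascade window the article describes.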
Several generalizations of the Watts threshold model have been proposed and analyzed in the following years. For example, the original model has been combined with independent interaction models to provide a generalized model of social contagion, which classifies the behavior of the system into three universal classes. It has also been generalized on modular networks deg |
https://en.wikipedia.org/wiki/Stillbirth | Stillbirth is typically defined as fetal death at or after 20 or 28 weeks of pregnancy, depending on the source. It results in a baby born without signs of life. A stillbirth can often result in the feeling of guilt or grief in the mother. The term is in contrast to miscarriage, which is an early pregnancy loss, and Sudden Infant Death Syndrome, where the baby dies a short time after being born alive.
Often the cause is unknown. Causes may include pregnancy complications such as pre-eclampsia and birth complications, problems with the placenta or umbilical cord, birth defects, infections such as malaria and syphilis, and poor health in the mother. Risk factors include a mother's age over 35, smoking, drug use, use of assisted reproductive technology, and first pregnancy. Stillbirth may be suspected when no fetal movement is felt. Confirmation is by ultrasound.
Worldwide prevention of most stillbirths is possible with improved health systems. Around half of stillbirths occur during childbirth, with this being more common in the developing than developed world. Otherwise, depending on how far along the pregnancy is, medications may be used to start labor or a type of surgery known as dilation and evacuation may be carried out. Following a stillbirth, women are at higher risk of another one; however, most subsequent pregnancies do not have similar problems. Depression, financial loss, and family breakdown are known complications.
Worldwide in 2021, there were an estimated 1.9 million stillbirths that occurred after 28 weeks of pregnancy (about 1 for every 72 births). More than three-quarters of estimated stillbirths in 2021 occurred in sub-Saharan Africa and South Asia, with 47% of the global total in sub-Saharan Africa and 32% in South Asia. Stillbirth rates have declined, though more slowly since the 2000s.
Causes
As of 2016, there is no international classification system for stillbirth causes. The causes of a large percentage of stillbirths are unknown, even in |
https://en.wikipedia.org/wiki/Water%20cycle | The water cycle, also known as the hydrologic cycle or the hydrological cycle, is a biogeochemical cycle that describes the continuous movement of water on, above and below the surface of the Earth. The mass of water on Earth remains fairly constant over time but the partitioning of the water into the major reservoirs of ice, fresh water, saline water (salt water) and atmospheric water is variable depending on a wide range of climatic variables. The water moves from one reservoir to another, such as from river to ocean, or from the ocean to the atmosphere, by the physical processes of evaporation, transpiration, condensation, precipitation, infiltration, surface runoff, and subsurface flow. In doing so, the water goes through different forms: liquid, solid (ice) and vapor. The ocean plays a key role in the water cycle as it is the source of 86% of global evaporation.
The water cycle involves the exchange of energy, which leads to temperature changes. When water evaporates, it takes up energy from its surroundings and cools the environment. When it condenses, it releases energy and warms the environment. These heat exchanges influence climate.
The evaporative phase of the cycle purifies water, causing salts and other solids picked up during the cycle to be left behind, and then the condensation phase in the atmosphere replenishes the land with freshwater. The flow of liquid water and ice transports minerals across the globe. It is also involved in reshaping the geological features of the Earth, through processes including erosion and sedimentation. The water cycle is also essential for the maintenance of most life and ecosystems on the planet.
Description
Overall process
The water cycle is powered from the energy emitted by the sun. This energy heats water in the ocean and seas. Water evaporates as water vapor into the air. Some ice and snow sublimates directly into water vapor. Evapotranspiration is water transpired from plants and evaporated from the soil. Th |
https://en.wikipedia.org/wiki/Mu%20to%20E%20Gamma | The Mu to E Gamma (MEG) is a particle physics experiment dedicated to measuring the decay of the muon into an electron and a photon, a decay mode which is heavily suppressed in the Standard Model by lepton flavour conservation, but enhanced in supersymmetry and grand unified theories. It is located at the Paul Scherrer Institute and began taking data September 2008.
Results
In May 2016 the MEG experiment published the world's leading upper limit on the branching ratio of this decay:
at 90% confidence level, based on data collected in 2009–2013. This improved on the limit from the prior MEGA experiment by a factor of about 28.
Apparatus
MEG uses a continuous muon beam (3 × 10⁷/s) incident on a plastic target. The decay is reconstructed to look for a back-to-back positron and monochromatic photon (52.8 MeV). A liquid xenon scintillator with photomultiplier tubes measures the photon energy, and a drift chamber in a magnetic field detects the positrons.
The MEG collaboration presented upgrade plans for MEG-II at the Particles and Nuclei International Conference 2014, with one order of magnitude greater sensitivity and increased muon production, to begin data taking in 2017. More experiments are planned to explore rare muon transitions, such as COMET, Mu2e and Mu3e. |
https://en.wikipedia.org/wiki/Stanis%C5%82aw%20Trybu%C5%82a | Stanisław Czesław Trybuła (2 January 1932 – 28 January 2008) was a Polish mathematician and statistician.
He was a pupil at the state high school in Rypin, Poland, and graduated from the First High School in Toruń in 1950. He studied mathematics at Nicolaus Copernicus University in Toruń and at Wrocław University. In 1955 he defended his master's thesis, on some problems of game theory, prepared under the supervision of Hugo Steinhaus at Wrocław University. In the same year he became a faculty member at the Department of Mathematics, Wrocław University of Technology. In 1959 he was distinguished as a candidate of science, and in 1960 he defended his PhD on minimax estimation under the supervision of Hugo Steinhaus. For many years Trybuła cooperated with, or was a staff member of, the Institute of Power Systems (IASE) in Wrocław, where he worked out an original method for the identification of complex power systems. From 1968 he was a faculty member of the Institute of Mathematics, Faculty of Fundamental Problems of Technology, Wrocław University of Technology. He obtained his habilitation at the Faculty of Mathematics, Physics and Chemistry, Wrocław University, in 1968, based on his seminal works on sequential analysis for stochastic processes. He was the advisor of 14 PhD theses. He took early retirement in 1998 and thereafter wrote academic books on statistics and game theory.
Stanisław Trybuła was the co-author of the WJ bidding system in bridge, also known as Polish Club (see also Trybula transfers, Wesolowski texas, Gawrys fourth suit forcing). |
https://en.wikipedia.org/wiki/Cultural%20materialism%20%28cultural%20studies%29 | Cultural materialism in literary theory and cultural studies traces its origin to the work of the left-wing literary critic Raymond Williams. Cultural materialism espouses analysis based in critical theory, in the tradition of the Frankfurt School.
Overview
Cultural materialism emerged as a theoretical movement in the early 1980s along with new historicism, an American approach to early modern literature, with which it shares common ground. The term was coined by Williams, who used it to describe a theoretical blending of leftist culturalism and Marxist analysis. Cultural materialists deal with specific historical documents and attempt to analyze and recreate the zeitgeist of a particular moment in history.
Williams viewed culture as a "productive process", part of the means of production, and cultural materialism often identifies what he called "residual", "emergent" and "oppositional" cultural elements. Following in the tradition of Herbert Marcuse, Antonio Gramsci and others, cultural materialists extend the class-based analysis of traditional Marxism (Neo-Marxism) by means of an additional focus on the marginalized.
Cultural materialists analyze the processes by which hegemonic forces in society appropriate canonical and historically important texts, such as Shakespeare and Austen, and utilize them in an attempt to validate or inscribe certain values on the cultural imaginary. Jonathan Dollimore and Alan Sinfield, authors of Political Shakespeare, have had considerable influence in the development of this movement and their book is considered to be a seminal text. They have identified four defining characteristics of cultural materialism as a theoretical device:
Historical context
Close textual analysis
Political commitment
Theoretical method
Cultural materialists seek to draw attention to the processes being employed by contemporary power structures, such as the church, the state or the academy, to disseminate ideology. To do this they explore a text’s |
https://en.wikipedia.org/wiki/Analytical%20regularization | In physics and applied mathematics, analytical regularization is a technique used to convert boundary value problems which can be written as Fredholm integral equations of the first kind involving singular operators into equivalent Fredholm integral equations of the second kind. The latter may be easier to solve analytically and can be studied with discretization schemes like the finite element method or the finite difference method because they are pointwise convergent. In computational electromagnetics, it is known as the method of analytical regularization. It was first used in mathematics during the development of operator theory before acquiring a name.
Method
Analytical regularization proceeds as follows. First, the boundary value problem is formulated as an integral equation. Written as an operator equation, this will take the form
with representing boundary conditions and inhomogeneities, representing the field of interest, and the integral operator describing how Y is given from X based on the physics of the problem.
Next, is split into , where is invertible and contains all the singularities of and is regular. After splitting the operator and multiplying by the inverse of , the equation becomes
or
which is now a Fredholm equation of the second kind because by construction is compact on the Hilbert space of which is a member.
In general, several choices for will be possible for each problem. |
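In symbols (the article's operator names are elided, so the notation below is our own): writing the operator equation as $AX = Y$ and splitting $A = S + K$, with $S$ invertible and carrying the singular part and $K$ compact, one obtains

```latex
\[
  (S + K)\,X = Y
  \quad\Longrightarrow\quad
  X + S^{-1} K \, X = S^{-1} Y ,
\]
```

a Fredholm equation of the second kind, since $S^{-1}K$ is compact on the relevant Hilbert space.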
https://en.wikipedia.org/wiki/C-list%20%28computer%20security%29 | In capability-based computer security, a C-list is an array of capabilities, usually associated with a process and maintained by the kernel. The program running in the process does not manipulate capabilities directly, but refers to them via C-list indexes—integers indexing into the C-list.
The file descriptor table in Unix is an example of a C-list. Unix processes do not manipulate file descriptors directly, but refer to them via file descriptor numbers, which are C-list indexes.
In the KeyKOS and EROS operating systems, a process's capability registers constitute a C-list.
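The Unix case can be made concrete with a short sketch (standard-library calls only; the temporary file is just for illustration):

```python
import os
import tempfile

# A Unix file descriptor is a C-list index: the process holds only a small
# integer, and the kernel maps that index to the open-file capability.
fd, path = tempfile.mkstemp()    # kernel installs a capability, returns its index
os.write(fd, b"hello")           # the capability is named by index only
os.lseek(fd, 0, os.SEEK_SET)
data = os.read(fd, 5)            # reads back b"hello" via the same index
os.close(fd)                     # drops the C-list entry at that index
os.unlink(path)
```

At no point does the process hold a pointer to the kernel's open-file object; every operation goes through the integer index, which is exactly the C-list discipline.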
See also
Access-control list |
https://en.wikipedia.org/wiki/Turkana%20Basin | The greater Turkana Basin in East Africa (mainly northwestern Kenya and southern Ethiopia, smaller parts of eastern Uganda and southeastern South Sudan) determines a large endorheic basin, a drainage basin with no outflow centered around the north-southwards directed Gregory Rift system in Kenya and southern Ethiopia. The deepest point of the basin is the endorheic Lake Turkana, a brackish soda lake with a very high ecological productivity in the Gregory Rift.
A narrower definition for the term Turkana Basin is also in widespread use and means Lake Turkana and its environment within the confines of the Gregory Rift in Kenya and Ethiopia. This includes the lower Omo River valley in Ethiopia. The Basin in the narrower definition is a site of geological subsidence containing one of the most continuous and temporally well controlled fossil records of the Plio-Pleistocene with some fossils as old as the Cretaceous. Among the Basin's critical fossiliferous sites are Lothagam, Allia Bay, and Koobi Fora.
Geography
Lake Turkana sits at the center of the Turkana Basin and is flanked by the Chalbi Desert to the east, the Lotakipi Plains to the north, Karasuk to the west and Samburu to the south. Included within these regions are desert scrub, desert grass and shrubland, and scattered acacia or open grasslands. The only true perennial river is the Omo River in Ethiopia, in the northern part of the basin, which discharges into the lake on its northern shore and supplies the lake with more than 98% of its annual water inflow. The two intermittent rivers – which almost alone contribute the remaining 2% of water inflow – are the Turkwel River and the Kerio River in Kenya, in the western part of the basin. Much of the Turkana Basin today can be described as arid scrubland or even desert. The exception is the Omo-Gibe River valley to the north.
Important towns within the Turkana Basin include Lokitaung, Kakuma, Lodwar, Lorogumu, Ileret and Kargi. The Turkana people inhabit the w |
https://en.wikipedia.org/wiki/Microbes%20and%20Man | Microbes and Man is a popularising book by the English microbiologist John Postgate FRS on the role of microorganisms in human society, first published in 1969, and still in print in 2017. Critics called it a "classic" and "a pleasure to read".
Book
Contents
The book is structured as follows:
1 Man and microbes
2 Microbiology
3 Microbes in society
4 Interlude: how to handle microbes
5 Microbes in nutrition
6 Microbes in production
7 Deterioration, decay and pollution
8 Disposal and cleaning-up
9 Second interlude: microbiologists and man
10 Microbes in evolution
11 Microbes in the future
Illustrations
The 4th edition has 32 illustrations, ranging from photographs of microscopic algae, protozoa, fungi, viruses and bacteria, to the macroscopic effects of microbes such as a sulphur-forming lake in Libya and fish killed by bacterial reduction of sulphate in water.
Editions
1st edition, Cambridge University Press, 1969
2nd edition, Cambridge University Press, 1986
3rd edition, Cambridge University Press, 1992
4th edition, Cambridge University Press, 2000
The book has been translated into nine languages: Arabic, Chinese, Czech, French, German, Japanese, Polish, Portuguese, and Spanish.
Reception
The Guardian described the book as "a passionate case for the importance of micro-organisms".
In his textbook Essential Microbiology, Stuart Hogg recommends the book to readers who want a general overview of microbes and their uses, stating "there can be no better starting point than John Postgate's classic".
New Scientist described the book as "a pleasure to read from first page to last. It is a literal statement. Start to read it and the first page, describing the astonishing dispersion of microbes, from the upper atmosphere to the depths of the sea, will provide any reader with enough wonder and excitement to take them through to the last page and the surface of Venus." The magazine commented that Postgate's "admirable, elegantly written and painlessl |
https://en.wikipedia.org/wiki/Standard%20score | In statistics, the standard score is the number of standard deviations by which the value of a raw score (i.e., an observed value or data point) is above or below the mean value of what is being observed or measured. Raw scores above the mean have positive standard scores, while those below the mean have negative standard scores.
It is calculated by subtracting the population mean from an individual raw score and then dividing the difference by the population standard deviation. This process of converting a raw score into a standard score is called standardizing or normalizing (however, "normalizing" can refer to many types of ratios; see Normalization for more).
Standard scores are most commonly called z-scores; the two terms may be used interchangeably, as they are in this article. Other equivalent terms in use include z-value, z-statistic, normal score, standardized variable and pull in high energy physics.
Computing a z-score requires knowledge of the mean and standard deviation of the complete population to which a data point belongs; if one only has a sample of observations from the population, then the analogous computation using the sample mean and sample standard deviation yields the t-statistic.
Calculation
If the population mean and population standard deviation are known, a raw score
x is converted into a standard score by z = (x − μ) / σ,
where:
μ is the mean of the population,
σ is the standard deviation of the population.
The absolute value of z represents the distance between that raw score x and the population mean in units of the standard deviation. z is negative when the raw score is below the mean, positive when above.
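The calculation is simple arithmetic; a minimal sketch (the function name and the IQ-style parameter values are our own illustration):

```python
def z_score(x: float, mu: float, sigma: float) -> float:
    """Standard score: how many population standard deviations x lies from the mean."""
    return (x - mu) / sigma

# Illustrative values: mu = 100, sigma = 15 (as on many IQ-style scales).
z_score(130, 100, 15)  # -> 2.0, two standard deviations above the mean
z_score(85, 100, 15)   # -> -1.0, one standard deviation below the mean
```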
Calculating z using this formula requires use of the population mean and the population standard deviation, not the sample mean or sample deviation. However, knowing the true mean and standard deviation of a population is often an unrealistic expectation, except in cases such as standardized testing, where the entire population is m |
https://en.wikipedia.org/wiki/Solicited-node%20multicast%20address | A Solicited-Node multicast address is an IPv6 multicast address used by the Neighbor Discovery Protocol to determine the link-layer address associated with a given IPv6 address. It is also used to check whether an address is already in use on the local link, through a process called Duplicate Address Detection (DAD). Solicited-Node multicast addresses are generated from the host's IPv6 unicast or anycast addresses, and each interface must have a Solicited-Node multicast address associated with it.
A Solicited-Node address is created by taking the least-significant 24 bits of a unicast or anycast address and appending them to the prefix ff02::1:ff00:0/104.
Example
Assume a host with a unicast/anycast IPv6 address of fe80::2aa:ff:fe28:9c5a. Its Solicited-Node multicast address will be ff02::1:ff28:9c5a.
fe80::2aa:ff:fe28:9c5a IPv6 unicast/anycast address (compressed notation)
fe80:0000:0000:0000:02aa:00ff:fe28:9c5a IPv6 unicast/anycast address (uncompressed notation)
(the least-significant 24 bits: 28:9c5a)
ff02::1:ff00:0/104 Solicited-Node multicast address prefix
ff02:0000:0000:0000:0000:0001:ff00:0000/104 (uncompressed)
(the first 104 bits: ff02:0000:0000:0000:0000:0001:ff)
ff02:0000:0000:0000:0000:0001:ff28:9c5a Solicited-Node multicast address (uncompressed notation)
ff02::1:ff28:9c5a Solicited-Node multicast address (compressed notation) |
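The derivation above can be reproduced with Python's ipaddress module (a minimal sketch; the function name is our own):

```python
import ipaddress

def solicited_node(addr: str) -> ipaddress.IPv6Address:
    """Solicited-Node multicast address for an IPv6 unicast/anycast address."""
    a = ipaddress.IPv6Address(addr)
    low24 = int(a) & 0xFFFFFF                           # least-significant 24 bits
    prefix = int(ipaddress.IPv6Address("ff02::1:ff00:0"))  # /104 prefix
    return ipaddress.IPv6Address(prefix | low24)

print(solicited_node("fe80::2aa:ff:fe28:9c5a"))  # ff02::1:ff28:9c5a
```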
https://en.wikipedia.org/wiki/Densa | Densa has been used as the name of a number of fictional organizations parodying Mensa International, the organization for highly intelligent people. Densa is ostensibly an organization for people insufficiently intelligent to be members of Mensa. The name Densa has been said to be an acronym for "Diversely Educated Not Seriously Affected." The name Densa is a portmanteau of denser (in the sense of stupider) and Mensa.
There is no single formal Densa organization; instead, various projects using that name exist as informal groups, usually meant by their founders as a joke rather than a serious organization. Even within Mensa itself, a SIG (special interest group, an informal sub-group of Mensans sharing a particular common interest) has existed for Densa, which, like all Mensa SIGs, required Mensa membership for admission, while it was active.
The concept of an organization for the mentally dense originated in Boston & Outskirts Mensa Bulletin (BOMB), August 1974, in "A-Bomb-inable Puzzle II" by John D. Coons. The puzzle involved "The Boston chapter of Densa, the low IQ society". Subsequent issues had additional puzzles with gags about the group and were widely reprinted by the bulletins of other Mensa groups before the concept of a low IQ group gained wider circulation in the 1970s, with other people creating quizzes, etc.
A humor book called The Densa Quiz: The Official & Complete Dq Test of the International Densa Society was written in 1983 by Stephen Price and J. Webster Shields. |
https://en.wikipedia.org/wiki/Cardinality%20of%20the%20continuum | In set theory, the cardinality of the continuum is the cardinality or "size" of the set of real numbers , sometimes called the continuum. It is an infinite cardinal number and is denoted by (lowercase Fraktur "c") or .
The real numbers are more numerous than the natural numbers . Moreover, has the same number of elements as the power set of Symbolically, if the cardinality of is denoted as , the cardinality of the continuum is
This was proven by Georg Cantor in his uncountability proof of 1874, part of his groundbreaking study of different infinities. The inequality was later stated more simply in his diagonal argument in 1891. Cantor defined cardinality in terms of bijective functions: two sets have the same cardinality if, and only if, there exists a bijective function between them.
Between any two real numbers a < b, no matter how close they are to each other, there are always infinitely many other real numbers, and Cantor showed that there are as many of them as there are in the whole set of real numbers. In other words, the open interval (a,b) is equinumerous with . This is also true for several other infinite sets, such as any n-dimensional Euclidean space (see space-filling curve). That is,
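For instance (an explicit map of our own choosing, not one from the article), the tangent function gives a bijection from any open interval $(a,b)$ onto the whole real line:

```latex
\[
  f\colon (a,b) \to \mathbb{R}, \qquad
  f(x) = \tan\!\left( \pi \left( \frac{x-a}{b-a} - \frac{1}{2} \right) \right),
\]
```

which is continuous and strictly increasing, with $f(x)\to -\infty$ as $x\to a^{+}$ and $f(x)\to +\infty$ as $x\to b^{-}$, so $(a,b)$ and $\mathbb{R}$ are equinumerous.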
The smallest infinite cardinal number is (aleph-null). The second smallest is (aleph-one). The continuum hypothesis, which asserts that there are no sets whose cardinality is strictly between and means that . The truth or falsity of this hypothesis is undecidable and cannot be proven within the widely used Zermelo–Fraenkel set theory with axiom of choice (ZFC).
Properties
Uncountability
Georg Cantor introduced the concept of cardinality to compare the sizes of infinite sets. He famously showed that the set of real numbers is uncountably infinite. That is, is strictly greater than the cardinality of the natural numbers, :
In practice, this means that there are strictly more real numbers than there are integers. Cantor proved this statement in several different |
https://en.wikipedia.org/wiki/Thermal%20cycler | The thermal cycler (also known as a thermocycler, PCR machine or DNA amplifier) is a laboratory apparatus most commonly used to amplify segments of DNA via the polymerase chain reaction (PCR). Thermal cyclers may also be used in laboratories to facilitate other temperature-sensitive reactions, including restriction enzyme digestion or rapid diagnostics. The device has a thermal block with holes where tubes holding the reaction mixtures can be inserted. The cycler then raises and lowers the temperature of the block in discrete, pre-programmed steps.
History
The earliest thermal cyclers were designed for use with the Klenow fragment of DNA polymerase I. Since this enzyme is destroyed during each heating step of the amplification process, new enzyme had to be added every cycle. This led to a cumbersome machine based on an automated pipettor, with open reaction tubes. Later, the PCR process was adapted to the use of thermostable DNA polymerase from Thermus aquaticus, which greatly simplified the design of the thermal cycler. While in some old machines the block is submerged in an oil bath to control temperature, in modern PCR machines a Peltier element is commonly used. Quality thermal cyclers often contain silver blocks to achieve fast temperature changes and uniform temperature throughout the block. Other cyclers have multiple blocks with high heat capacity, each of which is kept at a constant temperature, and the reaction tubes are moved between them by means of an automated process. Miniaturized thermal cyclers have been created in which the reaction mixture moves via channel through hot and cold zones on a microfluidic chip. Thermal cyclers designed for quantitative PCR have optical systems which enable fluorescence to be monitored during reaction cycling.
Modern innovations
Modern thermal cyclers are equipped with a heated lid that presses against the lids of the reaction tubes. This prevents condensation of water from the reaction mixtures on the insides |
https://en.wikipedia.org/wiki/Deevirus | Deevirus is a genus of viruses in the realm Ribozyviria, containing the single species Deevirus actinopterygii. Various ray-finned fishes (Actinopterygii) serve as its hosts. |
https://en.wikipedia.org/wiki/Drug%20carrier | A drug carrier or drug vehicle is a substrate used in the process of drug delivery which serves to improve the selectivity, effectiveness, and/or safety of drug administration. Drug carriers are primarily used to control the release of drugs into systemic circulation. This can be accomplished either by slow release of a particular drug over a long period of time (typically diffusion) or by triggered release at the drug's target by some stimulus, such as changes in pH, application of heat, and activation by light. Drug carriers are also used to improve the pharmacokinetic properties, specifically the bioavailability, of many drugs with poor water solubility and/or membrane permeability.
A wide variety of drug carrier systems have been developed and studied, each of which has unique advantages and disadvantages. Some of the more popular types of drug carriers include liposomes, polymeric micelles, microspheres, and nanoparticles. Different methods of attaching the drug to the carrier have been implemented, including adsorption, integration into the bulk structure, encapsulation, and covalent bonding. Different types of drug carrier utilize different methods of attachment, and some carriers can even implement a variety of attachment methods.
Carrier types
Liposomes
Liposomes are structures which consist of at least one lipid bilayer surrounding an aqueous core. This hydrophobic/hydrophilic composition is particularly useful for drug delivery as these carriers can accommodate a number of drugs of varying lipophilicity. Disadvantages associated with using liposomes as drug carriers involve poor control over drug release. Drugs which have high membrane-permeability can readily 'leak' from the carrier, while optimization of in vivo stability can cause drug release by diffusion to be a slow and inefficient process. Much of the current research involving liposomes is focused on improving the delivery of anticancer drugs such as doxorubicin and paclitaxel.
Polymeric micelles |
https://en.wikipedia.org/wiki/Incest%20taboo | An incest taboo is any cultural rule or norm that prohibits sexual relations between certain members of the same family, mainly between individuals related by blood. All human cultures have norms that exclude certain close relatives from those considered suitable or permissible sexual or marriage partners, making such relationships taboo. However, different norms exist among cultures as to which blood relations are permissible as sexual partners and which are not. Sexual relations between related persons which are subject to the taboo are called incestuous relationships.
Some cultures proscribe sexual relations between clan-members, even when no traceable biological relationship exists, while members of other clans are permissible irrespective of the existence of a biological relationship. In many cultures, certain types of cousin relations are preferred as sexual and marital partners, whereas in others these are taboo. Some cultures permit sexual and marital relations between aunts/uncles and nephews/nieces. In some instances, brother–sister marriages have been practised by the elites with some regularity. Parent–child and sibling–sibling unions are almost universally taboo.
Origin
Debate about the origin of the incest taboo has often been framed as a question of whether it is based in nature or nurture.
One explanation sees the incest taboo as a cultural implementation of a biologically evolved preference for sexual partners with whom one is unlikely to share genes, since inbreeding may have detrimental outcomes. The most widely held hypothesis proposes that the so-called Westermarck effect discourages adults from engaging in sexual relations with individuals with whom they grew up. The existence of the Westermarck effect has achieved some empirical support.
Another school argues that the incest prohibition is a cultural construct which arises as a side effect of a general human preference for group exogamy, which arises because intermarriage between groups c |
https://en.wikipedia.org/wiki/Orgastic%20potency | Within the work of the Austrian psychoanalyst Wilhelm Reich (1897–1957), orgastic potency is a human's natural ability to experience an orgasm with certain psychosomatic characteristics and resulting in full sexual gratification.
For Reich, "orgastic impotence" is an acquired fear of sexual excitation, resulting in the inability to find full sexual gratification (not to be confused with anorgasmia, the inability to reach orgasm). This always resulted in neurosis, according to Reich, because that person could never discharge all built-up libido, which Reich regarded as actual biological or bioelectric energy. According to Reich, "not a single neurotic individual possesses orgastic potency" and, inversely, all people free from neuroses have orgastic potency.
Reich coined the term orgastic potency in 1924 and described the concept in his 1927 book Die Funktion des Orgasmus, the manuscript of which he presented to Sigmund Freud on the latter's 70th birthday. Though Reich regarded his work as complementing Freud's original theory of anxiety neurosis, Freud was ambivalent in his reception. Freud's view was that there was no single cause of neurosis.
Reich continued to use the concept as a foundation of a person's psychosexual health in his later therapeutic methods, such as character analysis and vegetotherapy. During the period 1933–1937, he attempted to ground his orgasm theory in physiology, both theoretically and experimentally, as he published in the articles: The Orgasm as an Electrophysiological Discharge (1934), Sexuality and Anxiety: The Basic Antithesis of Vegetative Life (1934) and The Bioelectrical Function of Sexuality and Anxiety (1937).
Background
Reich developed his orgasm theory between 1921 and 1924 and it formed the basis for much of his later work, including the theory of character analysis. The starting point of Reich's orgasm theory was his clinical observation of genital disturbance in all neurotics, which he presented in November 1923, in the |
https://en.wikipedia.org/wiki/Aneuploidy | Aneuploidy is the presence of an abnormal number of chromosomes in a cell, for example a human cell having 45 or 47 chromosomes instead of the usual 46. It does not include a difference of one or more complete sets of chromosomes. A cell with any number of complete chromosome sets is called a euploid cell.
An extra or missing chromosome is a common cause of some genetic disorders. Some cancer cells also have abnormal numbers of chromosomes. About 68% of human solid tumors are aneuploid. Aneuploidy originates during cell division when the chromosomes do not separate properly between the two cells (nondisjunction). Most cases of aneuploidy in the autosomes result in miscarriage, and the most common extra autosomal chromosomes among live births are 21, 18 and 13. Chromosome abnormalities are detected in 1 of 160 live human births. Autosomal aneuploidy is more dangerous than sex chromosome aneuploidy, as autosomal aneuploidy is almost always lethal to embryos that cease developing because of it.
Chromosomes
Most cells in the human body have 23 pairs of chromosomes, or a total of 46 chromosomes. (The sperm and egg, or gametes, each have 23 unpaired chromosomes; circulating red blood cells, which lose their nucleus as they mature in the bone marrow, have no nucleus and therefore no chromosomes.)
One copy of each pair is inherited from the mother and the other copy is inherited from the father. The first 22 pairs of chromosomes (called autosomes) are numbered from 1 to 22, from largest to smallest. The 23rd pair of chromosomes are the sex chromosomes. Typical females have two X chromosomes, while typical males have one X chromosome and one Y chromosome. The characteristics of the chromosomes in a cell as they are seen under a light microscope are called the karyotype.
During meiosis, when germ cells divide to create sperm and egg (gametes), each half should have the same number of chromosomes. But sometim |
https://en.wikipedia.org/wiki/Intelligent%20vehicular%20ad%20hoc%20network |
Intelligent vehicular ad hoc networks (InVANETs) use WiFi IEEE 802.11p (the WAVE standard) to provide effective communication between vehicles with dynamic mobility. They can enable applications such as media communication between vehicles, as well as methods to track automotive vehicles. InVANET is not foreseen to replace current mobile (cellular phone) communication standards.
"Older" designs within the IEEE 802.11 scope may refer just to IEEE 802.11b/g. More recent designs refer to the latest issues of IEEE 802.11p (WAVE, draft status). Due to inherent lag times, only the latter within the IEEE 802.11 family is capable of coping with the typical dynamics of vehicle operation.
Automotive vehicular information can be viewed on electronic maps using the Internet or specialized software. The advantage of a WiFi-based navigation system is that it can effectively locate a vehicle even inside large campuses such as universities, airports, and tunnels.
InVANET can be used as part of automotive electronics, which must identify an optimal path for navigation with minimal traffic intensity. The system can also be used as a city guide to locate and identify landmarks in a new city.
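Route selection under traffic intensity is classically a weighted shortest-path problem. A sketch under stated assumptions follows: the road graph and congestion-adjusted edge weights are invented for illustration, and Dijkstra's algorithm stands in for whatever routing method a real InVANET deployment would use.

```python
import heapq

def best_route(graph, start, goal):
    """Dijkstra's algorithm over a road graph whose edge weights combine
    distance and traffic intensity (here: travel time in minutes)."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, weight in graph.get(node, {}).items():
            if neighbor not in seen:
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return float("inf"), []

# Hypothetical road graph: weights are congestion-adjusted travel times.
roads = {
    "A": {"B": 5, "C": 2},
    "B": {"D": 4},
    "C": {"B": 1, "D": 9},
    "D": {},
}
print(best_route(roads, "A", "D"))  # → (7, ['A', 'C', 'B', 'D'])
```

Note that the congested direct edge C→D (weight 9) is avoided in favour of the lighter detour through B.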
Communication capabilities in vehicles are the basis of an envisioned InVANET or intelligent transportation systems (ITS). Vehicles are enabled to communicate among themselves (vehicle-to-vehicle, V2V) and via roadside access points (vehicle-to-roadside, V2R), also called Road Side Units (RSUs). Vehicular communication is expected to contribute to safer and more efficient roads by providing timely information to drivers, and also to make travel more convenient. The integration of V2V and V2R communication is beneficial because V2R provides better service in sparse networks and over long distances, whereas V2V enables direct communication over small to medium distances/areas and at locations where roadside access points are not available.
Providing vehicle–vehicle and vehicle–roa |
https://en.wikipedia.org/wiki/Capacitively%20coupled%20plasma | A capacitively coupled plasma (CCP) is one of the most common types of industrial plasma sources. It essentially consists of two metal electrodes separated by a small distance, placed in a reactor. The gas pressure in the reactor can be lower than atmospheric, or it can be atmospheric.
Description
A typical CCP system is driven by a single radio-frequency (RF) power supply, typically at 13.56 MHz. One of two electrodes is connected to the power supply, and the other one is grounded. As this configuration is similar in principle to a capacitor in an electric circuit, the plasma formed in this configuration is called a capacitively coupled plasma.
When an electric field is generated between the electrodes, atoms are ionized and release electrons. The electrons in the gas are accelerated by the RF field and can ionize the gas directly or indirectly by collisions, producing secondary electrons. When the electric field is strong enough, it can lead to what is known as an electron avalanche. After avalanche breakdown, the gas becomes electrically conductive due to abundant free electrons. This is often accompanied by light emission from excited atoms or molecules in the gas. When visible light is produced, plasma generation can be observed even with the naked eye.
A variation on capacitively coupled plasma involves isolating one of the electrodes, usually with a capacitor. The capacitor appears like a short circuit to the high frequency RF field, but like an open circuit to direct current (DC) field. Electrons impinge on the electrode in the sheath, and the electrode quickly acquires a negative charge (or self-bias) because the capacitor does not allow it to discharge to ground. This sets up a secondary, DC field across the plasma in addition to the alternating current (AC) field. Massive ions are unable to react to the quickly changing AC field, but the strong, persistent DC field accelerates them toward the self-biased electrode. These energetic ions are exploited in m |
https://en.wikipedia.org/wiki/Martin%27s%20maximum | In set theory, a branch of mathematical logic, Martin's maximum (MM), introduced by Foreman, Magidor & Shelah (1988) and named after Donald Martin, is a generalization of the proper forcing axiom, itself a generalization of Martin's axiom. It represents the broadest class of forcings for which a forcing axiom is consistent.
Martin's maximum states that if D is a collection of ℵ1 dense subsets of a notion of forcing that preserves stationary subsets of ω1, then there is a D-generic filter. Forcing with a ccc notion of forcing preserves stationary subsets of ω1, thus MM extends MA(ℵ1). If (P,≤) is not a stationary set preserving notion of forcing, i.e., there is a stationary subset of ω1 which becomes nonstationary when forcing with (P,≤), then there is a collection D of ℵ1 dense subsets of (P,≤) such that there is no D-generic filter. This is why MM is called the maximal extension of Martin's axiom.
The existence of a supercompact cardinal implies the consistency of Martin's maximum. The proof uses Shelah's theories of semiproper forcing and iteration with revised countable supports.
MM implies that the value of the continuum is ℵ2 and that the ideal of nonstationary sets on ω1 is ℵ2-saturated. It further implies stationary reflection, i.e., if S is a stationary subset of some regular cardinal κ ≥ ω2 and every element of S has countable cofinality, then there is an ordinal α < κ such that S ∩ α is stationary in α. In fact, S contains a closed subset of order type ω1.
Notes |
https://en.wikipedia.org/wiki/Rhizopus%20stolonifer | Rhizopus stolonifer is commonly known as black bread mold. It is a member of Zygomycota and considered the most important species in the genus Rhizopus. It is one of the most common fungi in the world and has a global distribution although it is most commonly found in tropical and subtropical regions. It is a common agent of decomposition of stored foods. Like other members of the genus Rhizopus, R. stolonifer grows rapidly, mostly in indoor environments.
History
This fungus was first discovered by the German scientist Christian Gottfried Ehrenberg in 1818 as Rhizopus nigricans. The name was changed in 1902 to Rhizopus stolonifer by the French mycologist J. P. Vuillemin.
Habitat and ecology
Rhizopus stolonifer is a worldwide distributed species. It is found on all types of mouldy materials. It is often one of the first molds to appear on stale bread. It can exist in the soil as well as in the air. A variety of natural substrata are colonized by this species because R. stolonifer can tolerate broad variations in the concentration of essential nutrients and can use carbon and nitrogen combined in diverse forms.
In the laboratory, this fungus grows well on different media, including those that contain ammonium salts or amino compounds. However, R. stolonifer will not grow on Czapek's agar because it cannot utilize nitrogen in the form of nitrate. Rhizopus persists as hyphae and mature spores.
Growth and physiology
This species is known as a saprotroph and plays an important role in the early colonization of substrata in soil. Nonetheless, it can also behave as a parasite of plant tissues causing a rot of vegetables and fruits. Like other species of Rhizopus, R. stolonifer grows rapidly and spreads by means of the stolons. The stolons provide an aerial structure for the growth of the mycelium and the occupation of large areas. They can climb vertically as well as horizontally. Rhizopus species periodically produce rhizoids, which anchor it to the substrate and u |
https://en.wikipedia.org/wiki/JXplorer | JXplorer is a free, open-source client for browsing Lightweight Directory Access Protocol (LDAP) servers and LDAP Data Interchange Format (LDIF) files. It is released under an Apache-equivalent license. JXplorer is written in Java and is platform independent, configurable, and has been translated into a number of languages. In total, as of 2018, JXplorer has been downloaded over 2 million times from SourceForge and is bundled with several Linux distributions.
Several common Linux distributions include JXplorer Software for LDAP server administration. The software also runs on BSD-variants, AIX, HP-UX, OS X, Solaris, Windows (2000, XP) and z/OS.
Key features are:
SSL, SASL and GSSAPI
DSML
LDIF
Localisation (currently available in German, French, Japanese, Traditional Chinese, Simplified Chinese, Hungarian)
Optional LDAP filter constructor GUI; extensible architecture
The primary authors and maintainers are Chris Betts and Trudi Ersvaer, originally both working in the CA (then Computer Associates) Directory (now CA Directory) software lab in Melbourne, Australia. Version 3.3, the '10th Anniversary Edition' was released in July 2012.
See also
List of LDAP software |
https://en.wikipedia.org/wiki/Metachromasia | Metachromasia (var. metachromasy) is a characteristic change in the color of staining carried out in biological tissues, exhibited by certain dyes when they bind to particular substances present in these tissues, called chromotropes. For example, toluidine blue becomes dark blue (with a colour range from blue to red, dependent on glycosaminoglycan content) when bound to cartilage. Other widely used metachromatic stains are the haematological Giemsa and May-Grünwald stains, which also contain thiazine dyes. The white cell nucleus stains purple, basophil granules intense magenta, whilst the cytoplasm (of mononuclear cells) stains blue. The absence of a color change in staining is named orthochromasia.
The underlying mechanism for metachromasia requires the presence of polyanions within the tissue. When these tissues are stained with a concentrated basic dye solution, such as toluidine blue, the bound dye molecules are close enough to form dimeric and polymeric aggregates. The light absorption spectra of these stacked dye aggregates differ from those of the individual monomeric dye molecules. Cell and tissue structures that have high concentrations of ionized sulfate and phosphate groups—such as the ground substance of cartilage, heparin-containing granules of mast cells, and rough endoplasmic reticulum of plasma cells—exhibit metachromasia. This depends on the charge density of the negative sulfate and carboxylate anions in the glycosaminoglycan (GAG). The GAG polyanion stabilizes the stacked, positively-charged dye molecules, resulting in a spectral shift as the conjugated double bond π-orbitals of adjacent dye molecules overlap. The greater the degree of stacking, the greater the metachromatic shift. Thus, hyaluronic acid, lacking sulphate groups and with only moderate charge density, causes slight metachromasia; chondroitin sulfate, with an additional sulfate residue per GAG saccharide dimer, is an effective metachromatic substrate, whilst heparin, with further N- |
https://en.wikipedia.org/wiki/Time-resolved%20photon%20emission | Time-resolved photon emission (TRPE) is used to measure timing waveforms on semiconductor devices. TRPE measurements are performed on the back side of the semiconductor device. The substrate of the device-under-test (DUT) must first be thinned mechanically. The device is mounted on a movable X-Y stage in an enclosure which shields it from all sources of light. The DUT is connected to an active electrical stimulus. The stimulus pattern is continuously looped and a trigger signal is sent to the TRPE instrument in order to tell it when the pattern repeats. A TRPE prober operates in a manner similar to a sampling oscilloscope, and is used to perform semiconductor failure analysis.
Theory of operation
As the electrical stimulus pattern is repetitively applied to the DUT, internal transistors switch on and off. As pMOS and nMOS transistors switch on or off, they emit photons. These photon emissions are recorded by a sensitive photon detector. By counting the number of photons emitted for a specific transistor across a period of time, a photon histogram may be constructed. The photon histogram records an increase in photon emissions during times that the transistor switches on or off. By detecting the combined photon emissions of pairs of p- and n-channel transistors contained in logic gates, it is possible to use the resulting histogram to determine the locations in time of the rising and falling edges of the signal at that node. The waveform produced is not representative of a true voltage waveform, but more accurately represents the derivative of the waveform, with photon spikes being seen only at rising or falling edges.
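The histogram construction described above can be sketched as follows. This is a minimal illustration with invented timestamps and bin width: photon arrival times, measured relative to the loop trigger, are folded into one period of the repeating stimulus pattern and binned; bins with elevated counts mark switching edges.

```python
def photon_histogram(arrival_times, period, bin_width):
    """Fold photon timestamps into one stimulus period and bin them.
    Peaks in the resulting histogram mark transistor switching edges."""
    n_bins = int(period / bin_width)
    counts = [0] * n_bins
    for t in arrival_times:
        counts[int((t % period) / bin_width) % n_bins] += 1
    return counts

# Invented data: a 100 ns loop; emissions cluster near t = 20 ns and 60 ns
# within the period, across several repetitions of the pattern.
times = [19.8, 20.1, 120.2, 220.0, 59.7, 160.3, 260.1, 45.0]
hist = photon_histogram(times, period=100.0, bin_width=10.0)
print(hist)  # → [0, 1, 3, 0, 1, 1, 2, 0, 0, 0]
```

The two elevated bins correspond to the two switching edges of the node under observation; a real instrument accumulates over millions of loop repetitions to overcome the low photon count per cycle.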
https://en.wikipedia.org/wiki/EuroFlow | The EuroFlow consortium was founded in 2005 as an EU FP6-funded project and launched in spring 2006. At first, EuroFlow was composed of 18 diagnostic research groups and two SMEs (small/medium enterprises) from eight different European countries with complementary knowledge and skills in the field of flow cytometry and immunophenotyping. During 2012 both SMEs left the project, so it obtained full scientific independence. The goal of the EuroFlow consortium is to innovate and standardize flow cytometry, leading to global improvement and progress in the diagnostics of haematological malignancies and the individualisation of treatment.
Background
Since the '90s, immunophenotyping (staining cells with antibodies conjugated to fluorochromes and detecting them with a flow cytometer) has been the preferred method in the diagnostics of haematological malignancies. The advantages of this method are speed and simplicity, the ability to measure more than 6 parameters at a time, precise focusing on the malignant population, and broad applicability in diagnostics. With continuing progress in the development of antibodies, fluorochromes and multicolor digital flow cytometers, the question became how to interpret cytometric data and how to achieve comparable results between facilities. Even though a consensus of recommendations and guidelines was established, standardization was only partial because it did not address different antibody clones, fluorochromes and their optimal combinations, or sample preparation. On that account, cytometry is perceived as a method highly dependent on the level of expertise and with limited reproducibility in multicentric studies.
Goals of Euroflow
These goals were set out in the journal Leukemia in 2012.
Development and evaluation of new antibodies
Establishment of new immunobead assay technology
Development of new software tools and new analytical approaches for the recognition of complex immunophenotype patterns.
Design of new multicolor protocols and standard |
https://en.wikipedia.org/wiki/An%20Experiment%20on%20a%20Bird%20in%20the%20Air%20Pump | An Experiment on a Bird in the Air Pump is a 1768 oil-on-canvas painting by Joseph Wright of Derby, one of a number of candlelit scenes that Wright painted during the 1760s. The painting departed from convention of the time by depicting a scientific subject in the reverential manner formerly reserved for scenes of historical or religious significance. Wright was intimately involved in depicting the Industrial Revolution and the scientific advances of the Enlightenment. While his paintings were recognized as exceptional by his contemporaries, his provincial status and choice of subjects meant the style was never widely imitated. The picture has been owned by the National Gallery in London since 1863 and is regarded as a masterpiece of British art.
The painting depicts a natural philosopher, a forerunner of the modern scientist, recreating one of Robert Boyle's air pump experiments, in which a bird is deprived of air, before a varied group of onlookers. The group exhibits a variety of reactions, but for most of the audience scientific curiosity overcomes concern for the bird. The central figure looks out of the picture as if inviting the viewer's participation in the outcome.
Historical background
In 1659, Robert Boyle commissioned the construction of an air pump, then described as a "pneumatic engine", which is known today as a "vacuum pump". The air pump was invented by Otto von Guericke in 1650, though its high cost deterred most contemporary scientists from constructing the apparatus. Boyle, the son of the Earl of Cork, had no such concerns—after its construction, he donated the initial 1659 model to the Royal Society and had a further two redesigned machines built for his personal use. Aside from Boyle's three pumps, there were probably no more than four others in existence during the 1660s: Christiaan Huygens had one in The Hague, Henry Power may have had one at Halifax, and there may have been pumps at Christ's College, Cambridge, and the Montmor Academy in |
https://en.wikipedia.org/wiki/Nutrient%20cycle | A nutrient cycle (or ecological recycling) is the movement and exchange of inorganic and organic matter back into the production of matter. Energy flow is a unidirectional and noncyclic pathway, whereas the movement of mineral nutrients is cyclic. Mineral cycles include the carbon cycle, sulfur cycle, nitrogen cycle, water cycle, phosphorus cycle, oxygen cycle, among others that continually recycle along with other mineral nutrients into productive ecological nutrition.
Overview
The nutrient cycle is nature's recycling system. All forms of recycling have feedback loops that use energy in the process of putting material resources back into use. Recycling in ecology is regulated to a large extent during the process of decomposition. Ecosystems employ biodiversity in the food webs that recycle natural materials, such as mineral nutrients, which includes water. Recycling in natural systems is one of the many ecosystem services that sustain and contribute to the well-being of human societies.
There is much overlap between the terms for the biogeochemical cycle and nutrient cycle. Most textbooks integrate the two and seem to treat them as synonymous terms. However, the terms often appear independently. Nutrient cycle is more often used in direct reference to the idea of an intra-system cycle, where an ecosystem functions as a unit. From a practical point, it does not make sense to assess a terrestrial ecosystem by considering the full column of air above it as well as the great depths of Earth below it. While an ecosystem often has no clear boundary, as a working model it is practical to consider the functional community where the bulk of matter and energy transfer occurs. Nutrient cycling occurs in ecosystems that participate in the "larger biogeochemical cycles of the earth through a system of inputs and outputs."
Complete and closed loop
Ecosystems are capable of complete recycling. Complete recycling means that 100% of the waste material can be reconstituted inde |
https://en.wikipedia.org/wiki/Human%20information%20interaction | Human-information interaction or HII is the formal term for information behavior research in archival science; the term was invented by Nahum Gershon in 1995. HII is not transferable from analog to digital research because nonprofessional researchers greatly emphasize the need for further elaboration of context and scope finding aid elements. Researchers in HII take on many tasks, including helping to design information systems from a biological perspective that conform to the requirements of different segments of society, along with other behaviour intended to improve interaction between humans and information systems. HII is generally considered to be multi-disciplinary as different disciplines have different viewpoints on these interactions and their consequences. HII is considered especially important due to humanity's dependence on information and the technology needed to access it. |
https://en.wikipedia.org/wiki/Help%20Scout | Help Scout, legally Help Scout PBC, is a global remote company which is a provider of help desk software and is headquartered in Boston, Massachusetts. The company provides an email-based customer support platform, knowledge base tool, and an embeddable search/contact widget for customer service professionals. Help Scout's main product (also called Help Scout) is a web-based SaaS (software as a service) HIPAA-compliant help desk.
Founded in 2011, the company serves more than 10,000 customers in over 140 countries, including Buffer, Basecamp, Trello, Reddit, and AngelList. Many employees are remote workers, with over 100 employees living in more than 80 cities worldwide. In 2020, the company sublet its office in Boston and became a fully distributed (entirely remote) company.
History
Nick Francis, Jared McDaniel, and Denny Swindle ran Brightwurks, a web design consultancy in Nashville, Tennessee for six years before building Help Scout, a help desk software.
The co-founders launched Help Scout publicly while participating in Techstars Boston, a startup accelerator program. Help Scout raised $800k in angel funding soon after Techstars and was profitable for nearly two years before raising $6 million in Series A funding in 2015, from Foundry Group and Converge Venture Partners. Brightwurks was renamed Help Scout in 2015.
Features and technology
Help Desk
Help Scout's support software operates like a shared email inbox. Help Scout enables large teams to provide customer support via email with their tool. The platform features integrations with live chat, phone systems, CRMs, and email marketing tools.
Docs
In addition to its help desk platform, Help Scout offers a feature called Docs, a self-service knowledge base for customers to find answers to support questions. Docs was launched as a feature in 2013.
Beacon
This embeddable widget provides users on a website with quick access to Docs and/or serves as a contact form.
Mobile
Help Scout launc |
https://en.wikipedia.org/wiki/Spaces%20%28social%20network%29 | Spaces is a Russian social network service that targets mobile phone users.
History
The website was launched in 2006.
In 2009, it was ranked 7th on Opera's top ten visited social networking sites.
Konstantin Vladimirovich Rak, who was sued by Epic Games for allegedly creating cheats to Fortnite, was a user of the service.
The site is currently going through difficult times: blocking in Russia, frequent changes of site address, and constant DDoS attacks. In 2022, the social network changed its domain name seven times due to tightening Internet censorship and frequent blocking in Russia.
See also
List of social networking services
VK (service)
Facebook |
https://en.wikipedia.org/wiki/Leiden%20scale | The Leiden scale (°L) was used to calibrate low-temperature indirect measurements in the early twentieth century, by providing conventional values (in kelvins, then termed "degrees Kelvin") of helium vapour pressure. It was used below −183 °C, the starting point of the International Temperature Scale in the 1930s (Awbery 1934). |
https://en.wikipedia.org/wiki/Anterior%20cardinal%20vein | The anterior cardinal veins (precardinal veins) contribute to the formation of the internal jugular veins and together with the common cardinal vein form the superior vena cava.
The anastomosis between the two anterior cardinal veins develops into the left brachiocephalic vein.
Additional images
See also
Posterior cardinal vein |
https://en.wikipedia.org/wiki/Dirichlet%20problem | In mathematics, a Dirichlet problem is the problem of finding a function which solves a specified partial differential equation (PDE) in the interior of a given region that takes prescribed values on the boundary of the region.
The Dirichlet problem can be solved for many PDEs, although originally it was posed for Laplace's equation. In that case the problem can be stated as follows:
Given a function f that has values everywhere on the boundary of a region in Rn, is there a unique function u, twice continuously differentiable in the interior and continuous up to the boundary, such that u is harmonic in the interior and u = f on the boundary?
This requirement is called the Dirichlet boundary condition. The main issue is to prove the existence of a solution; uniqueness can be proven using the maximum principle.
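For Laplace's equation, the Dirichlet problem can also be approximated numerically: on a grid, a discretely harmonic function equals the average of its four neighbours, so repeatedly averaging interior points while holding the boundary values fixed (Jacobi iteration) converges to the discrete solution. The sketch below is illustrative only; the grid size, boundary data, and function name are assumptions, not taken from the article.

```python
# Discrete Dirichlet problem for Laplace's equation on an n x n grid.
# Interior values are repeatedly replaced by the average of their four
# neighbours (Jacobi iteration) while boundary values stay fixed at f.

def solve_dirichlet(boundary, n, iterations=2000):
    """boundary(i, j) gives the prescribed value f on the grid edge."""
    u = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i in (0, n - 1) or j in (0, n - 1):
                u[i][j] = boundary(i, j)
    for _ in range(iterations):
        new = [row[:] for row in u]
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                new[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j]
                                    + u[i][j - 1] + u[i][j + 1])
        u = new
    return u

# Example boundary data: f = 1 on the top edge, 0 elsewhere.
u = solve_dirichlet(lambda i, j: 1.0 if i == 0 else 0.0, 10)
```

Consistent with the maximum principle invoked for uniqueness, every interior value of the computed solution lies strictly between the minimum and maximum of the boundary data.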
History
The Dirichlet problem goes back to George Green, who studied the problem on general domains with general boundary conditions in his Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism, published in 1828. He reduced the problem to that of constructing what we now call Green's functions, and argued that Green's function exists for any domain. His methods were not rigorous by today's standards, but the ideas were highly influential in the subsequent developments. The next steps in the study of the Dirichlet problem were taken by Carl Friedrich Gauss, William Thomson (Lord Kelvin) and Peter Gustav Lejeune Dirichlet, after whom the problem was named, and the solution to the problem (at least for the ball) using the Poisson kernel was known to Dirichlet (judging by his 1850 paper submitted to the Prussian academy). Lord Kelvin and Dirichlet suggested a solution to the problem by a variational method based on the minimization of "Dirichlet's energy". According to Hans Freudenthal (in the Dictionary of Scientific Biography, vol. 11), Bernhard Riemann was the first mathematician who solved this va
https://en.wikipedia.org/wiki/Production%20%28computer%20science%29 | A production or production rule in computer science is a rewrite rule specifying a symbol substitution that can be recursively performed to generate new symbol sequences. A finite set of productions is the main component in the specification of a formal grammar (specifically a generative grammar). The other components are a finite set N of nonterminal symbols, a finite set (known as an alphabet) Σ of terminal symbols that is disjoint from N, and a distinguished symbol S ∈ N that is the start symbol.
In an unrestricted grammar, a production is of the form u → v, where u and v are arbitrary strings of terminals and nonterminals, and u may not be the empty string. If v is the empty string, this is denoted by the symbol ε or λ (rather than leaving the right-hand side blank). So productions are members of the cartesian product
V*NV* × V* = (V* ∖ Σ*) × V*,
where V := N ∪ Σ is the vocabulary, * is the Kleene star operator, V*NV* indicates concatenation, ∪ denotes set union, and ∖ denotes set minus or set difference. If we do not allow the start symbol S to occur in v (the word on the right side), we have to replace V* by (V ∖ {S})* on the right side of the cartesian product symbol.
The other types of formal grammar in the Chomsky hierarchy impose additional restrictions on what constitutes a production. Notably in a context-free grammar, the left-hand side of a production must be a single nonterminal symbol. So productions are of the form
A → γ, where A ∈ N is a nonterminal and γ ∈ (N ∪ Σ)* is a (possibly empty) string of terminals and nonterminals.
Grammar generation
To generate a string in the language, one begins with a string consisting of only a single start symbol, and then successively applies the rules (any number of times, in any order) to rewrite this string. This stops when we obtain a string containing only terminals. The language consists of all the strings that can be generated in this manner. Any particular sequence of legal choices taken during this rewriting process yields one particular string in the language. If there are multiple different ways of generating this single string, then the grammar is said to be ambiguous.
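The rewriting procedure just described can be sketched directly in code. The toy grammar below (S → aSb | ε, generating the strings aⁿbⁿ) is an illustrative assumption, not taken from the article; rewriting is leftmost and breadth-first so that shorter derivations are enumerated first.

```python
# Generate strings of a context-free grammar by repeatedly rewriting the
# leftmost nonterminal of each sentential form, starting from the start
# symbol, until only terminal symbols remain.
from collections import deque

def generate(productions, start, limit):
    """productions: dict mapping each nonterminal to its right-hand sides,
    given as tuples of symbols (the empty tuple encodes ε)."""
    results, queue = [], deque([(start,)])
    while queue and len(results) < limit:
        form = queue.popleft()
        i = next((k for k, s in enumerate(form) if s in productions), None)
        if i is None:                      # only terminals left: a sentence
            results.append("".join(form))
            continue
        for rhs in productions[form[i]]:   # rewrite the leftmost nonterminal
            queue.append(form[:i] + rhs + form[i + 1:])
    return results

# S -> a S b | epsilon, so the language is { a^n b^n : n >= 0 }
grammar = {"S": [("a", "S", "b"), ()]}
print(generate(grammar, "S", 3))  # ['', 'ab', 'aabb']
```

Each queue entry corresponds to one leftmost derivation; a grammar is ambiguous precisely when two different derivations enumerated this way yield the same terminal string.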
For example, |
https://en.wikipedia.org/wiki/Multiple%20orthogonal%20polynomials | In mathematics, the multiple orthogonal polynomials (MOPs) are orthogonal polynomials in one variable that are orthogonal with respect to a finite family of measures. The polynomials are divided into two classes named type 1 and type 2.
In the literature, MOPs are also called d-orthogonal polynomials, Hermite–Padé polynomials or polyorthogonal polynomials. MOPs should not be confused with multivariate orthogonal polynomials.
Multiple orthogonal polynomials
Consider a multiindex n = (n_1, …, n_r) ∈ ℕ^r and r positive measures μ_1, …, μ_r over the reals. As usual, |n| = n_1 + ⋯ + n_r.
MOP of type 1
Polynomials A_{n,1}, …, A_{n,r} are of type 1 if the j-th polynomial A_{n,j} has degree at most n_j − 1 such that
∑_{j=1}^{r} ∫ x^k A_{n,j}(x) dμ_j(x) = 0 for k = 0, 1, …, |n| − 2,
and
∑_{j=1}^{r} ∫ x^{|n|−1} A_{n,j}(x) dμ_j(x) = 1.
Explanation
This defines a linear system of |n| equations for the |n| coefficients of the polynomials A_{n,1}, …, A_{n,r}: each A_{n,j} contributes n_j coefficients, matched by the |n| − 1 orthogonality conditions together with the normalization.
MOP of type 2
A monic polynomial P_n is of type 2 if it has degree |n| such that
∫ P_n(x) x^k dμ_j(x) = 0 for k = 0, 1, …, n_j − 1 and j = 1, …, r.
Explanation
If we write this out measure by measure, we get the following definition:
∫ P_n(x) x^k dμ_1(x) = 0, k = 0, …, n_1 − 1,
⋮
∫ P_n(x) x^k dμ_r(x) = 0, k = 0, …, n_r − 1.
Literature
López-Lagomasino, G. (2021). An Introduction to Multiple Orthogonal Polynomials and Hermite-Padé Approximation. In: Marcellán, F., Huertas, E.J. (eds) Orthogonal Polynomials: Current Trends and Applications. SEMA SIMAI Springer Series, vol 22. Springer, Cham. https://doi.org/10.1007/978-3-030-56190-1_9 |