Compactification (mathematics)
https://en.wikipedia.org/wiki/Compactification%20%28mathematics%29

In mathematics, in general topology, compactification is the process or result of making a topological space into a compact space. A compact space is a space in which every open cover of the space contains a finite subcover. The methods of compactification are various, but each is a way of controlling points from "going off to infinity" by in some way adding "points at infinity" or preventing such an "escape".
An example
Consider the real line with its ordinary topology. This space is not compact; in a sense, points can go off to infinity to the left or to the right. It is possible to turn the real line into a compact space by adding a single "point at infinity" which we will denote by ∞. The resulting compactification is homeomorphic to a circle in the plane (which, as a closed and bounded subset of the Euclidean plane, is compact). Every sequence that ran off to infinity in the real line will then converge to ∞ in this compactification. The direction in which a number approaches infinity on the number line (the − direction or the + direction) is still preserved on the circle: a sequence running off to infinity in the − direction corresponds to points on the circle approaching ∞ from one side, while a sequence running off in the + direction corresponds to points approaching ∞ from the other side.
Intuitively, the process can be pictured as follows: first shrink the real line to the open interval on the x-axis; then bend the ends of this interval upwards (in positive y-direction) and move them towards each other, until you get a circle with one point (the topmost one) missing. This point is our new point ∞ "at infinity"; adding it in completes the compact circle.
A bit more formally: we represent a point on the unit circle by its angle, in radians, going from −π to π for simplicity. Identify each such point θ on the circle with the corresponding point tan(θ/2) on the real line. This function is undefined at the point π, since tan(π/2) is undefined; we will identify this point with our point ∞.
Since tangents and inverse tangents are both continuous, our identification function is a homeomorphism between the real line and the unit circle without ∞. What we have constructed is called the Alexandroff one-point compactification of the real line, discussed in more generality below. It is also possible to compactify the real line by adding two points, +∞ and −∞; this results in the extended real line.
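As a sanity check, the identification can be probed numerically; the following sketch (the function name is ours) confirms that tan(θ/2) is strictly increasing on (−π, π) and blows up near the missing point:

```python
import math

def circle_to_line(theta):
    """Map an angle theta in (-pi, pi) to the real line via tan(theta/2)."""
    return math.tan(theta / 2)

# Points approaching pi from below run off to +infinity,
# points approaching -pi from above run off to -infinity:
assert circle_to_line(0.0) == 0.0
assert circle_to_line(3.14) > 600        # near the missing point, + side
assert circle_to_line(-3.14) < -600      # near the missing point, - side

# The map is strictly increasing on (-pi, pi), hence a bijection onto R:
samples = [circle_to_line(t / 100) for t in range(-314, 315)]
assert all(a < b for a, b in zip(samples, samples[1:]))
```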
Definition
An embedding of a topological space X as a dense subset of a compact space is called a compactification of X. It is often useful to embed topological spaces in compact spaces, because of the special properties compact spaces have.
Embeddings into compact Hausdorff spaces may be of particular interest. Since every compact Hausdorff space is a Tychonoff space, and every subspace of a Tychonoff space is Tychonoff, we conclude that any space possessing a Hausdorff compactification must be a Tychonoff space. In fact, the converse is also true; being a Tychonoff space is both necessary and sufficient for possessing a Hausdorff compactification.
The fact that large and interesting classes of non-compact spaces do in fact have compactifications of particular sorts makes compactification a common technique in topology.
Alexandroff one-point compactification
For any noncompact topological space X, the (Alexandroff) one-point compactification αX of X is obtained by adding one extra point ∞ (often called a point at infinity) and defining the open sets of the new space to be the open sets of X together with the sets of the form G ∪ {∞}, where G is an open subset of X such that X \ G is closed and compact. The one-point compactification of X is Hausdorff if and only if X is Hausdorff and locally compact.
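In symbols, the topology on αX = X ∪ {∞} described above can be written as:

```latex
\tau_{\alpha X} \;=\; \tau_X \,\cup\, \bigl\{\, G \cup \{\infty\} \;:\; G \in \tau_X,\ X \setminus G \text{ closed and compact} \,\bigr\}
```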
Stone–Čech compactification
Of particular interest are Hausdorff compactifications, i.e., compactifications in which the compact space is Hausdorff. A topological space has a Hausdorff compactification if and only if it is Tychonoff. In this case, there is a unique (up to homeomorphism) "most general" Hausdorff compactification, the Stone–Čech compactification of X, denoted by βX; formally, this exhibits the category of compact Hausdorff spaces and continuous maps as a reflective subcategory of the category of Tychonoff spaces and continuous maps.
"Most general" or formally "reflective" means that the space βX is characterized by the universal property that any continuous function from X to a compact Hausdorff space K can be extended to a continuous function from βX to K in a unique way. More explicitly, βX is a compact Hausdorff space containing X such that the induced topology on X by βX is the same as the given topology on X, and for any continuous map , where K is a compact Hausdorff space, there is a unique continuous map for which g restricted to X is identically f.
The Stone–Čech compactification can be constructed explicitly as follows: let C be the set of continuous functions from X to the closed interval [0, 1]. Then each point in X can be identified with an evaluation function on C. Thus X can be identified with a subset of [0, 1]^C, the space of all functions from C to [0, 1]. Since the latter is compact by Tychonoff's theorem, the closure of X as a subset of that space will also be compact. This is the Stone–Čech compactification.
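Schematically, with C = C(X, [0, 1]) the set of continuous maps into the unit interval, the construction reads:

```latex
e \colon X \longrightarrow [0,1]^{C}, \qquad e(x) = \bigl(f(x)\bigr)_{f \in C}, \qquad \beta X \;=\; \overline{e(X)} \subseteq [0,1]^{C}.
```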
Spacetime compactification
Walter Benz and Isaak Yaglom have shown how stereographic projection onto a single-sheet hyperboloid can be used to provide a compactification for split complex numbers. In fact, the hyperboloid is part of a quadric in real projective four-space. The method is similar to that used to provide a base manifold for group action of the conformal group of spacetime.
Projective space
Real projective space RPn is a compactification of Euclidean space Rn. For each possible "direction" in which points in Rn can "escape", one new point at infinity is added (but each direction is identified with its opposite). The Alexandroff one-point compactification of R we constructed in the example above is in fact homeomorphic to RP1. Note however that the projective plane RP2 is not the one-point compactification of the plane R2 since more than one point is added.
Complex projective space CPn is also a compactification of Cn; the Alexandroff one-point compactification of the plane C is (homeomorphic to) the complex projective line CP1, which in turn can be identified with a sphere, the Riemann sphere.
Passing to projective space is a common tool in algebraic geometry because the added points at infinity lead to simpler formulations of many theorems. For example, any two different lines in RP2 intersect in precisely one point, a statement that is not true in R2. More generally, Bézout's theorem, which is fundamental in intersection theory, holds in projective space but not affine space. This distinct behavior of intersections in affine space and projective space is reflected in algebraic topology in the cohomology rings – the cohomology of affine space is trivial, while the cohomology of projective space is non-trivial and reflects the key features of intersection theory (dimension and degree of a subvariety, with intersection being Poincaré dual to the cup product).
Compactification of moduli spaces generally requires allowing certain degeneracies – for example, allowing certain singularities or reducible varieties. This is notably used in the Deligne–Mumford compactification of the moduli space of algebraic curves.
Compactification and discrete subgroups of Lie groups
In the study of discrete subgroups of Lie groups, the quotient space of cosets is often a candidate for more subtle compactification to preserve structure at a richer level than just topological.
For example, modular curves are compactified by the addition of single points for each cusp, making them Riemann surfaces (and so, since they are compact, algebraic curves). Here the cusps are there for a good reason: the curves parametrize a space of lattices, and those lattices can degenerate ('go off to infinity'), often in a number of ways (taking into account some auxiliary structure of level). The cusps stand in for those different 'directions to infinity'.
That is all for lattices in the plane. In n-dimensional Euclidean space the same questions can be posed, for example about the space of lattices in Rn; this is harder to compactify. There are a variety of compactifications that can be formed, such as the Borel–Serre compactification, the reductive Borel–Serre compactification, and the Satake compactifications.
Other compactification theories
The theories of ends of a space and prime ends.
Some 'boundary' theories such as the collaring of an open manifold, Martin boundary, Shilov boundary and Furstenberg boundary.
The Bohr compactification of a topological group arises from the consideration of almost periodic functions.
For a topological ring, the projective line over the ring may compactify it.
The Baily–Borel compactification of a quotient of a Hermitian symmetric space.
The wonderful compactification of a quotient of algebraic groups.
The compactifications that are simultaneously convex subsets of a locally convex space are called convex compactifications; their additional linear structure allows, for example, the development of a differential calculus and more advanced considerations, e.g. in relaxation in the calculus of variations or in optimization theory.
See also
References
Cotangent space
https://en.wikipedia.org/wiki/Cotangent%20space

In differential geometry, the cotangent space is a vector space associated with a point x on a smooth (or differentiable) manifold M; one can define a cotangent space for every point on a smooth manifold. Typically, the cotangent space, T*_xM, is defined as the dual space of the tangent space at x, T_xM, although there are more direct definitions (see below). The elements of the cotangent space are called cotangent vectors or tangent covectors.
Properties
All cotangent spaces at points on a connected manifold have the same dimension, equal to the dimension of the manifold. All the cotangent spaces of a manifold can be "glued together" (i.e. unioned and endowed with a topology) to form a new differentiable manifold of twice the dimension, the cotangent bundle of the manifold.
The tangent space and the cotangent space at a point are both real vector spaces of the same dimension and therefore isomorphic to each other via many possible isomorphisms. The introduction of a Riemannian metric or a symplectic form gives rise to a natural isomorphism between the tangent space and the cotangent space at a point, associating to any tangent covector a canonical tangent vector.
Formal definitions
Definition as linear functionals
Let M be a smooth manifold and let x be a point in M. Let T_xM be the tangent space at x. Then the cotangent space at x is defined as the dual space of T_xM:

T*_xM = (T_xM)*
Concretely, elements of the cotangent space are linear functionals on T_xM. That is, every element α ∈ T*_xM is a linear map

α : T_xM → F,

where F is the underlying field of the vector space being considered, for example, the field of real numbers. The elements of T*_xM are called cotangent vectors.
Alternative definition
In some cases, one might like to have a direct definition of the cotangent space without reference to the tangent space. Such a definition can be formulated in terms of equivalence classes of smooth functions on M. Informally, we will say that two smooth functions f and g are equivalent at a point x if they have the same first-order behavior near x, analogous to their linear Taylor polynomials; two functions f and g have the same first-order behavior near x if and only if the derivative of the function f − g vanishes at x. The cotangent space will then consist of all the possible first-order behaviors of a function near x.
Let M be a smooth manifold and let x be a point in M. Let I be the ideal of all functions in C∞(M) vanishing at x, and let I² be the set of functions of the form Σᵢ fᵢgᵢ, where fᵢ, gᵢ ∈ I. Then I and I² are both real vector spaces and the cotangent space can be defined as the quotient space T*_xM = I/I², by showing that the two spaces are isomorphic to each other.
This formulation is analogous to the construction of the cotangent space to define the Zariski tangent space in algebraic geometry. The construction also generalizes to locally ringed spaces.
The differential of a function
Let M be a smooth manifold and let f ∈ C∞(M) be a smooth function. The differential of f at a point x is the map

df_x(X_x) = X_x(f),

where X_x is a tangent vector at x, thought of as a derivation. That is, X(f) = L_X f is the Lie derivative of f in the direction X, and one has df(X) = X(f). Equivalently, we can think of tangent vectors as tangents to curves, and write

df_x(γ′(0)) = (f ∘ γ)′(0)

for any smooth curve γ with γ(0) = x.
In either case, df_x is a linear map on T_xM and hence it is a tangent covector at x.
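In coordinates on R^n, the action of df_x on a tangent vector can be approximated by a finite difference; a minimal numerical sketch (the helper name is ours, and the central difference stands in for the exact derivative):

```python
def differential(f, x, v, h=1e-6):
    """Numerically approximate df_x(v) for f: R^n -> R by a central
    difference along the direction v (a finite-dimensional stand-in
    for a tangent vector acting as a derivation)."""
    fp = f([xi + h * vi for xi, vi in zip(x, v)])
    fm = f([xi - h * vi for xi, vi in zip(x, v)])
    return (fp - fm) / (2 * h)

f = lambda p: p[0] ** 2 + 3 * p[1]       # f(x, y) = x^2 + 3y
x = [2.0, 1.0]

# df_x = 2x dx + 3 dy = 4 dx + 3 dy at (2, 1); it acts linearly:
assert abs(differential(f, x, [1, 0]) - 4.0) < 1e-4
assert abs(differential(f, x, [0, 1]) - 3.0) < 1e-4
assert abs(differential(f, x, [2, 5]) - (2 * 4.0 + 5 * 3.0)) < 1e-3
```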
We can then define the differential map d : C∞(M) → T*_xM at a point x as the map which sends f to df_x. Properties of the differential map include:

d is a linear map: d(af + bg) = a df + b dg for constants a and b;
d satisfies the Leibniz rule: d(fg) = f(x) dg + g(x) df.
The differential map provides the link between the two definitions of the cotangent space given above. Since for every f ∈ I² there exist fᵢ, gᵢ ∈ I such that f = Σᵢ fᵢgᵢ, we have

df_x = Σᵢ (fᵢ(x) dgᵢ + gᵢ(x) dfᵢ) = 0,

i.e. every function in I² has vanishing differential at x. It follows that for every two functions f ∈ I² and g ∈ I, we have d(f + g) = dg. We can now construct an isomorphism I/I² → T*_xM by sending the coset g + I² to the differential dg_x. This map is well defined because every function in I² has vanishing differential, and it is bijective (in local coordinates, both spaces are spanned by the differentials of the coordinate functions), establishing the equivalence of the two definitions.
The pullback of a smooth map
Just as every differentiable map f : M → N between manifolds induces a linear map (called the pushforward or derivative) between the tangent spaces

f_* : T_xM → T_f(x)N,

every such map induces a linear map (called the pullback) between the cotangent spaces, only this time in the reverse direction:

f^* : T*_f(x)N → T*_xM.

The pullback is naturally defined as the dual (or transpose) of the pushforward. Unraveling the definition, this means the following:

(f^*θ)(X_x) = θ(f_*X_x),

where θ ∈ T*_f(x)N and X_x ∈ T_xM. Note carefully where everything lives.
If we define tangent covectors in terms of equivalence classes of smooth functions vanishing at a point, then the definition of the pullback is even more straightforward. Let g be a smooth function on N vanishing at f(x). Then the pullback of the covector determined by g (denoted dg) is given by

f^*(dg) = d(g ∘ f).

That is, it is the equivalence class of functions on M vanishing at x determined by g ∘ f.
Exterior powers
The k-th exterior power of the cotangent space, denoted Λ^k(T*_xM), is another important object in differential and algebraic geometry. Vectors in the k-th exterior power, or more precisely sections of the k-th exterior power of the cotangent bundle, are called differential k-forms. They can be thought of as alternating, multilinear maps on k tangent vectors.
For this reason, tangent covectors are frequently called one-forms.
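A concrete finite-dimensional illustration: on R², the determinant of two vectors is an alternating bilinear map, exactly the kind of object a 2-form assigns to each point (a minimal sketch, not tied to any particular manifold):

```python
def two_form(u, v):
    """A 2-form on R^2: the alternating bilinear map (u, v) -> det[u v]."""
    return u[0] * v[1] - u[1] * v[0]

u, v, w = (1.0, 2.0), (3.0, 4.0), (5.0, 6.0)

assert two_form(u, v) == -two_form(v, u)      # alternating (antisymmetric)
assert two_form(u, u) == 0.0                  # vanishes on repeated arguments

# bilinearity in the first slot:
uw = (u[0] + w[0], u[1] + w[1])
assert two_form(uw, v) == two_form(u, v) + two_form(w, v)
```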
References
Differential topology
Tensors
Common Desktop Environment
https://en.wikipedia.org/wiki/Common%20Desktop%20Environment

The Common Desktop Environment (CDE) is a desktop environment for Unix and OpenVMS, based on the Motif widget toolkit. It was part of the UNIX 98 Workstation Product Standard, and was for a long time the Unix desktop associated with commercial Unix workstations. It helped to influence early implementations of successor projects such as KDE and GNOME, which largely replaced CDE following the turn of the century.
After a long history as proprietary software, CDE was released as free software on August 6, 2012, under the GNU Lesser General Public License, version 2.0 or later. Since its release as free software, CDE has been ported to Linux and BSD derivatives.
History
Hewlett-Packard, IBM, SunSoft, and USL announced CDE in June 1993 as a joint development within the Common Open Software Environment (COSE) initiative. Each development group contributed its own technology to CDE:
HP contributed the primary environment for CDE, which was based on HP's Visual User Environment (VUE). HP VUE was itself derived from the Motif Window Manager.
IBM contributed its Common User Access model from OS/2's Workplace Shell.
Sun contributed its ToolTalk application interaction framework and a port of its DeskSet productivity tools, including mail and calendar clients, from its OpenWindows environment.
USL provided desktop manager components and scalable systems technologies from UNIX System V.
After its release, HP endorsed CDE as the new standard desktop for Unix, and provided documentation and software for migrating HP VUE customizations to CDE.
In March 1994 CDE became the responsibility of the "new OSF", a merger of the Open Software Foundation and Unix International;
in September 1995, the merger of Motif and CDE into a single project, CDE/Motif, was announced. OSF became part of the newly formed Open Group in 1996.
In February 1997, the Open Group released their last major version of CDE, version 2.1.
Red Hat Linux was the only Linux distribution that proprietary CDE was ported to. In 1997, Red Hat began offering a version of CDE licensed from TriTeal Corporation. In 1998, Xi Graphics, a company specializing in the X Windowing System, offered a version of CDE bundled with Red Hat Linux, called Xi Graphics maXimum cde/OS. These were phased out, and Red Hat moved to the GNOME desktop.
Until about 2000, users of Unix desktops regarded CDE as the de facto standard, but at that time, other desktop environments such as GNOME and K Desktop Environment 2 were quickly becoming mature, and became widespread on Linux systems.
In 2001, Sun Microsystems announced that they would phase out CDE as the standard desktop environment in Solaris in favor of GNOME. Solaris 10, released in early 2005, includes both CDE and the GNOME-based Java Desktop System. The OpenSolaris project, begun around the same time, did not include CDE, and had no intent to make Solaris CDE available as open source. The original release of Solaris 11 in November 2011 contained only GNOME as the standard desktop, though some CDE libraries, such as Motif and ToolTalk, remained for binary compatibility. Oracle Solaris 11.4, released in August 2018, removed support for the CDE runtime environment and background services.
Systems that provided proprietary CDE
IBM AIX
Digital UNIX
HP-UX: from version 10.10, released in 1996.
IRIX: for a short time CDE was an alternative to IRIX Interactive Desktop.
OpenVMS: available in OpenVMS Alpha V7.1 and onwards, referred to as the "DECWindows Motif New Desktop"
Solaris: available starting with 2.3, standard in 2.6 to 10.
Tru64 UNIX
UnixWare
UXP/DS
Red Hat Linux: Two versions ported by Triteal and Xi Graphics
License history
From its launch until 2012, CDE was proprietary software.
Motif, the toolkit on which CDE is built, was released by The Open Group in 2000 as "Open Motif," under a "revenue sharing" license. That license met neither the open-source nor the free-software definition. The Open Group had wished to make Motif open source, but did not succeed in doing so at that time.
Release under the GNU LGPL
In 2006, a petition was created asking The Open Group to release the source code for CDE and Motif under a free license. On August 6, 2012, CDE was released under the LGPL-2.0-or-later license. The CDE source code was then released to SourceForge.
The free software project OpenCDE had been started in 2010 to reproduce the look and feel, organization, and feature set of CDE. In August 2012, when CDE was released as free software, OpenCDE was officially deprecated in favor of CDE.
On October 23, 2012, the Motif widget toolkit was also released under the LGPL-2.1-or-later license. This allowed CDE to become a completely free and open source desktop environment.
Shortly after CDE was released as free software, a Linux live CD was created based on Debian 6 with CDE 2.2.0c pre-installed, called CDEbian. The live CD has since been discontinued.
The Debian-based Linux distribution SparkyLinux offers binary packages of CDE that can be installed with APT. As of March 2023, CDE is also included in the NuTyX GNU/Linux distribution which offers an ISO download image with it, in FreeBSD and in source form in pkgsrc which is the default package manager of NetBSD.
Development under CDE project
In March 2014, the first stable release of CDE, version 2.2.1, was made since its release as free software.
Beginning with version 2.2.2, released in July 2014, CDE is able to compile under FreeBSD 10 with the default Clang compiler.
Since version 2.3.0, released in July 2018, CDE uses TIRPC on Linux, so that the portmapper rpcbind does not need to be run in insecure mode. It does not use Xprint anymore, and can be compiled on the BSDs without installing first a custom version of Motif. Multihead display support with Xinerama has been improved.
Since its release as free software, CDE has been ported to:
Linux distributions including:
Debian
Red Hat Enterprise Linux
Slackware Linux
Ubuntu
Arch Linux
FreeBSD
NetBSD
OpenBSD
OpenIndiana
Solaris 11 (x86-64)
Future project goals of the CDE project include:
Increased portability to more Linux, BSD, and Unix platforms.
Further internationalization into other languages.
Gallery
See also
dtlogin
IRIX Interactive Desktop
Motif
References
External links
Open Group – CDE
Modern and functional CDE desktop based on FVWM.
1993 software
Desktop environments
Formerly proprietary software
Open Group standards
OpenVMS
Software that uses Motif (software)
Sun Microsystems software
X window managers
Bus (computing)
https://en.wikipedia.org/wiki/Bus%20%28computing%29

In computer architecture, a bus (historically also called a data highway or databus) is a communication system that transfers data between components inside a computer or between computers. It encompasses both hardware (e.g., wires, optical fiber) and software, including communication protocols. At its core, a bus is a shared physical pathway, typically composed of wires, traces on a circuit board, or busbars, that allows multiple devices to communicate. To prevent conflicts and ensure orderly data exchange, buses rely on a communication protocol to manage which device can transmit data at a given time.
Buses are categorized based on their role, such as system buses (also known as internal buses, internal data buses, or memory buses) connecting the CPU and memory. Expansion buses, also called peripheral buses, extend the system to connect additional devices, including peripherals. Examples of widely used buses include PCI Express (PCIe) for high-speed internal connections and Universal Serial Bus (USB) for connecting external devices.
Modern buses utilize both parallel and serial communication, employing advanced encoding methods to maximize speed and efficiency. Features such as direct memory access (DMA) further enhance performance by allowing data transfers directly between devices and memory without requiring CPU intervention.
Address bus
An address bus is a bus that is used to specify a physical address. When a processor or DMA-enabled device needs to read or write to a memory location, it specifies that memory location on the address bus (the value to be read or written is sent on the data bus). The width of the address bus determines the amount of memory a system can address. For example, a system with a 32-bit address bus can address 232 (4,294,967,296) memory locations. If each memory location holds one byte, the addressable memory space is 4 GB.
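The arithmetic in this paragraph is easy to sketch (a toy helper, assuming one byte per addressable location by default):

```python
def addressable_bytes(width_bits, bytes_per_location=1):
    """Number of bytes addressable with a given address-bus width."""
    return (2 ** width_bits) * bytes_per_location

assert addressable_bytes(16) == 65_536                # 64 KB, e.g. many 8-bit micros
assert addressable_bytes(32) == 4_294_967_296         # the 2^32 figure from the text
assert addressable_bytes(32) // 2 ** 30 == 4          # i.e. 4 GB
```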
Address multiplexing
Early processors used a wire for each bit of the address width. For example, a 16-bit address bus had 16 physical wires making up the bus. As the buses became wider and lengthier, this approach became expensive in terms of the number of chip pins and board traces. Beginning with the Mostek 4096 DRAM, address multiplexing implemented with multiplexers became common. In a multiplexed address scheme, the address is sent in two equal parts on alternate bus cycles. This halves the number of address bus signals required to connect to the memory. For example, a 32-bit address bus can be implemented by using 16 lines and sending the first half of the memory address, immediately followed by the second half memory address.
Typically two additional pins in the control bus, row-address strobe (RAS) and column-address strobe (CAS), are used to tell the DRAM whether the address bus is currently sending the first half of the memory address or the second half.
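The splitting can be sketched in a few lines (hypothetical helper names; 16-bit halves as in the 32-bit example above):

```python
def multiplex_address(addr, half_width=16):
    """Split an address into the two halves sent on successive bus
    cycles (latched on RAS, then CAS) in multiplexed DRAM addressing."""
    mask = (1 << half_width) - 1
    row = (addr >> half_width) & mask      # first half, latched on RAS
    col = addr & mask                      # second half, latched on CAS
    return row, col

def demultiplex_address(row, col, half_width=16):
    """Reassemble the full address inside the DRAM."""
    return (row << half_width) | col

addr = 0xDEADBEEF
row, col = multiplex_address(addr)
assert (row, col) == (0xDEAD, 0xBEEF)
assert demultiplex_address(row, col) == addr
```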
Implementation
Accessing an individual byte frequently requires reading or writing the full bus width (a word) at once. In these instances the least significant bits of the address bus may not even be implemented - it is instead the responsibility of the controlling device to isolate the individual byte required from the complete word transmitted. This is the case, for instance, with the VESA Local Bus which lacks the two least significant bits, limiting this bus to aligned 32-bit transfers.
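The controlling device's job of isolating one byte from a full-width aligned transfer can be sketched as follows (hypothetical helpers; little-endian byte order assumed):

```python
def read_byte(read_word, byte_addr, bus_bytes=4):
    """Fetch one byte over a bus that only performs aligned full-width
    reads (e.g. a 32-bit bus lacking the two least significant address
    lines). `read_word` returns the little-endian word at an aligned
    address."""
    aligned = byte_addr & ~(bus_bytes - 1)   # drop the low address bits
    word = read_word(aligned)
    shift = 8 * (byte_addr - aligned)        # select the byte lane
    return (word >> shift) & 0xFF

memory = {0x1000: 0x44332211}                # one aligned 32-bit word
assert read_byte(memory.get, 0x1000) == 0x11
assert read_byte(memory.get, 0x1003) == 0x44
```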
Historically, there were also some examples of computers that were only able to address words: word machines.
Memory bus
The memory bus is the bus that connects the main memory to the memory controller in computer systems. Originally, general-purpose buses like VMEbus and the S-100 bus were used, but to reduce latency, modern memory buses are designed to connect directly to DRAM chips, and thus are defined by chip standards bodies such as JEDEC. Examples are the various generations of SDRAM, and serial point-to-point buses like SLDRAM and RDRAM.
Implementation details
Buses can be parallel buses, which carry data words in parallel on multiple wires, or serial buses, which carry data in bit-serial form. The addition of extra power and control connections, differential drivers, and data connections in each direction usually means that most serial buses have more conductors than the minimum of one used in 1-Wire and UNI/O. As data rates increase, the problems of timing skew, power consumption, electromagnetic interference and crosstalk across parallel buses become more and more difficult to circumvent. One partial solution to this problem has been to double pump the bus. Often, a serial bus can be operated at higher overall data rates than a parallel bus, despite having fewer electrical connections, because a serial bus inherently has no timing skew or crosstalk. USB, FireWire, and Serial ATA are examples of this. Multidrop connections do not work well for fast serial buses, so most modern serial buses use daisy-chain or hub designs.
The transition from parallel to serial buses was allowed by Moore's law which allowed for the incorporation of SerDes in integrated circuits which are used in computers.
Network connections such as Ethernet are not generally regarded as buses, although the difference is largely conceptual rather than practical. An attribute generally used to characterize a bus is that power is provided by the bus for the connected hardware. This emphasizes the busbar origins of bus architecture as supplying switched or distributed power. This excludes, as buses, schemes such as serial RS-232, parallel Centronics, IEEE 1284 interfaces and Ethernet, since these devices also needed separate power supplies. Universal Serial Bus devices may use the bus supplied power, but often use a separate power source. This distinction is exemplified by a telephone system with a connected modem, where the RJ11 connection and associated modulated signalling scheme is not considered a bus, and is analogous to an Ethernet connection. A phone line connection scheme is not considered to be a bus with respect to signals, but the Central Office uses buses with cross-bar switches for connections between phones.
However, this distinctionthat power is provided by the busis not the case in many avionic systems, where data connections such as ARINC 429, ARINC 629, MIL-STD-1553B (STANAG 3838), and EFABus (STANAG 3910) are commonly referred to as “data buses” or, sometimes, "databuses". Such avionic data buses are usually characterized by having several equipments or Line Replaceable Items/Units (LRI/LRUs) connected to a common, shared media. They may, as with ARINC 429, be simplex, i.e. have a single source LRI/LRU or, as with ARINC 629, MIL-STD-1553B, and STANAG 3910, be duplex, allow all the connected LRI/LRUs to act, at different times (half duplex), as transmitters and receivers of data.
The frequency or the speed of a bus is measured in Hz such as MHz and determines how many clock cycles there are per second; there can be one or more data transfers per clock cycle. If there is a single transfer per clock cycle it is known as Single Data Rate (SDR), and if there are two transfers per clock cycle it is known as Double Data Rate (DDR) although the use of signalling other than SDR is uncommon outside of RAM. An example of this is PCIe which uses SDR. Within each data transfer there can be multiple bits of data. This is described as the width of a bus which is the number of bits the bus can transfer per clock cycle and can be synonymous with the number of physical electrical conductors the bus has if each conductor transfers one bit at a time. The data rate in bits per second can be obtained by multiplying the number of bits per clock cycle times the frequency times the number of transfers per clock cycle. Alternatively a bus such as PCIe can use modulation or encoding such as PAM4 which groups 2 bits into symbols which are then transferred instead of the bits themselves, and allows for an increase in data transfer speed without increasing the frequency of the bus. The effective or real data transfer speed/rate may be lower due to the use of encoding that also allows for error correction such as 128/130b (b for bit) encoding. The data transfer speed is also known as the bandwidth.
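The data-rate arithmetic can be sketched directly (a toy helper; the 128b/130b figures match one PCIe 3.0 lane at 8 GT/s, a widely documented example):

```python
def lane_bandwidth_bytes(transfer_rate_hz, transfers_per_cycle=1,
                         bits_per_transfer=1, payload=128, frame=130):
    """Effective data rate of one serial lane in bytes per second:
    raw bit rate scaled down by the encoding overhead (here the
    128b/130b scheme used by PCIe 3.0 and later)."""
    raw_bits = transfer_rate_hz * transfers_per_cycle * bits_per_transfer
    return raw_bits * payload / frame / 8    # bits -> bytes

# One PCIe 3.0 lane: 8 GT/s with 128b/130b encoding ~= 985 MB/s
rate = lane_bandwidth_bytes(8e9)
assert round(rate / 1e6) == 985
```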
Bus multiplexing
The simplest system bus has completely separate input data lines, output data lines, and address lines.
To reduce cost, most microcomputers have a bidirectional data bus, re-using the same wires for input and output at different times.
Some processors use a dedicated wire for each bit of the address bus, data bus, and the control bus.
For example, the 64-pin STEbus is composed of 8 physical wires dedicated to the 8-bit data bus, 20 physical wires dedicated to the 20-bit address bus, 21 physical wires dedicated to the control bus, and 15 physical wires dedicated to various power buses.
Bus multiplexing requires fewer wires, which reduces costs in many early microprocessors and DRAM chips.
One common multiplexing scheme, address multiplexing, has already been mentioned.
Another multiplexing scheme re-uses the address bus pins as the data bus pins, an approach used by conventional PCI and the 8086.
The various "serial buses" can be seen as the ultimate limit of multiplexing, sending each of the address bits and each of the data bits, one at a time, through a single pin (or a single differential pair).
History
Over time, several groups of people worked on various computer bus standards, including the IEEE Bus Architecture Standards Committee (BASC), the IEEE "Superbus" study group, the open microprocessor initiative (OMI), the open microsystems initiative (OMI), the "Gang of Nine" that developed EISA, etc.
First generation
Early computer buses were bundles of wire that attached computer memory and peripherals. Anecdotally termed the "digit trunk" in the early Australian CSIRAC computer, they were named after electrical power buses, or busbars. Almost always, there was one bus for memory, and one or more separate buses for peripherals. These were accessed by separate instructions, with completely different timings and protocols.
One of the first complications was the use of interrupts. Early computer programs performed I/O by waiting in a loop for the peripheral to become ready. This was a waste of time for programs that had other tasks to do. Also, if the program attempted to perform those other tasks, it might take too long for the program to check again, resulting in loss of data. Engineers thus arranged for the peripherals to interrupt the CPU. The interrupts had to be prioritized, because the CPU can only execute code for one peripheral at a time, and some devices are more time-critical than others.
High-end systems introduced the idea of channel controllers, which were essentially small computers dedicated to handling the input and output of a given bus. IBM introduced these on the IBM 709 in 1958, and they became a common feature of their platforms. Other high-performance vendors like Control Data Corporation implemented similar designs. Generally, the channel controllers would do their best to run all of the bus operations internally, moving data when the CPU was known to be busy elsewhere if possible, and only using interrupts when necessary. This greatly reduced CPU load, and provided better overall system performance.
To provide modularity, memory and I/O buses can be combined into a unified system bus. In this case, a single mechanical and electrical system can be used to connect together many of the system components, or in some cases, all of them.
Later computer programs began to share memory common to several CPUs. Access to this memory bus had to be prioritized as well. The simple way to prioritize interrupts or bus access was with a daisy chain: signals naturally flow through the bus in physical or logical order, eliminating the need for complex scheduling.
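A minimal model of the daisy-chain idea, assuming (as the text describes) that priority simply follows position in the chain: the grant propagates device by device, and the first requester consumes it.

```python
# Minimal model of daisy-chain arbitration: a grant signal propagates
# down the chain, and the first requesting device (the one closest to
# the arbiter) consumes it; devices further down must wait.

def daisy_chain_grant(requests):
    """Given request flags in chain order, return the index granted the bus."""
    for position, requesting in enumerate(requests):
        if requesting:
            return position  # grant consumed here
    return None  # no device requested the bus

print(daisy_chain_grant([False, True, True]))  # 1 (priority follows position)
```

The scheme needs no central scheduler, but a device near the head of the chain can starve those behind it, which is one reason later buses adopted more elaborate arbitration.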
Minis and micros
Digital Equipment Corporation (DEC) further reduced cost for mass-produced minicomputers, and mapped peripherals into the memory bus, so that the input and output devices appeared to be memory locations. This was implemented in the Unibus of the PDP-11 around 1969.
Early microcomputer bus systems were essentially a passive backplane connected directly or through buffer amplifiers to the pins of the CPU. Memory and other devices would be added to the bus using the same address and data pins as the CPU itself used, connected in parallel. Communication was controlled by the CPU, which read and wrote data from the devices as if they were blocks of memory, using the same instructions, all timed by a central clock controlling the speed of the CPU. Still, devices interrupted the CPU by signaling on separate CPU pins.
For instance, a disk drive controller would signal the CPU that new data was ready to be read, at which point the CPU would move the data by reading the "memory location" that corresponded to the disk drive. Almost all early microcomputers were built in this fashion, starting with the S-100 bus in the Altair 8800 computer system.
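The disk-controller scenario can be sketched as a small simulation. The addresses, register layout, and class here are invented for illustration; real hardware achieves this with electrical address decoding, not software dispatch. The point is that the controller's registers occupy ordinary-looking addresses, so the CPU moves data with the same loads and stores it uses for memory.

```python
# Simplified simulation of memory-mapped I/O: the disk controller's
# status and data registers sit at (hypothetical) memory addresses, so
# polling the device looks like reading ordinary memory locations.

DISK_STATUS = 0xF000  # reads as 1 while new data is ready (invented address)
DISK_DATA   = 0xF001  # reading this fetches the next byte from the drive

class Bus:
    def __init__(self):
        self.ram = {}
        self.disk_buffer = [0x42, 0x43]  # bytes "arriving" from the drive

    def read(self, addr):
        if addr == DISK_STATUS:
            return 1 if self.disk_buffer else 0
        if addr == DISK_DATA:
            return self.disk_buffer.pop(0)
        return self.ram.get(addr, 0)  # everything else is plain memory

    def write(self, addr, value):
        self.ram[addr] = value

bus = Bus()
received = []
while bus.read(DISK_STATUS):        # poll the status "memory location"
    received.append(bus.read(DISK_DATA))
print(received)  # [66, 67]
```

In a real system an interrupt would replace the polling loop, but the data movement itself still goes through the same memory-read path.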
In some instances, most notably in the IBM PC, although a similar physical architecture can be employed, instructions to access peripherals (in and out) and memory (mov and others) were not made uniform at all, and still generate distinct CPU signals that could be used to implement a separate I/O bus.
These simple bus systems had a serious drawback when used for general-purpose computers. All the equipment on the bus had to talk at the same speed, as it shared a single clock.
Increasing the speed of the CPU becomes harder, because the speed of all the devices must increase as well. When it is not practical or economical to have all devices as fast as the CPU, the CPU must either enter a wait state, or work at a slower clock frequency temporarily, to talk to other devices in the computer. While acceptable in embedded systems, this problem was not tolerated for long in general-purpose, user-expandable computers.
Such bus systems are also difficult to configure when constructed from common off-the-shelf equipment. Typically each added expansion card requires many jumpers in order to set memory addresses, I/O addresses, interrupt priorities, and interrupt numbers.
Second generation
"Second generation" bus systems like NuBus addressed some of these problems. They typically separated the computer into two "worlds", the CPU and memory on one side, and the various devices on the other. A bus controller accepted data from the CPU side to be moved to the peripherals side, thus shifting the communications protocol burden from the CPU itself. This allowed the CPU and memory side to evolve separately from the device bus, or just "bus". Devices on the bus could talk to each other with no CPU intervention. This led to much better "real world" performance, but also required the cards to be much more complex. These buses also often addressed speed issues by being "bigger" in terms of the size of the data path, moving from 8-bit parallel buses in the first generation, to 16 or 32-bit in the second, as well as adding software setup (now standardised as Plug-n-play) to supplant or replace the jumpers.
However, these newer systems shared one quality with their earlier cousins, in that everyone on the bus had to talk at the same speed. While the CPU was now isolated and could increase speed, CPUs and memory continued to increase in speed much faster than the buses they talked to. The result was that the bus speeds were now much slower than what a modern system needed, and the machines were left starved for data. A particularly common example of this problem was that video cards quickly outran even the newer bus systems like PCI, and computers began to include AGP just to drive the video card. By 2004 AGP was outgrown again by high-end video cards and other peripherals and has been replaced by the new PCI Express bus.
An increasing number of external devices started employing their own bus systems as well. When disk drives were first introduced, they would be added to the machine with a card plugged into the bus, which is why computers have so many slots on the bus. But through the 1980s and 1990s, new systems like SCSI and IDE were introduced to serve this need, leaving most slots in modern systems empty. Today there are likely to be about five different buses in the typical machine, supporting various devices.
Third generation
"Third generation" buses have been emerging into the market since about 2001, including HyperTransport and InfiniBand. They also tend to be very flexible in terms of their physical connections, allowing them to be used both as internal buses, as well as connecting different machines together. This can lead to complex problems when trying to service different requests, so much of the work on these systems concerns software design, as opposed to the hardware itself. In general, these third generation buses tend to look more like a network than the original concept of a bus, with a higher protocol overhead needed than early systems, while also allowing multiple devices to use the bus at once.
Buses such as Wishbone have been developed by the open source hardware movement in an attempt to further remove legal and patent constraints from computer design.
The Compute Express Link (CXL) is an open standard interconnect for high-speed CPU-to-device and CPU-to-memory, designed to accelerate next-generation data center performance.
Examples of internal computer buses
Parallel
Asus Media Bus proprietary, used on some Asus Socket 7 motherboards
Computer Automated Measurement and Control (CAMAC) for instrumentation systems
Extended ISA or EISA
Industry Standard Architecture or ISA
Low Pin Count or LPC
MBus
MicroChannel or MCA
Multibus for industrial systems
NuBus or IEEE 1196
OPTi local bus used on early Intel 80486 motherboards.
Peripheral Component Interconnect or Conventional PCI
Parallel ATA (also known as Advanced Technology Attachment, ATA, PATA, IDE, EIDE, ATAPI, etc.), Hard disk drive, optical disk drive, tape drive peripheral attachment bus
S-100 bus or IEEE 696, used in the Altair 8800 and similar microcomputers
SBus or IEEE 1496
SS-50 Bus
Runway bus, a proprietary front side CPU bus developed by Hewlett-Packard for use by its PA-RISC microprocessor family
GSC/HSC, a proprietary peripheral bus developed by Hewlett-Packard for use by its PA-RISC microprocessor family
Precision Bus, a proprietary bus developed by Hewlett-Packard for use by its HP3000 computer family
STEbus
STD Bus (for STD-80 [8-bit] and STD32 [16-/32-bit])
Unibus, a proprietary bus developed by Digital Equipment Corporation for their PDP-11 and early VAX computers.
Q-Bus, a proprietary bus developed by Digital Equipment Corporation for their PDP and later VAX computers.
VESA Local Bus or VLB or VL-bus
VMEbus, the VERSAmodule Eurocard bus
PC/104
PC/104-Plus
PCI-104
PCI/104-Express
PCI/104
Zorro II and Zorro III, used in Amiga computer systems
Serial
1-Wire
HyperTransport
I²C
I3C
SLIMbus
PCI Express or PCIe
Serial ATA (SATA), Hard disk drive, solid-state drive, optical disc drive, tape drive peripheral attachment bus
Serial Peripheral Interface (SPI) bus
UNI/O
SMBus
Advanced eXtensible Interface
M-PHY
Examples of external computer buses
Parallel
HIPPI (High Performance Parallel Interface)
IEEE-488 (also known as GPIB, General-Purpose Interface Bus, and HPIB, Hewlett-Packard Instrumentation Bus)
PC Card, previously known as PCMCIA, much used in laptop computers and other portables, but fading with the introduction of USB and built-in network and modem connections
Serial
Many field buses are serial data buses (not to be confused with the parallel "data bus" section of a system bus or expansion card), several of which use the RS-485 electrical characteristics and then specify their own protocol and connector:
CAN bus ("Controller Area Network")
Modbus
ARINC 429
MIL-STD-1553
IEEE 1355
Other serial buses include:
Camera Link
eSATA
ExpressCard
IEEE 1394 interface (FireWire)
RS-232
Thunderbolt
USB
Examples of internal/external computer buses
Futurebus
InfiniBand
PCI Express External Cabling
QuickRing
Scalable Coherent Interface (SCI)
Small Computer System Interface (SCSI), Hard disk drive and tape drive peripheral attachment bus
Serial Attached SCSI (SAS) and other serial SCSI buses
Thunderbolt
Yapbus, a proprietary bus developed for the Pixar Image Computer
See also
Address decoder
Bus contention
Bus error
Bus mastering
Communication endpoint
Control bus
Crossbar switch
Memory address
Front-side bus (FSB)
External Bus Interface (EBI)
Harvard architecture
Master/slave (technology)
Network on chip
List of device bandwidths
List of network buses
Software bus
References
External links
Computer hardware buses and slots pinouts with brief descriptions
Digital electronics
Motherboard
Communication interfaces
Cane toad

The cane toad (Rhinella marina), also known as the giant neotropical toad or marine toad, is a large, terrestrial true toad native to South and mainland Central America, but which has been introduced to various islands throughout Oceania and the Caribbean, as well as Northern Australia. It is a member of the genus Rhinella, which includes many true toad species found throughout Central and South America, but it was formerly assigned to the genus Bufo.
A fossil toad (specimen UCMP 41159) from the La Venta fauna of the late Miocene in Colombia is morphologically indistinguishable from modern cane toads from northern South America. It was discovered in a floodplain deposit, which suggests that the habitat preferences of R. marina have long been for open areas. The cane toad is a prolific breeder; females lay single-clump spawns with thousands of eggs. Its reproductive success is partly because of opportunistic feeding: it has a diet, unusual among anurans, of both dead and living matter. Adults average in length; the largest recorded specimen had a snout–vent length of .
The cane toad has poison glands, and the tadpoles are highly toxic to most animals if ingested. Its toxic skin can kill many animals, both wild and domesticated, and cane toads are particularly dangerous to dogs. Because of its voracious appetite, the cane toad has been introduced to many regions of the Pacific and the Caribbean islands as a method of agricultural pest control. The common name of the species is derived from its use against the cane beetle (Dermolepida albohirtum), which damages sugar cane. The cane toad is now considered a pest and an invasive species in many of its introduced regions. The 1988 film Cane Toads: An Unnatural History documented the trials and tribulations of the introduction of cane toads in Australia.
Taxonomy
Historically, the cane toad was used to eradicate pests from sugarcane, giving rise to its common name. The cane toad has many other common names, including "giant toad" and "marine toad"; the former refers to its size, and the latter to the binomial name, R. marina. It was one of many species described by Carl Linnaeus in his 18th-century work Systema Naturae (1758). Linnaeus based the specific epithet marina on an illustration by Dutch zoologist Albertus Seba, who mistakenly believed the cane toad to inhabit both terrestrial and marine environments. Other common names include "giant neotropical toad", "Dominican toad", "giant marine toad", and "South American cane toad". In Trinidadian English, they are commonly called crapaud, the French word for toad.
Rhinella is now considered to constitute a distinct genus of its own, thus changing the scientific name of the cane toad. In this case, the specific name marinus (masculine) changes to marina (feminine) to conform with the rules of gender agreement set out by the International Code of Zoological Nomenclature, changing the binomial name from Bufo marinus to Rhinella marina; the binomial Rhinella marinus was subsequently introduced as a synonym through a misspelling by Pramuk, Robertson, Sites, and Noonan (2008). Though controversial (many traditional herpetologists still use Bufo marinus), the binomial Rhinella marina is gaining acceptance with such bodies as the IUCN, Encyclopaedia of Life, and Amphibian Species of the World, and an increasing number of scientific publications are adopting its usage.
Since 2016, cane toad populations native to Mesoamerica and northwestern South America are sometimes considered to be a separate species, Rhinella horribilis.
In Australia, the adults may be confused with large native frogs from the genera Limnodynastes, Cyclorana, and Mixophyes. These species can be distinguished from the cane toad by the absence of large parotoid glands behind their eyes and the lack of a ridge between the nostril and the eye. Cane toads have been confused with the giant burrowing frog (Heleioporus australiacus), because both are large and warty in appearance; however, the latter can be readily distinguished from the former by its vertical pupils and its silver-grey (as opposed to gold) irises. Juvenile cane toads may be confused with species of the genus Uperoleia, but their adult counterparts can be distinguished by the lack of bright colouring on the groin and thighs.
In the United States, the cane toad closely resembles many bufonid species. In particular, it could be confused with the southern toad (Bufo terrestris), which can be distinguished by the presence of two bulbs in front of the parotoid glands.
Taxonomy and evolution
The cane toad genome has been sequenced and certain Australian academics believe this will help in understanding how the toad can quickly evolve to adapt to new environments, the workings of its infamous toxin, and hopefully provide new options for halting this species' march across Australia and other places it has spread as an invasive pest.
Studies of the genome confirm its evolutionary origins in the northern part of South America and its close genetic relation to Rhinella diptycha and other similar species of the genus. Recent studies suggest that R. marina diverged between 2.75 and 9.40 million years ago.
A recent split in the species into further subspecies may have occurred approximately 2.7 million years ago following the isolation of population groups by the rising Venezuelan Andes.
Description
Considered the largest species in the Bufonidae, the cane toad is very large; the females are significantly longer than males, reaching a typical length of , with a maximum of . Larger toads tend to be found in areas of lower population density. They have a life expectancy of 10 to 15 years in the wild, and can live considerably longer in captivity, with one specimen reportedly surviving for 35 years.
The skin of the cane toad is dry and warty. Distinct ridges above the eyes run down the snout. Individual cane toads can be grey, yellowish, red-brown, or olive-brown, with varying patterns. A large parotoid gland lies behind each eye. The ventral surface is cream-coloured and may have blotches in shades of black or brown. The pupils are horizontal and the irises golden. The toes have a fleshy webbing at their base, and the fingers are free of webbing.
Typically, juvenile cane toads have smooth, dark skin, although some specimens have a red wash. Juveniles lack the adults' large parotoid glands, so they are usually less poisonous. The tadpoles are small and uniformly black, and are bottom-dwellers, tending to form schools. Tadpoles range from in length.
Ecology, behaviour and life history
The common name "marine toad" and the scientific name Rhinella marina suggest a link to marine life, but cane toads do not live in the sea. However, laboratory experiments suggest that tadpoles can tolerate salt concentrations equivalent to 15% of seawater (~5.4‰), and recent field observations found living tadpoles and toadlets at salinities of 27.5‰ on Coiba Island, Panama. The cane toad inhabits open grassland and woodland, and has displayed a "distinct preference" for areas modified by humans, such as gardens and drainage ditches. In their native habitats, the toads can be found in subtropical forests, although dense foliage tends to limit their dispersal.
The cane toad begins life as an egg, which is laid as part of long strings of jelly in water. A female lays 8,000–25,000 eggs at once and the strings can stretch up to in length. The black eggs are covered by a membrane and their diameter is about . The rate at which an egg grows into a tadpole increases with temperature. Tadpoles typically hatch within 48 hours, but the period can vary from 14 hours to almost a week. This process usually involves thousands of tadpoles—which are small, black, and have short tails—forming into groups. Between 12 and 60 days are needed for the tadpoles to develop into juveniles, with four weeks being typical. Similarly to their adult counterparts, eggs and tadpoles are toxic to many animals.
When they emerge, toadlets typically are about in length, and grow rapidly. While the rate of growth varies by region, time of year, and sex, an average initial growth rate of per day is seen, followed by an average rate of per day. Growth typically slows once the toads reach sexual maturity. This rapid growth is important for their survival; in the period between metamorphosis and subadulthood, the young toads lose the toxicity that protected them as eggs and tadpoles, but have yet to fully develop the parotoid glands that produce bufotoxin. Only an estimated 0.5% of cane toads reach adulthood, in part because they lack this key defense—but also due to tadpole cannibalism. Although cannibalism does occur in the native population in South America, the rapid evolution occurring in the unnaturally large population in Australia has produced tadpoles 30x more likely to be interested in cannibalising their siblings, and 2.6x more likely to actually do so. They have also evolved to shorten their tadpole phase in response to the presence of older tadpoles. These changes are likely genetic, although no genetic basis has been determined.
As with rates of growth, the point at which the toads become sexually mature varies across different regions. In New Guinea, sexual maturity is reached by female toads with a snout–vent length between , while toads in Panama achieve maturity when they are between in length. In tropical regions, such as their native habitats, breeding occurs throughout the year, but in subtropical areas, breeding occurs only during warmer periods that coincide with the onset of the wet season.
The cane toad is estimated to have a critical thermal maximum of and a minimum of around . The ranges can change due to adaptation to the local environment. Cane toads from some populations can adjust their thermal tolerance within a few hours of encountering low temperatures. The toad is able to rapidly acclimate to the cold using physiological plasticity, though there is also evidence that more northerly populations of cane toads in the United States are better cold-adapted than more southerly populations. These adaptations have allowed the cane toad to establish invasive populations across the world. The toad's ability to rapidly acclimate to thermal changes suggests that current models may underestimate the potential range of habitats that the toad can populate. The cane toad has a high tolerance to water loss; some can withstand a 52.6% loss of body water, allowing them to survive outside tropical environments.
Diet
Most frogs identify prey by movement, and vision appears to be the primary method by which the cane toad detects prey; however, it can also locate food using its sense of smell. They eat a wide range of material; in addition to the normal prey of small rodents, other small mammals, reptiles, other amphibians, birds, and even bats and a range of invertebrates (such as ants, beetles, earwigs, dragonflies, grasshoppers, true bugs, crustaceans, and gastropods), they also eat plants, dog food, cat food, feces, and household refuse.
Defences
The skin of the adult cane toad is toxic, as well as the enlarged parotoid glands behind the eyes, and other glands across its back. When the toad is threatened, its glands secrete a milky-white fluid known as bufotoxin. Components of bufotoxin are toxic to many animals; even human deaths have been recorded due to the consumption of cane toads. Dogs are especially prone to be poisoned by licking or biting toads. Pets showing excessive drooling, extremely red gums, head-shaking, crying, loss of coordination, and/or convulsions require immediate veterinary attention.
Bufotenin, one of the chemicals excreted by the cane toad, is classified as a schedule 9 drug under Australian law, alongside heroin and LSD. The effects of bufotenin are thought to be similar to those of mild poisoning; the stimulation, which includes mild hallucinations, lasts less than an hour. As the cane toad excretes bufotenin in small amounts, and other toxins in relatively large quantities, toad licking could result in serious illness or death.
In addition to releasing toxin, the cane toad is capable of inflating its lungs, puffing up, and lifting its body off the ground to appear taller and larger to a potential predator.
Since 2011, experimenters in the Kimberley region of Western Australia have used poisonous sausages containing toad meat in an attempt to protect native animals from cane toads' deadly impact. The Western Australian Department of Environment and Conservation, along with the University of Sydney, developed these sausage-shaped baits as a tool in order to train native animals not to eat the toads. By blending bits of toad with a nausea-inducing chemical, the baits train the animals to stay away from the amphibians.
Young cane toads that are not lethal upon ingestion have also been used to teach native predators, namely yellow-spotted monitors, to avoid the species. In total, 200,000 metamorphs, tadpoles, and eggs were released in areas ahead of inevitable invasion fronts. Following invasion by wild cane toads, yellow-spotted monitors in control areas bereft of the "teacher toads" were virtually wiped out, but experimental areas still contained substantial populations of yellow-spotted monitors.
Predators
Many species prey on the cane toad and its tadpoles in its native habitat, including the broad-snouted caiman (Caiman latirostris), the banded cat-eyed snake (Leptodeira annulata), eels (family Anguillidae), various species of killifish, and Paraponera clavata (bullet ants).
Predators outside the cane toad's native range include the rock flagtail (Kuhlia rupestris), some species of catfish (order Siluriformes), some species of ibis (subfamily Threskiornithinae), the whistling kite (Haliastur sphenurus), the rakali (Hydromys chrysogaster), the black rat (Rattus rattus) and the water monitor (Varanus salvator). The tawny frogmouth (Podargus strigoides) and the Papuan frogmouth (Podargus papuensis) have been reported as feeding on cane toads; some Australian crows (Corvus spp.) have also learned strategies allowing them to feed on cane toads, such as using their beak to flip toads onto their backs. Kookaburras also prey on the amphibians.
Opossums of the genus Didelphis likely can eat cane toads with impunity. Meat ants are unaffected by the cane toads' toxins, so are able to kill them. The cane toad's normal response to attack is to stand still and let its toxin kill or repel the attacker, which allows the ants to attack and eat the toad. Saw-shelled turtles have also been seen successfully and safely eating cane toads.
In Australia, rakali (Australian water rats) learnt within two years how to eat cane toads safely. They select the largest toads, turn them over, remove the poisonous gallbladder, and eat the heart and other organs with "surgical precision". They remove the toxic skin and eat the thigh muscle. Other animals, such as crows and kites, turn cane toads inside out and eat the non-poisonous organs, thus also avoiding the skin.
Distribution
The cane toad is native to the Americas, and its range stretches from the Rio Grande Valley in South Texas to the central Amazon and southeastern Peru, and some of the continental islands near Venezuela (such as Trinidad and Tobago). This area encompasses both tropical and semiarid environments. The density of the cane toad is significantly lower within its native distribution than in places where it has been introduced. In South America, the density was recorded to be 20 adults per of shoreline, 1 to 2% of the density in Australia.
As an introduced species
The cane toad has been introduced to many regions of the world—particularly the Pacific—for the biological control of agricultural pests. These introductions have generally been well documented, and the cane toad may be one of the most studied of any introduced species.
Before the early 1840s, the cane toad had been introduced into Martinique and Barbados, from French Guiana and Guyana. An introduction to Jamaica was made in 1844 in an attempt to reduce the rat population. Despite its failure to control the rodents, the cane toad was introduced to Puerto Rico in the early 20th century in the hope that it would counter a beetle infestation ravaging the sugarcane plantations. The Puerto Rican scheme was successful and halted the economic damage caused by the beetles, prompting scientists in the 1930s to promote it as an ideal solution to agricultural pests.
As a result, many countries in the Pacific region emulated the lead of Puerto Rico and introduced the toad in the 1930s. Introduced populations are in Australia, Florida, Papua New Guinea, the Philippines, the Ogasawara, Ishigaki Island and the Daitō Islands of Japan, Taiwan Nantou Caotun, most Caribbean islands, Fiji and many other Pacific islands, including Hawaii. Since then, the cane toad has become a pest in many host countries, and poses a serious threat to native animals.
Australia
Following the apparent success of the cane toad in eating the beetles threatening the sugarcane plantations of Puerto Rico, and the fruitful introductions into Hawaiʻi and the Philippines, a strong push was made for the cane toad to be released in Australia to negate the pests ravaging the Queensland cane fields. As a result, 102 toads were collected from Hawaiʻi and brought to Australia. Queensland's sugar scientists released the toad into cane fields in August 1935. After this initial release, the Commonwealth Department of Health decided to ban future introductions until a study was conducted into the feeding habits of the toad. The study was completed in 1936 and the ban lifted, when large-scale releases were undertaken; by March 1937, 62,000 toadlets had been released into the wild. The toads became firmly established in Queensland, increasing exponentially in number and extending their range into the Northern Territory and New South Wales. In 2010, one was found on the far western coast in Broome, Western Australia.
However, the toad was generally unsuccessful in reducing the targeted grey-backed cane beetles (Dermolepida albohirtum), in part because the cane fields provided insufficient shelter for the predators during the day, and in part because the beetles live at the tops of sugar cane—and cane toads are not good climbers. Since its original introduction, the cane toad has had a particularly marked effect on Australian biodiversity. The population of a number of native predatory reptiles has declined, such as the varanid lizards Varanus mertensi, V. mitchelli, and V. panoptes, the land snakes Pseudechis australis and Acanthophis antarcticus, and the crocodile species Crocodylus johnstoni; in contrast, the population of the agamid lizard Amphibolurus gilberti—known to be a prey item of V. panoptes—has increased. Meat ants, however, are able to kill cane toads. The cane toad has also been linked to decreases in northern quolls in the southern region of Kakadu National Park and even their local extinction.
Caribbean
The cane toad was introduced to various Caribbean islands to counter a number of pests infesting local crops. While it was able to establish itself on some islands, such as Barbados, Jamaica, Hispaniola and Puerto Rico, other introductions, such as in Cuba before 1900 and in 1946, and on the islands of Dominica and Grand Cayman, were unsuccessful.
The earliest recorded introductions were to Barbados and Martinique. The Barbados introductions were focused on the biological control of pests damaging the sugarcane crops, and while the toads became abundant, they have done even less to control the pests than in Australia. The toad was introduced to Martinique from French Guiana before 1944 and became established. Today, they reduce the mosquito and mole cricket populations. A third introduction to the region occurred in 1884, when toads appeared in Jamaica, reportedly imported from Barbados to help control the rodent population. While they had no significant effect on the rats, they nevertheless became well established. Other introductions include the release on Antigua—possibly before 1916, although this initial population may have died out by 1934 and been reintroduced at a later date—and Montserrat, which had an introduction before 1879 that led to the establishment of a solid population, which was apparently sufficient to survive the Soufrière Hills volcano eruption in 1995.
In 1920, the cane toad was introduced into Puerto Rico to control the populations of white grub (Phyllophaga spp.), a sugarcane pest. Before this, the pests were manually collected by humans, so the introduction of the toad eliminated labor costs. A second group of toads was imported in 1923, and by 1932, the cane toad was well established. The population of white grubs dramatically decreased, and this was attributed to the cane toad at the annual meeting of the International Sugar Cane Technologists in Puerto Rico. However, there may have been other factors. The six-year period after 1931—when the cane toad was most prolific, and the white grub had a dramatic decline—had the highest-ever rainfall for Puerto Rico. Nevertheless, the cane toad was assumed to have controlled the white grub; this view was reinforced by a Nature article titled "Toads save sugar crop", and this led to large-scale introductions throughout many parts of the Pacific.
The cane toad has been spotted in Carriacou and Dominica, the latter appearance occurring in spite of the failure of the earlier introductions. On September 8, 2013, the cane toad was also discovered on the island of New Providence in the Bahamas.
The Philippines
The cane toad was first introduced deliberately into the Philippines in 1930 as a biological control agent of pests in sugarcane plantations, after the success of the experimental introductions into Puerto Rico. It subsequently became the most ubiquitous amphibian in the islands. It still retains the common name of bakî or kamprag in the Visayan languages, a corruption of 'American frog', referring to its origins. It is also commonly known as "bullfrog" in Philippine English.
Fiji
The cane toad was introduced into Fiji to combat insects that infested sugarcane plantations. The introduction of the cane toad to the region was first suggested in 1933, following the successes in Puerto Rico and Hawaiʻi. After considering the possible side effects, the national government of Fiji decided to release the toad in 1953, and 67 specimens were subsequently imported from Hawaiʻi. Once the toads were established, a 1963 study concluded that, because the toad's diet included both harmful and beneficial invertebrates, it was "economically neutral". Today, the cane toad can be found on all major islands in Fiji, although they tend to be smaller than their counterparts in other regions.
New Guinea
The cane toad was introduced into New Guinea to control the hawk moth larvae eating sweet potato crops. The first release occurred in 1937 using toads imported from Hawaiʻi, with a second release the same year using specimens from the Australian mainland. Evidence suggests a third release in 1938, consisting of toads being used for human pregnancy tests—many species of toad were found to be effective for this task, and were employed for about 20 years after the discovery was announced in 1948. Initial reports argued the toads were effective in reducing cutworm levels, and sweet potato yields were thought to be improving. As a result, these first releases were followed by further distributions across much of the region, although their effectiveness on other crops, such as cabbages, has been questioned; when the toads were released at Wau, the cabbages provided insufficient shelter and the toads rapidly left the immediate area for the superior shelter offered by the forest. A similar situation had previously arisen in the Australian cane fields, but this experience was either unknown or ignored in New Guinea. The cane toad has since become abundant in rural and urban areas.
United States
The cane toad naturally exists in South Texas, but attempts (both deliberate and accidental) have been made to introduce the species to other parts of the country. These include introductions to Florida and to Hawaiʻi, as well as largely unsuccessful introductions to Louisiana.
Initial releases into Florida failed. Attempted introductions before 1936 and 1944, intended to control sugarcane pests, were unsuccessful as the toads failed to proliferate. Later attempts failed in the same way. However, the toad gained a foothold in the state after an accidental release by an importer at Miami International Airport in 1957, and deliberate releases by animal dealers in 1963 and 1964 established the toad in other parts of Florida. Today, the cane toad is well established in the state, from the Keys to north of Tampa, and it is gradually extending further northward. In Florida, the toad is regarded as a threat to native species and pets, so much so that the Florida Fish and Wildlife Conservation Commission recommends that residents kill them.
Around 150 cane toads were introduced to Oʻahu in Hawaiʻi in 1932, and the population swelled to 105,517 after 17 months. The toads were sent to the other islands, and more than 100,000 toads were distributed by July 1934; eventually over 600,000 were transported.
Uses
Other than the use as a biological control for pests, the cane toad has been employed in a number of commercial and noncommercial applications. Traditionally, within the toad's natural range in South America, the Embera-Wounaan would "milk" the toads for their toxin, which was then employed as an arrow poison. The toxins may have been used as an entheogen by the Olmec people. The toad has been hunted as a food source in parts of Peru, and eaten after the careful removal of the skin and parotoid glands. When properly prepared, the meat of the toad is considered healthy and a source of omega-3 fatty acids. More recently, the toad's toxins have been used in a number of new ways: bufotenin has been used in Japan as an aphrodisiac and a hair restorer, and in cardiac surgery in China to lower the heart rates of patients. New research has suggested that the cane toad's poison may have some applications in treating prostate cancer.
Other modern applications of the cane toad include pregnancy testing, as pets, laboratory research, and the production of leather goods. Pregnancy testing was conducted in the mid-20th century by injecting urine from a woman into a male toad's lymph sacs, and if spermatozoa appeared in the toad's urine, the patient was deemed to be pregnant. The tests using toads were faster than those employing mammals; the toads were easier to raise, and, although the initial 1948 discovery employed Bufo arenarum for the tests, it soon became clear that a variety of anuran species were suitable, including the cane toad. As a result, toads were employed in this task for around 20 years. As a laboratory animal, the cane toad has numerous advantages: they are plentiful, and easy and inexpensive to maintain and handle. The use of the cane toad in experiments started in the 1950s, and by the end of the 1960s, large numbers were being collected and exported to high schools and universities. Since then, a number of Australian states have introduced or tightened importation regulations.
There are several commercial uses for dead cane toads. Cane toad skin is made into leather and novelty items. Stuffed cane toads, posed and accessorised, are merchandised at souvenir shops for tourists. Attempts have been made to produce fertiliser from toad carcasses.
External links
Species Profile – Cane Toad (Rhinella marina), National Invasive Species Information Center, United States National Agricultural Library. Lists general information and resources for cane toad.
Cement

A cement is a binder, a chemical substance used for construction that sets, hardens, and adheres to other materials to bind them together. Cement is seldom used on its own, but rather to bind sand and gravel (aggregate) together. Cement mixed with fine aggregate produces mortar for masonry, or with sand and gravel, produces concrete. Concrete is the most widely used material in existence and is behind only water as the planet's most-consumed resource.
Cements used in construction are usually inorganic, often lime- or calcium silicate-based, and are either hydraulic or less commonly non-hydraulic, depending on the ability of the cement to set in the presence of water (see hydraulic and non-hydraulic lime plaster).
Hydraulic cements (e.g., Portland cement) set and become adhesive through a chemical reaction between the dry ingredients and water. The chemical reaction results in mineral hydrates that are not very water-soluble. This allows setting in wet conditions or under water and further protects the hardened material from chemical attack. The chemical process for hydraulic cement was found by ancient Romans who used volcanic ash (pozzolana) with added lime (calcium oxide).
Non-hydraulic cement (less common) does not set in wet conditions or under water. Rather, it sets as it dries and reacts with carbon dioxide in the air. It is resistant to attack by chemicals after setting.
The word "cement" can be traced back to the Ancient Roman term opus caementicium, used to describe masonry resembling modern concrete that was made from crushed rock with burnt lime as binder. The volcanic ash and pulverized brick supplements that were added to the burnt lime, to obtain a hydraulic binder, were later referred to as cementum, cimentum, cäment, and cement. In modern times, organic polymers are sometimes used as cements in concrete.
World production of cement is about 4.4 billion tonnes per year (2021, estimation), of which about half is made in China, followed by India and Vietnam.
The cement production process is responsible for nearly 8% (2018) of global CO2 emissions, which includes heating raw materials in a cement kiln by fuel combustion and release of the CO2 stored in the calcium carbonate (calcination process). Its hydrated products, such as concrete, gradually reabsorb atmospheric CO2 (carbonation process), compensating for approximately 30% of the initial emissions.
Chemistry
Cement materials can be classified into two distinct categories: hydraulic cements and non-hydraulic cements according to their respective setting and hardening mechanisms. Hydraulic cement setting and hardening involves hydration reactions and therefore requires water, while non-hydraulic cements only react with a gas and can directly set under air.
Hydraulic cement
By far the most common type of cement is hydraulic cement, which hardens by hydration of the clinker minerals when water is added. Hydraulic cements (such as Portland cement) are made of a mixture of silicates and oxides, the four main mineral phases of the clinker, abbreviated in the cement chemist notation, being:
C3S: alite (3CaO·SiO2);
C2S: belite (2CaO·SiO2);
C3A: tricalcium aluminate (3CaO·Al2O3) (historically, and still occasionally, called celite);
C4AF: brownmillerite (4CaO·Al2O3·Fe2O3).
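The phase list above lends itself to a small lookup table. The following Python snippet is purely illustrative (the name CLINKER_PHASES is our own, not from any cement-industry library):

```python
# The four main clinker phases in cement chemist notation (CCN),
# where C = CaO, S = SiO2, A = Al2O3 and F = Fe2O3.
CLINKER_PHASES = {
    "C3S":  ("alite",                "3CaO·SiO2"),
    "C2S":  ("belite",               "2CaO·SiO2"),
    "C3A":  ("tricalcium aluminate", "3CaO·Al2O3"),
    "C4AF": ("brownmillerite",       "4CaO·Al2O3·Fe2O3"),
}

for ccn, (mineral, formula) in CLINKER_PHASES.items():
    print(f"{ccn:4} {mineral:22} {formula}")
```

The CCN abbreviations compress the oxide formulas; for example, C3S simply means three CaO units per SiO2 unit.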
The silicates are responsible for the cement's mechanical properties — the tricalcium aluminate and brownmillerite are essential for the formation of the liquid phase during the sintering (firing) process of clinker at high temperature in the kiln. The chemistry of these reactions is not completely clear and is still the object of research.
First, the limestone (calcium carbonate) is burned to remove its carbon, producing lime (calcium oxide) in what is known as a calcination reaction. This single chemical reaction accounts for a major share of global carbon dioxide emissions.
CaCO3 -> CaO + CO2
The lime reacts with silicon dioxide to produce dicalcium silicate and tricalcium silicate.
2CaO + SiO2 -> 2CaO.SiO2
3CaO + SiO2 -> 3CaO.SiO2
The lime also reacts with aluminium oxide to form tricalcium aluminate.
3CaO + Al2O3 -> 3CaO.Al2O3
In the last step, calcium oxide, aluminium oxide, and ferric oxide react together to form brownmillerite.
4CaO + Al2O3 + Fe2O3 -> 4CaO.Al2O3.Fe2O3
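The stoichiometry of the calcination step above fixes how much CO2 leaves the kiln per unit of limestone, before any fuel is burned. A rough sketch in Python, using standard atomic weights and ignoring fuel emissions and impurities in the raw meal:

```python
# CO2 released by calcining limestone: CaCO3 -> CaO + CO2.
# Standard atomic weights; a stoichiometric sketch only.
M_Ca, M_C, M_O = 40.078, 12.011, 15.999

M_CaCO3 = M_Ca + M_C + 3 * M_O   # ≈ 100.09 g/mol
M_CO2 = M_C + 2 * M_O            # ≈ 44.01 g/mol

co2_fraction = M_CO2 / M_CaCO3   # mass fraction of CaCO3 lost as CO2
print(f"{co2_fraction:.3f} kg of CO2 per kg of CaCO3 calcined")  # ≈ 0.440
```

About 44% of the limestone's mass is released as CO2 by the reaction itself, which is why calcination is such a large contributor to cement's process emissions.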
Non-hydraulic cement
A less common form of cement is non-hydraulic cement, such as slaked lime (calcium oxide mixed with water), which hardens by carbonation in contact with carbon dioxide, which is present in the air (~ 412 vol. ppm ≃ 0.04 vol. %). First, calcium oxide (lime) is produced from calcium carbonate (limestone or chalk) by calcination at temperatures above 825 °C (1,517 °F) for about 10 hours at atmospheric pressure:
CaCO3 -> CaO + CO2
The calcium oxide is then spent (slaked) by mixing it with water to make slaked lime (calcium hydroxide):
CaO + H2O -> Ca(OH)2
Once the excess water is completely evaporated (this process is technically called setting), the carbonation starts:
Ca(OH)2 + CO2 -> CaCO3 + H2O
This reaction is slow, because the partial pressure of carbon dioxide in the air is low (~ 0.4 millibar). The carbonation reaction requires that the dry cement be exposed to air, so the slaked lime is a non-hydraulic cement and cannot be used under water. This process is called the lime cycle.
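The mass balance of the lime cycle follows from the three reactions above. This illustrative Python sketch assumes pure phases and standard atomic weights:

```python
# Mass balance for the lime cycle, per the reactions above:
#   calcination: CaCO3 -> CaO + CO2
#   slaking:     CaO + H2O -> Ca(OH)2
#   carbonation: Ca(OH)2 + CO2 -> CaCO3 + H2O
# Idealised sketch: pure phases, standard atomic weights.
M = {"Ca": 40.078, "C": 12.011, "O": 15.999, "H": 1.008}

M_CaO   = M["Ca"] + M["O"]            # ≈ 56.08 g/mol
M_H2O   = 2 * M["H"] + M["O"]         # ≈ 18.02 g/mol
M_CaOH2 = M_CaO + M_H2O               # ≈ 74.09 g/mol
M_CO2   = M["C"] + 2 * M["O"]         # ≈ 44.01 g/mol

water_per_kg_lime = M_H2O / M_CaO     # kg water to slake 1 kg CaO
co2_per_kg_slaked = M_CO2 / M_CaOH2   # kg CO2 absorbed per kg Ca(OH)2

print(f"slaking water: {water_per_kg_lime:.3f} kg per kg CaO")
print(f"CO2 uptake:    {co2_per_kg_slaked:.3f} kg per kg Ca(OH)2")
```

In the idealised cycle, carbonation reabsorbs exactly the CO2 released during calcination, so the chemical CO2 balance of pure lime closes to zero; in practice, the fuel burned to reach the calcination temperature breaks this symmetry.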
History
Perhaps the earliest known occurrence of cement is from twelve million years ago, when a natural deposit of cement formed after a seam of oil shale, lying adjacent to a bed of limestone, burned through natural causes. These ancient deposits were investigated in the 1960s and 1970s.
Alternatives to cement used in antiquity
Cement, chemically speaking, is a product that includes lime as the primary binding ingredient, but is far from the first material used for cementation. The Babylonians and Assyrians used bitumen (asphalt or pitch) to bind together burnt brick or alabaster slabs. In Ancient Egypt, stone blocks were cemented together with a mortar made of sand and roughly burnt gypsum (CaSO4 · 2H2O), which yields plaster of Paris and often contained calcium carbonate (CaCO3) as an impurity.
Ancient Greece and Rome
Lime (calcium oxide) was used on Crete and by the Ancient Greeks. There is evidence that the Minoans of Crete used crushed potsherds as an artificial pozzolan for hydraulic cement. Nobody knows who first discovered that a combination of hydrated non-hydraulic lime and a pozzolan produces a hydraulic mixture (see also: Pozzolanic reaction), but such concrete was used by the Greeks, specifically the Ancient Macedonians, and three centuries later on a large scale by Roman engineers.
The Greeks used volcanic tuff from the island of Thera as their pozzolan and the Romans used crushed volcanic ash (activated aluminium silicates) with lime. This mixture could set under water, which also increased its resistance to corrosion. The material was called pozzolana from the town of Pozzuoli, west of Naples, where volcanic ash was extracted. In the absence of pozzolanic ash, the Romans used powdered brick or pottery as a substitute, and they may have used crushed tiles for this purpose before discovering natural sources near Rome. The huge dome of the Pantheon in Rome and the massive Baths of Caracalla are examples of ancient structures made from these concretes, many of which still stand. The vast system of Roman aqueducts also made extensive use of hydraulic cement. Roman concrete was rarely used on the outside of buildings. The normal technique was to use brick facing material as the formwork for an infill of mortar mixed with an aggregate of broken pieces of stone, brick, potsherds, recycled chunks of concrete, or other building rubble.
Mesoamerica
Lightweight concrete was designed and used for the construction of structural elements by the pre-Columbian builders who lived in a very advanced civilisation in El Tajin, in the Mexican state of Veracruz. A detailed study of the composition of the aggregate and binder shows that the aggregate was pumice and the binder was a pozzolanic cement made with volcanic ash and lime.
Middle Ages
Any preservation of this knowledge in literature from the Middle Ages is unknown, but medieval masons and some military engineers actively used hydraulic cement in structures such as canals, fortresses, harbors, and shipbuilding facilities. A mixture of lime mortar and aggregate with brick or stone facing material was used in the Eastern Roman Empire as well as in the West into the Gothic period. The German Rhineland continued to use hydraulic mortar throughout the Middle Ages, having local pozzolana deposits called trass.
16th century
Tabby is a building material made from oyster shell lime, sand, and whole oyster shells to form a concrete. The Spanish introduced it to the Americas in the sixteenth century.
18th century
The technical knowledge for making hydraulic cement was formalized by French and British engineers in the 18th century.
John Smeaton made an important contribution to the development of cements while planning the construction of the third Eddystone Lighthouse (1755–59) in the English Channel now known as Smeaton's Tower. He needed a hydraulic mortar that would set and develop some strength in the twelve-hour period between successive high tides. He performed experiments with combinations of different limestones and additives including trass and pozzolanas and did exhaustive market research on the available hydraulic limes, visiting their production sites, and noted that the "hydraulicity" of the lime was directly related to the clay content of the limestone used to make it. Smeaton was a civil engineer by profession, and took the idea no further.
In the South Atlantic seaboard of the United States, tabby relying on the oyster-shell middens of earlier Native American populations was used in house construction from the 1730s to the 1860s.
In Britain particularly, good quality building stone became ever more expensive during a period of rapid growth, and it became a common practice to construct prestige buildings from the new industrial bricks, and to finish them with a stucco to imitate stone. Hydraulic limes were favored for this, but the need for a fast set time encouraged the development of new cements. Most famous was Parker's "Roman cement". This was developed by James Parker in the 1780s, and finally patented in 1796. It was, in fact, nothing like material used by the Romans, but was a "natural cement" made by burning septaria – nodules that are found in certain clay deposits, and that contain both clay minerals and calcium carbonate. The burnt nodules were ground to a fine powder. This product, made into a mortar with sand, set in 5–15 minutes. The success of "Roman cement" led other manufacturers to develop rival products by burning artificial hydraulic lime cements of clay and chalk.
Roman cement quickly became popular but was largely replaced by Portland cement in the 1850s.
19th century
Apparently unaware of Smeaton's work, the Frenchman Louis Vicat identified the same principle in the first decade of the nineteenth century. Vicat went on to devise a method of combining chalk and clay into an intimate mixture and, burning this, produced an "artificial cement" in 1817 that is considered the "principal forerunner" of Portland cement; Edgar Dobbs of Southwark had patented a cement of this kind in 1811.
In Russia, Egor Cheliev created a new binder by mixing lime and clay. His results were published in 1822 in his book A Treatise on the Art to Prepare a Good Mortar published in St. Petersburg. A few years later in 1825, he published another book, which described various methods of making cement and concrete, and the benefits of cement in the construction of buildings and embankments.
Portland cement, the most common type of cement in general use around the world as a basic ingredient of concrete, mortar, stucco, and non-speciality grout, was developed in England in the mid 19th century, and usually originates from limestone. James Frost produced what he called "British cement" in a similar manner around the same time, but did not obtain a patent until 1822. In 1824, Joseph Aspdin patented a similar material, which he called Portland cement, because the render made from it was similar in color to the prestigious Portland stone quarried on the Isle of Portland, Dorset, England. However, Aspdin's cement was nothing like modern Portland cement, but was a first step in its development, called a proto-Portland cement. Joseph Aspdin's son William Aspdin had left his father's company and, in his cement manufacturing, apparently accidentally produced calcium silicates in the 1840s, a middle step in the development of Portland cement. William Aspdin's innovation was counterintuitive for manufacturers of "artificial cements", because they required more lime in the mix (a problem for his father), a much higher kiln temperature (and therefore more fuel), and the resulting clinker was very hard and rapidly wore down the millstones, which were the only available grinding technology of the time. Manufacturing costs were therefore considerably higher, but the product set reasonably slowly and developed strength quickly, thus opening up a market for use in concrete. The use of concrete in construction grew rapidly from 1850 onward, and was soon the dominant use for cements. Thus Portland cement began its predominant role. Isaac Charles Johnson further refined the production of meso-Portland cement (middle stage of development) and claimed he was the real father of Portland cement.
Setting time and "early strength" are important characteristics of cements. Hydraulic limes, "natural" cements, and "artificial" cements all rely on their belite (2 CaO · SiO2, abbreviated as C2S) content for strength development. Belite develops strength slowly. Because they were burned at comparatively low temperatures, they contained no alite (3 CaO · SiO2, abbreviated as C3S), which is responsible for early strength in modern cements. The first cement to consistently contain alite was made by William Aspdin in the early 1840s: this was what we call today "modern" Portland cement. Because of the air of mystery with which William Aspdin surrounded his product, others (e.g., Vicat and Johnson) have claimed precedence in this invention, but recent analysis of both his concrete and raw cement have shown that William Aspdin's product made at Northfleet, Kent was a true alite-based cement. However, Aspdin's methods were "rule-of-thumb": Vicat is responsible for establishing the chemical basis of these cements, and Johnson established the importance of sintering the mix in the kiln.
In the US the first large-scale use of cement was Rosendale cement, a natural cement mined from a massive deposit of dolomite discovered in the early 19th century near Rosendale, New York. Rosendale cement was extremely popular for the foundation of buildings (e.g., Statue of Liberty, Capitol Building, Brooklyn Bridge) and lining water pipes.
Sorel cement, or magnesia-based cement, was patented in 1867 by the Frenchman Stanislas Sorel. It was stronger than Portland cement but its poor water resistance (leaching) and corrosive properties (pitting corrosion due to the presence of leachable chloride anions and the low pH (8.5–9.5) of its pore water) limited its use as reinforced concrete for building construction.
The next development in the manufacture of Portland cement was the introduction of the rotary kiln. It produced a clinker mixture that was both stronger, because more alite (C3S) is formed at the higher temperature it achieved (1450 °C), and more homogeneous. Because raw material is constantly fed into a rotary kiln, it allowed a continuous manufacturing process to replace lower capacity batch production processes.
20th century
Calcium aluminate cements were patented in 1908 in France by Jules Bied for better resistance to sulfates. Also in 1908, Thomas Edison experimented with pre-cast concrete in houses in Union, N.J.
In the US, after World War One, the long curing time of at least a month for Rosendale cement made it unpopular for constructing highways and bridges, and many states and construction firms turned to Portland cement. Because of the switch to Portland cement, by the end of the 1920s only one of the 15 Rosendale cement companies had survived. But in the early 1930s, builders discovered that, while Portland cement set faster, it was not as durable, especially for highways—to the point that some states stopped building highways and roads with cement. Bertrain H. Wait, an engineer whose company had helped construct New York City's Catskill Aqueduct, was impressed with the durability of Rosendale cement, and came up with a blend of both Rosendale and Portland cements that had the good attributes of both. It was highly durable and had a much faster setting time. Wait convinced the New York Commissioner of Highways to construct an experimental section of highway near New Paltz, New York, using one sack of Rosendale to six sacks of Portland cement. It was a success, and for decades the Rosendale-Portland cement blend was used in concrete highway and concrete bridge construction.
Cementitious materials have been used as a nuclear waste immobilizing matrix for more than a half-century. Technologies of waste cementation have been developed and deployed at industrial scale in many countries. Cementitious wasteforms require a careful selection and design process adapted to each specific type of waste to satisfy the strict waste acceptance criteria for long-term storage and disposal.
Types
Modern development of hydraulic cement began with the start of the Industrial Revolution (around 1800), driven by three main needs:
Hydraulic cement render (stucco) for finishing brick buildings in wet climates
Hydraulic mortars for masonry construction of harbor works, etc., in contact with sea water
Development of strong concretes
Modern cements are often Portland cement or Portland cement blends, but other cement blends are used in some industrial settings.
Portland cement
Portland cement, a form of hydraulic cement, is by far the most common type of cement in general use around the world. This cement is made by heating limestone (calcium carbonate) with other materials (such as clay) at high temperature in a kiln, in a process known as calcination that liberates a molecule of carbon dioxide from the calcium carbonate to form calcium oxide, or quicklime, which then chemically combines with the other materials in the mix to form calcium silicates and other cementitious compounds. The resulting hard substance, called 'clinker', is then ground with a small amount of gypsum (CaSO4 · 2H2O) into a powder to make ordinary Portland cement, the most commonly used type of cement (often referred to as OPC).
Portland cement is a basic ingredient of concrete, mortar, and most non-specialty grout. The most common use for Portland cement is to make concrete. Portland cement may be grey or white.
Portland cement blend
Portland cement blends are often available as inter-ground mixtures from cement producers, but similar formulations are often also mixed from the ground components at the concrete mixing plant.
Portland blast-furnace slag cement, or blast furnace cement (ASTM C595 and EN 197-1 nomenclature respectively), contains up to 95% ground granulated blast furnace slag, with the rest Portland clinker and a little gypsum. All compositions produce high ultimate strength, but as slag content is increased, early strength is reduced, while sulfate resistance increases and heat evolution diminishes. Used as an economic alternative to Portland sulfate-resisting and low-heat cements.
Portland-fly ash cement contains up to 40% fly ash under ASTM standards (ASTM C595), or 35% under EN standards (EN 197–1). The fly ash is pozzolanic, so that ultimate strength is maintained. Because fly ash addition allows a lower concrete water content, early strength can also be maintained. Where good quality cheap fly ash is available, this can be an economic alternative to ordinary Portland cement.
Portland pozzolan cement includes fly ash cement, since fly ash is a pozzolan, but also includes cements made from other natural or artificial pozzolans. In countries where volcanic ashes are available (e.g., Italy, Chile, Mexico, the Philippines), these cements are often the most common form in use. The maximum replacement ratios are generally defined as for Portland-fly ash cement.
Portland silica fume cement. Addition of silica fume can yield exceptionally high strengths, and cements containing 5–20% silica fume are occasionally produced, with 10% being the maximum allowed addition under EN 197–1. However, silica fume is more usually added to Portland cement at the concrete mixer.
Masonry cements are used for preparing bricklaying mortars and stuccos, and must not be used in concrete. They are usually complex proprietary formulations containing Portland clinker and a number of other ingredients that may include limestone, hydrated lime, air entrainers, retarders, waterproofers, and coloring agents. They are formulated to yield workable mortars that allow rapid and consistent masonry work. Subtle variations of masonry cement in North America are plastic cements and stucco cements. These are designed to produce a controlled bond with masonry blocks.
Expansive cements contain, in addition to Portland clinker, expansive clinkers (usually sulfoaluminate clinkers), and are designed to offset the effects of drying shrinkage normally encountered in hydraulic cements. This cement can make concrete for floor slabs (up to 60 m square) without contraction joints.
White blended cements may be made using white clinker (containing little or no iron) and white supplementary materials such as high-purity metakaolin. Colored cements serve decorative purposes. Some standards allow the addition of pigments to produce colored Portland cement. Other standards (e.g., ASTM) do not allow pigments in Portland cement, and colored cements are sold as blended hydraulic cements.
Very finely ground cements are made from cement mixed with sand, slag, or other pozzolan-type minerals, all ground together extremely finely. Such cements can have the same physical characteristics as normal cement but with 50% less cement, particularly because there is more surface area for the chemical reaction. Even with intensive grinding they can use up to 50% less energy (and thus produce less carbon emissions) to fabricate than ordinary Portland cements.
Other
Pozzolan-lime cements are mixtures of ground pozzolan and lime. These are the cements the Romans used, and are present in surviving Roman structures like the Pantheon in Rome. They develop strength slowly, but their ultimate strength can be very high. The hydration products that produce strength are essentially the same as those in Portland cement.
Slag-lime cements—ground granulated blast-furnace slag—are not hydraulic on their own, but are "activated" by addition of alkalis, most economically using lime. They are similar to pozzolan lime cements in their properties. Only granulated slag (i.e., water-quenched, glassy slag) is effective as a cement component.
Supersulfated cements contain about 80% ground granulated blast furnace slag, 15% gypsum or anhydrite and a little Portland clinker or lime as an activator. They produce strength by formation of ettringite, with strength growth similar to a slow Portland cement. They exhibit good resistance to aggressive agents, including sulfate.
Calcium aluminate cements are hydraulic cements made primarily from limestone and bauxite. The active ingredients are monocalcium aluminate CaAl2O4 (CaO · Al2O3 or CA in cement chemist notation, CCN) and mayenite Ca12Al14O33 (12 CaO · 7 Al2O3, or C12A7 in CCN). Strength forms by hydration to calcium aluminate hydrates. They are well-adapted for use in refractory (high-temperature resistant) concretes, e.g., for furnace linings.
Calcium sulfoaluminate cements are made from clinkers that include ye'elimite (Ca4(AlO2)6SO4, or C4A3S̄ in cement chemist's notation) as a primary phase. They are used in expansive cements, in ultra-high early strength cements, and in "low-energy" cements. Hydration produces ettringite, and specialized physical properties (such as expansion or rapid reaction) are obtained by adjustment of the availability of calcium and sulfate ions. Their use as a low-energy alternative to Portland cement has been pioneered in China, where several million tonnes per year are produced. Energy requirements are lower because of the lower kiln temperatures required for reaction, and the lower amount of limestone (which must be endothermically decarbonated) in the mix. In addition, the lower limestone content and lower fuel consumption leads to CO2 emissions around half those associated with Portland clinker. However, SO2 emissions are usually significantly higher.
"Natural" cements, corresponding to certain cements of the pre-Portland era, are produced by burning argillaceous limestones at moderate temperatures. The level of clay components in the limestone (around 30–35%) is such that large amounts of belite (the low-early strength, high-late strength mineral in Portland cement) are formed without the formation of excessive amounts of free lime. As with any natural material, such cements have highly variable properties.
Geopolymer cements are made from mixtures of water-soluble alkali metal silicates, and aluminosilicate mineral powders such as fly ash and metakaolin.
Polymer cements are made from organic chemicals that polymerise. Producers often use thermoset materials. While they are often significantly more expensive, they can give a waterproof material that has useful tensile strength.
Sorel cement is a hard, durable cement made by combining magnesium oxide and a magnesium chloride solution.
Fiber mesh cement or fiber reinforced concrete is cement that is made up of fibrous materials like synthetic fibers, glass fibers, natural fibers, and steel fibers. This type of mesh is distributed evenly throughout the wet concrete. The purpose of fiber mesh is to reduce water loss from the concrete as well as enhance its structural integrity. When used in plasters, fiber mesh increases cohesiveness, tensile strength, and impact resistance, and reduces shrinkage; ultimately, the main purpose of these combined properties is to reduce cracking.
Electric cement is proposed to be made by recycling cement from demolition wastes in an electric arc furnace as part of a steelmaking process. The recycled cement is intended to be used to replace part or all of the lime used in steelmaking, resulting in a slag-like material that is similar in mineralogy to Portland cement, eliminating most of the associated carbon emissions.
Setting, hardening and curing
Cement starts to set when mixed with water, which causes a series of hydration chemical reactions. The constituents slowly hydrate and the mineral hydrates solidify and harden. The interlocking of the hydrates gives cement its strength. Contrary to popular belief, hydraulic cement does not set by drying out — proper curing requires maintaining the appropriate moisture content necessary for the hydration reactions during the setting and the hardening processes. If hydraulic cements dry out during the curing phase, the resulting product can be insufficiently hydrated and significantly weakened. Curing temperatures of at least 5 °C and no more than 30 °C are recommended. Concrete at a young age must be protected against water evaporation due to direct insolation, elevated temperature, low relative humidity and wind.
The interfacial transition zone (ITZ) is a region of the cement paste around the aggregate particles in concrete. In this zone, a gradual transition in microstructural features occurs. The zone can be up to 35 micrometres wide, and other studies have reported widths of up to 50 micrometres. The average content of unreacted clinker phases decreases and porosity increases towards the aggregate surface. Similarly, the content of ettringite increases in the ITZ.
Safety issues
Bags of cement routinely have health and safety warnings printed on them because not only is cement highly alkaline, but the setting process is exothermic. As a result, wet cement is strongly caustic (pH = 13.5) and can easily cause severe skin burns if not promptly washed off with water. Similarly, dry cement powder in contact with mucous membranes can cause severe eye or respiratory irritation. Some trace elements, such as chromium, from impurities naturally present in the raw materials used to produce cement may cause allergic dermatitis. Reducing agents such as ferrous sulfate (FeSO4) are often added to cement to convert the carcinogenic hexavalent chromate (CrO42−) into trivalent chromium (Cr3+), a less toxic chemical species. Cement users should also wear appropriate gloves and protective clothing.
Cement industry in the world
In 2010, world production of hydraulic cement totalled around 3.3 billion tonnes. The top three producers were China with 1,800, India with 220, and the United States with 63.5 million tonnes, together accounting for over half of the world total by the world's three most populous states.
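As a quick arithmetic sketch of the "over half" claim (assuming a 2010 world total of roughly 3,300 Mt, a figure borrowed from the global consumption data quoted later in this article rather than stated here for production):

```python
# Hedged check: do the top three producers exceed half of world output?
# The world total of ~3,300 Mt is an assumption taken from the 2010
# consumption figure quoted later in the article.
top_three_mt = {"China": 1800, "India": 220, "United States": 63.5}
world_total_mt = 3300  # assumed, not stated for production

combined = sum(top_three_mt.values())  # 2083.5 Mt
share = combined / world_total_mt      # roughly 0.63

print(f"combined: {combined} Mt, share: {share:.0%}")
assert share > 0.5  # consistent with "over half the world total"
```

Under this assumption the three states together supply about 63% of world output, comfortably over half.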
For the world capacity to produce cement in 2010, the situation was similar with the top three states (China, India, and the US) accounting for just under half the world total capacity.
Over 2011 and 2012, global consumption continued to climb, rising to 3585 Mt in 2011 and 3736 Mt in 2012, while annual growth rates eased to 8.3% and 4.2%, respectively.
China, representing an increasing share of world cement consumption, remains the main engine of global growth. By 2012, Chinese demand was recorded at 2160 Mt, representing 58% of world consumption. Annual growth rates, which reached 16% in 2010, appear to have softened, slowing to 5–6% over 2011 and 2012, as China's economy targets a more sustainable growth rate.
Outside of China, worldwide consumption climbed by 4.4% to 1462 Mt in 2010, by 5% to 1535 Mt in 2011, and by 2.7% to 1576 Mt in 2012.
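The consumption figures and the quoted growth rates above can be cross-checked for internal consistency (a rough sketch; small discrepancies come from rounding in the source figures):

```python
# Cross-check the quoted world consumption figures (in Mt) against the
# quoted year-on-year growth rates.
world = {2011: 3585, 2012: 3736}
ex_china = {2010: 1462, 2011: 1535, 2012: 1576}

# 2012 world consumption implied by the 2011 level and the 4.2% rate
implied_2012 = world[2011] * 1.042  # ~3735.6 Mt, matching the quoted 3736
print(round(implied_2012, 1))

# Ex-China growth rates implied by the absolute figures
growth_2011 = ex_china[2011] / ex_china[2010] - 1  # ~5.0%
growth_2012 = ex_china[2012] / ex_china[2011] - 1  # ~2.7%
print(f"{growth_2011:.1%}, {growth_2012:.1%}")
```

The implied values reproduce the quoted 5% and 2.7% growth rates, so the series is internally consistent.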
Iran is now the 3rd largest cement producer in the world and has increased its output by over 10% from 2008 to 2011. Because of climbing energy costs in Pakistan and other major cement-producing countries, Iran is in a unique position as a trading partner, utilizing its own surplus petroleum to power clinker plants. Now a top producer in the Middle-East, Iran is further increasing its dominant position in local markets and abroad.
The performance in North America and Europe over the 2010–12 period contrasted strikingly with that of China, as the global financial crisis evolved into a sovereign debt crisis and recession for many economies in this region. Cement consumption levels for this region fell by 1.9% in 2010 to 445 Mt, recovered by 4.9% in 2011, then dipped again by 1.1% in 2012.
The performance in the rest of the world, which includes many emerging economies in Asia, Africa and Latin America and which represented some 1020 Mt of cement demand in 2010, was positive and more than offset the declines in North America and Europe. Annual consumption growth was recorded at 7.4% in 2010, moderating to 5.1% and 4.3% in 2011 and 2012, respectively.
As at year-end 2012, the global cement industry consisted of 5673 cement production facilities, including both integrated and grinding, of which 3900 were located in China and 1773 in the rest of the world.
Total cement capacity worldwide was recorded at 5245 Mt in 2012, with 2950 Mt located in China and 2295 Mt in the rest of the world.
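The facility counts and capacity figures split cleanly between China and the rest of the world, as a simple check confirms:

```python
# Verify that the China / rest-of-world splits add up to the stated totals.
facilities = {"China": 3900, "rest of world": 1773}   # year-end 2012
capacity_mt = {"China": 2950, "rest of world": 2295}  # Mt, 2012

total_facilities = sum(facilities.values())
total_capacity = sum(capacity_mt.values())

assert total_facilities == 5673  # stated total production facilities
assert total_capacity == 5245    # stated total capacity in Mt
print(total_facilities, total_capacity)
```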
China
"For the past 18 years, China consistently has produced more cement than any other country in the world. [...] (However,) China's cement export peaked in 1994 with 11 million tonnes shipped out and has been in steady decline ever since. Only 5.18 million tonnes were exported out of China in 2002. Offered at $34 a ton, Chinese cement is pricing itself out of the market as Thailand is asking as little as $20 for the same quality."
In 2006, it was estimated that China manufactured 1.235 billion tonnes of cement, which was 44% of the world total cement production. "Demand for cement in China is expected to advance 5.4% annually and exceed 1 billion tonnes in 2008, driven by slowing but healthy growth in construction expenditures. Cement consumed in China will amount to 44% of global demand, and China will remain the world's largest national consumer of cement by a large margin."
In 2010, 3.3 billion tonnes of cement was consumed globally. Of this, China accounted for 1.8 billion tonnes.
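China's quoted shares of world cement demand are mutually consistent with the absolute figures given in this section and the preceding one (a rough check; rounding explains small differences):

```python
# Check China's quoted shares of world cement demand against the
# absolute figures given in the text.
share_2010 = 1.8e9 / 3.3e9   # 1.8 of 3.3 billion tonnes -> ~55%
share_2012 = 2160 / 3736     # 2160 of 3736 Mt -> ~58%, as quoted
world_2006 = 1.235e9 / 0.44  # world total implied by the 44% 2006 share

print(f"{share_2010:.0%}, {share_2012:.0%}, ~{world_2006 / 1e9:.1f} bn t")
```

The 2012 figures reproduce the quoted 58% share exactly, and the 2006 share implies a world production of roughly 2.8 billion tonnes that year.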
Environmental impacts
Cement manufacture causes environmental impacts at all stages of the process. These include emissions of airborne pollution in the form of dust, gases, noise and vibration when operating machinery and during blasting in quarries, and damage to countryside from quarrying. Equipment to reduce dust emissions during quarrying and manufacture of cement is widely used, and equipment to trap and separate exhaust gases is coming into increased use. Environmental protection also includes the re-integration of quarries into the countryside after they have been closed down by returning them to nature or re-cultivating them.
CO2 emissions
Carbon concentration in cement spans from ≈5% in cement structures to ≈8% in the case of roads in cement. Cement manufacturing releases CO2 into the atmosphere both directly, when calcium carbonate is heated to produce lime and carbon dioxide, and indirectly, through the use of energy whose production involves the emission of CO2. The cement industry produces about 10% of global human-made CO2 emissions, of which 60% is from the chemical process and 40% from burning fuel. A Chatham House study from 2018 estimates that the 4 billion tonnes of cement produced annually account for 8% of worldwide CO2 emissions.
Nearly 900 kg of CO2 are emitted for every 1000 kg of Portland cement produced. In the European Union, the specific energy consumption for the production of cement clinker has been reduced by approximately 30% since the 1970s. This reduction in primary energy requirements is equivalent to approximately 11 million tonnes of coal per year, with corresponding benefits in reduction of CO2 emissions. This accounts for approximately 5% of anthropogenic CO2 emissions.
The majority of carbon dioxide emissions in the manufacture of Portland cement (approximately 60%) are produced from the chemical decomposition of limestone to lime, an ingredient in Portland cement clinker. These emissions may be reduced by lowering the clinker content of cement. They can also be reduced by alternative fabrication methods, such as intergrinding cement with sand, slag or other pozzolan-type minerals to a very fine powder.
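The roughly 60% process share can be reproduced from the calcination stoichiometry, CaCO3 → CaO + CO2, together with the ~900 kg per tonne figure quoted above. The clinker CaO content of about 65% used here is a typical assumed value, not a figure from the text:

```python
# Hedged back-of-envelope check of the "60% from the chemical process"
# figure. CaCO3 -> CaO + CO2; molar masses in g/mol.
M_CACO3, M_CAO, M_CO2 = 100.09, 56.08, 44.01

cao_fraction = 0.65  # assumed typical CaO content of Portland clinker
co2_per_kg_clinker = cao_fraction * M_CO2 / M_CAO  # ~0.51 kg CO2 per kg

total_co2 = 0.9  # ~900 kg CO2 per 1000 kg of cement, from the text
process_share = co2_per_kg_clinker / total_co2     # ~57%, close to 60%

print(f"{co2_per_kg_clinker:.2f} kg/kg, process share ≈ {process_share:.0%}")
```

The sketch gives a process share of about 57%, in reasonable agreement with the quoted 60% given the assumed clinker composition.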
To reduce the transport of heavier raw materials and to minimize the associated costs, it is more economical to build cement plants closer to the limestone quarries rather than to the consumer centers.
Carbon capture and storage is about to be trialed, but its financial viability is uncertain.
CO2 absorption
Hydrated products of Portland cement, such as concrete and mortars, slowly reabsorb atmospheric CO2, which was released during calcination in the kiln. This natural process, the reverse of calcination, is called carbonation. As it depends on CO2 diffusion into the bulk of the concrete, its rate depends on many parameters, such as environmental conditions and the surface area exposed to the atmosphere. Carbonation is particularly significant at the later stages of the concrete's life, after demolition and crushing of the debris. It has been estimated that, over the whole life cycle of cement products, nearly 30% of the atmospheric CO2 generated by cement production can be reabsorbed.
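Combined with the ~900 kg per tonne emission figure quoted earlier, the 30% reabsorption estimate implies the following rough life-cycle balance (a sketch using only the two figures from the text):

```python
# Rough scale of life-cycle carbonation uptake implied by the text:
# ~900 kg CO2 emitted per tonne of cement, ~30% eventually reabsorbed.
emitted_kg_per_t = 900
reabsorbed_fraction = 0.30

reabsorbed_kg_per_t = emitted_kg_per_t * reabsorbed_fraction  # 270 kg
net_kg_per_t = emitted_kg_per_t - reabsorbed_kg_per_t         # 630 kg

print(reabsorbed_kg_per_t, net_kg_per_t)
```

On these figures, each tonne of cement would eventually take back roughly 270 kg of CO2, leaving a net life-cycle emission of about 630 kg.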
The carbonation process is considered a mechanism of concrete degradation. It reduces the pH of concrete, which promotes corrosion of the reinforcement steel. However, as CaCO3, the product of Ca(OH)2 carbonation, occupies a greater volume, the porosity of the concrete is reduced. This increases the strength and hardness of the concrete.
There are proposals to reduce the carbon footprint of hydraulic cement by adopting non-hydraulic cement, such as lime mortar, for certain applications. Lime mortar reabsorbs some of the emitted CO2 during hardening, and has a lower energy requirement in production than Portland cement.
A few other attempts to increase absorption of carbon dioxide include cements based on magnesium (Sorel cement).
Heavy metal emissions in the air
In some circumstances, mainly depending on the origin and the composition of the raw materials used, the high-temperature calcination process of limestone and clay minerals can release into the atmosphere gases and dust rich in volatile heavy metals, of which thallium, cadmium and mercury are the most toxic. Heavy metals (Tl, Cd, Hg, ...) and also selenium are often found as trace elements in common metal sulfides (pyrite (FeS2), zinc blende (ZnS), galena (PbS), ...) present as secondary minerals in most of the raw materials. Environmental regulations exist in many countries to limit these emissions. As of 2011 in the United States, cement kilns are "legally allowed to pump more toxins into the air than are hazardous-waste incinerators."
Heavy metals present in the clinker
The presence of heavy metals in the clinker arises both from the natural raw materials and from the use of recycled by-products or alternative fuels. The high pH prevailing in the cement porewater (12.5 < pH < 13.5) limits the mobility of many heavy metals by decreasing their solubility and increasing their sorption onto the cement mineral phases. Nickel, zinc and lead are commonly found in cement in non-negligible concentrations. Chromium may also directly arise as natural impurity from the raw materials or as secondary contamination from the abrasion of hard chromium steel alloys used in the ball mills when the clinker is ground. As chromate (CrO42−) is toxic and may cause severe skin allergies at trace concentration, it is sometimes reduced into trivalent Cr(III) by addition of ferrous sulfate (FeSO4).
Use of alternative fuels and by-products materials
A cement plant consumes 3 to 6 GJ of fuel per tonne of clinker produced, depending on the raw materials and the process used. Most cement kilns today use coal and petroleum coke as primary fuels, and to a lesser extent natural gas and fuel oil. Selected waste and by-products with recoverable calorific value can be used as fuels in a cement kiln (referred to as co-processing), replacing a portion of conventional fossil fuels, like coal, if they meet strict specifications. Selected waste and by-products containing useful minerals such as calcium, silica, alumina, and iron can be used as raw materials in the kiln, replacing raw materials such as clay, shale, and limestone. Because some materials have both useful mineral content and recoverable calorific value, the distinction between alternative fuels and raw materials is not always clear. For example, sewage sludge has a low but significant calorific value, and burns to give ash containing minerals useful in the clinker matrix. Scrap automobile and truck tires are useful in cement manufacturing, as they have high calorific value and the iron embedded in tires is useful as a feedstock.
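To put the 3–6 GJ per tonne figure in perspective, it can be converted into a coal equivalent. The hard-coal energy content of ~29 GJ/t used here is an assumed typical value, not stated in the text:

```python
# Convert the quoted fuel demand of 3-6 GJ per tonne of clinker into an
# approximate coal equivalent. 29 GJ/t for hard coal is an assumed value.
COAL_GJ_PER_TONNE = 29.0

for fuel_gj in (3.0, 6.0):
    coal_t = fuel_gj / COAL_GJ_PER_TONNE
    print(f"{fuel_gj} GJ/t clinker ≈ {coal_t:.2f} t coal per t clinker")
```

Under this assumption, each tonne of clinker requires on the order of 0.10–0.21 tonnes of coal equivalent, which is why fuel substitution has such a large effect on plant economics.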
Clinker is manufactured by heating raw materials inside the main burner of a kiln to a temperature of 1,450 °C. The flame reaches temperatures of 1,800 °C. The material remains for 12–15 seconds at 1,200 °C and for 5–8 seconds in the flame at 1,800 °C (periods also referred to as residence times). These characteristics of a clinker kiln offer numerous benefits: they ensure complete destruction of organic compounds and total neutralization of acid gases, sulphur oxides and hydrogen chloride. Furthermore, heavy metal traces are embedded in the clinker structure, and no by-products, such as ash or residues, are produced.
The EU cement industry already uses more than 40% fuels derived from waste and biomass in supplying the thermal energy to the grey clinker making process. Although the choice of these so-called alternative fuels (AF) is typically cost driven, other factors are becoming more important. Use of alternative fuels provides benefits for both society and the company: CO2 emissions are lower than with fossil fuels, waste can be co-processed in an efficient and sustainable manner, and the demand for certain virgin materials can be reduced. Yet there are large differences in the share of alternative fuels used between the European Union (EU) member states. The societal benefits could be improved if more member states increase their alternative fuels share. The Ecofys study assessed the barriers and opportunities for further uptake of alternative fuels in 14 EU member states, and found that local factors constrain the market potential to a much larger extent than the technical and economic feasibility of the cement industry itself.
Reduced-footprint cement
Growing environmental concerns and the increasing cost of fossil fuels have resulted, in many countries, in a sharp reduction of the resources needed to produce cement, as well as of effluents (dust and exhaust gases). Reduced-footprint cement is a cementitious material that meets or exceeds the functional performance capabilities of Portland cement. Various techniques are under development. One is geopolymer cement, which incorporates recycled materials, thereby reducing consumption of raw materials, water, and energy. Another approach is to reduce or eliminate the production and release of damaging pollutants and greenhouse gases, particularly CO2. Recycling old cement in electric arc furnaces is another approach. Also, a team at the University of Edinburgh has developed the 'DUPE' process based on the microbial activity of Sporosarcina pasteurii, a bacterium precipitating calcium carbonate, which, when mixed with sand and urine, can produce mortar blocks with a compressive strength 70% of that of concrete.
See also
Asphalt concrete
Calcium aluminate cements
Cement chemist notation
Cement render
Cenocell
Energetically modified cement (EMC)
Fly ash
Geopolymer cement
Portland cement
Rosendale cement
Sulfate attack in concrete and mortar
Sulfur concrete
Tiocem
List of countries by cement production
References
Further reading
Friedrich W. Locher: Cement: Principles of Production and Use, Düsseldorf, Germany: Verlag Bau + Technik GmbH, 2006.
Javed I. Bhatty, F. MacGregor Miller, Steven H. Kosmatka (editors): Innovations in Portland Cement Manufacturing, SP400, Portland Cement Association, Skokie, Illinois, U.S., 2004.
"Why cement emissions matter for climate change" Carbon Brief 2018
External links
Building materials
Cat

The cat (Felis catus), also referred to as the domestic cat, is a small domesticated carnivorous mammal. It is the only domesticated species of the family Felidae. Advances in archaeology and genetics have shown that the domestication of the cat occurred in the Near East around 7500 BC. It is commonly kept as a pet and farm cat, but also ranges freely as a feral cat avoiding human contact. It is valued by humans for companionship and its ability to kill vermin. Its retractable claws are adapted to killing small prey species such as mice and rats. It has a strong, flexible body, quick reflexes, and sharp teeth, and its night vision and sense of smell are well developed. It is a social species, but a solitary hunter and a crepuscular predator. Cat communication includes vocalizations—including meowing, purring, trilling, hissing, growling, and grunting—as well as body language. It can hear sounds too faint or too high in frequency for human ears, such as those made by small mammals. It secretes and perceives pheromones.
Female domestic cats can have kittens from spring to late autumn in temperate zones and throughout the year in equatorial regions, with litter sizes often ranging from two to five kittens. Domestic cats are bred and shown at events as registered pedigreed cats, a hobby known as cat fancy. Animal population control of cats may be achieved by spaying and neutering, but their proliferation and the abandonment of pets has resulted in large numbers of feral cats worldwide, contributing to the extinction of bird, mammal, and reptile species.
The domestic cat was the second most popular pet in the United States, with 95.6 million cats owned and around 42 million households owning at least one cat. In the United Kingdom, 26% of adults have a cat, with an estimated population of 10.9 million pet cats. Worldwide, there were an estimated 220 million owned and 480 million stray cats.
Etymology and naming
The origin of the English word cat, Old English catt, is thought to be the Late Latin word cattus, which was first used at the beginning of the 6th century. The Late Latin word may be derived from an unidentified African language. A Nubian word for 'wildcat' and its Nobiin cognate are possible sources or cognates.
The forms might also have derived from an ancient Germanic word that was absorbed into Latin and then into Greek, Syriac, and Arabic. Alternatively, the word may derive from Germanic and Northern European languages and ultimately be borrowed from Uralic: compare the Northern Sámi and Hungarian words for 'female stoat' (the Hungarian word also meaning 'lady'), from a Proto-Uralic term for 'female (of a furred animal)'.
The English puss, extended as pussy and pussycat, is attested from the 16th century and may have been introduced from Dutch or Low German, related to similar Swedish and Norwegian forms. Similar forms exist in Lithuanian and Irish. The etymology of this word is unknown, but it may have arisen from a sound used to attract a cat.
A male cat is called a tom or tomcat (or a gib, if neutered). A female is called a queen (or sometimes a molly, if spayed). A juvenile cat is referred to as a kitten. In Early Modern English, the word kitten was interchangeable with the now-obsolete word catling. A group of cats can be referred to as a clowder, a glaring, or a colony.
Taxonomy
The scientific name Felis catus was proposed by Carl Linnaeus in 1758 for a domestic cat. Felis catus domesticus was proposed by Johann Christian Polycarp Erxleben in 1777. Felis daemon proposed by Konstantin Satunin in 1904 was a black cat from the Transcaucasus, later identified as a domestic cat.
In 2003, the International Commission on Zoological Nomenclature ruled that the domestic cat is a distinct species, namely Felis catus. In 2007, the modern domesticated subspecies F. silvestris catus sampled worldwide was considered to have probably descended from the African wildcat (F. lybica), following results of phylogenetic research. In 2017, the IUCN Cat Classification Taskforce followed the recommendation of the ICZN in regarding the domestic cat as a distinct species, Felis catus.
Evolution
The domestic cat is a member of the Felidae, a family whose members last shared a common ancestor roughly 10–15 million years ago. The evolutionary radiation of the Felidae began in Asia during the Miocene, and analysis of mitochondrial DNA of all Felidae species supports such a radiation. The genus Felis genetically diverged from the other Felidae around 6–7 million years ago. Results of phylogenetic research show that the wild members of this genus evolved through sympatric or parapatric speciation, whereas the domestic cat evolved through artificial selection. The domestic cat and its closest wild ancestor are diploid, and both possess 38 chromosomes and roughly 20,000 genes.
Domestication
It was long thought that the domestication of the cat began in ancient Egypt, where cats were venerated from around 3100 BC. However, the earliest known indication for the taming of an African wildcat was excavated close by a human Neolithic grave in Shillourokambos, southern Cyprus, dating to about 7500–7200 BC. Since there is no evidence of native mammalian fauna on Cyprus, the inhabitants of this Neolithic village most likely brought the cat and other wild mammals to the island from the Middle Eastern mainland. Scientists therefore assume that African wildcats were attracted to early human settlements in the Fertile Crescent by rodents, in particular the house mouse (Mus musculus), and were tamed by Neolithic farmers. This mutual relationship between early farmers and tamed cats lasted thousands of years. As agricultural practices spread, so did tame and domesticated cats. Wildcats of Egypt contributed to the maternal gene pool of the domestic cat at a later time.
The earliest known evidence for the occurrence of the domestic cat in Greece dates to around 1200 BC. Greek, Phoenician, Carthaginian and Etruscan traders introduced domestic cats to southern Europe. By the 5th century BC, they were familiar animals around settlements in Magna Graecia and Etruria. During the Roman Empire, they were introduced to Corsica and Sardinia before the beginning of the 1st century AD. By the end of the Western Roman Empire in the 5th century, the Egyptian domestic cat lineage had arrived in a Baltic Sea port in northern Germany.
The leopard cat (Prionailurus bengalensis) was tamed independently in China around 5500 BC. This line of partially domesticated cats leaves no trace in the domestic cat populations of today.
During domestication, cats have undergone only minor changes in anatomy and behavior, and they are still capable of surviving in the wild. Several natural behaviors and characteristics of wildcats may have pre-adapted them for domestication as pets. These traits include their small size, social nature, obvious body language, love of play, and high intelligence. Since they practice rigorous grooming habits and have an instinctual drive to bury and hide their urine and feces, they are generally much less messy than other domesticated animals. Captive Leopardus cats may also display affectionate behavior toward humans but were not domesticated. House cats often mate with feral cats. Hybridization between domestic and other Felinae species is also possible, producing hybrids such as the Kellas cat in Scotland.
Development of cat breeds started in the mid 19th century. An analysis of the domestic cat genome revealed that the ancestral wildcat genome was significantly altered in the process of domestication, as specific mutations were selected to develop cat breeds. Most breeds are founded on random-bred domestic cats. Genetic diversity of these breeds varies between regions, and is lowest in purebred populations, which show more than 20 deleterious genetic disorders.
Characteristics
Size
The domestic cat has a smaller skull and shorter bones than the European wildcat. It averages about 46 cm (18 in) in head-to-body length and 23–25 cm (9–10 in) in height, with a tail about 30 cm (12 in) long. Males are larger than females. Adult domestic cats typically weigh 4–5 kg (9–11 lb).
Skeleton
Cats have seven cervical vertebrae (as do most mammals); 13 thoracic vertebrae (humans have 12); seven lumbar vertebrae (humans have five); three sacral vertebrae (as do most mammals, but humans have five); and a variable number of caudal vertebrae in the tail (humans have only three to five vestigial caudal vertebrae, fused into an internal coccyx). The extra lumbar and thoracic vertebrae account for the cat's spinal mobility and flexibility. Attached to the spine are 13 ribs, the shoulder, and the pelvis. Unlike human arms, cat forelimbs are attached to the shoulder by free-floating clavicle bones which allow them to pass their body through any space into which they can fit their head.
Skull
The cat skull is unusual among mammals in having very large eye sockets and a powerful specialized jaw. Within the jaw, cats have teeth adapted for killing prey and tearing meat. When it overpowers its prey, a cat delivers a lethal neck bite with its two long canine teeth, inserting them between two of the prey's vertebrae and severing its spinal cord, causing irreversible paralysis and death. Compared to other felines, domestic cats have narrowly spaced canine teeth relative to the size of their jaw, which is an adaptation to their preferred prey of small rodents, which have small vertebrae.
The premolar and first molar together compose the carnassial pair on each side of the mouth, which efficiently shears meat into small pieces, like a pair of scissors. These are vital in feeding, since cats' small molars cannot chew food effectively, and cats are largely incapable of mastication. Cats tend to have better teeth than most humans, with decay generally less likely because of a thicker protective layer of enamel, a less damaging saliva, less retention of food particles between teeth, and a diet mostly devoid of sugar. Nonetheless, they are subject to occasional tooth loss and infection.
Claws
Cats have protractible and retractable claws. In their normal, relaxed position, the claws are sheathed with the skin and fur around the paw's toe pads. This keeps the claws sharp by preventing wear from contact with the ground and allows for the silent stalking of prey. The claws on the forefeet are typically sharper than those on the hindfeet. Cats can voluntarily extend their claws on one or more paws. They may extend their claws in hunting or self-defense, climbing, kneading, or for extra traction on soft surfaces. Cats shed the outside layer of their claw sheaths when scratching rough surfaces.
Most cats have five claws on their front paws and four on their rear paws. The dewclaw is proximal to the other claws. More proximally is a protrusion which appears to be a sixth "finger". This special feature of the front paws on the inside of the wrists has no function in normal walking but is thought to be an antiskidding device used while jumping. Some cat breeds are prone to having extra digits ("polydactyly").
Ambulation
The cat is digitigrade. It walks on the toes, with the bones of the feet making up the lower part of the visible leg. Unlike most mammals, it uses a "pacing" gait and moves both legs on one side of the body before the legs on the other side. It registers directly by placing each hind paw close to the track of the corresponding fore paw, minimizing noise and visible tracks. This also provides sure footing for hind paws when navigating rough terrain. As it speeds up from walking to trotting, its gait changes to a "diagonal" gait: The diagonally opposite hind and fore legs move simultaneously.
Balance
Cats are generally fond of sitting in high places or perching. A higher place may serve as a concealed site from which to hunt; domestic cats strike prey by pouncing from a perch such as a tree branch. Another possible explanation is that height gives the cat a better observation point, allowing it to survey its territory. A cat falling from a height of up to 3 m (9.8 ft) can right itself and land on its paws.
During a fall from a high place, a cat reflexively twists its body and rights itself to land on its feet using its acute sense of balance and flexibility. This reflex is known as the cat righting reflex. A cat always rights itself in the same way during a fall, if it has enough time to do so, which is the case in falls of about 90 cm (3.0 ft) or more. How cats are able to right themselves when falling has been investigated as the "falling cat problem".
Coats
The cat family (Felidae) can pass down many colors and patterns to their offspring. The domestic cat genes MC1R and ASIP allow for the color variety in their coats. The feline ASIP gene consists of three coding exons. In one study, three novel microsatellite markers linked to ASIP were isolated from a domestic cat BAC clone containing the gene and used for linkage analysis in 89 domestic cats segregating for melanism; the cats demonstrated cosegregation between the ASIP allele and black coat coloration.
Senses
Vision
Cats have excellent night vision and can see at one sixth the light level required for human vision. This is partly the result of cat eyes having a tapetum lucidum, which reflects any light that passes through the retina back into the eye, thereby increasing the eye's sensitivity to dim light. Large pupils are an adaptation to dim light. The domestic cat has slit pupils, which allow it to focus bright light without chromatic aberration. At low light, a cat's pupils expand to cover most of the exposed surface of its eyes. The domestic cat has rather poor color vision and only two types of cone cells, optimized for sensitivity to blue and yellowish green; its ability to distinguish between red and green is limited. A response to middle wavelengths from a system other than the rod cells might be due to a third type of cone. This appears to be an adaptation to low light levels rather than representing true trichromatic vision. Cats also have a nictitating membrane, allowing them to blink without hindering their vision.
Hearing
The domestic cat's hearing is most acute in the range of 500 Hz to 32 kHz. It can detect an extremely broad range of frequencies ranging from 55 Hz to 79 kHz, whereas humans can only detect frequencies between 20 Hz and 20 kHz. It can hear a range of 10.5 octaves, while humans and dogs can hear ranges of about 9 octaves. Its hearing sensitivity is enhanced by its large movable outer ears, the pinnae, which amplify sounds and help detect the location of a noise. It can detect ultrasound, which enables it to detect ultrasonic calls made by rodent prey. Recent research has shown that cats have socio-spatial cognitive abilities to create mental maps of owners' locations based on hearing owners' voices.
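The octave figures above follow directly from the frequency ranges, since one octave is a doubling of frequency (note that the human range of 20 Hz–20 kHz computes to about 10 octaves, slightly above the quoted "about 9"; the exact value depends on the range assumed):

```python
import math

# Octaves spanned by a hearing range [f_min, f_max]:
# octaves = log2(f_max / f_min), since one octave is a doubling.
def octaves(f_min_hz, f_max_hz):
    return math.log2(f_max_hz / f_min_hz)

cat = octaves(55, 79_000)    # ~10.5 octaves, matching the stated figure
human = octaves(20, 20_000)  # ~10.0 octaves by this formula
print(f"cat: {cat:.1f}, human: {human:.1f}")
```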
Smell
Cats have an acute sense of smell, due in part to their well-developed olfactory bulb and a large surface of olfactory mucosa, about twice the area of that of humans. Cats and many other animals have a Jacobson's organ in their mouths that is used in the behavioral process of flehmening. It allows them to sense certain aromas in a way that humans cannot. Cats are sensitive to pheromones such as 3-mercapto-3-methylbutan-1-ol, which they use to communicate through urine spraying and marking with scent glands. Many cats also respond strongly to plants that contain nepetalactone, especially catnip, as they can detect that substance at less than one part per billion. About 70–80% of cats are affected by nepetalactone. This response is also produced by other plants, such as silver vine (Actinidia polygama) and the herb valerian; it may be caused by the smell of these plants mimicking a pheromone and stimulating cats' social or sexual behaviors.
Taste
Cats have relatively few taste buds compared to humans (470 or so, compared to more than 9,000 on the human tongue). Domestic and wild cats share a taste receptor gene mutation that keeps their sweet taste buds from binding to sugary molecules, leaving them with no ability to taste sweetness. But they do have taste bud receptors specialized for acids, amino acids such as the constituents of protein, and bitter tastes.
Their taste buds possess the receptors needed to detect umami. However, these receptors contain molecular changes that make cats taste umami differently from humans. In humans, they detect the amino acids glutamic acid and aspartic acid, but in cats, they instead detect inosine monophosphate and histidine. These molecules are particularly enriched in tuna. This, it has been argued, is why cats find tuna so palatable: as researchers into cat taste put it, cats' preference reflects "the specific combination of the high IMP and free histidine contents of tuna, which produces a strong umami taste synergy that is highly preferred by cats." One of the researchers has stated, "I think umami is as important for cats as sweet is for humans."
Cats also have a distinct temperature preference for their food, preferring food at a temperature similar to that of a fresh kill; some cats reject cold food (which would signal to the cat that the "prey" item is long dead and therefore possibly toxic or decomposing).
Whiskers
To aid with navigation and sensation, cats have dozens of movable whiskers (vibrissae) over their body, especially their faces. These provide information on the width of gaps and on the location of objects in the dark, both by touching objects directly and by sensing air currents; they also trigger protective blink reflexes to protect the eyes from damage.
Behavior
Outdoor cats are active both day and night, although they tend to be slightly more active at night. Domestic cats spend the majority of their time in the vicinity of their homes but can range many hundreds of meters from this central point. They establish territories that vary considerably in size. The timing of cats' activity is quite flexible and varied; being low-light predators, they are generally crepuscular, meaning they tend to be most active near dawn and dusk. However, house cats' behavior is also influenced by human activity, and they may adapt to their owners' sleeping patterns to some extent.
Cats conserve energy by sleeping more than most animals, especially as they grow older. The daily duration of sleep varies, usually between 12 and 16 hours, with 13 to 14 being the average. Some cats can sleep as much as 20 hours. The term "cat nap" for a short rest refers to the cat's tendency to fall asleep (lightly) for a brief period. While asleep, cats experience short periods of rapid eye movement sleep often accompanied by muscle twitches, which suggests they are dreaming.
A common misconception is that a cat's behavioral and personality traits correspond to its coat color. These traits instead depend on a complex interplay between genetic and environmental factors.
Sociability
The social behavior of the domestic cat ranges from widely dispersed individuals to feral cat colonies that gather around a food source, based on groups of co-operating females. Within such groups, one cat is usually dominant over the others. Each cat in a colony holds a distinct territory, with sexually active males having the largest territories, which are about 10 times larger than those of female cats and may overlap with several females' territories. These territories are marked by urine spraying, by rubbing objects at head height with secretions from facial glands, and by defecation. Between these territories are neutral areas where cats watch and greet one another without territorial conflicts. Outside these neutral areas, territory holders usually chase away stranger cats, at first by staring, hissing, and growling and, if that does not work, by short, violent, noisy attacks. Cats do not have a social survival strategy or herd behavior; they always hunt alone.
Life in proximity to humans and other domestic animals has led to a symbiotic social adaptation in cats, and cats may express great affection toward humans or other animals. Ethologically, a cat's human keeper functions as a mother surrogate. Adult cats live their lives in a type of extended kittenhood, a form of behavioral neoteny. Their high-pitched sounds may mimic the cries of a hungry human infant, making them particularly difficult for humans to ignore. Some pet cats are poorly socialized. In particular, older cats show aggressiveness toward newly arrived kittens, which includes biting and scratching; this type of behavior is known as feline asocial aggression.
Redirected aggression is a common form of aggression which can occur in multiple cat households. In redirected aggression, there is usually something that agitates the cat: this could be a sight, sound, or another source of stimuli which causes a heightened level of anxiety or arousal. If the cat cannot attack the stimuli, it may direct anger elsewhere by attacking or directing aggression to the nearest cat, pet, human or other being.
Domestic cats' scent rubbing behavior toward humans or other cats is thought to be a feline means of social bonding.
Communication
Domestic cats use many vocalizations for communication, including purring, trilling, hissing, growling/snarling, grunting, and several different forms of meowing. Their body language, including the position of the ears and tail, relaxation of the whole body, and kneading of the paws, indicates mood. The tail and ears are particularly important social signal mechanisms; a raised tail indicates a friendly greeting, and flattened ears indicate hostility. Tail-raising also indicates the cat's position in the group's social hierarchy, with dominant individuals raising their tails less often than subordinate ones. Feral cats are generally silent. Nose-to-nose touching is also a common greeting and may be followed by social grooming, which is solicited by one of the cats raising and tilting its head.
Purring may have developed as an evolutionary advantage as a signaling mechanism of reassurance between mother cats and nursing kittens, who are thought to use it as a care-soliciting signal. Post-nursing cats also often purr as a sign of contentment: when being petted, becoming relaxed, or eating. Although purring is popularly interpreted as indicative of pleasure, it has been recorded in a wide variety of circumstances, most of which involve physical contact between the cat and another, presumably trusted individual. Some cats have been observed to purr continuously when chronically ill or in apparent pain.
The exact mechanism by which cats purr has long been elusive, but it has been proposed that purring is generated via a series of sudden build-ups and releases of pressure as the glottis is opened and closed, which causes the vocal folds to separate forcefully. The laryngeal muscles in control of the glottis are thought to be driven by a neural oscillator which generates a cycle of contraction and release every 30–40 milliseconds (a frequency of 25–33 Hz).
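The frequency range given for the purring oscillator is simply the reciprocal of the cycle period. A quick sanity check of that conversion (illustrative code, not from any source):

```python
def oscillator_frequency_hz(period_ms: float) -> float:
    """Convert an oscillator period in milliseconds to a frequency in hertz."""
    return 1000.0 / period_ms

print(oscillator_frequency_hz(40))  # 25.0 Hz
print(oscillator_frequency_hz(30))  # ≈ 33.3 Hz
```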
Domestic cats observed in rescue facilities have 276 morphologically distinct facial expressions based on 26 facial movements; each facial expression corresponds to different social functions that are probably influenced by domestication. Facial expressions have helped researchers detect pain in cats. The feline grimace scale's five criteria—ear position, orbital tightening, muzzle tension, whisker change, and head position—indicated the presence of acute pain in cats.
Grooming
Cats are known for spending considerable amounts of time licking their coats to keep them clean. The cat's tongue has backward-facing spines about 0.5 millimeter long, called lingual papillae, which contain keratin, making them rigid. The papillae act like a hairbrush, and some cats, particularly long-haired cats, occasionally regurgitate sausage-shaped hairballs of fur that have collected in their stomachs from grooming. Hairballs can be prevented with remedies that ease elimination of the hair through the gut, as well as by regular grooming of the coat with a comb or stiff brush.
Fighting
Among domestic cats, males are more likely to fight than females. Among feral cats, the most common reason for cat fighting is competition between two males to mate with a female. In such cases, most fights are won by the heavier male. Another common reason for fighting in domestic cats is the difficulty of establishing territories within a small home. Female cats also fight over territory or to defend their kittens. Neutering will decrease or eliminate this behavior in many cases, suggesting that the behavior is linked to sex hormones.
When cats become aggressive, they try to make themselves appear larger and more threatening by raising their fur, arching their backs, turning sideways, and hissing or spitting. Often, the ears are pointed down and back to avoid damage to the inner ear and potentially listen for any changes behind them while focused forward. Cats may also vocalize loudly and bare their teeth in an effort to further intimidate their opponents. Fights usually consist of grappling and delivering slaps to the face and body with the forepaws, as well as bites. Cats also throw themselves to the ground in a defensive posture to rake their opponent's belly with their hind legs.
Serious damage is rare, as the fights are usually short in duration, with the loser running away with little more than a few scratches to the face and ears. Fights for mating rights are typically more severe, and injuries may include deep puncture wounds and lacerations. Normally, serious injuries from fighting are limited to infections from scratches and bites, although these can occasionally kill cats if untreated. In addition, bites are probably the main route of transmission of the feline immunodeficiency virus. Sexually active males are usually involved in many fights during their lives and often have decidedly battered faces with obvious scars and cuts to their ears and nose. Cats are willing to threaten animals larger than them to defend their territory, such as dogs and foxes.
Hunting and feeding
The shape and structure of cats' cheeks is insufficient to allow them to take in liquids using suction. Therefore, when drinking, they lap with the tongue to draw liquid upward into their mouths. Lapping at a rate of four times a second, the cat touches the smooth tip of its tongue to the surface of the water, and quickly retracts it like a corkscrew, drawing water upward.
Feral cats and free-fed house cats consume several small meals in a day. The frequency and size of meals vary between individuals. They select food based on its temperature, smell, and texture; they dislike chilled foods and respond most strongly to moist foods rich in amino acids, which are similar to meat. Cats reject novel flavors (a response termed neophobia) and learn quickly to avoid foods that have tasted unpleasant in the past. It is a common misconception that cats should be given milk or cream: most adult cats are lactose intolerant, and the sugar in milk is not easily digested and may cause soft stools or diarrhea. Some also develop odd eating habits and like to eat or chew on things such as wool, plastic, cables, paper, string, aluminum foil, or even coal. This condition, pica, can threaten their health, depending on the amount and toxicity of the items eaten.
Cats hunt small prey, primarily birds and rodents, and are often used as a form of pest control. Other common small creatures, such as lizards and snakes, may also become prey. Cats use two hunting strategies, either stalking prey actively, or waiting in ambush until an animal comes close enough to be captured. The strategy used depends on the prey species in the area, with cats waiting in ambush outside burrows, but tending to actively stalk birds. Domestic cats are a major predator of wildlife in the United States, killing an estimated 1.3 to 4.0 billion birds and 6.3 to 22.3 billion mammals annually.
Certain species appear more susceptible than others; in one English village, for example, 30% of house sparrow mortality was linked to the domestic cat. In the recovery of ringed robins (Erithacus rubecula) and dunnocks (Prunella modularis) in Britain, 31% of deaths were a result of cat predation. In parts of North America, the presence of larger carnivores such as coyotes, which prey on cats and other small predators, reduces the effect of predation by cats and other small predators such as opossums and raccoons on bird numbers and variety.
Another poorly understood element of cat hunting behavior is the presentation of prey to human guardians. One explanation is that cats adopt humans into their social group and share excess kill with others in the group according to the dominance hierarchy, in which humans are reacted to as if they are at or near the top. Another explanation is that they attempt to teach their guardians to hunt or to help their human as if feeding "an elderly cat, or an inept kitten". This hypothesis is inconsistent with the fact that male cats also bring home prey, despite males having negligible involvement in raising kittens.
Play
Domestic cats, especially young kittens, are known for their love of play. This behavior mimics hunting and is important in helping kittens learn to stalk, capture, and kill prey. Cats also engage in play fighting, both with each other and with humans. This behavior may be a way for cats to practice the skills needed for real combat, and it might also reduce the fear that they associate with launching attacks on other animals.
Cats also tend to play with toys more when they are hungry. Owing to the close similarity between play and hunting, cats prefer to play with objects that resemble prey, such as small furry toys that move rapidly, but rapidly lose interest. They become habituated to a toy they have played with before. String is often used as a toy, but if it is eaten, it can become caught at the base of the cat's tongue and then move into the intestines, a medical emergency which can cause serious illness, even death.
Reproduction
The cat secretes and perceives pheromones. Female cats, called queens, are polyestrous with several estrus cycles during a year, lasting usually 21 days. They are usually ready to mate between early February and August in northern temperate zones and throughout the year in equatorial regions.
Several males, called tomcats, are attracted to a female in heat. They fight over her, and the victor wins the right to mate. At first, the female rejects the male, but eventually allows him to mate. The female utters a loud yowl as the male withdraws, because a male cat's penis has a band of about 120–150 backward-pointing penile spines; upon withdrawal of the penis, the spines may provide the female with increased sexual stimulation, which acts to induce ovulation.
After mating, the female cleans her vulva thoroughly. If a male attempts to mate with her at this point, the female attacks him. After about 20 to 30 minutes, once the female is finished grooming, the cycle will repeat. Because ovulation is not always triggered by a single mating, females may not be impregnated by the first male with which they mate. Furthermore, cats are superfecund; that is, a female may mate with more than one male when she is in heat, with the result that different kittens in a litter may have different fathers.
The morula forms 124 hours after conception. At 148 hours, early blastocysts form. At 10–12 days, implantation occurs. The gestation of queens lasts between 64 and 67 days, with an average of 65 days.
Based on a study of 2,300 free-ranging queens conducted from May 1998 to October 2000, queens had one to six kittens per litter, with an average of three kittens. They produced a mean of 1.4 litters per year, with a maximum of three litters in a year. Of 169 kittens, 127 died before they were six months old, in most cases from trauma caused by dog attacks and road accidents. The first litter is usually smaller than subsequent litters. Kittens are weaned between six and seven weeks of age. Queens normally reach sexual maturity at 5–10 months, and males at 5–7 months; this varies depending on breed. Kittens reach puberty at the age of 9–10 months.
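The study's figures imply a kitten mortality rate of roughly 75% before six months, and a mean of about four kittens per queen per year; those derived numbers are my arithmetic, not values stated in the study. A short check:

```python
# Figures quoted in the text above
kittens = 169
died_before_six_months = 127

mortality = died_before_six_months / kittens
print(f"kitten mortality before six months: {mortality:.0%}")  # ≈ 75%

# Mean litters per year × mean kittens per litter
print(1.4 * 3)  # ≈ 4.2 kittens per queen per year
```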
Cats are ready to go to new homes at about 12 weeks of age, when they are ready to leave their mother. They can be surgically sterilized (spayed or castrated) as early as seven weeks to limit unwanted reproduction. This surgery also prevents undesirable sex-related behavior, such as aggression, territory marking (spraying urine) in males, and yowling (calling) in females. Traditionally, this surgery was performed at around six to nine months of age, but it is increasingly being performed before puberty, at about three to six months. In the United States, about 80% of household cats are neutered.
Lifespan and health
The average lifespan of pet cats has risen in recent decades. In the early 1980s, it was about seven years, rising to 9.4 years in 1995 and an average of about 13 years as of 2014 and 2023.
Neutering increases life expectancy: one study found castrated male cats live twice as long as intact males, while spayed female cats live 62% longer than intact females. Neutering also confers other health benefits, such as a decreased incidence of reproductive neoplasia. However, neutering decreases metabolism and increases food intake, both of which can cause obesity in neutered cats. Pre-pubertal neutering (at four months of age or earlier) was recommended by only 28% of American veterinarians in one study; concerns about early neutering include metabolic effects, delayed physeal closure, and urinary tract disease.
Disease
About 250 heritable genetic disorders have been identified in cats; many are similar to human inborn errors of metabolism. The high level of similarity among the metabolism of mammals allows many of these feline diseases to be diagnosed using genetic tests that were originally developed for use in humans, as well as the use of cats as animal models in the study of the human diseases. Diseases affecting domestic cats include acute infections, parasitic infestations, injuries, and chronic diseases such as kidney disease, thyroid disease, and arthritis. Vaccinations are available for many infectious diseases, as are treatments to eliminate parasites such as worms, ticks, and fleas.
Ecology
Habitats
The domestic cat is a cosmopolitan species and occurs across much of the world. It is adaptable and now present on all continents except Antarctica, and on 118 of the 131 main groups of islands, even on the remote Kerguelen Islands. Due to its ability to thrive in almost any terrestrial habitat, it is among the world's most invasive species. It lives on small islands with no human inhabitants. Feral cats can live in forests, grasslands, tundra, coastal areas, agricultural land, scrublands, urban areas, and wetlands.
The domestic cat is regarded as an invasive species for two reasons. First, as it is little altered from the wildcat, it can readily interbreed with the wildcat; this hybridization poses a danger to the genetic distinctiveness of some wildcat populations, particularly in Scotland and Hungary, possibly also the Iberian Peninsula, and where protected natural areas are close to human-dominated landscapes, such as Kruger National Park in South Africa. Second, its introduction to places where no native felines are present contributes to the decline of native species.
Ferality
Feral cats are domestic cats that were born in or have reverted to a wild state. They are unfamiliar with and wary of humans and roam freely in urban and rural areas. The numbers of feral cats are not known, but estimates of the United States feral population range from 25 to 60 million. Feral cats may live alone, but most are found in large colonies, which occupy a specific territory and are usually associated with a source of food. Famous feral cat colonies are found in Rome around the Colosseum and Forum Romanum, with cats at some of these sites being fed and given medical attention by volunteers.
Public attitudes toward feral cats vary widely, from seeing them as free-ranging pets to regarding them as vermin.
Impact on wildlife
On islands, birds can contribute as much as 60% of a cat's diet. In nearly all cases, the cat cannot be identified as the sole cause for reducing the numbers of island birds, and in some instances, eradication of cats has caused a "mesopredator release" effect; where the suppression of top carnivores creates an abundance of smaller predators that cause a severe decline in their shared prey. Domestic cats are a contributing factor to the decline of several species, a factor that has ultimately led, in some cases, to extinction. The South Island piopio, Chatham rail, and the New Zealand merganser are a few from a long list, with the most extreme case being the flightless Lyall's wren, which was driven to extinction only a few years after its discovery. One feral cat in New Zealand killed 102 New Zealand lesser short-tailed bats in seven days. In the United States, feral and free-ranging domestic cats kill an estimated 6.3–22.3 billion mammals annually.
In Australia, one study found feral cats to kill 466 million reptiles per year. More than 258 reptile species were identified as being predated by cats. Cats have contributed to the extinction of the Navassa curly-tailed lizard and Chioninia coctei.
Interaction with humans
Cats are common pets throughout the world, and their worldwide population as of 2007 exceeded 500 million. The domestic cat is the second most popular pet in the United States, with 95.6 million cats owned and around 42 million households owning at least one cat. In the United Kingdom, 26% of adults have a cat, with an estimated population of 10.9 million pet cats. Worldwide, there are an estimated 220 million owned and 480 million stray cats.
Cats have been used for millennia to control rodents, notably around grain stores and aboard ships, and both uses extend to the present day. Cats are also used in the international fur trade and leather industries for making coats, hats, blankets, stuffed toys, shoes, gloves, and musical instruments. About 24 cats are needed to make a cat-fur coat. This use has been outlawed in the United States since 2000 and in the European Union (as well as the United Kingdom) since 2007.
Cat pelts have been used for superstitious purposes as part of the practice of witchcraft, and they are still made into blankets in Switzerland as traditional medicines thought to cure rheumatism.
A few attempts to build a cat census have been made over the years, both through associations or national and international organizations (such as that of the Canadian Federation of Humane Societies) and over the Internet. General estimates for the global population of domestic cats range widely from anywhere between 200 million to 600 million. Walter Chandoha made his career photographing cats after his 1949 images of Loco, a stray cat, were published. He is reported to have photographed 90,000 cats during his career and maintained an archive of 225,000 images that he drew from for publications during his lifetime.
Pet humanization is a form of anthropomorphism in which cats are kept for companionship and treated more like human family members than traditional pets. This trend of pet culture involves providing cats with a higher level of care, attention and often even luxury, similar to the way humans are treated.
Shows
A cat show is a judged event in which the owners of cats compete to win titles in various cat-registering organizations by entering their cats to be judged against a breed standard. It is often required that a cat be healthy and vaccinated to participate in a cat show. Both pedigreed and non-purebred companion ("moggy") cats are admissible, although the rules differ depending on the organization. Competing cats are compared to the applicable breed standard and assessed for temperament.
Infection
Cats can be infected or infested with viruses, bacteria, fungi, protozoans, arthropods, or worms that can transmit diseases to humans. In some cases, the cat exhibits no symptoms of the disease, yet the same disease can then become evident in a human. The likelihood that a person will become diseased depends on the age and immune status of the person. Humans who have cats living in their home or in close association are more likely to become infected. Others might also acquire infections from cat feces and parasites exiting the cat's body. Some of the infections of most concern include salmonella, cat-scratch disease, and toxoplasmosis.
History and mythology
In ancient Egypt, cats were revered, and the goddess Bastet was often depicted in cat form, sometimes taking on the war-like aspect of a lioness. The Greek historian Herodotus reported that killing a cat was forbidden, and when a household cat died, the entire family mourned and shaved their eyebrows. Families took their dead cats to the sacred city of Bubastis, where they were embalmed and buried in sacred repositories. Herodotus expressed astonishment at the domestic cats in Egypt, because he had only ever seen wildcats.
Ancient Greeks and Romans kept weasels as pets, which were seen as the ideal rodent-killers. The earliest unmistakable evidence of the Greeks having domestic cats comes from two coins from Magna Graecia dating to the mid-fifth century BC showing Iokastos and Phalanthos, the legendary founders of Rhegion and Taras respectively, playing with their pet cats. The usual ancient Greek word for 'cat' was ailouros, meaning 'thing with the waving tail'. Cats are rarely mentioned in ancient Greek literature. Aristotle remarked in his History of Animals that "female cats are naturally lecherous". The Greeks later syncretized their own goddess Artemis with the Egyptian goddess Bastet, adopting Bastet's associations with cats and ascribing them to Artemis. In Ovid's Metamorphoses, when the deities flee to Egypt and take animal forms, the goddess Diana turns into a cat.
Cats eventually displaced weasels as the pest control of choice because they were more pleasant to have around the house and were more enthusiastic hunters of mice. During the Middle Ages, many of Artemis's associations with cats were grafted onto the Virgin Mary. Cats are often shown in icons of Annunciation and of the Holy Family and, according to Italian folklore, on the same night that Mary gave birth to Jesus, a cat in Bethlehem gave birth to a kitten. Domestic cats were spread throughout much of the rest of the world during the Age of Discovery, as ships' cats were carried on sailing ships to control shipboard rodents and as good-luck charms.
Several ancient religions believed cats are exalted souls, companions or guides for humans, that are all-knowing but mute so they cannot influence decisions made by humans. In Japan, the cat is a symbol of good fortune. In Norse mythology, Freyja, the goddess of love, beauty, and fertility, is depicted as riding a chariot drawn by cats. In Jewish legend, the first cat lived in the house of the first man, Adam, as a pet that got rid of mice. The cat was once a partner of the first dog, until the latter broke an oath they had made, which resulted in enmity between the descendants of these two animals. It is also written that neither cats nor foxes are represented in the water, while every other animal has an incarnation species in the water. Although no species are sacred in Islam, cats are revered by Muslims. Some Western writers have stated Muhammad had a favorite cat, Muezza; he is reported to have loved cats so much that "he would do without his cloak rather than disturb one that was sleeping on it". The story has no origin in early Muslim writers and seems to confuse a story of a later Sufi saint, Ahmed ar-Rifa'i, centuries after Muhammad. One of the companions of Muhammad was known as Abu Hurayrah ("father of the kitten"), in reference to his documented affection for cats.
Superstitions and rituals
Many cultures have negative superstitions about cats. An example would be the belief that encountering a black cat ("crossing one's path") leads to bad luck, or that cats are witches' familiar spirits used to augment a witch's powers and skills. The killing of cats in medieval Ypres, Belgium, is commemorated in the innocuous present-day Kattenstoet (cat parade). In mid-16th century France, cats would allegedly be burnt alive as a form of entertainment, particularly during midsummer festivals. According to Norman Davies, the assembled people "shrieked with laughter as the animals, howling with pain, were singed, roasted, and finally carbonized". The remaining ashes were sometimes taken back home by the people for good luck.
According to a myth in many cultures, cats have multiple lives. In many countries, they are believed to have nine lives, but in Italy, Germany, Greece, Brazil, and some Spanish-speaking regions, they are said to have seven lives, while in Arabic traditions, the number of lives is six. An early mention of the myth can be found in John Heywood's The Proverbs of John Heywood (1546).
The myth is attributed to the natural suppleness and swiftness cats exhibit to escape life-threatening situations. Also lending credence to this myth is the fact that falling cats often land on their feet, using an instinctive righting reflex to twist their bodies around. Nonetheless, cats can still be injured or killed by a high fall.
See also
Aging in cats
Ailurophobia
Animal testing on cats
Cancer in cats
Cat bite
Cat café
Cat collar
Cat fancy
Cat lady
Cat food
Cat meat
Cat repeller
Cats and the Internet
Cats in Australia
Cats in New Zealand
Cats in the United States
Cat–dog relationship
Dog
Dried cat
Feral cats in Istanbul
List of cat breeds
List of cat documentaries, television series and cartoons
List of individual cats
List of fictional felines
List of feline diseases
Neko-dera
Perlorian
Pet door
Pet first aid
Popular cat names
External links
Biodiversity Heritage Library bibliography for Felis catus
View the cat genome in Ensembl
High-resolution images of the cat's brain
Scientific American. "The Origin of the Cat". 1881. p. 120.
In biological phylogenetics, a clade, also known as a monophyletic group or natural group, is a grouping of organisms that are monophyletic – that is, composed of a common ancestor and all its lineal descendants – on a phylogenetic tree. In the taxonomical literature, sometimes the Latin form cladus (plural cladi) is used rather than the English form. Clades are the fundamental unit of cladistics, a modern approach to taxonomy adopted by most biological fields.
The common ancestor may be an individual, a population, or a species (extinct or extant). Clades are nested, one in another, as each branch in turn splits into smaller branches. These splits reflect evolutionary history as populations diverged and evolved independently. Clades are termed monophyletic (Greek: "one clan") groups.
Over the last few decades, the cladistic approach has revolutionized biological classification and revealed surprising evolutionary relationships among organisms. Increasingly, taxonomists try to avoid naming taxa that are not clades; that is, taxa that are not monophyletic. Some of the relationships between organisms that the molecular biology arm of cladistics has revealed include that fungi are closer relatives to animals than they are to plants, archaea are now considered different from bacteria, and multicellular organisms may have evolved from archaea.
The term "clade" is also used with a similar meaning in other fields besides biology, such as historical linguistics; see Cladistics § In disciplines other than biology.
Naming and etymology
The term "clade" was coined in 1957 by the biologist Julian Huxley to refer to the result of cladogenesis, the evolutionary splitting of a parent species into two distinct species, a concept Huxley borrowed from Bernhard Rensch.
Many commonly named groups – rodents and insects, for example – are clades because, in each case, the group consists of a common ancestor with all its descendant branches. Rodents, for example, are a branch of mammals that split off after the end of the period when the clade Dinosauria stopped being the dominant terrestrial vertebrates 66 million years ago. The original population and all its descendants are a clade. The rodent clade corresponds to the order Rodentia, and insects to the class Insecta. These clades include smaller clades, such as chipmunk or ant, each of which consists of even smaller clades. The clade "rodent" is in turn included in the mammal, vertebrate and animal clades.
History of nomenclature and taxonomy
The idea of a clade did not exist in pre-Darwinian Linnaean taxonomy, which was based by necessity only on internal or external morphological similarities between organisms. Many of the better known animal groups in Linnaeus's original Systema Naturae (mostly vertebrate groups) do represent clades. The phenomenon of convergent evolution is responsible for many cases of misleading similarities in the morphology of groups that evolved from different lineages.
With the increasing realization in the first half of the 19th century that species had changed and split through the ages, classification increasingly came to be seen as branches on the evolutionary tree of life. The publication of Darwin's theory of evolution in 1859 gave this view increasing weight. In 1876 Thomas Henry Huxley, an early advocate of evolutionary theory, proposed a revised taxonomy based on a concept strongly resembling clades, although the term clade itself would not be coined until 1957 by his grandson, Julian Huxley.
German biologist Emil Hans Willi Hennig (1913–1976) is considered to be the founder of cladistics.
He proposed a classification system that represented repeated branchings of the family tree, as opposed to the previous systems, which put organisms on a "ladder", with supposedly more "advanced" organisms at the top.
Taxonomists have increasingly worked to make the taxonomic system reflect evolution. When it comes to naming, this principle is not always compatible with the traditional rank-based nomenclature (in which only taxa associated with a rank can be named) because not enough ranks exist to name a long series of nested clades. For these and other reasons, phylogenetic nomenclature has been developed; it is still controversial.
As an example, see the full current classification of Anas platyrhynchos (the mallard duck) with 40 clades from Eukaryota down by following this Wikispecies link and clicking on "Expand".
The name of a clade is conventionally a plural, where the singular refers to each member individually. A unique exception is the reptile clade Dracohors, which was made by haplology from Latin "draco" and "cohors", i.e. "the dragon cohort"; its form with a suffix added should be e.g. "dracohortian".
Definition
A clade is by definition monophyletic, meaning that it contains one ancestor which can be an organism, a population, or a species and all its descendants. The ancestor can be known or unknown; any and all members of a clade can be extant or extinct.
Clades and phylogenetic trees
The science that tries to reconstruct phylogenetic trees and thus discover clades is called phylogenetics or cladistics, the latter term coined by Ernst Mayr (1965), derived from "clade". The results of phylogenetic/cladistic analyses are tree-shaped diagrams called cladograms; they, and all their branches, are phylogenetic hypotheses.
Three methods of defining clades are featured in phylogenetic nomenclature: node-, stem-, and apomorphy-based (see Phylogenetic nomenclature § Phylogenetic definitions of clade names for detailed definitions).
Terminology
The relationship between clades can be described in several ways:
A clade located within a clade is said to be nested within that clade. In the diagram, the hominoid clade, i.e. the apes and humans, is nested within the primate clade.
Two clades are sisters if they have an immediate common ancestor. In the diagram, lemurs and lorises are sister clades, while humans and tarsiers are not.
A clade A is basal to a clade B if A branches off the lineage leading to B before the first branch leading only to members of B. In the adjacent diagram, the strepsirrhine/prosimian clade is basal to the hominoid/ape clade. However, in this example, both the haplorrhines and the prosimians could equally be considered the most basal groupings; it is better to say that the prosimians are the sister group to the rest of the primates. This way one also avoids unintended and misconceived connotations about evolutionary advancement, complexity, diversity and ancestor status, e.g. due to the impact of sampling diversity and extinction. Basal clades should not be confused with stem groupings, as the latter are associated with paraphyletic or unresolved groupings.
Age
The age of a clade can be described based on two different reference points, crown age and stem age. The crown age of a clade refers to the age of the most recent common ancestor of all of the species in the clade. The stem age of a clade refers to the time that the ancestral lineage of the clade diverged from its sister clade. A clade's stem age is either the same as or older than its crown age. Ages of clades cannot be directly observed. They are inferred, either from stratigraphy of fossils, or from molecular clock estimates.
Viruses
Viruses, and particularly RNA viruses, form clades. These are useful in tracking the spread of viral infections. HIV, for example, has clades called subtypes, which vary in geographical prevalence. HIV subtype (clade) B, for example, is predominant in Europe, the Americas and Japan, whereas subtype A is more common in east Africa.
See also
Adaptive radiation
Binomial nomenclature
Biological classification
Cladistics
Crown group
Grade
Monophyly
Paraphyly
Phylogenetic network
Phylogenetic nomenclature
Phylogenetics
Polyphyly
Notes
References
Bibliography
External links
Evolving Thoughts: "Clade"
DM Hillis, D Zwickl & R Gutell. "Tree of life". An unrooted cladogram depicting around 3000 species.
"Phylogenetic systematics, an introductory slide-show on evolutionary trees" – University of California, Berkeley
Evolutionary biology terminology
Philosophy of biology
Phylogenetics
1950s neologisms
"Biology"
] | 1,720 | [
"Bioinformatics",
"Evolutionary biology terminology",
"Taxonomy (biology)",
"Phylogenetics"
] |
In mathematics, especially in order theory, the cofinality cf(A) of a partially ordered set A is the least of the cardinalities of the cofinal subsets of A.
This definition of cofinality relies on the axiom of choice, as it uses the fact that every non-empty set of cardinal numbers has a least member. The cofinality of a partially ordered set A can alternatively be defined as the least ordinal x such that there is a function from x to A with cofinal image. This second definition makes sense without the axiom of choice. If the axiom of choice is assumed, as will be the case in the rest of this article, then the two definitions are equivalent.
Cofinality can be similarly defined for a directed set and is used to generalize the notion of a subsequence in a net.
Examples
The cofinality of a partially ordered set with greatest element is 1 as the set consisting only of the greatest element is cofinal (and must be contained in every other cofinal subset).
In particular, the cofinality of any nonzero finite ordinal, or indeed any finite directed set, is 1, since such sets have a greatest element.
Every cofinal subset of a partially ordered set must contain all maximal elements of that set. Thus the cofinality of a finite partially ordered set is equal to the number of its maximal elements.
In particular, let A be a set of size n, and consider the set of subsets of A containing no more than m elements. This is partially ordered under inclusion and the subsets with m elements are maximal. Thus the cofinality of this poset is n choose m.
A subset of the natural numbers ℕ is cofinal in ℕ if and only if it is infinite, and therefore the cofinality of ℵ₀ is ℵ₀. Thus ℵ₀ is a regular cardinal.
The cofinality of the real numbers with their usual ordering is ℵ₀, since ℕ is cofinal in ℝ. The usual ordering of ℝ is not order isomorphic to c, the cardinality of the real numbers, which has cofinality strictly greater than ℵ₀. This demonstrates that the cofinality depends on the order; different orders on the same set may have different cofinality.
Properties
If A admits a totally ordered cofinal subset, then we can find a subset B that is well-ordered and cofinal in A. Any subset of B is also well-ordered. Two cofinal subsets of B with minimal cardinality (that is, their cardinality is the cofinality of B) need not be order isomorphic (for example if B = ω + ω, then both ω + ω and {ω + n : n < ω}, viewed as subsets of B, have the countable cardinality of the cofinality of B but are not order isomorphic). But cofinal subsets of B with minimal order type will be order isomorphic.
Cofinality of ordinals and other well-ordered sets
The cofinality of an ordinal α is the smallest ordinal δ that is the order type of a cofinal subset of α. The cofinality of a set of ordinals or any other well-ordered set is the cofinality of the order type of that set.
Thus for a limit ordinal α, there exists a δ-indexed strictly increasing sequence with limit α. For example, the cofinality of ω² is ω, because the sequence ω·m (where m ranges over the natural numbers) tends to ω²; but, more generally, any countable limit ordinal has cofinality ω. An uncountable limit ordinal may have either cofinality ω, as does ω_ω, or an uncountable cofinality.
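As a check of the definition, the witnessing sequence for ω² can be written out explicitly (standard notation, sketched here rather than quoted from the article):

```latex
% The map m \mapsto \omega\cdot m is strictly increasing and its image
% is cofinal in \omega^2, so \operatorname{cf}(\omega^2) = \omega:
\omega\cdot 1 < \omega\cdot 2 < \omega\cdot 3 < \cdots,
\qquad \sup_{m<\omega}\,\omega\cdot m \;=\; \omega^2 .
```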
The cofinality of 0 is 0. The cofinality of any successor ordinal is 1. The cofinality of any nonzero limit ordinal is an infinite regular cardinal.
Regular and singular ordinals
A regular ordinal is an ordinal that is equal to its cofinality. A singular ordinal is any ordinal that is not regular.
Every regular ordinal is the initial ordinal of a cardinal. Any limit of regular ordinals is a limit of initial ordinals and thus is also initial but need not be regular. Assuming the axiom of choice, ω_{α+1} is regular for each α. In this case, the ordinals 0, 1, ω, ω₁, and ω₂ are regular, whereas 2, 3, ω_ω, and ω_{ω·2} are initial ordinals that are not regular.
The cofinality of any ordinal α is a regular ordinal, that is, the cofinality of the cofinality of α is the same as the cofinality of α. So the cofinality operation is idempotent.
Cofinality of cardinals
If κ is an infinite cardinal number, then cf(κ) is the least cardinal such that there is an unbounded function from cf(κ) to κ; cf(κ) is also the cardinality of the smallest set of strictly smaller cardinals whose sum is κ; more precisely,
cf(κ) = min{ |I| : κ = ∑_{i∈I} λ_i and λ_i < κ for each i ∈ I }.
That the set above is nonempty comes from the fact that
κ = ⋃_{i∈κ} {i},
that is, the disjoint union of κ singleton sets. This implies immediately that cf(κ) ≤ κ.
The cofinality of any totally ordered set is regular, so cf(κ) = cf(cf(κ)).
Using König's theorem, one can prove κ < κ^cf(κ) and κ < cf(2^κ) for any infinite cardinal κ.
The last inequality implies that the cofinality of the cardinality of the continuum must be uncountable. On the other hand,
ℵ_ω = ⋃_{n<ω} ℵ_n,
the ordinal number ω being the first infinite ordinal, so that the cofinality of ℵ_ω is card(ω) = ℵ₀. (In particular, ℵ_ω is singular.) Therefore,
2^ℵ₀ ≠ ℵ_ω.
(Compare to the continuum hypothesis, which states 2^ℵ₀ = ℵ₁.)
Generalizing this argument, one can prove that for a limit ordinal δ, cf(ℵ_δ) = cf(δ).
On the other hand, if the axiom of choice holds, then for a successor or zero ordinal δ, cf(ℵ_δ) = ℵ_δ.
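The uncountability of cf(2^ℵ₀) used above follows from König's inequality by a one-line contradiction; a sketch in standard notation (not quoted from the article):

```latex
% Suppose \operatorname{cf}(2^{\aleph_0}) = \aleph_0.  K\"onig's theorem gives
% \kappa < \kappa^{\operatorname{cf}(\kappa)} for every infinite \kappa, so
2^{\aleph_0} \;<\; \left(2^{\aleph_0}\right)^{\aleph_0}
           \;=\; 2^{\aleph_0\cdot\aleph_0}
           \;=\; 2^{\aleph_0},
% a contradiction.  Hence \operatorname{cf}(2^{\aleph_0}) > \aleph_0,
% and in particular 2^{\aleph_0} \neq \aleph_\omega.
```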
See also
References
Jech, Thomas, 2003. Set Theory: The Third Millennium Edition, Revised and Expanded. Springer. .
Kunen, Kenneth, 1980. Set Theory: An Introduction to Independence Proofs. Elsevier. .
Cardinal numbers
Order theory
Ordinal numbers
Set theory
"Mathematics"
] | 1,197 | [
"Ordinal numbers",
"Cardinal numbers",
"Set theory",
"Mathematical logic",
"Mathematical objects",
"Infinity",
"Order theory",
"Numbers"
] |
A citadel is the most fortified area of a town or city. It may be a castle, fortress, or fortified center. The term is a diminutive of city, meaning "little city", because it is a smaller part of the city of which it is the defensive core.
In a fortification with bastions, the citadel is the strongest part of the system, sometimes well inside the outer walls and bastions, but often forming part of the outer wall for the sake of economy. It is positioned to be the last line of defence, should the enemy breach the other components of the fortification system.
History
3300–1300 BC
Some of the oldest known structures which have served as citadels were built by the Indus Valley civilisation, where citadels represented a centralised authority. Citadels in the Indus Valley were almost 12 metres tall. The purpose of these structures, however, remains debated. Though the structures found in the ruins of Mohenjo-daro were walled, it is far from clear that these structures were defensive against enemy attacks. Rather, they may have been built to divert flood waters.
Several settlements in Anatolia, including the Assyrian city of Kaneš in modern-day Kültepe, featured citadels. Kaneš' citadel contained the city's palace, temples, and official buildings. The citadel of the Greek city of Mycenae was built atop a highly-defensible rectangular hill and was later surrounded by walls in order to increase its defensive capabilities.
800 BC – 400 AD
In Ancient Greece, the Acropolis, which literally means "high city", placed on a commanding eminence, was important in the life of the people, serving as a lookout, a refuge, and a stronghold in peril, as well as containing military and food supplies, the shrine of the god and a royal palace. The most well known is the Acropolis of Athens, but nearly every Greek city-state had one – the Acrocorinth is famed as a particularly strong fortress. In a much later period, when Greece was ruled by the Latin Empire, the same strong points were used by the new feudal rulers for much the same purpose.
In the first millennium BC, the Castro culture emerged in northwestern Portugal and Spain in the region extending from the Douro river up to the Minho, but soon expanding north along the coast, and east following the river valleys. It was an autochthonous evolution of Atlantic Bronze Age communities. In 2008, the origins of the Celts were attributed to this period by John T. Koch and supported by Barry Cunliffe. The Ave River Valley in Portugal was the core region of this culture, with a large number of small settlements (the castros), but also settlements known as citadels or oppida by the Roman conquerors. These had several rings of walls and the Roman conquest of the citadels of Abobriga, Lambriaca and Cinania around 138 BC was possible only by prolonged siege. Ruins of notable citadels still exist, and are known by archaeologists as Citânia de Briteiros, Citânia de Sanfins, Cividade de Terroso and Cividade de Bagunte.
167–160 BC
Rebels who took power in a city, but with the citadel still held by the former rulers, could by no means regard their tenure of power as secure. One such incident played an important part in the history of the Maccabean Revolt against the Seleucid Empire. The Hellenistic garrison of Jerusalem and local supporters of the Seleucids held out for many years in the Acra citadel, making Maccabean rule in the rest of Jerusalem precarious. When finally gaining possession of the place, the Maccabeans pointedly destroyed and razed the Acra, though they constructed another citadel for their own use in a different part of Jerusalem.
400–1600
At various periods, and particularly during the Middle Ages and the Renaissance, the citadel – having its own fortifications, independent of the city walls – was the last defence of a besieged army, often held after the town had been conquered. Locals and defending armies have often held out citadels long after the city had fallen. For example, in the 1543 Siege of Nice the Ottoman forces led by Barbarossa conquered and pillaged the town and took many captives, but the citadel held out.
In the Philippines, the Ivatan people of the northern islands of Batanes often built fortifications to protect themselves during times of war. They built their so-called idjangs on hills and elevated areas. These fortifications were likened to European castles because of their purpose. Usually, the only entrance to the castles would be via a rope ladder that would only be lowered for the villagers and could be kept away when invaders arrived.
1600 to the present
In times of war, the citadel in many cases afforded retreat to the people living in the areas around the town. However, citadels were often used also to protect a garrison or political power from the inhabitants of the town where it was located, being designed to ensure loyalty from the town that they defended. For example, during the Dutch Wars of 1664–1667, King Charles II of England constructed a Royal Citadel at Plymouth, an important channel port which needed to be defended from a possible naval attack. However, due to Plymouth's support for the Parliamentarians in the then-recent English Civil War, the Plymouth Citadel was so designed that its guns could fire on the town as well as on the sea approaches.
Barcelona had a great citadel built in 1714 to intimidate the Catalans against repeating their mid-17th- and early-18th-century rebellions against the Spanish central government. In the 19th century, when the political climate had liberalized enough to permit it, the people of Barcelona had the citadel torn down, and replaced it with the city's main central park, the Parc de la Ciutadella. A similar example is the Citadella in Budapest, Hungary.
The attack on the Bastille in the French Revolution – though afterwards remembered mainly for the release of the handful of prisoners incarcerated there – was to a considerable degree motivated by the structure's being a Royal citadel in the midst of revolutionary Paris.
Similarly, after Garibaldi's overthrow of Bourbon rule in Palermo, during the 1860 Unification of Italy, Palermo's Castellamare Citadel – a symbol of the hated and oppressive former rule – was ceremoniously demolished.
Following Belgium gaining its independence in 1830, a Dutch garrison under General David Hendrik Chassé held out in Antwerp Citadel between 1830 and 1832, while the city had already become part of independent Belgium.
The Siege of the Alcázar in the Spanish Civil War, in which the Nationalists held out against a much larger Republican force for two months until relieved, shows that in some cases a citadel can be effective even in modern warfare; a similar case is the Battle of Huế during the Vietnam War, where a North Vietnamese Army division held the citadel of Huế for 26 days against roughly their own numbers of much better-equipped US and South Vietnamese troops.
Modern usage
The Citadelle of Québec (construction started in 1673 and was completed in 1820) still survives as the largest citadel still in official military operation in North America. It is home to the Royal 22nd Regiment of the Canadian Army and forms part of the Ramparts of Quebec City, dating back to the 1620s.
Since the mid 20th century, citadels have commonly enclosed military command and control centres, rather than cities or strategic points of defence on the boundaries of a country. These modern citadels are built to protect the command centre from heavy attacks, such as aerial or nuclear bombardment. The military citadels under London in the UK, including the massive underground complex Pindar beneath the Ministry of Defence, are examples, as is the Cheyenne Mountain nuclear bunker in the US.
Naval term
On armoured warships, the heavily armoured section of the ship that protects the ammunition and machinery spaces is called the armoured citadel.
A modern naval interpretation refers to the heaviest protected part of the hull as "the vitals", and the citadel is the semi-armoured freeboard above the vitals. Generally, Anglo-American and German languages follow this while Russian sources/language refer to "the vitals" as цитадель "citadel". Likewise, Russian literature often refers to the turret of a tank as the 'tower'.
The safe room on a ship is also called a citadel.
See also
List of citadels
Acropolis
Alcázar
Arx (Roman)
Fujian Tulou
Kasbah, a synonym
Kremlin (fortification)
Presidio
Rocca (fortification)
List of cities with defensive walls
List of forts
References
External links
Fortifications by type
Engineering barrages
Military strategy
"Engineering"
] | 1,788 | [
"Military engineering",
"Engineering barrages"
] |
In formal language theory, a context-free grammar (CFG) is a formal grammar whose production rules
can be applied to a nonterminal symbol regardless of its context.
In particular, in a context-free grammar, each production rule is of the form
A → γ
with A a single nonterminal symbol, and γ a string of terminals and/or nonterminals (γ can be empty). Regardless of which symbols surround it, the single nonterminal A on the left hand side can always be replaced by γ on the right hand side. This distinguishes it from a context-sensitive grammar, which can have production rules in the form αAβ → αγβ with A a nonterminal symbol and α, β, and γ strings of terminal and/or nonterminal symbols.
A formal grammar is essentially a set of production rules that describe all possible strings in a given formal language. Production rules are simple replacements. For example, a rule A → γ replaces a single occurrence of the nonterminal A by the string γ. There can be multiple replacement rules for a given nonterminal symbol. The language generated by a grammar is the set of all strings of terminal symbols that can be derived, by repeated rule applications, from some particular nonterminal symbol ("start symbol").
Nonterminal symbols are used during the derivation process, but do not appear in its final result string.
Languages generated by context-free grammars are known as context-free languages (CFL). Different context-free grammars can generate the same context-free language. It is important to distinguish the properties of the language (intrinsic properties) from the properties of a particular grammar (extrinsic properties). The language equality question (do two given context-free grammars generate the same language?) is undecidable.
Context-free grammars arise in linguistics where they are used to describe the structure of sentences and words in a natural language, and they were invented by the linguist Noam Chomsky for this purpose. By contrast, in computer science, as the use of recursively-defined concepts increased, they were used more and more. In an early application, grammars are used to describe the structure of programming languages. In a newer application, they are used in an essential part of the Extensible Markup Language (XML) called the document type definition.
In linguistics, some authors use the term phrase structure grammar to refer to context-free grammars, whereby phrase-structure grammars are distinct from dependency grammars. In computer science, a popular notation for context-free grammars is Backus–Naur form, or BNF.
Background
Since at least the time of the ancient Indian scholar Pāṇini, linguists have described the grammars of languages in terms of their block structure, and described how sentences are recursively built up from smaller phrases, and eventually individual words or word elements. An essential property of these block structures is that logical units never overlap. For example, the sentence:
John, whose blue car was in the garage, walked to the grocery store.
can be logically parenthesized (with the logical metasymbols [ ]) as follows:
[John[, [whose [blue car]] [was [in [the garage]]],]] [walked [to [the [grocery store]]]].
A context-free grammar provides a simple and mathematically precise mechanism for describing the methods by which phrases in some natural language are built from smaller blocks, capturing the "block structure" of sentences in a natural way. Its simplicity makes the formalism amenable to rigorous mathematical study. Important features of natural language syntax such as agreement and reference are not part of the context-free grammar, but the basic recursive structure of sentences, the way in which clauses nest inside other clauses, and the way in which lists of adjectives and adverbs are swallowed by nouns and verbs, is described exactly.
Context-free grammars are a special form of Semi-Thue systems that in their general form date back to the work of Axel Thue.
The formalism of context-free grammars was developed in the mid-1950s by Noam Chomsky, and also their classification as a special type of formal grammar (which he called phrase-structure grammars). Some authors, however, reserve the term for more restricted grammars in the Chomsky hierarchy: context-sensitive grammars or context-free grammars. In a broader sense, phrase structure grammars are also known as constituency grammars. The defining trait of phrase structure grammars is thus their adherence to the constituency relation, as opposed to the dependency relation of dependency grammars. In Chomsky's generative grammar framework, the syntax of natural language was described by context-free rules combined with transformation rules.
Block structure was introduced into computer programming languages by the Algol project (1957–1960), which, as a consequence, also featured a context-free grammar to describe the resulting Algol syntax. This became a standard feature of computer languages, and the notation for grammars used in concrete descriptions of computer languages came to be known as Backus–Naur form, after two members of the Algol language design committee. The "block structure" aspect that context-free grammars capture is so fundamental to grammar that the terms syntax and grammar are often identified with context-free grammar rules, especially in computer science. Formal constraints not captured by the grammar are then considered to be part of the "semantics" of the language.
Context-free grammars are simple enough to allow the construction of efficient parsing algorithms that, for a given string, determine whether and how it can be generated from the grammar. An Earley parser is an example of such an algorithm, while the widely used LR and LL parsers are simpler algorithms that deal only with more restrictive subsets of context-free grammars.
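Besides Earley, LR and LL parsers, membership in a context-free language can be decided by the classic CYK algorithm once the grammar is in Chomsky normal form. The sketch below is a minimal CYK recognizer; the CNF grammar used in the demo (for the language of aⁿbⁿ) is an illustrative choice, not taken from this article:

```python
from itertools import product

def cyk(word, terminal_rules, binary_rules, start):
    """CYK recognizer. terminal_rules maps a terminal to the nonterminals
    producing it; binary_rules maps a pair (B, C) to the nonterminals A
    having a rule A -> B C. Returns True iff `start` derives `word`."""
    n = len(word)
    if n == 0:
        return False  # the CNF grammars considered here have no ε-production
    # table[i][j] = set of nonterminals deriving word[i..j]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, ch in enumerate(word):
        table[i][i] = set(terminal_rules.get(ch, ()))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):  # split word[i..j] at position k
                for B, C in product(table[i][k], table[k + 1][j]):
                    table[i][j] |= binary_rules.get((B, C), set())
    return start in table[0][n - 1]

# CNF grammar for { a^n b^n : n >= 1 }:  S -> AB | AT,  T -> SB,  A -> a,  B -> b
TERM = {"a": {"A"}, "b": {"B"}}
BIN = {("A", "B"): {"S"}, ("A", "T"): {"S"}, ("S", "B"): {"T"}}
print(cyk("aabb", TERM, BIN, "S"))  # True
print(cyk("abab", TERM, BIN, "S"))  # False
```

CYK runs in O(n³·|G|) time, which is why the subset parsers mentioned above are preferred in practice for programming languages.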
Formal definitions
A context-free grammar is defined by the 4-tuple G = (V, Σ, R, S),
where
V is a finite set; each element v ∈ V is called a nonterminal character or a variable. Each variable represents a different type of phrase or clause in the sentence. Variables are also sometimes called syntactic categories. Each variable defines a sub-language of the language defined by G.
Σ is a finite set of terminals, disjoint from V, which make up the actual content of the sentence. The set of terminals is the alphabet of the language defined by the grammar G.
R is a finite relation in V × (V ∪ Σ)*, where the asterisk represents the Kleene star operation. The members of R are called the (rewrite) rules or productions of the grammar (R is also commonly symbolized by a P).
S is the start variable (or start symbol), used to represent the whole sentence (or program). It must be an element of V.
Production rule notation
A production rule in R is formalized mathematically as a pair (α, β) ∈ R, where α ∈ V is a nonterminal and β ∈ (V ∪ Σ)* is a string of variables and/or terminals; rather than using ordered pair notation, production rules are usually written using an arrow operator with α as its left hand side and β as its right hand side:
α → β.
It is allowed for β to be the empty string, and in this case it is customary to denote it by ε. The form α → ε is called an ε-production.
It is common to list all right-hand sides for the same left-hand side on the same line, using | (the vertical bar) to separate them. Rules α → β₁ and α → β₂ can hence be written as α → β₁ | β₂. In this case, β₁ and β₂ are called the first and second alternative, respectively.
Rule application
For any strings u, v ∈ (V ∪ Σ)*, we say u directly yields v, written as u ⇒ v, if there exists a rule (α, β) ∈ R with α ∈ V and strings u₁, u₂ ∈ (V ∪ Σ)* such that u = u₁αu₂ and v = u₁βu₂. Thus, v is a result of applying the rule (α, β) to u.
Repetitive rule application
For any strings u, v ∈ (V ∪ Σ)*, we say u yields v, or v is derived from u, if there is a positive integer k and strings u₁, …, u_k ∈ (V ∪ Σ)* such that u = u₁ ⇒ u₂ ⇒ ⋯ ⇒ u_k = v. This relation is denoted u ⇒* v, or u ⇒⇒ v in some textbooks. If k ≥ 2, the relation u ⇒⁺ v holds. In other words, ⇒* and ⇒⁺ are the reflexive transitive closure (allowing a string to yield itself) and the transitive closure (requiring at least one step) of ⇒, respectively.
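The one-step relation ⇒ can be made concrete with a toy sketch. Here uppercase letters play the nonterminals and a dict holds the productions; the grammar S → aSb | ab is a hypothetical example, not one defined above:

```python
def step(form, rules):
    """All sentential forms directly yielded (u => v) by one rule application."""
    results = []
    for i, ch in enumerate(form):
        if ch in rules:  # ch is a nonterminal α: replace it by each alternative β
            for rhs in rules[ch]:
                results.append(form[:i] + rhs + form[i + 1:])
    return results

rules = {"S": ["aSb", "ab"]}
print(step("S", rules))    # ['aSb', 'ab']
print(step("aSb", rules))  # ['aaSbb', 'aabb']
```

Iterating `step` from the start symbol enumerates exactly the relation ⇒* of the definition above.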
Context-free language
The language of a grammar G = (V, Σ, R, S) is the set
L(G) = { w ∈ Σ* : S ⇒* w }
of all terminal-symbol strings derivable from the start symbol.
A language L is said to be a context-free language (CFL) if there exists a CFG G such that L = L(G).
Non-deterministic pushdown automata recognize exactly the context-free languages.
Examples
Words concatenated with their reverse
The grammar G = ({S}, {a, b}, P, S), with productions
S → aSa,
S → bSb,
S → ε,
is context-free. It is not proper since it includes an ε-production. A typical derivation in this grammar is
S ⇒ aSa ⇒ abSba ⇒ abbSbba ⇒ abbbba.
This makes it clear that
L(G) = { ww^R : w ∈ {a, b}* }.
The language is context-free; however, it can be proved that it is not regular.
If the productions
S → a,
S → b,
are added, a context-free grammar for the set of all palindromes over the alphabet {a, b} is obtained.
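The first grammar of this example can be explored mechanically. Under the toy convention that uppercase letters are nonterminals and lowercase letters are terminals, this sketch enumerates every terminal string of bounded length derivable from the start symbol (the helper name is illustrative):

```python
from collections import deque

RULES = {"S": ["aSa", "bSb", ""]}  # the grammar S → aSa | bSb | ε from above

def language(rules, start="S", max_len=4):
    """All terminal strings of length <= max_len derivable from `start`."""
    found, seen = set(), {start}
    queue = deque([start])
    while queue:
        form = queue.popleft()
        i = next((k for k, ch in enumerate(form) if ch.isupper()), None)
        if i is None:               # no nonterminal left: a word of L(G)
            found.add(form)
            continue
        for rhs in rules[form[i]]:  # expand the leftmost nonterminal
            nxt = form[:i] + rhs + form[i + 1:]
            # terminals never disappear, so overly long forms can be pruned
            if sum(ch.islower() for ch in nxt) <= max_len and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return found

print(sorted(language(RULES)))  # ['', 'aa', 'aaaa', 'abba', 'baab', 'bb', 'bbbb']
```

The output is exactly the strings ww^R with |ww^R| ≤ 4, matching the characterization of L(G) above.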
Well-formed parentheses
The canonical example of a context-free grammar is parenthesis matching, which is representative of the general case. There are two terminal symbols "(" and ")" and one nonterminal symbol S. The production rules are
S → SS,
S → (S),
S → ()
The first rule allows the S symbol to multiply; the second rule allows the S symbol to become enclosed by matching parentheses; and the third rule terminates the recursion.
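A string belongs to this language exactly when it is nonempty (the grammar has no ε-rule), uses only the two terminals, keeps the running count of unmatched "(" nonnegative, and ends with that count at zero. That characterization gives a membership test without any parsing; a minimal sketch:

```python
def well_formed(s):
    """True iff s is generated by the grammar S → SS | (S) | ()."""
    if not s:
        return False        # the grammar derives no empty string
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:   # a ")" with no matching "("
                return False
        else:
            return False    # not a terminal of this grammar
    return depth == 0       # every "(" was matched

print(well_formed("(()())"))  # True
print(well_formed("())("))    # False
```

That such a counter suffices is special to this language; general context-free languages need a full stack (a pushdown automaton).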
Well-formed nested parentheses and square brackets
A second canonical example is two different kinds of matching nested parentheses, described by the productions:
S → SS
S → ()
S → (S)
S → []
S → [S]
with terminal symbols [ ] ( ) and nonterminal S.
The following sequence can be derived in that grammar:
([ [ [ ()() [ ][ ] ] ]([ ]) ])
Matching pairs
In a context-free grammar, we can pair up characters the way we do with brackets. The simplest example:
S → aSb
S → ab
This grammar generates the language { aⁿbⁿ : n ≥ 1 }, which is not regular (according to the pumping lemma for regular languages).
The special character ε stands for the empty string. By changing the above grammar to
S → aSb
S → ε
we obtain a grammar generating the language { aⁿbⁿ : n ≥ 0 } instead. This differs only in that it contains the empty string while the original grammar did not.
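The derivation pattern for this grammar is mechanical: apply S → aSb n times, then erase S with the ε-rule. A tiny sketch:

```python
def derive(n):
    """Derive a^n b^n from S using S → aSb (n times) and then S → ε."""
    form = "S"
    for _ in range(n):
        form = form.replace("S", "aSb", 1)  # one application of S → aSb
    return form.replace("S", "")            # final application of S → ε

print(derive(3))  # 'aaabbb'
print(derive(0))  # ''
```

Every word of { aⁿbⁿ : n ≥ 0 } arises this way, and no other terminal string is derivable, since every sentential form of this grammar is aᵏSbᵏ or aᵏbᵏ.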
Distinct number of a's and b's
A context-free grammar for the language consisting of all strings over {a, b} containing an unequal number of a's and b's:
S → T | U
T → VaT | VaV | TaV
U → VbU | VbV | UbV
V → aVbV | bVaV | ε
Here, the nonterminal T can generate all strings with more a's than b's, the nonterminal U generates all strings with more b's than a's and the nonterminal V generates all strings with an equal number of a's and b's. Omitting the third alternative in the rules for T and U does not restrict the grammar's language.
Second block of b's of double size
Another example of a non-regular language is {b^n a^m b^(2n) : n ≥ 0, m ≥ 0}. It is context-free as it can be generated by the following context-free grammar:
S → bSbb | A
A → aA | ε
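Assuming the standard grammar S → bSbb | A, A → aA | ε for this language, the derivation S ⇒ b^n S b^(2n) ⇒ b^n A b^(2n) ⇒ b^n a^m b^(2n) can be replayed directly as string rewriting:

```python
def derive(n, m):
    """Derive b^n a^m b^(2n) using S -> bSbb | A and A -> aA | ε."""
    s = "S"
    for _ in range(n):
        s = s.replace("S", "bSbb")   # S -> bSbb, applied n times
    s = s.replace("S", "A")          # S -> A
    for _ in range(m):
        s = s.replace("A", "aA")     # A -> aA, applied m times
    return s.replace("A", "")        # A -> ε
```

Each b produced on the left is matched by two b's on the right, which is exactly the "second block of double size" property.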
First-order logic formulas
The formation rules for the terms and formulas of formal logic fit the definition of context-free grammar, except that the set of symbols may be infinite and there may be more than one start symbol.
Examples of languages that are not context free
In contrast to well-formed nested parentheses and square brackets in the previous section, there is no context-free grammar for generating all sequences of two different types of parentheses, each separately balanced disregarding the other, where the two types need not nest inside one another, for example:
[ ( ] )
or
( [ ) ( ] )
The fact that this language is not context free can be proven using pumping lemma for context-free languages and a proof by contradiction, observing that all words of the form
(^n [^n )^n ]^n
should belong to the language. This language belongs instead to a more general class and can be described by a conjunctive grammar, which in turn also includes other non-context-free languages, such as the language of all words of the form
a^n b^n c^n.
Regular grammars
Every regular grammar is context-free, but not all context-free grammars are regular. The following context-free grammar, for example, is also regular.
S → a
S → aS
S → bS
The terminals here are a and b, while the only nonterminal is S.
The language described is all nonempty strings of a's and b's that end in a.
This grammar is regular: no rule has more than one nonterminal in its right-hand side, and each of these nonterminals is at the same end of the right-hand side.
Every regular grammar corresponds directly to a nondeterministic finite automaton, so we know that this is a regular language.
Using vertical bars, the grammar above can be described more tersely as follows:
S → a | aS | bS
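The correspondence with a nondeterministic finite automaton can be made concrete: one state per nonterminal, plus an accepting state entered by the terminating rule S → a. A sketch (the state names "S" and "F" are assumptions for illustration):

```python
def accepts(word):
    """Simulate the NFA for the regular grammar S -> a | aS | bS.
    "F" is the accepting state reached by applying the terminating rule S -> a."""
    states = {"S"}
    for ch in word:
        nxt = set()
        if "S" in states:
            nxt.add("S")        # S -> aS and S -> bS keep the derivation going
            if ch == "a":
                nxt.add("F")    # S -> a may end the string on this letter
        states = nxt
    return "F" in states
```

The automaton accepts exactly the nonempty strings of a's and b's ending in a, matching the description of the grammar's language.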
Derivations and syntax trees
A derivation of a string for a grammar is a sequence of grammar rule applications that transform the start symbol into the string.
A derivation proves that the string belongs to the grammar's language.
A derivation is fully determined by giving, for each step:
the rule applied in that step
the occurrence of its left-hand side to which it is applied
For clarity, the intermediate string is usually given as well.
For instance, with the grammar:
1. S → S + S
2. S → 1
3. S → a
the string
1 + 1 + a
can be derived from the start symbol with the following derivation:
S
→ S + S (by rule 1. on S)
→ S + S + S (by rule 1. on the second S)
→ 1 + S + S (by rule 2. on the first S)
→ 1 + 1 + S (by rule 2. on the second S)
→ 1 + 1 + a (by rule 3. on the third S)
Often, a strategy is followed that deterministically chooses the next nonterminal to rewrite:
in a leftmost derivation, it is always the leftmost nonterminal;
in a rightmost derivation, it is always the rightmost nonterminal.
Given such a strategy, a derivation is completely determined by the sequence of rules applied. For instance, one leftmost derivation of the same string is
S
→ S + S (by rule 1 on the leftmost S)
→ 1 + S (by rule 2 on the leftmost S)
→ 1 + S + S (by rule 1 on the leftmost S)
→ 1 + 1 + S (by rule 2 on the leftmost S)
→ 1 + 1 + a (by rule 3 on the leftmost S),
which can be summarized as
rule 1
rule 2
rule 1
rule 2
rule 3.
One rightmost derivation is:
S
→ S + S (by rule 1 on the rightmost S)
→ S + S + S (by rule 1 on the rightmost S)
→ S + S + a (by rule 3 on the rightmost S)
→ S + 1 + a (by rule 2 on the rightmost S)
→ 1 + 1 + a (by rule 2 on the rightmost S),
which can be summarized as
rule 1
rule 1
rule 3
rule 2
rule 2.
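Because a leftmost (or rightmost) derivation is determined by its rule sequence alone, the summaries above can be replayed mechanically. In the sketch below (an illustration; spaces around + are omitted for simplicity, and the rule numbering 1, 2, 3 for S → S + S, S → 1, S → a is assumed):

```python
RULES = {1: "S+S", 2: "1", 3: "a"}   # 1: S -> S + S, 2: S -> 1, 3: S -> a

def leftmost(rule_seq):
    """Replay a leftmost derivation from the rule numbers alone."""
    s = "S"
    for r in rule_seq:
        i = s.index("S")             # leftmost occurrence of the nonterminal
        s = s[:i] + RULES[r] + s[i + 1:]
    return s

def rightmost(rule_seq):
    """Replay a rightmost derivation from the rule numbers alone."""
    s = "S"
    for r in rule_seq:
        i = s.rindex("S")            # rightmost occurrence of the nonterminal
        s = s[:i] + RULES[r] + s[i + 1:]
    return s
```

Both summarized rule sequences reproduce the same string, as the text claims.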
The distinction between leftmost derivation and rightmost derivation is important because in most parsers the transformation of the input is defined by giving a piece of code for every grammar rule that is executed whenever the rule is applied. Therefore, it is important to know whether the parser determines a leftmost or a rightmost derivation because this determines the order in which the pieces of code will be executed. See, for example, LL parsers and LR parsers.
A derivation also imposes in some sense a hierarchical structure on the string that is derived. For example, if the string "1 + 1 + a" is derived according to the leftmost derivation outlined above, the structure of the string would be:
{ {1}S + { {1}S + {a}S }S }S
where {...}S indicates a substring recognized as belonging to S. This hierarchy can also be seen as a tree:
      S
    / | \
   S  +  S
   |    / | \
   1   S  +  S
       |     |
       1     a
This tree is called a parse tree or "concrete syntax tree" of the string, by contrast with the abstract syntax tree. In this case the presented leftmost and the rightmost derivations define the same parse tree; however, there is another rightmost derivation of the same string
S
→ S + S (by rule 1 on the rightmost S)
→ S + a (by rule 3 on the rightmost S)
→ S + S + a (by rule 1 on the rightmost S)
→ S + 1 + a (by rule 2 on the rightmost S)
→ 1 + 1 + a (by rule 2 on the rightmost S),
which defines a string with a different structure
{ { {1}S + {1}S }S + {a}S }S
and a different parse tree:
        S
      / | \
     S  +  S
   / | \   |
  S  +  S  a
  |     |
  1     1
Note however that both parse trees can be obtained by both leftmost and rightmost derivations. For example, the last tree can be obtained with the leftmost derivation as follows:
S
→ S + S (by rule 1 on the leftmost S)
→ S + S + S (by rule 1 on the leftmost S)
→ 1 + S + S (by rule 2 on the leftmost S)
→ 1 + 1 + S (by rule 2 on the leftmost S)
→ 1 + 1 + a (by rule 3 on the leftmost S),
If a string in the language of the grammar has more than one parsing tree, then the grammar is said to be an ambiguous grammar. Such grammars are usually hard to parse because the parser cannot always decide which grammar rule it has to apply. Usually, ambiguity is a feature of the grammar, not the language, and an unambiguous grammar can be found that generates the same context-free language. However, there are certain languages that can only be generated by ambiguous grammars; such languages are called inherently ambiguous languages.
Normal forms
Every context-free grammar with no ε-production has an equivalent grammar in Chomsky normal form, and a grammar in Greibach normal form. "Equivalent" here means that the two grammars generate the same language.
The especially simple form of production rules in Chomsky normal form grammars has both theoretical and practical implications. For instance, given a context-free grammar, one can use the Chomsky normal form to construct a polynomial-time algorithm that decides whether a given string is in the language represented by that grammar or not (the CYK algorithm).
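A compact version of the CYK algorithm follows as a sketch; the CNF grammar used to exercise it (for {a^n b^n : n ≥ 1}) is an assumed example, not taken from the article:

```python
def cyk(word, rules, term, start="S"):
    """CYK membership test for a grammar in Chomsky normal form.
    rules: nonterminal -> list of (B, C) right-hand-side pairs;
    term: terminal -> set of nonterminals producing it."""
    n = len(word)
    if n == 0:
        return False
    # table[l][i]: nonterminals deriving the substring word[i:i+l]
    table = [[set() for _ in range(n)] for _ in range(n + 1)]
    for i, ch in enumerate(word):
        table[1][i] = set(term.get(ch, ()))
    for l in range(2, n + 1):              # substring length
        for i in range(n - l + 1):         # substring start
            for k in range(1, l):          # split point
                for lhs, bodies in rules.items():
                    for b, c in bodies:
                        if b in table[k][i] and c in table[l - k][i + k]:
                            table[l][i].add(lhs)
    return start in table[n][0]

# Assumed CNF grammar for {a^n b^n : n >= 1}:
# S -> AB | AC, C -> SB, A -> a, B -> b
RULES = {"S": [("A", "B"), ("A", "C")], "C": [("S", "B")]}
TERM = {"a": {"A"}, "b": {"B"}}
```

The triple loop over length, start position, and split point gives the cubic running time mentioned in connection with the algorithm.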
Closure properties
Context-free languages are closed under the various operations, that is, if the languages K and L are
context-free, so is the result of the following operations:
union K ∪ L; concatenation K ∘ L; Kleene star L*
substitution (in particular homomorphism)
inverse homomorphism
intersection with a regular language
They are not closed under general intersection (hence neither under complementation) and set difference.
Decidable problems
The following are some decidable problems about context-free grammars.
Parsing
The parsing problem, checking whether a given word belongs to the language given by a context-free grammar, is decidable, using one of the general-purpose parsing algorithms:
CYK algorithm (for grammars in Chomsky normal form)
Earley parser
GLR parser
LL parser (only for the proper subclass of LL(k) grammars)
Context-free parsing for Chomsky normal form grammars was shown by Leslie G. Valiant to be reducible to Boolean matrix multiplication, thus inheriting its complexity upper bound of O(n^2.3728639). Conversely, Lillian Lee has shown O(n^(3−ε)) Boolean matrix multiplication to be reducible to O(n^(3−3ε)) CFG parsing, thus establishing some kind of lower bound for the latter.
Reachability, productiveness, nullability
A nonterminal symbol X is called productive, or generating, if there is a derivation X ⇒* w for some string w of terminal symbols. X is called reachable if there is a derivation S ⇒* αXβ for some strings α, β of nonterminal and terminal symbols from the start symbol S. X is called useless if it is unreachable or unproductive. X is called nullable if there is a derivation X ⇒* ε. A rule X → ε is called an ε-production. A derivation X ⇒+ X is called a cycle.
Algorithms are known to eliminate from a given grammar, without changing its generated language,
unproductive symbols,
unreachable symbols,
ε-productions, with one possible exception, and
cycles.
In particular, an alternative containing a useless nonterminal symbol can be deleted from the right-hand side of a rule.
Such rules and alternatives are called useless.
In the example grammar
S → Bb | Cc | Ee
B → Bb | b
C → C
D → Bd
E → Ee
the nonterminal D is unreachable, and E is unproductive, while C → C causes a cycle.
Hence, omitting the last three rules does not change the language generated by the grammar, nor does omitting the alternatives "| Cc | Ee" from the right-hand side of the rule for S.
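Productive and reachable symbols are both least fixed points and can be computed in a few lines. In the sketch below, the example grammar is an assumed one chosen to match the text's description (D unreachable, E unproductive, C → C a cycle):

```python
def productive(grammar):
    """Fixed point: X is productive if some rule X -> body uses only
    terminals and already-known-productive nonterminals."""
    prod = set()
    changed = True
    while changed:
        changed = False
        for lhs, bodies in grammar.items():
            if lhs in prod:
                continue
            if any(all(s in prod or s not in grammar for s in body)
                   for body in bodies):
                prod.add(lhs)
                changed = True
    return prod

def reachable(grammar, start):
    """Nonterminals reachable from the start symbol."""
    seen, stack = {start}, [start]
    while stack:
        for body in grammar.get(stack.pop(), []):
            for s in body:
                if s in grammar and s not in seen:
                    seen.add(s)
                    stack.append(s)
    return seen

# Assumed example grammar:
# S -> Bb | Cc | Ee, B -> Bb | b, C -> C, D -> Bd, E -> Ee
G = {
    "S": [["B", "b"], ["C", "c"], ["E", "e"]],
    "B": [["B", "b"], ["b"]],
    "C": [["C"]],
    "D": [["B", "d"]],
    "E": [["E", "e"]],
}
```

Intersecting the two sets yields the useful nonterminals; everything outside the intersection can be deleted without changing the generated language.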
A context-free grammar is said to be proper if it has neither useless symbols nor ε-productions nor cycles.
Combining the above algorithms, every context-free grammar not generating ε can be transformed into a weakly equivalent proper one.
Regularity and LL(k) checks
It is decidable whether a given grammar is a regular grammar, as well as whether it is an LL(k) grammar for a given k≥0. If k is not given, the latter problem is undecidable.
Given a context-free grammar, it is not decidable whether its language is regular, nor whether it is an LL(k) language for a given k.
Emptiness and finiteness
There are algorithms to decide whether the language of a given context-free grammar is empty, as well as whether it is finite.
Undecidable problems
Some questions that are undecidable for wider classes of grammars become decidable for context-free grammars; e.g. the emptiness problem (whether the grammar generates any terminal strings at all), is undecidable for context-sensitive grammars, but decidable for context-free grammars.
However, many problems are undecidable even for context-free grammars; the most prominent ones are handled in the following.
Universality
Given a CFG, does it generate the language of all strings over the alphabet of terminal symbols used in its rules?
A reduction can be demonstrated to this problem from the well-known undecidable problem of determining whether a Turing machine accepts a particular input (the halting problem). The reduction uses the concept of a computation history, a string describing an entire computation of a Turing machine. A CFG can be constructed that generates all strings that are not accepting computation histories for a particular Turing machine on a particular input, and thus it will generate all strings only if the machine does not accept that input.
Language equality
Given two CFGs, do they generate the same language?
The undecidability of this problem is a direct consequence of the previous: it is impossible to even decide whether a CFG is equivalent to the trivial CFG defining the language of all strings.
Language inclusion
Given two CFGs, can the first one generate all strings that the second one can generate?
If this problem were decidable, then language equality could be decided too: two CFGs G1 and G2 generate the same language if L(G1) is a subset of L(G2) and L(G2) is a subset of L(G1).
Being in a lower or higher level of the Chomsky hierarchy
Using Greibach's theorem, it can be shown that the two following problems are undecidable:
Given a context-sensitive grammar, does it describe a context-free language?
Given a context-free grammar, does it describe a regular language?
Grammar ambiguity
Given a CFG, is it ambiguous?
The undecidability of this problem follows from the fact that if an algorithm to determine ambiguity existed, the Post correspondence problem could be decided, which is known to be undecidable. This may be proved by Ogden's lemma.
Language disjointness
Given two CFGs, is there any string derivable from both grammars?
If this problem were decidable, the undecidable Post correspondence problem (PCP) could be decided, too: given strings α1, …, αN, β1, …, βN over some alphabet {a1, …, ak}, let the grammar G1 consist of the rule
S → α1Sβ1^rev | … | αNSβN^rev | b;
where βi^rev denotes the reversed string βi and b does not occur among the ai; and let grammar G2 consist of the rule
T → a1Ta1 | … | akTak | b.
Then the PCP instance given by (α1, β1), …, (αN, βN) has a solution if and only if G1 and G2 share a derivable string. The left of the string (before the b) will represent the top of the solution for the PCP instance while the right side will be the bottom in reverse.
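The reduction can be simulated for a small PCP instance. In the sketch below, "#" stands in for the fresh symbol b of the construction, and the search over index sequences is bounded, so a None result only means there is no short solution:

```python
from itertools import product

def g1_string(seq, alphas, betas, sep="#"):
    """A word of G1 for index sequence seq: the concatenated top strings,
    the separator, then the concatenated bottom strings reversed."""
    top = "".join(alphas[i] for i in seq)
    bottom = "".join(betas[i] for i in seq)
    return top + sep + bottom[::-1]

def in_g2(s, sep="#"):
    """G2 generates exactly the words w + sep + reverse(w)."""
    left, _, right = s.partition(sep)
    return right == left[::-1]

def shared_string(alphas, betas, max_seq=4):
    """Search short index sequences for a word derivable in both grammars;
    one exists iff the PCP instance has a solution of that length."""
    for m in range(1, max_seq + 1):
        for seq in product(range(len(alphas)), repeat=m):
            s = g1_string(seq, alphas, betas)
            if in_g2(s):
                return s
    return None
```

For the instance with tops ("a", "ab") and bottoms ("aa", "b"), the index sequence 1, 2 is a PCP solution, and the two grammars share the corresponding string.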
Extensions
An obvious way to extend the context-free grammar formalism is to allow nonterminals to have arguments, the values of which are passed along within the rules. This allows natural language features such as agreement and reference, and programming language analogs such as the correct use and definition of identifiers, to be expressed in a natural way. E.g. we can now easily express that in English sentences, the subject and verb must agree in number. In computer science, examples of this approach include affix grammars, attribute grammars, indexed grammars, and Van Wijngaarden two-level grammars. Similar extensions exist in linguistics.
An extended context-free grammar (or regular right part grammar) is one in which the right-hand side of the production rules is allowed to be a regular expression over the grammar's terminals and nonterminals. Extended context-free grammars describe exactly the context-free languages.
Another extension is to allow additional terminal symbols to appear at the left-hand side of rules, constraining their application. This produces the formalism of context-sensitive grammars.
Subclasses
There are a number of important subclasses of the context-free grammars:
LR(k) grammars (also known as deterministic context-free grammars) allow parsing (string recognition) with deterministic pushdown automata (PDA), but they can only describe deterministic context-free languages.
Simple LR, Look-Ahead LR grammars are subclasses that allow further simplification of parsing. SLR and LALR are recognized using the same PDA as LR, but with simpler tables, in most cases.
LL(k) and LL(*) grammars allow parsing by direct construction of a leftmost derivation as described above, and describe even fewer languages.
Simple grammars are a subclass of the LL(1) grammars mostly interesting for its theoretical property that language equality of simple grammars is decidable, while language inclusion is not.
Bracketed grammars have the property that the terminal symbols are divided into left and right bracket pairs that always match up in rules.
Linear grammars have no rules with more than one nonterminal on the right-hand side.
Regular grammars are a subclass of the linear grammars and describe the regular languages, i.e. they correspond to finite automata and regular expressions.
LR parsing extends LL parsing to support a larger range of grammars; in turn, generalized LR parsing extends LR parsing to support arbitrary context-free grammars. On LL grammars and LR grammars, it essentially performs LL parsing and LR parsing, respectively, while on nondeterministic grammars, it is as efficient as can be expected. Although GLR parsing was developed in the 1980s, many new language definitions and parser generators continue to be based on LL, LALR or LR parsing up to the present day.
Linguistic applications
Chomsky initially hoped to overcome the limitations of context-free grammars by adding transformation rules.
Such rules are another standard device in traditional linguistics; e.g. passivization in English. Much of generative grammar has been devoted to finding ways of refining the descriptive mechanisms of phrase-structure grammar and transformation rules such that exactly the kinds of things can be expressed that natural language actually allows. Allowing arbitrary transformations does not meet that goal: they are much too powerful, being Turing complete unless significant restrictions are added (e.g. no transformations that introduce and then rewrite symbols in a context-free fashion).
Chomsky's general position regarding the non-context-freeness of natural language has held up since then, although his specific examples regarding the inadequacy of context-free grammars in terms of their weak generative capacity were later disproved. Gerald Gazdar and Geoffrey Pullum have argued that despite a few non-context-free constructions in natural language (such as cross-serial dependencies in Swiss German and reduplication in Bambara), the vast majority of forms in natural language are indeed context-free.
See also
Parsing expression grammar
Stochastic context-free grammar
Algorithms for context-free grammar generation
Pumping lemma for context-free languages
References
Notes
Further reading
Chapter 4: Context-Free Grammars, pp. 77–106; Chapter 6: Properties of Context-Free Languages, pp. 125–137.
Chapter 2: Context-Free Grammars, pp. 91–122; Section 4.1.2: Decidable problems concerning context-free languages, pp. 156–159; Section 5.1.1: Reductions via computation histories: pp. 176–183.
External links
Computer programmers may find the stack exchange answer to be useful.
CFG Developer created by Christopher Wong at Stanford University in 2014; modified by Kevin Gibbons in 2015.
1956 in computing
Compiler construction
Formal languages
Programming language topics
Wikipedia articles with ASCII art
"Mathematics",
"Engineering"
] | 5,834 | [
"Software engineering",
"Formal languages",
"Mathematical logic",
"Programming language topics"
] |
Dibenzylpiperazine
Dibenzylpiperazine (DBZP) is a piperazine derivative often found as an impurity in the recreational stimulant drug benzylpiperazine (BZP). Presence of DBZP is a marker for low quality or badly made BZP. It can be made as a reaction byproduct during BZP synthesis, either because the reaction has been run at too high a temperature, or because an excess of benzyl chloride has been used.
Pharmacology and effects
It is not known to have any stimulant effects in its own right, although this has not been tested.
Toxicity
The toxicity of DBZP is unknown.
Legal status
China
As of October 2015 DBZP is a controlled substance in China.
United States
DBZP is not scheduled as a controlled substance at the federal level in the United States. It is possible that it could be considered an analog BZP, in which case, sales or possession intended for human consumption could be prosecuted under the Federal Analog Act.
Florida
DBZP is a Schedule I controlled substance in the state of Florida making it illegal to buy, sell, or possess in Florida.
See also
Substituted piperazine
References
Piperazines
Designer drugs
Benzyl compounds
"Chemistry"
] | 260 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
SunPower
SunPower Corporation is an American provider of photovoltaic solar energy generation systems and battery energy storage products, primarily for residential customers. The company, headquartered in San Jose, California, was founded in 1985 by Richard Swanson, an electrical engineering professor from Stanford University. Cypress Semiconductor bought a majority interest in the company in 2002, growing it quickly until SunPower went public in 2005. TotalEnergies, a French energy and oil company, purchased a controlling interest in SunPower for $1.37 billion in 2011 but disengaged progressively until reaching 32.5%.
The company previously developed and manufactured photovoltaic panels, before spinning off that part of its business in 2020 as Maxeon Solar Technologies. The company had also previously marketed its products to commercial and industrial customers before agreeing to sell that business line to TotalEnergies in February 2022.
SunPower filed for Chapter 11 bankruptcy protection in August 2024. Nasdaq trading was suspended on August 16, 2024. As of December 26, 2024, the OTC stock symbol is SPWRQ, and the share price is $0.0001.
History
Early history
SunPower was founded on April 24, 1985, by Richard Swanson, who was a Stanford University professor focused on electrical engineering. Swanson studied solar power efficiency in the Stanford Electronics Laboratory with funding from research grants. After breaking a record for solar power efficiency in lab conditions, he took a sabbatical to start SunPower and commercialize the technology. Initially, the company was called Eos and was funded with $2,000 in savings between Swanson and his friend Richard Crane. In 1989, Robert Lorenzini invested in the company, became its chairman, and changed the name to SunPower.
Some of SunPower's early revenues were from research grants and using its manufacturing facilities to create silicon wafers for semiconductor companies. Interest grew as SunPower completed prototype installations and portable electronics that use solar power became more popular. Swanson resigned from his academic position at Stanford in 1991, in order to focus on SunPower full-time. The company's revenues grew from $600,000 in 1989 to $1.4 million in 1995, and $6 million in 1996. However, by 2001 the company was anticipating having to lay off half of its employees.
Growth
SunPower founder Richard Swanson's former classmate, T.J. Rodgers, was the CEO of Cypress Semiconductor and took an interest in investing in the company. At first, the Cypress board wasn't willing to invest, so Rodgers invested $750,000 of his own money. Starting with an investment of $8 million, Cypress eventually invested about $150 million, acquiring a controlling interest in SunPower in 2002. Cypress appointed Tom Werner as the new CEO the following year.
Demand for SunPower's products increased in the early 2000s, due to rising utility costs, government subsidies, and its new A-300 solar cell. In particular, SunPower grew in Germany and California, where new government subsidies were being introduced. By 2005, SunPower was not yet profitable, but had $200 million in backlogged orders. Revenues increased from $5 million in 2003 to $78.7 million in 2005.
As the company was getting closer to profitability, it filed an initial public offering. The 2005 offering raised $138.6 million in funding. The following year, SunPower was profitable for the first time with $236.5 million in revenues. SunPower moved into a larger corporate headquarters location in San Jose, California and secured several contracts with major retailers for solar panel installations. In 2007, SunPower announced plans to expand its manufacturing facility five-fold and build a second factory.
SunPower collaborated with PowerLight to develop its roofing-tile solar product called SunTile. In order to combine their R&D efforts, SunPower acquired PowerLight for $265 million, in January 2007. Analysts estimated the acquisition doubled SunPower's size. Shortly afterwards, PowerLight secured a $330 million contract, the largest SunPower had ever done. By 2007, half of Cypress' revenues, or $775 million, was coming from its investment in SunPower. SunPower was spun-off as a separate business from Cypress in 2008.
SunPower acquired Sunray Renewable Energy, a solar panel company based in Italy, for $277 million in 2010, in order to expand in Europe. The following year, SunPower cut back production due to an overall market decline in solar power purchases. SunPower also announced the French oil and gas company Total was acquiring a majority interest in SunPower for $1.37 billion. In 2012, SunPower founder Richard Swanson retired, though he continued to serve on the SunPower advisory board.
By 2013, SunPower's revenues rebounded and it started expanding its manufacturing facilities again. That same year, it acquired Greenbotics, which developed automated cleaning systems for solar panels, and Dragonfly, which developed solar micro-inverters. This was followed by SunPower's 2014 acquisition of SolarBridge, which developed microinverters used to improve the efficiency of solar panels. In 2018, SunPower sold its microinverter business to Enphase Energy and since that time Sunpower has used Enphase microinverters in all AC module products.
In 2014, SunPower raised $220 million from Bank of America and Merrill Lynch, in order to fund customer financing options. That same year, SunPower invested $20 million in a home energy app company called Tendril. As part of the deal, the two companies began integrating their products, so the home automation software from Tendril could time heavy energy use for when the solar panels are generating the most power.
Spin-offs
In 2019, SunPower announced it was going to spin off its manufacturing division into a separate business in Singapore named Maxeon Solar Technologies. As part of the deal, Tianjin Zhonghuan Semiconductor Co invested $298 million for a 29% interest in Maxeon. The remaining SunPower business became focused on services, installation, batteries, and other products. In 2021, Tom Werner retired as CEO and Peter Faricy took his place as CEO.
In February 2022, SunPower spun-off its commercial and industrial installation divisions, which were purchased by SunPower investor TotalEnergies for $250 million. SunPower said the transaction would allow it to focus on residential installations.
Technical default and bankruptcy
The company announced on December 18, 2023, that there was a question as to whether it could continue as a "going concern," indicating that leadership was uncertain whether the company could continue operations given its current financial position. The company also announced that it had previously incorrectly accounted for inventory, causing a technical default; however, creditors gave the company time to shore up its finances before calling those debts.
On August 5, 2024, SunPower filed for Chapter 11 bankruptcy protection. The company entered into a stalking horse agreement to sell off its assets to Complete Solaria for $50 million.
References
External links
Photovoltaics manufacturers
Solar energy companies of the United States
Solar energy in California
Manufacturing companies based in California
Technology companies based in the San Francisco Bay Area
Companies based in San Jose, California
Energy companies established in 1985
Renewable resource companies established in 1985
1985 establishments in California
TotalEnergies
Companies formerly listed on the Nasdaq
Companies that filed for Chapter 11 bankruptcy in 2024
Energy in the San Francisco Bay Area
2005 initial public offerings
American companies established in 1985
"Engineering"
] | 1,508 | [
"Photovoltaics manufacturers",
"Engineering companies"
] |
Trivialism
Trivialism is the logical theory that all statements (also known as propositions) are true and, consequently, that all contradictions of the form "p and not p" (e.g. the ball is red and not red) are true. In accordance with this, a trivialist is a person who believes everything is true.
In classical logic, trivialism is in direct violation of Aristotle's law of noncontradiction. In philosophy, trivialism is considered by some to be the complete opposite of skepticism. Paraconsistent logics may use "the law of non-triviality" to abstain from trivialism in logical practices that involve true contradictions.
Theoretical arguments and anecdotes have been offered for trivialism to contrast it with theories such as modal realism, dialetheism and paraconsistent logics.
Overview
Etymology
Trivialism, as a term, is derived from the Latin word trivialis, meaning commonplace, in turn derived from the trivium, the three introductory educational topics (grammar, logic, and rhetoric) expected to be learned by all freemen. In logic, from this meaning, a "trivial" theory is something regarded as defective in the face of a complex phenomenon that needs to be completely represented. Thus, literally, the trivialist theory is something expressed in the simplest possible way.
Theory
In symbolic logic, trivialism may be expressed as the following:
∀pTp
The above would be read as "given any proposition, it is a true proposition" through universal quantification (∀).
A claim of trivialism may always apply its fundamental truth, otherwise known as a truth predicate:
p ↔ Tp
The above would be read as a "proposition if and only if a true proposition", meaning that all propositions are believed to be inherently proven as true. Without consistent use of this concept, a claim of advocating trivialism may not be seen as genuine and complete trivialism; as to claim a proposition is true but deny it as probably true may be considered inconsistent with the assumed theory.
Taxonomy of trivialisms
Luis Estrada-González in "Models of Possibilism and Trivialism" lists four types of trivialism through the concept of possible worlds, with a "world" being a possibility and "the actual world" being reality. It is theorized a trivialist simply designates a value to all propositions in equivalence to seeing all propositions and their negations as true. This taxonomy is used to demonstrate the different strengths and plausibility of trivialism in this context:
(T0) Minimal trivialism: At some world, all propositions have a designated value.
(T1) Pluralist trivialism: In some worlds, all propositions have a designated value.
(T2) Actualist trivialism: In the actual world, all propositions have a designated value.
(T3) Absolute trivialism: In all worlds, all propositions have a designated value.
Arguments against trivialism
The consensus among the majority of philosophers is descriptively a denial of trivialism, termed as non-trivialism or anti-trivialism. This is due to it being unable to produce a sound argument through the principle of explosion and it being considered an absurdity (reductio ad absurdum).
Aristotle
Aristotle's law of noncontradiction and other arguments are considered to be against trivialism. Luis Estrada-González in "Models of Possiblism and Trivialism" has interpreted Aristotle's Metaphysics Book IV as such: "A family of arguments between 1008a26 and 1007b12 of the form 'If trivialism is right, then X is the case, but if X is the case then all things are one. But it is impossible that all things are one, so trivialism is impossible.' ... these Aristotelian considerations are the seeds of virtually all subsequent suspicions against trivialism: Trivialism has to be rejected because it identifies what should not be identified, and is undesirable from a logical point of view because it identifies what is not identical, namely, truth and falsehood."
Priest
Graham Priest considers trivialism untenable: "a substantial case can be made for dialetheism; belief in [trivialism], though, would appear to be grounds for certifiable insanity".
He formulated the "law of non-triviality" as a replacement for the law of non-contradiction in paraconsistent logic and dialetheism.
Arguments for trivialism
There are theoretical arguments for trivialism argued from the position of a devil's advocate:
Argument from possibilism
Paul Kabay has argued for trivialism in "On the Plenitude of Truth" on the basis of possibilism (modal realism; related to possible worlds), the oft-debated theory that every proposition is possible. With possibilism assumed to be true, trivialism can be assumed to be true as well, according to Kabay.
Paradoxes
The liar's paradox, Curry's paradox, and the principle of explosion all can be asserted as valid and not required to be resolved and used to defend trivialism.
Philosophical implications
Comparison to skepticism
In "On the Plenitude of Truth", Paul Kabay compares trivialism to schools of philosophical skepticism, such as Pyrrhonism, that seek to attain a form of ataraxia, or state of imperturbability. He purports that the figurative trivialist inherently attains this state, since the trivialist sees every state of affairs as true, even a state of anxiety. Once universally accepted as true, the trivialist is free from any further anxieties regarding whether any state of affairs is true.
Kabay compares the Pyrrhonian skeptic to the figurative trivialist and claims that as the skeptic reportedly attains a state of imperturbability through a suspension of belief, the trivialist may attain such a state through an abundance of belief.
In this case—and according to independent claims by Graham Priest—trivialism is considered the complete opposite of skepticism. However, insofar as the trivialist affirms all states of affairs as universally true, the Pyrrhonist neither affirms nor denies the truth (or falsity) of such affairs.
Impossibility of action
It is asserted by both Priest and Kabay that it is impossible for a trivialist to truly choose and thus act. Priest argues this by the following in Doubt Truth to Be a Liar: "One cannot intend to act in such a way as to bring about some state of affairs, s, if one believes s already to hold. Conversely, if one acts with the purpose of bringing s about, one cannot believe that s already obtains." Because the Pyrrhonist suspends determination upon striking equipollence between claims, the Pyrrhonist has likewise remained subject to the charge of apraxia.
Advocates
Paul Kabay, an Australian philosopher, in his book A Defense of Trivialism has argued that various philosophers in history have held views resembling trivialism, although he stops short of calling them trivialists. He mentions various pre-Socratic Greek philosophers as holding views resembling trivialism. He notes that Aristotle in his Metaphysics appears to suggest that Heraclitus and Anaxagoras advocated trivialism, and quotes Anaxagoras as saying that all things are one. Kabay also suggests Heraclitus' ideas are similar to trivialism because Heraclitus believed in a union of opposites, shown in such quotes as "the way up and down is the same". Kabay also mentions the fifteenth-century Roman Catholic cardinal Nicholas of Cusa, stating that what Cusa wrote in De Docta Ignorantia can be interpreted as saying that God contained every fact, which Kabay argues would result in trivialism, though Kabay admits that mainstream Cusa scholars would not agree with interpreting Cusa as a trivialist. Kabay also mentions Spinoza as a philosopher whose views resemble trivialism, arguing that Spinoza was a trivialist because Spinoza believed everything was made of one substance which had infinite attributes. Finally, Kabay mentions Hegel as a philosopher whose views resemble trivialism, quoting Hegel as stating in The Science of Logic that "everything is inherently contradictory."
Azzouni
Jody Azzouni is a purported advocate of trivialism: in his article The Strengthened Liar he claims that natural language is trivial and inconsistent, appealing to the existence of the liar paradox ("This sentence is false") and to the fact that natural language has developed without central direction. Azzouni implies that every sentence in any natural language is true. "According to Azzouni, natural language is trivial, that is to say, every sentence in natural language is true...And, of course, trivialism follows straightforwardly from the triviality of natural language: after all, 'trivialism is true' is a sentence in natural language."
Anaxagoras
The Greek philosopher Anaxagoras is suggested as a possible trivialist by Graham Priest in his 2005 book Doubt Truth to Be a Liar. Priest writes, "He held that, at least at one time, everything was all mixed up so that no predicate applied to any one thing more than a contrary predicate."
Anti-trivialism
Luis Estrada-González in "Models of Possibilism and Trivialism" lists eight types of anti-trivialism (or non-trivialism) through the use of possible worlds:
(AT0) Actualist minimal anti-trivialism: In the actual world, some propositions do not have a value of true or false.
(AT1) Actualist absolute anti-trivialism: In the actual world, all propositions do not have a value of true or false.
(AT2) Minimal anti-trivialism: In some worlds, some propositions do not have a value of true or false.
(AT3) Pointed anti-trivialism (or minimal logical nihilism): In some worlds, every proposition does not have a value of true or false.
(AT4) Distributed anti-trivialism: In every world, some propositions do not have a value of true or false.
(AT5) Strong anti-trivialism: Some propositions do not have a value of true or false in every world.
(AT6) Super anti-trivialism (or moderate logical nihilism): All propositions do not have a value of true or false at some world.
(AT7) Absolute anti-trivialism (or maximal logical nihilism): All propositions do not have a value of true or false in every world.
See also
Discordianism
Doublethink
Factual relativism
Fatalism
Anekantavada
Syādvāda
Law of excluded middle
Laws of thought
Monism
Moral relativism
Principle of bivalence
References
Further reading
Concepts in logic
Non-classical logic
Philosophical schools and traditions
Theories of deduction
Theories of truth | Trivialism | [
"Mathematics"
] | 2,265 | [
"Theories of deduction"
] |
10,886,829 | https://en.wikipedia.org/wiki/Raid%20%28video%20games%29 | In video games, a raid is a type of mission in Massively multiplayer online role-playing games (MMORPGs) where a much larger number than usual of people specifically gather in an attempt to defeat either: (a) another number of people at player-vs-player (PVP), (b) a series of computer-controlled enemies (non-player characters; NPCs) in a player-vs-environment (PVE) battlefield, or (c) a very powerful boss (superboss). This type of objective usually occurs within an instance dungeon, a separate server instance from the other players in the game. A raid may be highly planned and coordinated or arise nearly spontaneously through word of mouth communications in- and out-of game.
In military real-time strategy (RTS) games like StarCraft, "raids" usually refer to the military tactic.
Origin
The term itself stems from the military definition of 'a sudden attack and/or seizure of some objective'.
Raiding originated in the class of text MUDs known as DikuMUD, which in turn heavily influenced the 1999 MMORPG EverQuest, which brought the raiding concept into modern 3D MMORPGs. As of 2019, the largest and most popular game to feature raiding is Blizzard's 2004 MMORPG World of Warcraft.
Raid tactics
The combat encounters comprising a raid usually require players to coordinate with one another while performing specific roles as members of a team. The roles of Tank, Healer, and Damage Dealer are known as the "Holy Trinity" of MMORPG group composition. Other common roles include Buffing, Crowd control, and Pulling (selectively choosing targets with which to initiate combat). A raid leader is often needed to direct the group efficiently, due to the complexities of keeping many players working well together.
Raid loot
Raids are often very rewarding in terms of virtual treasure and items that are unique or that grant exceptional stats and abilities, giving players an incentive to participate. Often, however, there is not enough treasure to individually reward every player who participates, so players have invented various systems, such as Dragon kill points, to distribute loot fairly.
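Dragon-kill-point (DKP) systems vary from guild to guild; the following is a minimal sketch of one simple variant (the rules, names, and point values here are illustrative assumptions, not a standard): players earn points for attending kills and the item goes to the highest-point eligible claimant.

```python
# Toy DKP ledger: attendance earns points, loot is claimed by spending them.
class DkpLedger:
    def __init__(self):
        self.points = {}

    def award(self, players, amount):
        """Everyone present at a kill earns the same number of points."""
        for name in players:
            self.points[name] = self.points.get(name, 0) + amount

    def award_item(self, bidders, cost):
        """Item goes to the bidder with the most points who can afford it."""
        eligible = [b for b in bidders if self.points.get(b, 0) >= cost]
        if not eligible:
            return None
        winner = max(eligible, key=lambda b: self.points[b])
        self.points[winner] -= cost
        return winner

ledger = DkpLedger()
ledger.award(["ayla", "borin", "cyra"], 10)       # boss kill: +10 DKP each
ledger.award(["ayla", "borin"], 10)               # second kill, cyra absent
winner = ledger.award_item(["ayla", "cyra"], 15)  # ayla (20 DKP) outbids cyra (10)
```

The point of such systems is that loot priority tracks sustained participation rather than a single lucky roll.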
Raiding guilds
Raiding is often done by associations of players called guilds or clans who maintain a consistent schedule and roster. There are two types of raiding guilds: casual guilds, defined as spending two to three days per week on average; and hardcore guilds, defined as spending four to seven days per week on average.
Noted raids
An attempted raid in the game Final Fantasy XI against the Pandemonium Warden lasted 18 hours and reportedly resulted in players "passing out and getting physically ill."
Game raids
Game raids are commonly organized by internet celebrities to protest a company's behavior or actions, or simply for fun. They normally consist of players creating characters with pre-arranged appearances and flooding the game servers until their demands are met or they tire themselves out. A popular raider is Quackity, who streams his raids on Twitch.
References
Massively multiplayer online role-playing games
Video game terminology | Raid (video games) | [
"Technology"
] | 632 | [
"Computing terminology",
"Video game terminology"
] |
10,888,763 | https://en.wikipedia.org/wiki/Hoffmann%20kiln | The Hoffmann kiln is a series of batch process kilns. Hoffmann kilns are the most common kiln used in production of bricks and some other ceramic products. Patented by German Friedrich Hoffmann for brickmaking in 1858, it was later used for lime-burning, and was known as the Hoffmann continuous kiln.
Construction and operation
A Hoffmann kiln consists of a main fire passage surrounded on each side by several small rooms. Each room contains a pallet of bricks. In the main fire passage there is a fire wagon, that holds a fire that burns continuously. Each room is fired for a specific time, until the bricks are vitrified properly, and thereafter the fire wagon is rolled to the next room to be fired.
Each room is connected to the next room by a passageway carrying hot gases from the fire. In this way, the hottest gases are directed into the room that is currently being fired. Then the gases pass into the adjacent room that is scheduled to be fired next. There the gases preheat the brick. As the gases pass through the kiln circuit, they gradually cool as they transfer heat to the brick as it is preheated and dried. This is essentially a counter-current heat exchanger, which makes for a very efficient use of heat and fuel. This efficiency is a principal advantage of the Hoffmann kiln, and is one of the reasons for its original development and continued use throughout history. In addition to the inner opening to the fire passage, each room also has an outside door, through which recently fired brick is removed, and replaced with wet brick to be dried and then fired in the next firing cycle.
In a classic Hoffmann kiln, the fire may burn continuously for years, even decades; in Iran, there are kilns that are still active and have been working continuously for 35 years. Any fuel may be used in a Hoffmann kiln, including gasoline, natural gas, heavy petroleum and wood fuel. The dimensions of a typical Hoffmann kiln are quite variable, but on average it is about 5 m high, 15 m wide, and 150 m long.
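The counter-current heat recovery described above can be illustrated with a toy model in which the exhaust gas sheds a fixed fraction of its excess heat to each chamber of unfired bricks it passes through. The temperatures and transfer fraction below are assumed illustrative values, not engineering data:

```python
def exhaust_profile(fire_temp, ambient, n_chambers, transfer_frac=0.35):
    """Gas temperature after each preheating chamber downstream of the fire."""
    temps = []
    gas = fire_temp
    for _ in range(n_chambers):
        # Each charge of green brick absorbs a share of the gas's excess heat.
        gas = ambient + (gas - ambient) * (1 - transfer_frac)
        temps.append(round(gas, 1))
    return temps

# Gas leaving a ~1000 deg C firing zone cools toward ambient chamber by
# chamber, handing its heat to the bricks that will be fired next.
profile = exhaust_profile(fire_temp=1000.0, ambient=20.0, n_chambers=6)
```

The steadily falling profile is the efficiency argument in miniature: heat that would otherwise leave up the chimney is spent drying and preheating the next charges.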
Hoffmann kiln expansion
The first kiln of this class was put into operation on November 22, 1859 in Scholwin (since 1946, Skolwin), near Stettin, which was then part of Prussia. In 1867 there were already 250 of them, most in the Prussian part of Germany, fifty in England and three in France. In Italy, their expansion began in 1870, after the design was shown at the Paris Exhibition. In September 1870, the first brick factory built according to Hoffmann's patent was inaugurated in Australia. The first continuous Hoffmann-system kilns in Spain were installed around 1880, near Madrid. By 1900 there were already more than 4,000 kilns of this type, distributed throughout Europe, Russia, the Americas, Africa and even the East Indies. In 1904, a kiln built to the patent of the Briton William Sercombe and based on the Hoffmann model began operating in Palmerston North, New Zealand.
Hoffmann kilns are still in use for brick production in some parts of the world, especially in places where labor costs are low and modern technology is not easily accessible.
Historic examples of Hoffmann kilns
The Hoffmann kiln is used in almost every country.
UK
In the British Isles there are only a few Hoffmann kilns remaining, some of which have been preserved.
The only ones with a chimney are at Prestongrange Industrial Heritage Museum and Llanymynech Heritage Area. The site at Llanymynech, close to Oswestry was used for lime-burning and has recently been partially restored as part of an industrial archaeology conservation project supported by English Heritage and the Heritage Lottery Fund.
Two examples in North Yorkshire, the Hoffmann lime-burning kiln at Meal Bank Quarry, Ingleton, and that at the former Craven and Murgatroyd lime works, Langcliffe, are scheduled ancient monuments.
There is an intact but abandoned Hoffmann kiln without a chimney present at Minera Limeworks; the site is abandoned but all entrances to the kiln have been grated-off, preventing access. The kiln is in a very poor state of repair, with trees growing out of the walls and the roof. Minera Quarry Trust hopes one day to develop the area into something of a tourist attraction. The Grade II listed Hoffmann brick kiln in Ilkeston, Derbyshire, is also badly neglected, although the recently installed fencing offers some protection for the building and for visitors.
At Prestongrange Museum, outside Prestonpans in East Lothian, the Hoffman kiln is still standing and visitors can listen to more about it via a mobile phone tour.
There is a nearly complete kiln in Horeb, Carmarthenshire.
There is still a working kiln at Kings Dyke in Peterborough, which is the last site of the London Brick Company, owned by Forterra PLC.
Australia
In Victoria, Australia, at the Brunswick brickworks in Melbourne, there are two surviving kilns converted to residences, and a chimney from a third kiln; another kiln survives in Box Hill, also in Melbourne.
In Adelaide, South Australia, the last remaining Hoffmann kiln in the state is at the old Hallett Brickworks site in Torrensville.
There is one at St Peters in Sydney, New South Wales.
In Western Australia, the kiln at the Maylands Brickworks in the Perth suburb of Maylands, which operated from 1927 to 1982, is the only remaining Hoffmann kiln in the state.
Catalonia
The Bòbila de Bellamar brickworks at Calafell.
Other countries
There is a complete kiln in the restored Tsalapatas brick factory in Volos, Greece, which has been converted into an industrial museum.
There are two in New Zealand.
Kaohsiung city in Taiwan is also home to a Hoffman kiln, built by the Japanese government in 1899.
References
External links
History of Hoffman
Preston Grange tour site
Evaluation of Hoffman Kiln Technology
RCAHMS Canmore
Industrial processes
Kilns
Lime kilns
Firing techniques | Hoffmann kiln | [
"Chemistry",
"Engineering"
] | 1,235 | [
"Chemical equipment",
"Lime kilns",
"Kilns"
] |
10,889,413 | https://en.wikipedia.org/wiki/Key%20distribution%20in%20wireless%20sensor%20networks | Key distribution is an important issue in wireless sensor network (WSN) design. WSNs are networks of small, battery-powered, memory-constraint devices named sensor nodes, which have the capability of wireless communication over a restricted area. Due to memory and power constraints, they need to be well arranged to build a fully functional network.
Key distribution schemes
Key predistribution is the method of distribution of keys onto nodes before deployment. Therefore, the nodes build up the network using their secret keys after deployment, that is, when they reach their target position.
Key predistribution schemes are various methods that have been developed by researchers for better key management in WSNs. Basically, a key predistribution scheme has three phases:
Key distribution
Shared key discovery
Path-key establishment
During these phases, secret keys are generated, placed in sensor nodes, and each sensor node searches the area in its communication range to find another node to communicate. A secure link is established when two nodes discover one or more common keys (this differs in each scheme), and communication is done on that link between those two nodes. Afterwards, paths are established connecting these links, to create a connected graph. The result is a wireless communication network functioning in its own way, according to the key predistribution scheme used in creation.
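The three phases above can be sketched with a random-pool predistribution in the style of Eschenauer and Gligor, one common instance of such schemes; the pool size, key-ring size, and node count below are illustrative assumptions, not values from the text:

```python
import random

def deploy(num_nodes, pool_size, ring_size, seed=0):
    """Key distribution: give each node a random key ring from a shared pool."""
    rng = random.Random(seed)
    pool = list(range(pool_size))
    return [set(rng.sample(pool, ring_size)) for _ in range(num_nodes)]

def shared_key_discovery(rings):
    """Secure links form between node pairs holding at least one common key."""
    return [(i, j)
            for i in range(len(rings))
            for j in range(i + 1, len(rings))
            if rings[i] & rings[j]]

rings = deploy(num_nodes=50, pool_size=1000, ring_size=75)
links = shared_key_discovery(rings)
# Local connectivity: fraction of node pairs that can form a secure link.
local_connectivity = len(links) / (50 * 49 / 2)
```

Shrinking the ring relative to the pool lowers local connectivity but improves resiliency, since each captured node then exposes a smaller fraction of the pool's keys.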
There are a number of aspects of WSNs on which key predistribution schemes are competing to achieve a better result. The most critical ones are: local and global connectivity, and resiliency.
Local connectivity means the probability that any two sensor nodes have a common key with which they can establish a secure link to communicate.
Global connectivity is the fraction of nodes that are in the largest connected graph over the number of all nodes.
Resiliency is the number of links that cannot be compromised when a number of nodes (and therefore the keys stored in them) are compromised; it is basically the quality of resistance against attempts to hack the network. Apart from these, two other critical issues in WSN design are computational cost and hardware cost. Computational cost is the amount of computation done during these phases. Hardware cost is generally the cost of the memory and battery in each node.
Keys may be generated randomly, with the nodes then determining mutual connectivity. A structured approach based on matrices, which establishes keys in a pair-wise fashion, is due to Rolf Blom. Many variations of Blom's scheme exist; the scheme of Du et al., for example, combines Blom's key predistribution scheme with the random key predistribution method, providing better resiliency.
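A minimal sketch of Blom's matrix-based pairwise scheme over a small prime field, using a Vandermonde public matrix; the modulus, security parameter, and node count are illustrative assumptions (real deployments use far larger fields):

```python
import random

p = 7919      # small prime modulus, illustrative only
lam = 3       # the scheme stays secure until lam + 1 nodes are compromised
n = 5         # number of nodes
rng = random.Random(42)

# Public (lam+1) x n matrix G with distinct Vandermonde columns.
seeds = rng.sample(range(2, p), n)
G = [[pow(s, r, p) for s in seeds] for r in range(lam + 1)]

# Secret symmetric (lam+1) x (lam+1) matrix D, known only to the setup server.
D = [[0] * (lam + 1) for _ in range(lam + 1)]
for r in range(lam + 1):
    for c in range(r, lam + 1):
        D[r][c] = D[c][r] = rng.randrange(p)

def private_row(i):
    """Row i of A = (D*G)^T, loaded onto node i before deployment."""
    return [sum(D[r][k] * G[k][i] for k in range(lam + 1)) % p
            for r in range(lam + 1)]

def pairwise_key(i, j):
    """Node i combines its private row with node j's public column of G."""
    row = private_row(i)
    return sum(row[r] * G[r][j] for r in range(lam + 1)) % p

# Because D is symmetric, both ends derive the same key: K_ij == K_ji.
```

The symmetry of D is what lets two nodes agree on a key with no interaction beyond exchanging their public column indices.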
See also
Wireless sensor networks
Key distribution
Blom's scheme
References
External links
List of publications for Key Management in WSN
Key management | Key distribution in wireless sensor networks | [
"Technology"
] | 556 | [
"Wireless networking",
"Wireless sensor network"
] |
10,889,931 | https://en.wikipedia.org/wiki/IEEE%201613 | IEEE-1613 is the IEEE standard detailing environmental and testing requirements for communications networking devices in electric power substations. The standard is sponsored by the IEEE Power & Energy Society.
External links
(current [2009] version)
(2011 amendments to current version)
(superseded by 2009 version)
IEEE standards
Electric power infrastructure | IEEE 1613 | [
"Technology"
] | 65 | [
"Computer standards",
"IEEE standards"
] |
10,891,311 | https://en.wikipedia.org/wiki/Alan%20G.%20Marshall | Alan G. Marshall is an American analytical chemist who has devoted his scientific career to developing a scientific technique known as Fourier transform ion cyclotron resonance (FT-ICR) mass spectrometry, which he co-invented.
He was born in Bluffton, Ohio, in 1944, and earned his bachelor's in chemistry from Northwestern University (1965) and Ph.D. in chemistry from Stanford University (1970). His first academic appointment was at the University of British Columbia. In 1980, he moved to the Ohio State University where he remained until 1993.
He is the Robert O. Lawton Professor of Chemistry and Biochemistry at Florida State University and director of the Ion Cyclotron Resonance Program at the National High Magnetic Field Laboratory.
He is a fellow of the American Chemical Society, American Physical Society, and the American Association for the Advancement of Science, and has received numerous awards, including the 2000 Thomson Medal given by the International Mass Spectrometry Foundation; the 2007 Chemical Pioneer Award, given by the American Institute of Chemists; the 2012 William H. Nichols Medal, given by the New York Section of the American Chemical Society; and the 2012 Pittsburgh Analytical Chemistry Award, given by the Society for Analytical Chemists of Pittsburgh.
See also
Petroleomics
External links
Florida State University faculty profile
National High Magnetic Field Laboratory Profile
Florida State University faculty
21st-century American chemists
Living people
1944 births
Mass spectrometrists
People from Bluffton, Ohio
Fellows of the American Chemical Society
Thomson Medal recipients
Fellows of the American Physical Society | Alan G. Marshall | [
"Physics",
"Chemistry"
] | 313 | [
"Biochemists",
"Mass spectrometry",
"Spectrum (physical sciences)",
"Mass spectrometrists"
] |
10,892,179 | https://en.wikipedia.org/wiki/Pseudoalteromonas%20haloplanktis | Pseudoalteromonas haloplanktis is a Gram-negative, psychrophilic marine bacterium.
References
External links
Type strain of Pseudoalteromonas haloplanktis at BacDive - the Bacterial Diversity Metadatabase
Alteromonadales
Bacteria described in 1944
Psychrophiles
Gram-negative bacteria
Marine microorganisms | Pseudoalteromonas haloplanktis | [
"Biology"
] | 74 | [
"Marine microorganisms",
"Microorganisms"
] |
10,892,778 | https://en.wikipedia.org/wiki/POLDER | POLDER (POLarization and Directionality of the Earth's Reflectances) is a passive optical imaging radiometer and polarimeter instrument developed by the French space agency CNES.
Description
The device was designed to observe solar radiation reflected by Earth's atmosphere, including studies of tropospheric aerosols, sea surface reflectance, bidirectional reflectance distribution function of land surfaces, and the Earth Radiation Budget.
Specifications
POLDER has a mass of approximately , and has a power consumption of 77 W in imaging mode (with a mean consumption of 29 W).
Imaging
POLDER utilizes a push broom scanner. The device's optical system uses a telecentric lens and a charge-coupled device matrix with a resolution of 242x548 pixels. The focal length is with a focal ratio of 4.6. The field of view ranges from ±43° to ±57°, depending on the tracking method.
Spectral characteristics
The device scans spectral bands between 443 and 910 nm (bandwidths defined at FWHM), depending on the objective of the measurement. The shorter wavelengths (443–565 nm) typically measure ocean color, whereas the longer wavelengths (670–910 nm) are used to study vegetation and water vapor content.
Data transfer
It transmits data on 465.9875 MHz at a bit rate of 200 bit/s, and receives on 401.65 MHz at 400 bit/s. The data rate is 880 kbit/s at a quantization level of 12 bits.
Missions
POLDER was first launched as a passenger instrument aboard ADEOS I on 17 August 1996. The mission ended on 30 June 1997 when communication from the host satellite failed. POLDER 2 was launched in December 2002 aboard ADEOS II. The second mission ended prematurely after 10 months when the satellite's solar panel malfunctioned.
A third generation instrument was launched on board the French PARASOL microsatellite. The satellite was maneuvered out of the A-train on 2 December 2009 and permanently shut down on 18 December 2013.
Footnotes
Sources
External links
POLDER website
Radiometry
Earth observation satellite sensors | POLDER | [
"Engineering"
] | 428 | [
"Telecommunications engineering",
"Radiometry"
] |
10,893,142 | https://en.wikipedia.org/wiki/Harappan%20architecture | Harappan architecture is the architecture of the Bronze Age Indus Valley civilization, an ancient society of people who lived during c. 3300 BCE to 1300 BCE in the Indus Valley of modern-day Pakistan and India.
The civilization's cities were noted for their urban planning, baked brick houses, elaborate drainage systems, water supply systems, clusters of large non-residential buildings, and new techniques in handicraft (carnelian products, seal carving) and metallurgy (copper, bronze, lead, and tin). Its large urban centres of Mohenjo-daro and Harappa very likely grew to containing between 30,000 and 60,000 individuals, and the civilisation itself during its florescence may have contained between one and five million individuals.
South Asian Harappan culture was heavily formed through its rich integration into international trade, commerce, and contact due to its location along the Indus River. Signs of urbanization in the Indus Valley began as early as 6000 BCE, and by 3200 BCE the region expanded with towns and cities during the Early Harappan phase. The transition between Early and Mature Harappan phases took place in the sites of Amri, Nausharo, Ghazi Shah and Banawali. By 2500 BCE in the Mature Harappan phase, the Harappan Civilization became the eastern anchor of a network of routes including the Mesopotamian city-states, the Gulf, Iranian Plateau, and Central Asia, and its urbanization emerged as a clear marker of the sociocultural complexity of the Mature Harappan Civilization. Through its urbanization, the Harappan socio-cultural context became a set of intertwined features and processes that were centered on the city while bringing together many kinds of people of different ethnic and linguistic groups into a socio-cultural whole. Due to the Harappan Civilization's participation in the art of writing, engagement in long-distance trade, and studying of abroad in Mesopotamia, it became a complex ethnic and linguistic civilization that was further felt through its architecture and town planning.
Overview
Elements of Harappan architecture
The art and architecture of the Indus Valley civilization was indigenous and without outside influence. Sculpture had no integral role in architecture; sculptures were found separately. The emphasis was on utility rather than aesthetics, presumably because the Harappans were primarily traders. Harappan architecture of the Indus civilization focused on functional expression rather than pure decoration. Evidence shows that the Indus culture lacked magnificent buildings such as palaces, monuments, and tombs; on the contrary, most buildings were large-scale public buildings, commodious houses, or practical residences, which has been taken as evidence of the first complex ancient society based on egalitarianism.
Planning
A notable feature of Harappan architecture is that of a developed infrastructural city plan, in that they had sophisticated systems to control the flow of water and waste with public wells and drains that may have required advanced planning to implement. The cities were divided into rectilinear grids divided by roads which intersected at right angles, encircled by fortifications, with each block containing a network of houses and public wells. Harappan cities featured urban and social elements such as roads, fire pits, kilns, and industrial buildings, and were primarily functional in purpose rather than aesthetic. The city sewerage, plumbing, and drainage systems were distributed in the network of the grid planning by early hydro-engineers to be functionally used and maintained. The Harappan civilization seems to also be capable of astrological observation and alignment, as some evidence exists that Mohenjo-daro was aligned with the star "Rohini".
Mohenjo-daro had a planned layout with rectilinear buildings arranged on a grid plan. Most were built of fired and mortared brick; some incorporated sun-dried mud-brick and wooden superstructures.
Sites were often raised, or built on man-made hills, possibly to combat flooding in the surrounding areas. Another aspect of the architecture is that cities were often enclosed by walls, which could have served several different needs. Many believe that the walls were built as defensive structures, where "Large and impressive construction works can be used to intimidate potential attackers (Trigger 1990)". The wall was also an obvious way to show that the city was strong and powerful, able to divert resources and labor to such a large structure rather than focusing all of its energy on survival. Defence was not the wall's only purpose; it is thought that the wall also served as protection from floods. There is also evidence of a tapering at the bottom of the wall to guide water away from the city.
The city could be split into two different sections: an upper "acropolis" or citadel and a "lower town". The lower town consisted of lower valued residential buildings located on the eastern side of the city, while the upper acropolis would be on the western side of the city which contained the higher value buildings and public buildings. The acropolis was a “parallelogram that was 400–500 yards north-south and 200–300 yards east-west” It was also thought that the acropolis area would be built on the highest part of the mound in the city showing the importance and status of the area was much higher than the rest of the area. Another feature which suggests the acropolis is of higher importance is that the fortifications around the area were bigger and stronger than those around the rest of the city.
Large structures
The Harappan civilization was capable of building large structures that demanded significant engineering prowess:
Citadels (the upper, political, economically rich and elite area) were for the elite class. Roads cut at right angles, and blocks were mostly rectangular in shape. There were multi-storeyed buildings, with houses built from stone, mud-brick and wood. Assembly halls are also found there.
Public baths were used for rituals and ceremonies. There were small rooms along with the bath. No leakages and cracks on stairs. Bricks were used for making public baths.
Granaries were found in citadels and were a reason the people of the citadels were prosperous. Granaries are also found at the Lothal dockyard, where they facilitated import and export. The "Great Granary" is the largest and one of the best examples of the granaries of the Harappan civilisation.
Water management was highly developed by the Harappan civilization. Large scale water works, such as drainage systems, could be covered to cure blockages. Dams were also constructed that controlled water inlets.
The Lothal dockyard is away from the main current to avoid deposition of silt. There is a wooden lock-gate system to prevent tidal flow.
Artificial lakes were cut out of stone to store water, including rainwater.
Corbelling was a technique used extensively by the Harappans to construct stone arches. There is evidence of the civilization building large vaulted culverts in Mohenjo-daro.
Water and sanitation technologies
The Harappa civilization revealed a complex mercantile society based on the well organized and comprehensive urban planning, which included sophisticated water management and sewerage systems to allow structures such as dams, wells, baths, and fountains. The plumbing and sewerage systems were formed by early hydro-engineers to allow water and sanitation within the city.
These systems were effectively used and maintained by ancient Harappan residents.
Dams were hydro-structure built along the Indus River for water management purposes such as collecting, storing and diverting water.
Water cisterns and reservoirs were used in water storage systems including aqueducts and basins for the purpose of water distribution in agricultural practices, some of which took advantage of the terrain height differences to convey and store water.
Fountains were set up and connected by water channels to supply households with water for drinking and bathing.
Drainage system and drains were built to make efficient disposal of water waste and residual solid in a sustainable way, which had inspection manholes at regular intervals to ensure efficient operation and proper management.
Materials
The materials of houses depended on the location of the building: houses in rural areas were built of mud bricks, while houses in urban areas used baked bricks. The bricks were made in a standardised ratio of 1x2x4. "Houses range from 1–2 stories in height, with a central courtyard around which the rooms were arranged"
History
Early Harappan phase
The early Harappan phase, as defined by M.R. Mughal, spans roughly between 3200 and 2500 B.C.E. The number of archeological sites dated to the Mature Harappan Phase is roughly more than double that of the Early Harappan, implying significant urban growth between the two periods. There is not much evidence of urbanization in the Early Phase itself, however; most Early Harappan structures were of a small scale and did not expand into public spaces or display a sense of social class. Early Harappan establishments settled in diverse landscapes, such as mountains and alluvial valleys (areas of fertile soil deposits).
Early to mature transition
There is evidence that the shift from the early to mature Harappan ages that point towards a gradual transition, with rapid development and geographical urban expansion. During this transition, a significant number of Harappan settlements were abandoned, perhaps due to shifting geography and climate.
Mature Harappan phase
The mature phase spans roughly between 2500 and 1900 B.C.E. and is much more reliably dated than the Early Phase. It is distinctive in its urban development, and was shaped by the behavior and activity of a sophisticated societal network. The structures reveal a hierarchy in social classes, and also evidence of extensive trading and farming.
Harappan revival
There are few buildings built in the Harappan Revival style. The best well-known is the Mohenjo-daro Museum. It is made of bricks with a very similar color to the buildings from Mohenjo-daro or Harappa. One entrance has a geometric pattern made of bricks similar to those of the original gates.
See also
Lothal
Dholavira
Mehrgarh
Sokhta Koh
Sanitation of the Indus Valley Civilisation
Notes
References
Sources
External links
Harappan Civilization: An Analysis in Modern Context
Recent Indus Discoveries
How Indus Towns Developed
INDUS VALLEY CIVILISATION ARCHITECTURE
Harappa.com on tools
Indus Valley Civilisation
Architecture in Pakistan
Pakistani architectural history
Architectural history
Indian architectural history

Microbacteriaceae

Microbacteriaceae is a family of bacteria of the order Actinomycetales. They are Gram-positive soil organisms.
Genera
The family Microbacteriaceae comprises the following genera:
Agreia Evtushenko et al. 2001
Agrococcus Groth et al. 1996
Agromyces Gledhill and Casida 1969 (Approved Lists 1980)
Allohumibacter Kim et al. 2016
Alpinimonas Schumann et al. 2012
Amnibacterium Kim and Lee 2011
Arenivirga Hamada et al. 2017
Aurantimicrobium Nakai et al. 2015
Canibacter Aravena-Román et al. 2014
Clavibacter Davis et al. 1984
Cnuibacter Zhou et al. 2016
Compostimonas Kim et al. 2012
Conyzicola Kim et al. 2014
"Crocebacterium" Rogers & Doran-Peterson 2006
Cryobacterium Suzuki et al. 1997
"Cryocola" Gavrish et al. 2003
Curtobacterium Yamada and Komagata 1972 (Approved Lists 1980)
Diaminobutyricibacter Kim et al. 2014
Diaminobutyricimonas Jang et al. 2013
Frigoribacterium Kämpfer et al. 2000
Frondihabitans Greene et al. 2009
Galbitalea Kim et al. 2014
Glaciibacter Katayama et al. 2009
Glaciihabitans Li et al. 2014
Gryllotalpicola Kim et al. 2012
Gulosibacter Manaia et al. 2004
Herbiconiux Behrendt et al. 2011
Homoserinibacter Kim et al. 2014
Homoserinimonas Kim et al. 2012
Huakuichenia Zhang et al. 2016
Humibacter Vaz-Moreira et al. 2008
Klugiella Cook et al. 2008
Labedella Lee 2007
Lacisediminihabitans Zhuo et al. 2020
Leifsonia Evtushenko et al. 2000
Leucobacter Takeuchi et al. 1996
"Luethyella" O'Neal et al. 2017
Lysinibacter Tuo et al. 2015
Lysinimonas Jang et al. 2013
"Marinisubtilis" Qin et al. 2021
Marisediminicola Li et al. 2010
Microbacterium Orla-Jensen 1919 (Approved Lists 1980)
Microcella Tiago et al. 2005
Microterricola Matsumoto et al. 2008
Mycetocola Tsukamoto et al. 2001
Naasia Weon et al. 2013
Okibacterium Evtushenko et al. 2002
Parafrigoribacterium Kong et al. 2016
Planctomonas Liu et al. 2019
"Candidatus Planktoluna" Hahn 2009
Plantibacter Behrendt et al. 2002
Pontimonas Jang et al. 2013
Protaetiibacter Heo et al. 2019
Pseudoclavibacter Manaia et al. 2004
Pseudolysinimonas Heo et al. 2019
Puzihella Sheu et al. 2017
Rathayibacter Zgurskaya et al. 1993
Rhodoglobus Sheridan et al. 2003
Rhodoluna Hahn et al. 2014
Rudaibacter Kim et al. 2013
Salinibacterium Han et al. 2003
Schumannella An et al. 2009
Subtercola Männistö et al. 2000
Terrimesophilobacter Hahn et al. 2021
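Entries in the list above follow the bacteriological convention "Genus Authority Year", with quotation marks flagging names that are not validly published and "Candidatus" marking provisional taxa. A small sketch of parsing such entries; the regex and handling are illustrative, not taken from any nomenclature standard:

```python
import re

# Entries follow "Genus Authority Year", optionally quoted and optionally
# suffixed with "(Approved Lists 1980)". Pattern is illustrative only.
ENTRY = re.compile(
    r'^"?(Candidatus\s)?([A-Z][a-z]+)"?\s+(.+?)\s+(\d{4})'
    r'(?:\s+\(Approved Lists \d{4}\))?$'
)

def parse_genus_entry(entry):
    """Return (name, authority, year), or None if the entry doesn't match."""
    m = ENTRY.match(entry)
    if not m:
        return None
    candidatus, name, authority, year = m.groups()
    return ((candidatus or "") + name, authority, int(year))

print(parse_genus_entry("Agromyces Gledhill and Casida 1969 (Approved Lists 1980)"))
```

This recovers, for example, `('Agromyces', 'Gledhill and Casida', 1969)` from the Approved Lists entry, and keeps the "Candidatus" prefix as part of the name.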
Phylogeny
The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature and the phylogeny is based on whole-genome sequences.
Notes
References
Micrococcales
Soil biology

Clavibacter michiganensis

Clavibacter michiganensis is an aerobic non-sporulating Gram-positive plant pathogenic actinomycete of the genus Clavibacter. Clavibacter michiganensis has several subspecies. Clavibacter michiganensis subsp. michiganensis causes substantial economic losses worldwide by damaging tomatoes and potatoes.
Context
Clavibacter michiganensis, also known as ring rot, is unusual among phytopathogenic bacteria in that it is Gram-positive and does not have a type III secretion system. All Clavibacter species and subspecies have a type B2γ cell wall crosslinked at a diaminobutyrate residue. Clavibacter is an aerobic bacterium with a coryneform morphology. There is no mycelium and no spores are produced.
Clavibacter michiganensis infects the primary host in one of three ways: wounds, hydathodes, or by contaminated seed. If the bacteria reach a suitable quorum, the result is a systemic vascular infection. In the first stages of invasion, C. michiganensis resides as a biotrophic pathogen in the xylem vessels.
Clavibacter has a complex history of taxonomical names. For a long time, there was only one recognized species within the genus Clavibacter. There are nine subspecies within the michiganensis species. Recently, some strains have been reclassified into other genera. This complex history stems from the difficulty in characterizing bacteria. Unlike fungi, the morphology of bacteria is not sufficient for taxonomical purposes. To this end, strains of a phytopathogenic bacterium, called pathovars, are distinguished by cultural (selective media), physiological, biochemical (e.g. secreted enzymes, the chemical responses of the plant), or pathological characteristics (including the range of susceptible hosts).
Recently, two strains of this bacterium – subsp. sepedonicus and subsp. michiganensis – have had their genomes sequenced and annotated. There is still much to discover about this pathogen-host interaction, but now that the genome has been sequenced, the rate of discoveries will likely increase. One of the main goals of research on these bacterial genomes is to develop resistant varieties. Unfortunately, no resistant varieties have yet been found.
Genetics
The species has a single chromosome.
C. m. subsp. michiganensis
C. m. subsp. michiganensis is the causative agent of bacterial wilt and canker of tomato (Lycopersicon esculentum).
Hosts and symptoms
When infection occurs at an early stage of the tomato plant, the leaves may wilt, because Clavibacter michiganensis subsp. michiganensis enters the plant through wounds, including root wounds; if the bacterium reaches the xylem, a systemic infection is likely that may plug the xylem vessels. The wilting may show on only one side of the leaf and may recover during cooler periods. The plant's xylem system allows the bacteria to form titers of up to 10⁹ bacteria per gram of plant tissue. Wilting may eventually spread to all leaves, and these leaves, along with their petioles, may also show distorted, curled growth. One way to diagnose a severe vascular infection is to pinch the stem: if the epidermis and outer layer of the cortex separate from the inner stem, there is severe vascular infection. These exposed parts will have a soapy feel. Canker lesions, though rare, may develop on the stem. These cankers are necrotic regions where the epidermis is gone. As the bacteria continue their colonization, the canker will deepen and expand. In terms of fruit development, tomatoes may fail to develop altogether or may look marbled because they are ripening unevenly.
If infection occurs at a late stage of plant development, plants are able to survive and generate fruits. However, the plant may appear stressed rather than wilted and may develop white interveinal areas that will develop into brown necrotic tissue. Often the seeds are infected as well.
Superficial infections increase the risk of epidemics. They occur when the bacteria multiply on the epidermis of the host, enter through stomata, or enter through a very shallow wound that does not allow the pathogen to reach the xylem tissue. The host may look like it was rubbed with cornmeal or coarse flour, but this is actually a series of blisters that may be raised or sunken and appear white to pale orange. The most common leaf symptom is a dark brown spot surrounded by an orange-like area on the edge of the leaf. Fruits may develop "bird's eye" spotting: pale green to white raised pustules that have a brown center and chlorotic halo. Pictures of these symptoms are available at the cited reference.
However, latent infections are common.
The Clavibacter michiganensis subsp. michiganensis wild type strain NCPPB382 carries two plasmids associated with virulence: pCM1 and pCM2. The avirulent strain, CMM100, does not contain these plasmids. Strains that carried one of the two plasmids were found to be virulent but wilting symptoms were delayed. The virulent and avirulent strains produced the same amount of exopolysaccharides, suggesting that EPS does not play a significant role in pathogenicity.
Disease cycle
The causal agent of bacterial wilt and canker of tomato survives in or on seeds for up to 8 months but occasionally also in plant refuse in the soil. The pathogen can be spread long distances because of its association with seeds. The risk of spreading the bacteria to healthy tomato plants is greatest during transplanting, tying, and suckering or any time when the host may be wounded. Once the bacteria enters the plant through a wound, it will move and multiply primarily in the xylem vessels. Once established, the bacteria may move into the phloem, pith, and cortex. Infection can result in either systemic or superficial disease. Systemic infections appear in 3–6 weeks and the risk of secondary infection goes up with water-splashing. The common occurrence of latent infections – presence of the pathogen within the host yet the host shows no symptoms – makes this pathogen especially dangerous.
However, the assumption that C. michiganensis does not overwinter in the soil is not without controversy. The genome of C. michiganensis has recently been sequenced, and new theories will surely arise once more work has been completed. What is known is that Cmm can use hydrolysis products as carbon and energy sources by means of a number of ATP-binding cassette transporters and α- and β-glucosidases. This suggests that Cmm can survive in the soil as long as there is decaying host material present. It has also been determined that the genome of subsp. michiganensis does not have genes that encode nitrate and nitrite reductases. This means that the bacterium depends on previously reduced nitrogen compounds or amino acids for its nitrogen source. Also lacking in the Cmm genome are genes for assimilatory sulfate reduction, which is associated with an auxotrophy for methionine – one of the two amino acids that contain sulphur.
Cmm has a pathogenicity island (PI) that is encoded in the chromosome and is probably associated with colonization and plant defense evasion or suppression. This island has been subdivided into two subregions: chp and tomA. Serine proteases of the families S1A, Ppa, and PpA-E are encoded in the chp subregion as well as subtilase SbtA.
Environment
Warm temperatures and high relative humidity (>80%) are optimal for the development of tomato bacterial canker symptoms caused by Clavibacter michiganensis subsp. michiganensis. In humid or wet weather, slimy masses of bacteria ooze through cracks to the surface of the stem, from which they are spread to leaves and fruits and cause secondary infections. Infected host plants will show severe symptoms on hot days, when the transpiration rate is high and the bacteria may plug the xylem vessels.
Management
The best way to control the disease is the use of healthy seed that has been acid extracted. In addition, chemical treatments such as copper hydroxide or streptomycin in the seed bed, and removing or isolating diseased crops, can help reduce the rate of infection.
References
External links
Control de Cáncer Bacteriano (Clavibacter michiganensis) en el Cultivo de Tomate from Intagri S.C. In Spanish.
Clavibacter michiganensis subsp. michiganensis S 05, SO5 – Type strain – DSM 46364, ICPB CM 177, IMET 11518, LMG 7333, NCPPB 2979, PDDCC 2550 BacDiveID (id 7289)
Microbacteriaceae
Soil biology
Bacterial plant pathogens and diseases
Bacteria described in 1910

Rathayibacter

Rathayibacter is a genus of bacteria of the order Actinomycetales which are gram-positive soil organisms.
References
Microbacteriaceae
Soil biology
Bacteria genera

Rathayibacter tritici

Rathayibacter tritici is a Gram-positive soil bacterium. It is a plant pathogen and causes spike blight in wheat.
References
External links
Type strain of Rathayibacter tritici at BacDive - the Bacterial Diversity Metadatabase
Microbacteriaceae
Soil biology
Bacteria described in 1982

Bauma (trade fair)

The bauma (International Trade Fair for Construction Machinery, Building Material Machines, Mining Machines, Construction Vehicles and Construction Equipment) is the world's largest trade fair in the construction industry. The trade fair, which can be visited by anyone, is held every three years on the grounds of the Neue Messe München and lasts for seven days. Its organizer is Messe München.
History
The first exhibition took place in 1954 as part of the “Baumusterschau” at Theresienhöhe in Munich and was then known as the spring show for construction machinery. 58 exhibitors presented their products on a total gross area of 20,000 m2, attracting around 8,000 visitors. Two years later, the exhibition space had already doubled and the name “bauma”, which is still used today, was introduced. In the early days, the fair was a purely German exhibition. In 1958, the first exhibitors from abroad (number: 13) took part in bauma.
Due to the building boom the exhibition space quickly became too small and the fair was relocated for the first time. In 1962, bauma opened its doors on a former airport site in Oberwiesenfeld, offering 100,000 m2 more space for—in the meantime—more than 450 exhibitors. But the days of the new location were already numbered: the Olympic Park was created on this site as the 1972 Olympic Games had been awarded to Munich. In 1967, the annual bauma therefore returned to Theresienhöhe, where it remained for decades.
In 1967, bauma was transferred from private ownership to Messe München's portfolio, and in 1969, the first bauma was organized under the leadership of Messe München. Although successful right from the start, bauma then experienced an incomparable upswing: the award of the Olympic Games turned Munich into the largest construction site in Europe and brought the construction industry an unprecedented order situation. In 1998, the trade fair company moved from Theresienhöhe to Munich-Riem. Since then, bauma has also taken place there.
In 2002, bauma CHINA was launched as first foreign trade fair within the bauma network. In the meantime, bauma CHINA has become the largest capital goods fair in Asia and the second largest construction machinery fair in the world. And meanwhile there is a whole network of bauma trade fairs, including bauma CONEXPO INDIA, bauma CONEXPO AFRICA, bauma CTT RUSSIA and M&T EXPO.
Key figures
In terms of exhibition space, Bauma is both the largest trade fair in the industry and the biggest trade show in the world.
The most recent edition, which took place from April 11 to 17, 2016, attracted 3,425 exhibitors from 58 countries (2013: 3,421 exhibitors; 2010: 3,256 exhibitors) and 583,736 visitors from 219 countries (2013: around 535,065; 2010: around 420,170). The exhibition space was 605,000 m² (2013: 575,000 m²; 2010: 555,000 m²).
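The edition-over-edition figures quoted above can be compared directly; a small sketch using only the numbers given in the text:

```python
# Key figures quoted in the text for the last three editions.
editions = {
    2010: {"exhibitors": 3256, "visitors": 420170, "space_m2": 555000},
    2013: {"exhibitors": 3421, "visitors": 535065, "space_m2": 575000},
    2016: {"exhibitors": 3425, "visitors": 583736, "space_m2": 605000},
}

def growth(metric, a, b):
    """Percentage change of a metric between edition years a and b."""
    return 100.0 * (editions[b][metric] - editions[a][metric]) / editions[a][metric]

print(f"Visitors 2013->2016: {growth('visitors', 2013, 2016):+.1f}%")
```

Visitor numbers, for instance, grew by roughly 9% between 2013 and 2016, while exhibitor numbers were nearly flat.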
Exhibitors
Both German and foreign suppliers of machinery and vehicles for construction and mining exhibit at the fair. The trade fair basically comprises four sectors. The "All around construction sites" sector includes suppliers of construction vehicles, construction machinery, construction tools, lifting appliances, formwork and scaffoldings. The exhibition sector "Mining, extraction and processing of raw materials" pools manufacturers of machines for the extraction of raw materials and mining as well as of mineral processing technology. The "Production of building materials" sector comprises machines and plants for producing concrete, asphalt, clay and similar building materials. Drive technology, testing, measurement and control technology as well as accessories including services are presented in the "Components and service suppliers" sector. Numerous scale model manufacturers exhibit scale models of the construction equipment.
References
External links
Economy of Munich
Trade fairs in Germany
Construction equipment

Doublecortin

Neuronal migration protein doublecortin, also known as doublin or lissencephalin-X, is a protein that in humans is encoded by the DCX gene.
Function
Doublecortin (DCX) is a microtubule-associated protein expressed by neuronal precursor cells and immature neurons in embryonic and adult cortical structures. Neuronal precursor cells begin to express DCX while actively dividing, and their neuronal daughter cells continue to express DCX for 2–3 weeks as the cells mature into neurons. Downregulation of DCX begins after 2 weeks, and occurs at the same time that these cells begin to express NeuN, a neuronal marker.
Due to the nearly exclusive expression of DCX in developing neurons, this protein has been used increasingly as a marker for neurogenesis. Indeed, levels of DCX expression increase in response to exercise, and that increase occurs in parallel with increased BrdU labeling, which is currently a "gold standard" in measuring neurogenesis.
Doublecortin was found to bind to the microtubule cytoskeleton. In vivo and in vitro assays show that Doublecortin stabilizes microtubules and causes bundling. Doublecortin is a basic protein with an isoelectric point of 10, typical of microtubule-binding proteins.
Knock out mouse
In mice where the Doublecortin gene has been knocked out, cortical layers are still correctly formed. However, the hippocampi of these mice show disorganisation in the CA3 region. The normally single layer of pyramidal cells in mutants is seen as a double layer. These mice also have different behavior than their wild type littermates and are epileptic.
Structure
The detailed sequence analysis of Doublecortin and Doublecortin-like proteins allowed the identification of a tandem repeat of evolutionarily conserved Doublecortin (DC) domains. These domains are found in the N terminus of proteins and consists of tandemly repeated copies of an around 80 amino acids region. It has been suggested that the first DC domain of Doublecortin binds tubulin and enhances microtubule polymerisation.
Doublecortin has been shown to influence the structure of microtubules. Microtubules nucleated in vitro in the presence of Doublecortin have almost exclusively 13 protofilaments, whereas microtubules nucleated without Doublecortin are present in a range of different sizes.
Interactions
Doublecortin has been shown to interact with PAFAH1B1.
Clinical significance
Doublecortin is mutated in X-linked lissencephaly and the double cortex syndrome, and the clinical manifestations are sex-linked. In males, X-linked lissencephaly produces a smooth brain due to lack of migration of immature neurons, which normally promote folding of the brain surface. Double cortex syndrome is characterized by abnormal migration of neural tissue during development which results in two bands of misplaced neurons within the subcortical white matter, generating two cortices, giving the name to the syndrome; this finding generally occurs in females. The mutation was discovered by Joseph Gleeson and Christopher A. Walsh in Boston. At least 49 disease-causing mutations in this gene have been discovered.
See also
Lissencephaly
References
Further reading
External links
GeneReviews/NCBI/NIH/UW entry on DCX-Related Disorders
OMIM entries on DCX-Related Disorders
Protein families
Proteins

Vibrio natriegens

Vibrio natriegens is a Gram-negative marine bacterium. It was first isolated from salt marsh mud. It is a salt-loving organism (halophile) requiring about 2% NaCl for growth. It responds well to the presence of sodium ions, which appear to stimulate growth in Vibrio species, to stabilise the cell membrane, and to affect sodium-dependent transport and mobility. Under optimum conditions, with all nutrients provided, the doubling time of V. natriegens can be less than 10 minutes. V. natriegens is able to live and rapidly divide in coastal areas due to its large range of metabolic fuels. Recent research has shown that Vibrio natriegens has a flexible metabolism, which allows it to consume a large variety of carbon substrates, reduce nitrates, and even fix nitrogen from the atmosphere under nitrogen-limiting and anaerobic conditions. In the laboratory, the growth medium can be easily changed, thus affecting the growth rate of a culture. V. natriegens is commonly found in estuarine mud.
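To illustrate what a sub-10-minute doubling time means, a short exponential-growth calculation; the 20-minute comparison figure is an often-cited lab value for E. coli under rich-media conditions, not from this article:

```python
def cells_after(minutes, doubling_time_min, n0=1):
    """Exponential growth: N(t) = n0 * 2**(t / t_d)."""
    return n0 * 2 ** (minutes / doubling_time_min)

# One cell with a 10-minute doubling time yields 2**6 = 64 cells in an hour;
# a 20-minute doubling time (a commonly quoted E. coli lab figure, used here
# only for comparison) yields 2**3 = 8 over the same hour.
print(cells_after(60, 10), cells_after(60, 20))
```

So halving the doubling time does not double the hourly yield — it squares it, which is why V. natriegens outpaces standard lab workhorses so dramatically over a working day.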
Aquaculture and antibiotic resistance
Many strains of Vibrio, including natriegens, are pathogenic to farmed aquaculture species such as abalone, and infections have recently destroyed farmed abalone stocks. In response, fishers have taken to inoculating tanks with large amounts of antibiotics, which has driven Vibrio natriegens to develop potent resistance to many drugs. In a recent study, the AbY-1805 strain of Vibrio natriegens was shown to be completely resistant to 17 of the 32 tested antibiotics and at least partially resistant to 22 of the 32.
Biochemical characteristics of V. natriegens
Colony, morphological, physiological, and biochemical characteristics of Vibrio natriegens are shown in the Table below.
Note: + = Positive, – =Negative, V =Variable (+/–)
Biotechnological uses
Owing to its rapid growth rate, ability to grow on inexpensive carbon sources, and capacity to secrete proteins into the growth media, efforts are underway to leverage this species as a host for molecular biology and biotechnology applications. Recently, V. natriegens crude extract has been shown by multiple research groups to be a promising platform for cell-free expression. Scientists are also hoping that Vibrio natriegens, with its incredible growth speed, will make microbial experiments in outer space, where time is an extremely valuable asset, much quicker. Interestingly, it has been shown that Vibrio natriegens, despite its incredibly quick doubling speed on Earth, might grow even faster in space. A recent experiment displayed that after 24 hours of growth the Vibrio cells grown in zero gravity were 60 times denser than those grown in full gravity, possibly attributable to an extended exponential growth phase in low-gravity conditions.
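Assuming purely exponential growth, the reported 60-fold density difference after 24 hours corresponds to roughly log₂ 60 ≈ 5.9 additional doublings in microgravity; a quick check:

```python
import math

# Reported ratio: zero-gravity culture ~60x denser than the full-gravity
# control after 24 hours of growth (assuming exponential growth throughout).
density_ratio = 60
extra_doublings = math.log2(density_ratio)
print(f"~{extra_doublings:.1f} extra doublings in microgravity")
```

Given doubling times of tens of minutes, six extra doublings over 24 hours is a modest per-generation difference, consistent with the suggestion of an extended exponential phase rather than a fundamentally different growth rate.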
References
External links
Type strain of Vibrio natriegens at BacDive - the Bacterial Diversity Metadatabase
University of Marburg 2018 iGEM team
Vibrionales
Marine microorganisms

Advanced Highway Maintenance Construction Technology Research Laboratory

The Advanced Highway Maintenance Construction Technology Research Center (AHMCT) is a research institute at the University of California, Davis. They perform transportation-related research in highway maintenance, transportation infrastructure, structures, and roadways. They are funded through public and private research grants.
External links
Website
Location
ATIRC location
University of California, Davis
Transportation engineering
Research institutes in California

Adtran

Adtran, Inc. is an American fiber networking and telecommunications company headquartered in Huntsville, Alabama. It is a vendor of networking solutions that address a range of applications. Its customers include communications service providers, governments, enterprises and utilities.
History
Adtran was founded in 1985 by Mark C. Smith, Lonnie S. McMillian, and Larry Owen, and began operations in 1986, following the AT&T divestiture of the Regional Bell Operating Companies (RBOCs). It supplied network equipment to both the RBOCs and independent telephone companies in the United States.
In 2006, Adtran acquired Luminous Networks, a manufacturer of access network equipment. In 2011, it acquired Bluesocket, a maker of enterprise Wi-Fi equipment based in Burlington, Massachusetts. In 2012, it acquired Nokia Siemens Networks' broadband access business based in Germany. In 2016, it acquired CommScope's active fiber business. In 2018, Adtran acquired connected home software provider SmartRG, a Vancouver, WA, based company that develops and provides carrier-oriented, open-source connected home platforms and cloud services for broadband service providers.
In 2021, Adtran entered into a business combination with ADVA Optical Networking SE, a cloud and mobile services networking company based in Munich and Meiningen in Germany. In 2022, Adtran acquired the remaining shares of Cambridge Communication Systems (CCS) Limited, a developer of wireless backhaul and transport systems for small cells. It offers an mmWave Gigabit fiber extension system along with web-based management software for planning, configuring and monitoring networks. In 2024, Adtran self-certified as "Buy America-compliant" with the U.S. Department of Commerce. This list acts as a reference for communications service providers applying for Broadband Equity Access and Deployment (BEAD) funding.
Notable products
50G PON: Adtran's implementation of 50Gbit/s passive optical network (PON) technology includes its SDX 6400 Series of optical line terminals.
Quantum key distribution (QKD): In collaboration with Orange, Adtran demonstrated a 400Gbit/s data transmission system using QKD. The hybrid approach integrated quantum-safe encryption with classical cryptographic methods to secure data across a 184 km system.
Optical cesium atomic clocks: Adtran's Oscilloquartz division has introduced atomic clocks that leverage optical pumping technology. The improved accuracy and stability of these devices exceed the current ITU-T G.811.1 Enhanced Primary Reference Clock (ePRC) specification. The current highest-end clocks in this range can combine with core grandmaster devices to maintain 100 nanosecond precision for up to 100 days.
Mosaic One: Mosaic One is a cloud-based software-as-a-service platform that aggregates data from management systems, broadband access and in-home devices to support network operations and customer care.
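The holdover figure quoted for the atomic clocks (100 ns over 100 days) implies a very small average fractional frequency error; a rough sanity check, assuming the time error accrues from a constant frequency offset alone:

```python
# 100 ns of permitted time error accumulated over 100 days of holdover,
# assuming it comes from a constant fractional frequency offset.
holdover_error_s = 100e-9
holdover_span_s = 100 * 86_400   # 100 days in seconds
fractional_offset = holdover_error_s / holdover_span_s
print(f"average fractional frequency offset <= {fractional_offset:.2e}")
```

The result, on the order of 1e-14, is consistent with the ePRC class of clocks; real holdover budgets also include drift and environmental terms, which this sketch ignores.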
Locations
Adtran's corporate headquarters is located in Huntsville, Alabama, in Cummings Research Park. It has international offices located in:
Melbourne, Australia
Berlin and Greifswald, Germany
Hyderabad, India
Tel Aviv, Israel
Milan, Italy
Riyadh, Saudi Arabia
Bratislava, Slovakia
Tunis, Tunisia
Basingstoke, Hampshire, United Kingdom
Adastral Park, Ipswich, United Kingdom
Warsaw, Poland
Munich, Bavaria, Germany
York, North Yorkshire, United Kingdom
Merger with ADVA Optical Networking
Adtran's 2022 business combination with ADVA expanded its portfolio to include end-to-end data transport products, complementing its existing focus on access networking equipment. The merger also gave Adtran ownership of ADVA's subsidiary, Oscilloquartz, a manufacturer of timing and synchronization solutions based in Switzerland.
Certifications
Adtran is certified for ISO 9001, ISO 14001, ISO 27001 and TL 9000.
References
External links
Companies based in Huntsville, Alabama
Telecommunications companies of the United States
Companies established in 1986
Networking companies of the United States
Networking hardware companies
Companies listed on the Frankfurt Stock Exchange
Companies listed on the Nasdaq
Companies formerly in the MDAX
1986 establishments in the United States
1986 establishments in Alabama
Computer companies of the United States
Computer hardware companies

Curtobacterium flaccumfaciens

Curtobacterium flaccumfaciens is a Gram-positive bacterium that causes disease on a variety of plants. Its characteristics include small irregular rods, lateral flagella, the ability to persist in aerobic environments, and cells containing catalase. In the interest of studying pathogenicity in plants, the species is broken down further into pathovars, which help to better describe the pathogen.
Genomics
C. flaccumfaciens is a relatively young species, diverging only 172,000 years ago.
Hosts and symptoms
Curtobacterium flaccumfaciens is a bacterial wilt pathogen. The hallmark symptoms of bacterial wilt are leaf and petiole wilting. Chlorosis of leaf tissue occurs due to the lack of water transport. C. flaccumfaciens has a wide host range, including kidney beans, soybeans, tulips, and tomatoes. The species is separated into pathovars based on host range and symptoms.
One of the economically important pathovars is pv. flaccumfaciens. This pathovar produces a bacterial wilt; its primary host range is the genus Phaseolus (beans), but the pathogen can infect many other species of the same family (Fabaceae). In beans, the symptoms can be devastating to crop yield, with severe foliage wilting and chlorosis.
One ornamental example is pv. oortii, whose primary hosts are plants of the genus Tulipa (tulips). Although the host range differs, the symptoms are relatively similar. During flowering, typical symptoms of dehydration are observed, and, as in beans, the tulips wilt. In severe cases, the plant fails to recover from wilting and dies.
Disease cycle
Survival
Curtobacterium flaccumfaciens can overwinter in plant debris, diseased plants, wild hosts, seeds, or vegetative propagative organs. The bacteria can survive only a couple of weeks as free bacteria in soil. Multiple factors go into survival of a bacterial population, including temperature, humidity, and soil characteristics. Infected seeds cannot be used for susceptible bean crops because Curtobacterium flaccumfaciens pv. flaccumfaciens has been known to survive in dried bean pods for five years, and up to 24 years in laboratory conditions. Different pathovars survive in slightly different ways. For example, Curtobacterium flaccumfaciens pv. oortii survives in the vegetative propagative organs (bulbs) rather than in the seeds, like Curtobacterium flaccumfaciens pv. flaccumfaciens.
Dispersal
Curtobacterium flaccumfaciens causes wilting at high populations and disperses in many ways. The bacteria multiply relatively quickly, which increases the possibility that Curtobacterium flaccumfaciens can shed from dying or dead plant material. The pathogen is normally dispersed via agricultural practices such as planting saved seed and through farm equipment. In the case of beans and tulips, these practices move the propagule during the overwintering phase of their life cycles. This is effective dispersal for the pathogen.
Infection
Curtobacterium flaccumfaciens usually enters the plant through a wound. Natural wounds (created by excision of flowers or genesis of lateral roots) and unnatural wounds can become entry sites. There are no reports of vectors, but the nematode Meloidogyne incognita may assist entry by providing unnatural wounds.
Management
Management varies between hosts. For this purpose, we will look specifically at the detection and control methods for Curtobacterium flaccumfaciens pv. flaccumfaciens. Since most plant pathogens are Gram-negative, detection of a Gram-positive bacterium using methods such as the KOH test is a basic diagnostic tool for identifying this bacterium. Bacteria may be detected beneath the seedcoat by means of a combined cultural and slide agglutination test. Bean seed from countries where the disease is known to occur should be inspected for discoloration of the seedcoat. Immunofluorescence staining can also be used to detect the bacterium in contaminated seed lots. Control may be effected by using disease-free seed and crop rotations. Seeds grown in dry climates are usually free from infection and are therefore recommended for distribution. The strongest control regulation handed down by the European and Mediterranean Plant Protection Organization (EPPO) to date is a quarantine procedure. There is little resistance available commercially to C. f. pv. flaccumfaciens, and antibiotics are ineffective.
See also
List of soybean diseases
References
External links
Microbacteriaceae
Soil biology
Soybean diseases | Curtobacterium flaccumfaciens | [
"Biology"
] | 991 | [
"Soil biology"
] |
10,895,466 | https://en.wikipedia.org/wiki/NGC%206520 | NGC 6520 is an open cluster of stars in the southern constellation of Sagittarius, about 4° to the east of the Galactic Center. With an apparent visual magnitude of 7.6 and an angular size of , it can be viewed with binoculars or a small telescope. Just to the west of this cluster is the dark nebula Barnard 86, dubbed the Ink Spot. Both features are viewed against the dense starry background of the Large Sagittarius Star Cloud. This cluster is located at a distance of approximately from the Sun.
This is a young open cluster of stars, with age estimates yielding a value of million years. However, the presence of stars with spectral classes B4 and B5 suggests a much younger age of 60 million years. The estimated mass of this cluster is . The cluster and the nearby dark nebula Barnard 86 have radial velocities that differ by , and hence may be unrelated.
Two type 2 chemically peculiar stars and two Lambda Bootis candidates have been found among the members. Polarization measurements of the cluster members suggest that there are three closer dust layers partially obscuring the view from the perspective of the Earth.
Gallery
References
External links
Open clusters
Sagittarius (constellation)
6520 | NGC 6520 | [
"Astronomy"
] | 250 | [
"Sagittarius (constellation)",
"Constellations"
] |
10,895,641 | https://en.wikipedia.org/wiki/NGC%206530 | NGC 6530 is a young open cluster of stars in the southern constellation of Sagittarius, located some 4,300 light years from the Sun. It exists within the H II region known as the Lagoon Nebula, or Messier 8, and spans an angular diameter of . The nebulosity was first discovered by G. B. Hodierna prior to 1654, then re-discovered by J. Flamsteed circa 1680. It was P. Loys who classified it as a cluster in 1746, as he could only resolve stars. The following year, G. Le Gentil determined it was both a nebula and a cluster.
The brightest six members of the cluster are visible in 10×50 binoculars at magnitudes 6.9 and fainter, while fifteen evenly distributed stars are visible with a 25×100 pair. More than two dozen stars are visible in an amateur telescope. The average extinction AV due to interstellar dust along the line of sight from the Earth is , with a color excess of .
In total, 3,675 stars in the field of NGC 6530 have been catalogued as candidate members, with the likely members being 2,728. As of 2019, 652 stars have been confirmed as members: 333 of these are classical T Tauri-type variable stars showing a near infrared emission excess, while the remainder are weak T Tauri stars showing a photospheric excess. Candidate stars appear in two main groups at the cluster core and the Sagittarius "Hourglass nebula", with other smaller concentrations. Two such minor concentrations are associated with the stars 7 Sgr and HD 164536.
Age estimates for the members shows a spread in values that suggests more than one burst of star formation. Initial star formation began up to 15 million years ago, but the bulk formed in the last 1–2 million years near the cluster center. Astrometric data suggests the parent molecular cloud collided with the galactic plane some four million years ago, which may have triggered the star formation. The dispersion of velocities for a sample of stars in the cluster suggests it may be gravitationally unbound and there is evidence the star population is expanding, particularly to the north and south.
References
External links
NGC 6530
NGC 6530
6530
"Astronomy"
] | 479 | [
"Sagittarius (constellation)",
"Constellations"
] |
10,895,711 | https://en.wikipedia.org/wiki/NGC%206709 | NGC 6709 is an open cluster of stars in the equatorial constellation of Aquila, some 5° to the southwest of the star Zeta Aquilae. It is situated toward the center of the galaxy at a distance of .
This cluster has a Trumpler class of IV 2 m, and is considered moderately rich with 305 member stars. It is around 141 million years old, about the same age as the Pleiades. The core radius of NGC 6709 is and the tidal radius . It contains two Be stars, one of which is a shell star. There is one candidate red giant member.
On the evening of November 13, 1984, David H. Levy discovered his first comet less than a degree from this cluster.
Gallery
References
External links
webda
Open clusters
Aquila (constellation)
6709 | NGC 6709 | [
"Astronomy"
] | 163 | [
"Aquila (constellation)",
"Constellations"
] |
10,895,769 | https://en.wikipedia.org/wiki/Urotensin-II | Urotensin-II (U-II) is a peptide ligand that is the strongest known vasoconstrictor. Because of the involvement of the U-II system in multiple biological systems, such as the cardiovascular, nervous, endocrine, and renal systems, it represents a promising target for the development of new drugs.
In humans, Urotensin-2 is encoded by the UTS2 gene.
Discovery
U-II was initially isolated from the neurosecretory system of the Goby fish (Gillichthys mirabilis). For many years it was thought that U-II does not exhibit significant effects in mammalian systems, a view quickly overturned when it was demonstrated that Goby U-II produces slow relaxation of mouse anococcygeus muscle, in addition to contraction of rat artery segments. In 1998, the genes for pre-pro U-II were found in mammals, proving that the peptide U-II does exist in mammals.
Structure
The U-II gene is located on chromosome 1p36. U-II peptide length varies between species because the specific cleavage sites are located at different positions depending on the species; in humans, U-II is 11 amino acids long. The peptide sequence needed for biological function in both U-II and the urotensin II-related peptide (URP) is known as the core. It is a hexapeptide (-CYS-TYR-LYS-TRP-PHE-CYS-) closed by a disulfide bond between its two ends. As with URP, the amino terminus can be modified without any loss of pharmacological activity, suggesting that it is not needed for activation of the receptor. Unlike URP, U-II has an acidic amino acid (glutamic or aspartic) preceding the core sequence. While this amino acid is not necessary for activation of the urotensin II receptor, its conservation across species suggests a biological function that has not yet been discovered.
Receptor
U-II is an agonist of the urotensin-II receptor, a G protein-coupled receptor that primarily signals through the alpha subunit Gαq/11. The receptor activates PLC, which increases the cytosolic calcium concentration and in turn activates PKC. It is found in many peripheral tissues and blood vessels, as well as in the brainstem cholinergic neurons of the laterodorsal tegmental (LDT) and pedunculopontine tegmental (PPT) nuclei. It is also found in rat astrocytes.
Tissue localization
Pre-pro U-II in both humans and rats is primarily expressed in the motorneurons of the brainstem and spinal cord, although it is also found in small amounts in other parts of the brain, including the frontal lobe and the medulla oblongata. In humans, U-II mRNA is also found in peripheral tissues such as the heart, kidneys, adrenal gland, placenta, spleen, and thymus.
Function
Central nervous system
When injected intracerebroventricularly (icv), U-II causes an increase in corticotropin-releasing factor by activating the hypothalamic paraventricular neurons. This leads to increased plasma levels of adrenocorticotropic hormone and adrenaline. Rats and mice exhibit many stress-related behaviors when injected with U-II, as tested by the elevated plus maze, which measures anxiety-like effects, and the hole-board test, which measures head dipping, another anxiety-like behavior.
U-II injected icv in rats also elicits cardiovascular responses, including raising mean arterial pressure (MAP) and causing tachycardia. When the arcuate nucleus and the paraventricular nucleus, two areas of the brain known to control blood pressure, were injected with U-II simultaneously, blood pressure increased. When the two areas were injected separately, it was found that U-II affected the excitatory neurons of the paraventricular nucleus and the inhibitory neurons of the arcuate nucleus.
U-II, when injected icv in both rats and mice, also stimulates locomotion in familiar environments. This experiment was also carried out in rainbow trout (Oncorhynchus mykiss), where a stimulatory effect was also observed.
Depression-like behavior was also observed when U-II was injected into the brain, as assessed by the forced swim test and the tail suspension test, which are used to evaluate molecules thought to have antidepressant-like effects.
Orexigenic behavior, that is, increased appetite and thirst, was also observed after icv injection of U-II in rats.
Peripheral tissue effects
U-II has a variety of effects on different tissues. In blood vessels it can cause contraction. In rat pancreas U-II inhibits insulin secretion. It also affects the kidneys including sodium transport, lipid and glucose metabolism, and natriuretic effects. It has been linked to cardiac fibrosis and hypertrophy, heart failure, renal dysfunction, and diabetes.
References
Further reading
Peptides | Urotensin-II | [
"Chemistry"
] | 1,068 | [
"Biomolecules by chemical classification",
"Peptides",
"Molecular biology"
] |
10,895,788 | https://en.wikipedia.org/wiki/NGC%206755 | NGC 6755 is an open cluster of stars in the equatorial constellation of Aquila, positioned about 3° to the east of the star Delta Aquilae. It was discovered by the Anglo-German astronomer William Herschel on July 30, 1785 and is located at a distance of 8,060 light years from the Sun. NGC 6756 lies to the northeast of NGC 6755, with the pair forming a visual double cluster. However, they probably do not form a binary cluster system since they have different ages and are too distant from each other.
This cluster has a Trumpler class of II2r with a visual magnitude of 7.5 and it spans an angular size . It has an estimated age of 250 million years, based on the main sequence turnoff. A total of 71 variable stars have been detected in the field of this cluster, of which 31 are eclipsing binaries, seven are pulsating variables, and 28 are most likely irregular variable red giants.
Gallery
References
External links
Open clusters
Aquila (constellation)
6755 | NGC 6755 | [
"Astronomy"
] | 215 | [
"Aquila (constellation)",
"Constellations"
] |
10,896,054 | https://en.wikipedia.org/wiki/VIA%20Eden | VIA Eden is a variant of VIA's C3/C7 x86 processors, designed to be used in embedded devices. They have smaller package sizes, lower power consumption, and somewhat lower computing performance than their C equivalents, due to reduced clock rates. They are often used in EPIA mini-ITX, nano-ITX, and Pico-ITX motherboards. In addition to x86 instruction decoding, the processors have a second undocumented Alternate Instruction Set.
The Eden is available in four main versions:
The Eden ULV 500 MHz was the first variant to achieve a TDP of 1 W.
See also
List of VIA Eden microprocessors
References
External links
VIA Eden Processors - Low Power Fanless Processing
VIA's Small & Quiet Eden Platform
Eden
Computer-related introductions in 2007 | VIA Eden | [
"Technology"
] | 168 | [
"Computing stubs",
"Computer hardware stubs"
] |
10,896,351 | https://en.wikipedia.org/wiki/OBIX | oBIX (for Open Building Information Exchange) is a standard for RESTful Web Services-based interfaces to building control systems. oBIX is about reading and writing data over a network of devices using XML and URIs, within a framework specifically designed for building automation.
Building control systems include those electrical and mechanical systems that operate inside a building, including Heating and Cooling (HVAC), Security, Power Management, and Life/Safety Alarms that are in nearly all buildings as well as the myriad of special purpose systems that may be tied to particular buildings such as A/V Event Management, Theatre Lighting, Medical Gas Distribution, Fume Hoods, and many others.
oBIX is a web services interface; it does not necessarily allow deep interactions with the underlying control systems. This interface can enable communications between enterprise applications and embedded building systems, as well as between two embedded building systems, allowing facilities and their operations to be managed as full participants in knowledge-based businesses.
oBIX is being developed within OASIS, the Organization for the Advancement of Structured Information Standards. Version 1.0 was completed as a committee standard in December 2006.
Background
Presently, most mechanical and electrical systems are provided with embedded digital controls (DDC). Most of these devices are low cost and not enabled for TCP/IP; they are installed with dedicated communications wiring, and larger DDC controllers provide network communications for these dedicated controllers. There are many well-established binary protocols (BACnet, LonTalk, Modbus, DALI) used on these dedicated networks, in addition to numerous proprietary protocols. While these binary protocols can be used over TCP/IP networks, they face challenges with routers, firewalls, security, and compatibility with other network applications. There is an added challenge in that the industry is split between several largely incompatible protocols.
Because oBIX integrates with the enterprise, it enables mechanical and electrical control systems to provide continuous visibility of operational status and performance. By exposing these operations using web services, it enables owners and tenants to use the full array of standard databases and OLAP tools to analyse their performance. oBIX enables facilities operators, owners and tenants to make decisions based on a fully integrated consideration of all life-cycle, environmental, cost, and performance factors.
Scope
oBIX provides a publicly available web services interface specification that can be used to obtain data in a simple and secure manner from HVAC, access control, utilities, and other building automation systems, and to provide data exchange between facility systems and enterprise applications. Release 1 provides a normalized representation for three elements common to control systems:
Points: representing a single scalar value and its status – typically these map to sensors, actuators, or configuration variables like a setpoint.
Alarming: modeling, routing, and acknowledgment of alarms. Alarms indicate a condition which requires notification of either a user or another application.
Histories: modeling and querying of time sampled point data. Typically edge devices collect a time stamped history of point values which can be fed into higher level applications for analysis.
oBIX 1.0 provides a low level object model which can be extended during implementation. While points are directly addressable (and thereby settable), direct interaction with the points requires too much knowledge of the underlying control system for the enterprise developer. The underlying points can be aggregated, the results named, alarm levels set, and histories begun using the oBIX contract. If oBIX exposes a low level object model for control systems, oBIX contracts create the higher level type libraries that most programmers actually want to work with.
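As a sketch of what this low-level object model looks like in practice, the following Python snippet parses a hypothetical oBIX point document using only the standard library. The XML, the href, and the values are illustrative assumptions for this article, not text taken from the specification or from a real device:

```python
import xml.etree.ElementTree as ET

# Illustrative oBIX point document (hypothetical values; a real device would
# serve a document like this over HTTP at the point's href).
OBIX_NS = "http://obix.org/ns/schema/1.0"
doc = """\
<real xmlns="http://obix.org/ns/schema/1.0"
      name="roomTemp" href="/obix/hvac/zone1/temp"
      val="21.5" unit="obix:units/celsius"/>
"""

def read_point(xml_text):
    """Extract name, href, and numeric value from an oBIX-style <real> point."""
    elem = ET.fromstring(xml_text)
    # ElementTree qualifies tags with their namespace in Clark notation.
    assert elem.tag == f"{{{OBIX_NS}}}real", "expected an oBIX real point"
    return elem.get("name"), elem.get("href"), float(elem.get("val"))

name, href, value = read_point(doc)
print(f"{name} @ {href} = {value}")
```

An enterprise application would fetch such documents over HTTP and map them onto whatever higher-level contract the integration defines.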
Uses of oBIX
Tenant interactions
To keep a public space open in the evening may require a range of calls to different organizations within a building, each initiating an interaction with a separate building control system. To schedule a public meeting tonight from 7:00 to 9:00, the organizer may have to:
Call Security to warn the guard, and keep the (1) Access Control System working in day-time mode until 9:30. The guard may also need to disable the (2) Intrusion Detection System during that period.
Call Maintenance to make sure the room's (3) Environmental Controls are set properly for the event. This may include over-cooling (or heating) the room in advance to make sure that the room will be comfortable when filled with the anticipated numbers of callers.
Call the media support group to make sure the (4) A/V Event Management system is properly warmed up before the event,
In an oBIX-enabled building, these features are accomplished by instead sending an iCalendar meeting invitation to the room and/or its support systems.
Emergency response
The Common Alerting Protocol (CAP) is a standard increasingly used for relaying information from various agencies to the public and to police and first responders. One challenge that public notification faces is that the traditional Emergency Broadcasting System for transmitting information over the radio is now much less effective, now that the public is tuned instead to personal media players such as the iPod. New versions of these protocols anticipate, for example, direct texting of all cell phones in range of a given cell tower or set of cell towers.
In a similar manner, current proposals suggest direct messaging to Intelligent Buildings to invoke named oBIX contracts, with effects ranging from temporary user security elevation, to initiating process shut down, to notifying in-building warning systems to read messages aloud.
The Open Geospatial Consortium anticipates Emergency Responders being able to access certain classes of geo-tagged sensor information from buildings from within their maps to improve situational awareness.
Emerging power markets
The GridWise Architecture Council envisions an open market of power providers, transmission, distribution, and customer agents negotiating freely for live power contracts based on instantaneous demand/response. The ongoing installation across the US of electric meters able to provide time-of-day metering is one step toward enabling this; another is the development of intelligent buildings able to negotiate with the grid.
These grid negotiations are likely to be of two forms. (1) An intelligent agent residing in the building, and negotiating with the building tenants and their business processes negotiates set building system operating postures. (2) An external agent hired by the building tenants aggregates demand across multiple buildings and buys power on their behalf. Markets based on these interactions are considered to be key to creating market conditions to drive rapid innovation in on-site power storage and generation technologies.
Base level control protocols
BACnet Building Automation Control network
KNX/EIB
Modbus
LonWorks
C-Bus (protocol)
Dynet
Metasys
Digital Addressable Lighting Interface DALI
Other standards interacting with oBIX
Open Geospatial Consortium (OGC)
National Building Information Standard (NBIMS)
buildingSMART
External links
OASIS Committee site for oBIX
oBIX Organizational site
Interview with Paul Ehrlich, original chair of oBIX committee
Sourceforge repository for oBIX toolkit
CABA - the Continental Automated Buildings Association
GridWise Architecture Council
oX Framework
Building automation
Standards | OBIX | [
"Engineering"
] | 1,426 | [
"Building engineering",
"Building automation",
"Automation"
] |
10,896,911 | https://en.wikipedia.org/wiki/Chromatium | Chromatium is a genus of photoautotrophic Gram-negative bacteria which are found in water. The cells are straight rod-shaped or slightly curved. They belong to the purple sulfur bacteria and oxidize sulfide to produce sulfur which is deposited in intracellular granules of the cytoplasm.
References
External links
Chromatium J.P. Euzéby: List of Prokaryotic names with Standing in Nomenclature
Chromatiales
Phototrophic bacteria
Bacteria genera | Chromatium | [
"Chemistry",
"Biology"
] | 103 | [
"Bacteria",
"Photosynthesis",
"Phototrophic bacteria"
] |
10,896,999 | https://en.wikipedia.org/wiki/Microsite%20%28ecology%29 | A microsite is a term used in ecology to describe a pocket within an environment with unique features, conditions or characteristics. Classifying different microsites may depend on temperature, humidity, sunlight, nutrient availability, soil physical characteristics, vegetation cover, etc. A microsite is thus a sub-environment within a larger environment, and the qualities described below are what differentiate one microsite from another.
Microsite features
Microsites being a subset of the environment can be identified with its own:
Temperature
It refers to the temperature of the surrounding environment, measured in degrees Fahrenheit. The temperature of one microsite is not necessarily the same as that of another, even if the two are closely related in terms of location.
Humidity
It refers to the relative amount of moisture that the air can hold. The more saturated the air in a microsite is with water vapor, the higher its relative humidity.
Sunlight
Plants use energy from sunlight to carry on photosynthesis. Whether sunlight can reach a microsite is another distinguishing characteristic that creates differences between microsites. Areas that sunlight does not reach present different environmental conditions from those that it does, giving some plants greater fitness than others.
Availability of nutrients
Some microsites are rich in nutrients while others are not. This is an important difference, because seeds germinate more readily in microsites that offer the nutrients they need. Plants and other autotrophs obtain the nutrients they need (nitrogen, phosphorus, potassium, calcium, magnesium and sulfur) from the soil and water available in their microsite.
Soil physical characteristics
Plants obtain hydrogen from water found in the soil. Animals are also influenced by soil physical characteristics; for example, the conditions in which a fish survives are not those suited to a camel or a goat. These features help differentiate one microsite from another and explain why organisms exist in one and not in the other.
Vegetation cover
This refers to the collection of plant species covering a land surface. A microsite in the savanna differs from one in the Sahara because of their vegetation cover, which explains the differences between the types of organisms that live in the two areas.
Microsite influence over habitat selection
With the many microsites that exist in an environment, organisms usually base their selection of habitat on the features of the microsites they come across. Choosing the best microsite positively influences an organism's survival, growth and reproduction, and this choice has a direct bearing on the organism's future generations.
Limitation of microsites
Not all microsites have the ingredients plants and animals need to survive and grow, and conditions such as pollution or invasive species may remove ingredients that were once available. In the case of seedlings, air, light, soil and humus are all needed for growth and survival; the lack of these elements creates a growth-limiting factor in the microsite, as well as survival issues. The same applies to animals, except that animals can migrate to areas that favor their growth and survival, while those that cannot will be limited in fitness.
References
Ecology terminology | Microsite (ecology) | [
"Biology"
] | 668 | [
"Ecology terminology"
] |
10,897,091 | https://en.wikipedia.org/wiki/Healer%20%28video%20games%29 | A healer is a type of character class in video gaming. When a game includes a health game mechanic and multiple classes, often one of the classes will be designed around the restoration of allies' health, known as healing, in order to delay or prevent their defeat. Such a class can be referred to as a healer. In addition to healing, healer classes are sometimes associated with buffs to assist allies in other ways, and nukes to contribute to the offense when healing is unnecessary.
When both healer and tank classes exist, a common grouping strategy is for the healer to focus healing on an allied tank, while the tank prevents other allies, including the healer, from losing health.
Healers are often represented as a fantasy spell-caster (such as a cleric, druid or shaman), a realistic combat specialist (such as a medic or paladin), a science-fiction technician (such as a repairman or engineer), or the like. Often, female gamers are associated with or stereotyped as always playing healer-class characters, with such characters being noted as often female as well.
History
NetHack, a single-player roguelike video game, first released in 1987 includes a description of healers in its accompanying guidebook. It states:
Other early examples of video games with healers in them include Chrono Trigger (1995) and Final Fantasy VII (1997). The former includes the character Marle, who is portrayed as a water mage and performs healing functions. Final Fantasy VII featured the magic-based character Aerith Gainsborough, who was able to restore chunks of health to the player's party. She would go on to become one of the more iconic healing characters in gaming. Unreal Tournament (1999) included healing in multiplayer gameplay. Healers were a markedly important facet of gameplay in the 2004 massively multiplayer online role-playing game (MMORPG) World of Warcraft. America's Army: Rise of a Soldier (2005) rewarded players for healing teammates.
Healers are often incorporated within the broader Support-class subset of characters in a game's playable roster; as such, healers and support characters are commonly associated with each other. Valve's Team Fortress 2 (2007), a first-person shooter (FPS), incorporated healers into gameplay. The game featured three support characters in general, with one being dedicated solely to healing. Team Fortress 2 featured competitive multiplayer, in which healer characters have been noted as vital in gameplay. In such competitive multiplayer, healer-class players have been noted as underappreciated. Massively multiplayer online role-playing games (MMORPGs) have been noted by PC Gamer to have a "usual problem of there being too few healers or tanks because most people want to be able to level and solo efficiently." Some players have been documented to prefer selecting healer-class characters in competitive multiplayer modes, citing a desire to help teammates and a relative accessibility as reasons why. Edwin Evans-Thirlwell of The Face wrote that "healer roles [in shooter games] stand out because they don't depend on hand-eye coordination, making them attractive both to players who find 'twitch-shooting' a turn-off and people with disabilities that affect their accuracy and reflexes." In the 2010s, a community sprung up around the concept of "healslutting", which sees some players submit to others while role-playing a healer character.
Roles and abilities
Multiplayer games featuring healing are not limited by genre, as the class is present in a variety of genres including role-playing games (RPG), first-person shooters (FPS), and multiplayer online battle arenas (MOBA).
A healer is generally tasked with restoring health, removing poison-like effects, and reviving fallen party members. Different games may include different mechanics, such as the ability to deal damage or to enhance the attributes of allies. Healers require a degree of situational awareness, as well as resource management with regard to their kit. In shooters, healing abilities such as throwable health packs typically aim themselves. However, there are examples of healer characters that do require shooting finesse, such as Ana of Overwatch, who is equipped with a hypodermic rifle.
In parties that include both a tank and a healer, it is customary for the latter to heal any damage taken by the former. In small groups, they may also be tasked to heal the group as well, but in large scale group-play there are typically specific healers assigned to party-wide damage (typically taken indirectly, via lesser minions, spells or environment/habitat of the boss).
Specifications
Targeting specifics
Healers fall into two major categories when it comes to targeting options: Single-Target and Multi-Target.
Single-Target healers often have much more potent spells than their Multi-Targeting counterparts, such as those that fully restore a target's Health or resurrect an ally that had previously lost all their Health.
Multi-Target healers tend to lack potency, but heal multiple allies (often the entire Party) with abilities. In Tactical RPGs or open-world games, their spells may utilize an Area of Effect (AoE) mechanic. Healers that fall into this sub-type often do not possess resurrection spells.
Healers often do not utilise only one targeting system. Targeting options tend to depend on the skill rather than the character.
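The two targeting styles can be sketched as simple functions over a party's health pools. This is a hypothetical illustration of the mechanic described above; real games implement it in engine-specific ways:

```python
def heal_single(target_hp, amount, max_hp):
    """Single-target heal: potent, clamped to the target's maximum health."""
    return min(target_hp + amount, max_hp)

def heal_party(party_hp, amount, max_hp):
    """Multi-target heal: a smaller amount applied to every living ally.

    Fallen allies (0 HP) are skipped, reflecting that multi-target healers
    often lack resurrection spells.
    """
    return [hp if hp <= 0 else min(hp + amount, max_hp) for hp in party_hp]

party = [40, 0, 75, 100]            # 0 = fallen ally, untouched by party heals
print(heal_single(40, 50, 100))     # -> 90 (potent single-target heal)
print(heal_party(party, 20, 100))   # -> [60, 0, 95, 100]
```

The clamp to `max_hp` is the health mechanic common to both styles; the difference lies only in how many allies each cast touches and how potent each touch is.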
Sub-jobs
Healers have a small number of roles that they can be assigned to. Often, a healer will fill one or more of these roles. Alternatively, a healer may fill one of these roles in addition to some other job, such as damage dealing (Battle Cleric, Druid), inflicting negative statuses on enemies (Witch/Warlock), or even drawing in damage (Paladin).
Restoration: Restoring Health to allies. This tends to be the job most associated with healer classes.
Curation: Removing harmful or otherwise negative statuses from allies.
Support: Used in the context of healers, this typically refers to applying regenerative buffs or shields to allies.
Resurrection: The rarest healer archetype, focused on not preventing death, but overcoming it.
Necromancers blur the line with Resurrection healers. They are more often classified as summoners, calling up skeleton- or zombie-themed minions to deal damage or draw enemy attacks.
In sexual roleplay
Choosing to play as a healer may sometimes be done as part of a dominant–submissive roleplaying dynamic. In "healslutting" (a combination of the words "heal" and "slut"), players engage with one another both in-game and through external avenues as one player assumes the healer role, submitting to the player who has selected an offensive- or tank-class character. The term gained widespread popularity through the 2016 first-person hero shooter Overwatch, in which the character Mercy is a dedicated healer commonly used by female players who largely wish to avoid direct combat. Aside from "healsluts", healer-class players may also conversely identify as "healdoms", in which they assume the dominant role in the dynamic as they can "control whether their partner lives or dies."
See also
Tank (video games), a common character class focused on drawing enemy damage.
References
Further reading
Character classes
Fictional healers
Video game terminology | Healer (video games) | [
"Technology"
] | 1,526 | [
"Computing terminology",
"Video game terminology"
] |
10,897,286 | https://en.wikipedia.org/wiki/Hans%20B.%20Pacejka | Hans Bastiaan Pacejka (12 September 1934 – 17 September 2017) was an expert in vehicle system dynamics and particularly in tire dynamics, fields in which his works are now standard references. He was Professor emeritus at Delft University of Technology in Delft, Netherlands.
Magic Formula tire models
Pacejka developed a series of tire design models during his career. They were named the "Magic Formula" because there is no particular physical basis for the structure of the equations chosen, but they fit a wide variety of tire constructions and operating conditions. Each tire is characterized by 10–20 coefficients for each important force that it can produce at the contact patch, typically lateral and longitudinal force, and self-aligning torque, as the best fit between experimental data and the model. These coefficients are then used to generate equations showing how much force is generated for a given vertical load on the tire, camber angle and slip angle.
The Pacejka tire models are widely used in professional vehicle dynamics simulations and racing car games, as they are reasonably accurate, easy to program, and solve quickly. A problem with Pacejka's model is that, when implemented in computer code, it does not work at low speeds (from around pit-entry speed downwards), because a velocity term in the denominator makes the formula diverge. An alternative to Pacejka tire models is the brush tire model, which can be analytically derived, although empirical curve fitting is still required for good correlation, and such models tend to be less accurate than the MF models.
Solving a model based on the Magic curve with high frequency can also be a problem, determined by how the input of the Pacejka curve is computed. The slipping velocity (the difference between the velocity of the car and the velocity of the tire at the contact point) will change very quickly, and the model becomes a stiff system (one whose eigenvalues differ by orders of magnitude), which may require a special solver.
The general form of the Magic Formula, given by Pacejka, is:

y(x) = D·sin(C·arctan(B·x − E·(B·x − arctan(B·x))))

where B, C, D and E represent fitting constants and y is a force or moment resulting from a slip parameter x. The formula may be translated away from the origin of the x–y axes. The Magic Formula model became the basis for many variants.
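A minimal sketch of evaluating the formula (the coefficient values below are illustrative only, not fitted to any real tire):

```python
import math

def magic_formula(x, B, C, D, E):
    """Pacejka Magic Formula: force or moment y for slip parameter x.

    B: stiffness factor, C: shape factor, D: peak value, E: curvature factor.
    """
    return D * math.sin(C * math.atan(B * x - E * (B * x - math.atan(B * x))))

# Illustrative coefficients for a lateral-force curve (D in newtons):
B, C, D, E = 10.0, 1.9, 4000.0, 0.97
lateral_force = [magic_formula(a, B, C, D, E) for a in (0.0, 0.02, 0.05, 0.1)]
```

Because |sin| ≤ 1, the output never exceeds the peak value D, and the curve passes through the origin; fitting a tire consists of choosing B, C, D and E (per force component) against measured data.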
Professional activities
Pacejka was a co-founder in 1972 and editor-in-chief of Vehicle System Dynamics–International Journal of Vehicle Mechanics and Mobility until 1989. At the time of the founding of the journal, Pacejka had been an associate professor at Delft University, specializing in vehicle dynamics. His 1966 doctoral thesis addressed the "wheel shimmy problem". He published approximately 90 academic papers and was advisor to 15 PhD and 170 M.Sc. graduate students.
See also
Important publications in vehicle dynamics
Bicycle and motorcycle dynamics
Bibliography
Pacejka, H. B., The wheel shimmy phenomenon: A theoretical and experimental investigation with particular reference to the nonlinear problem (Analysis of shimmy in pneumatic tires due to lateral flexibility for stationary and nonstationary conditions), Ph.D. Thesis, Delft University of Technology, Delft, 1966.
Bakker, E.; Nyborg, L.; Pacejka, H. B., Tyre modelling for use in vehicle dynamics studies, Society of Automotive Engineers, Warrendale, PA, January 1987.
Pacejka, H. B. Tire and Vehicle Dynamics, Butterworth-Heinemann, Oxford, 2002.
References
External links
Pacejka's works
Automotive engineers
Academic staff of the Delft University of Technology
Tire industry people
1934 births
2017 deaths | Hans B. Pacejka | [
"Engineering"
] | 735 | [
"Automotive engineering",
"Automotive engineers"
] |
10,897,444 | https://en.wikipedia.org/wiki/Riveting%20machine | A riveting machine is used to automatically set (squeeze) rivets in order to join materials together. The riveting machine offers greater consistency, productivity, and lower cost when compared to manual riveting.
Types
Automatic feed riveting machines include a hopper and feed track which automatically delivers and presents the rivet to the setting tools which overcomes the need for the operator to position the rivet. The downward force required to deform the rivet with an automatic riveting machine is created by a motor and flywheel combination, pneumatic cylinder, or hydraulic cylinder. Manual feed riveting machines usually have a mechanical lever to deliver the setting force from a foot pedal or hand lever.
Riveting machines can be sub-divided into two broad groups — impact riveting machines and orbital (or radial) riveting machines.
Impact riveting
Impact riveting machines set the rivet by driving it downwards, through the materials to be joined, and into a forming tool (known as a rollset). This action rolls the end of the rivet over in the rollset, flaring it out and thus joining the materials together. Impact riveting machines are very fast, and a cycle time of 0.5 seconds is typical.
Orbital riveting
Orbital riveting machines have a spinning forming tool (known as a peen) which is gradually lowered into the rivet which spreads the material of the rivet into a desired shape depending upon the design of the tool. Orbital forming machines offer the user more control over the riveting cycle but the trade off is in cycle time which can be 2 or 3 seconds.
Each type of riveting machine has distinct features and benefits. The orbital process differs from impact and spiralform riveting: it requires less downward force than either, and its tooling typically lasts longer.
Orbital riveting machines are used in a wide range of applications including brake linings for commercial vehicles, aircraft, and locomotives, textile and leather goods, metal brackets, window and door furniture, latches and even mobile phones. Many materials can be riveted together using orbital riveting machines including delicate and brittle materials, and sensitive electrical or electronic components.
The orbital riveting process uses a forming tool mounted at a 3 or 6° angle. The forming tool contacts the material and then presses it while rotating until the final form is achieved. The final form often has height and/or diameter specifications.
Pneumatic orbital riveting machines typically provide downward force in the range. Hydraulic orbital riveting machines typically provide downward force in the range.
Radial (Spiralform) riveting
Radial riveting is subtly different from orbital forming. Where high-quality joints are demanded, however, radial riveting is often the appropriate procedure because of its low cycle time, the small force needed, and the high quality of the results obtained.
The riveting peen describes a rose-petal path, deforming the rivet in three directions: radially outwards, radially inwards, and also tangentially.
Excellent surface structure of the closing head: With the Radial riveting process, the tool itself does not rotate. The friction between tool and work-piece is thus at a minimum. The result is an excellent surface structure.
Low workpiece loading: Even bakelite or ceramic parts can be riveted. Lateral forces are negligible. Clamping is usually unnecessary.
Rollerform riveting
Rollerforming is a subset of orbital forming. Rollerforming uses the same powerhead as orbital forming but, instead of a peen, has multiple wheels that circle the workpiece, joining two similar or dissimilar materials with a smooth, gentle bond as the rollers press downward or inward on the piece.
Automatic drilling and riveting machine
These machines take the automation one step farther by clamping the material and drilling or countersinking the hole in addition to riveting. They are commonly used in the aerospace industry because of the large number of holes and rivets required to assemble the aircraft skin.
References
See also
Ring binder
Hydraulic tools
Metalworking tools
Pneumatic tools | Riveting machine | [
"Physics"
] | 971 | [
"Physical systems",
"Hydraulics",
"Hydraulic tools"
] |
14,525,776 | https://en.wikipedia.org/wiki/Neurotensin%20receptor%201 | Neurotensin receptor type 1 is a protein that in humans is encoded by the NTSR1 gene. For a crystal structure of NTS1, see pdb code 4GRV. In addition, high-resolution crystal structures have been determined in complex with the peptide full agonist NTS8-13, the non-peptide full agonist SRI-9829, the partial agonist RTI-3a, and the antagonists / inverse agonists SR48692 and SR142948A, as well as in the ligand-free apo state., see PDB codes 6YVR (NTSR1-H4X:NTS8–13), 6Z4V (NTSR1-H4bmX:NTS8–13), 6Z8N (NTSR1-H4X:SRI-9829), 6ZA8 (NTSR1-H4X:RTI-3a), 6Z4S (NTSR1-H4bmX:SR48692), 6ZIN (NTSR1-H4X:SR48692), 6Z4Q (NTSR1-H4X: SR142948A), and 6Z66 (apo NTSR1-H4X).
Function
Neurotensin receptor 1, also called NTR1, belongs to the large superfamily of G-protein coupled receptors and is considered a class-A GPCR. NTSR1 mediates multiple biological processes through modulation by neurotensin, such as low blood pressure, high blood sugar, low body temperature, antinociception, anti-neuronal damage and regulation of intestinal motility and secretion.
Ligands
ML314 – β-arrestin biased agonist
Neurotensin (NT1)
See also
Neurotensin receptor
References
Further reading
External links
G protein-coupled receptors | Neurotensin receptor 1 | [
"Chemistry"
] | 403 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,526,528 | https://en.wikipedia.org/wiki/Science%20Communication%20Prize | The Science Communication Prize is an annual award in science writing given by the European Commission.
It began as part of the Descartes Prize in 2004, but in 2007 it was separated into its own prize.
It is a "prize of prizes" that is only open to winners of other award schemes from the year preceding the award. Eligible forms of science communication include public engagement, written communication including newspaper articles and popular science books, audio-visual media including TV programmes and websites, and "innovative action".
Proposals (also referred to as submissions) received are judged, and a shortlist of nominees is announced, from which five Laureates (finalists) and five Winners are announced at a prize ceremony in December each year.
References
External links
Official EU site
FP6-2005-Science-and-society-18: René Descartes Prizes 2006. Call for proposal
Science communication awards
Awards established in 2004 | Science Communication Prize | [
"Technology"
] | 180 | [
"Science and technology awards",
"Science award stubs",
"Science communication awards"
] |
14,526,771 | https://en.wikipedia.org/wiki/Standards%20of%20Fundamental%20Astronomy | The Standards of Fundamental Astronomy (SOFA) software libraries are a collection of subroutines that implement official International Astronomical Union (IAU) algorithms for astronomical computations.
As of February 2009 they are available in both Fortran and C source code format.
Capabilities
The subroutines in the libraries cover the following areas:
Calendars
Time scales
Earth's rotation and sidereal time
Ephemerides (limited precision)
Precession, nutation, polar motion
Proper motion
Star catalog conversions
Astrometric transformations
Galactic Coordinates
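The calendar routines convert between Gregorian calendar dates and Julian dates. A minimal, self-contained sketch of that conversion (plain Python, not the SOFA code itself; in SOFA's C library this task is performed by iauCal2jd, which returns the zero point 2400000.5 and the Modified Julian Date separately to preserve precision):

```python
def cal_to_mjd(year, month, day):
    """Gregorian calendar date -> Modified Julian Date at 0h.

    Uses the standard integer-arithmetic conversion via the
    Julian Day Number (valid for dates in the Gregorian calendar).
    """
    a = (14 - month) // 12
    y = year + 4800 - a
    m = month + 12 * a - 3
    # Julian Day Number of the given date at noon:
    jdn = day + (153 * m + 2) // 5 + 365 * y + y // 4 - y // 100 + y // 400 - 32045
    # MJD = JD - 2400000.5, and JD at 0h is jdn - 0.5:
    return jdn - 2400001

# The MJD epoch is 1858 November 17 (MJD 0); 2000 January 1 is MJD 51544.
```

The real SOFA routines additionally validate the input date and report status codes; this sketch shows only the arithmetic core.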
Licensing
As of the February 2009 release, SOFA licensing changed to allow use for any purpose, provided certain requirements are met. Previously, commercial usage was specifically excluded and required written agreement of the SOFA board.
See also
Naval Observatory Vector Astrometry Subroutines
References
External links
SOFA Home Page
Scholarpedia overview of SOFA
International Astronomical Union Working Group "Standards of Fundamental Astronomy"
Celestial mechanics
Astronomical coordinate systems
Numerical software
Astronomy software | Standards of Fundamental Astronomy | [
"Physics",
"Astronomy",
"Mathematics"
] | 189 | [
"Classical mechanics stubs",
"Works about astronomy",
"Classical mechanics",
"Astrophysics",
"Astronomy stubs",
"Astronomical coordinate systems",
"Astrophysics stubs",
"Astronomy software",
"Numerical software",
"Coordinate systems",
"Celestial mechanics",
"Mathematical software"
] |
14,526,859 | https://en.wikipedia.org/wiki/Installment%20sales%20method | The installment sales method is one of several approaches used to recognize revenue under the US GAAP, specifically when revenue and expense are recognized at the time of cash collection rather than at the time of sale. Under the US GAAP, it is the principal method of revenue recognition when the recognition occurs subsequently to the sale.
Installment sales method
The installment sales method is used to recognize revenue after the sale has occurred and when sales are stipulated under very extended cash collection terms. In general, when the risk of not being able to collect is reasonably high and when there is no reasonable basis for estimating the proportion of installment accounts, revenue recognition is deferred, and the installment sales method is used. The installment sales method is typically used to account for sales of consumer durables, retail land sales, and retirement property. Under the cost recovery method, another method to recognize income after the sale is made, no profit is recognized until all the costs are recovered.
Calculation under the installment sales method
The installment sales method recognizes revenue and income proportionately as cash is collected. The amount recognized in any period is thus based on two factors:
The gross profit percentage: the gross profit on the sale divided by total installment sales for the year.
The amount of cash collected on installment accounts receivable.
Below is an example of calculation of installment sales for years 2009 and 2010.
2009 income from installment sales calculation:
The income recognized in 2009 equals cash collections in 2009 multiplied by the gross profit percentage in 2009 and is calculated as follows:
$300,000×30% = $90,000
Such income is shown on the 2009 income statement as 2009 income from installment sales.
2009 Deferred Gross Profit calculation:
The deferred gross profit is an A/R contra-account and is the difference between gross profit and recognized income and is calculated as follows:
$360,000 − $90,000 = $270,000
The deferred gross profit is thus deferred and recognized in income in subsequent periods, i.e. when the installment receivables are collected in cash.
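The calculation above can be sketched as follows (the helper name is illustrative; the $1,200,000 sales figure is implied by the stated 30% gross profit percentage and $360,000 gross profit):

```python
def installment_income(total_sales, gross_profit, collections):
    """Installment sales method: recognize gross profit in proportion
    to cash collected; the remainder sits in deferred gross profit."""
    gp_pct = gross_profit / total_sales                 # gross profit percentage
    recognized = {year: cash * gp_pct for year, cash in collections.items()}
    deferred = gross_profit - sum(recognized.values())  # A/R contra-account balance
    return recognized, deferred

# 2009 sale: $360,000 gross profit at a 30% rate (implying $1,200,000 of sales),
# with $300,000 collected in cash during 2009.
recognized, deferred = installment_income(1_200_000, 360_000, {2009: 300_000})
# recognized[2009] -> 90,000 income from installment sales
# deferred         -> 270,000 deferred gross profit
```

Collections in later years are recognized the same way at that sale year's gross profit percentage, drawing down the deferred balance.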
2010 income from installment sales is $288,800 and calculated as follows:
A more comprehensive table would clearly show gross profit and deferred income recognized for each year: 2009 and 2010.
Installment sales and the related costs of goods sold must be tracked by individual year in order to compute the gross profit percentage that applies to each year. Furthermore, the accounting system must correctly match the cash collections with the specific sales year so that the correct gross profit percentage can be applied.
On the balance sheet, "the accounts receivable - installment sales" is classified as a current asset if it is due within 12 months of the balance sheet date. Otherwise, it is classified as a long-term asset.
Under the GAAP, the interest component of the periodic cash proceeds is computed separately. In fact, interest payments are not considered when the recognized gross profit is computed on installment sales. Certain procedures differentiate between principal and interest payments on customer receivables.
Comparison to the cash and accrual method
Cash method – The cash method requires that an amount be included in gross income when it is actually or constructively received. The installment method allows greater deferral when the payment is received in the form of a negotiable note. The cash method does not allow for distinguishing between cost recovery and gain.
Accrual method – The accrual method requires income to be recognized as soon as the taxpayer has a right to the income regardless of when the payment is actually received. As such, the taxpayer would have to recognize the full amount of the sale despite the fact that the purchase price may not be paid in full for years.
See also
Tax
Doctrine of cash equivalence
Accounting methods
Installment sale (USA)
Revenue recognition
Tax accounting
References
Sources
Accounting systems | Installment sales method | [
"Technology"
] | 750 | [
"Information systems",
"Accounting systems"
] |
14,526,992 | https://en.wikipedia.org/wiki/Adenosine%203%27%2C5%27-bisphosphate | Adenosine 3',5'-bisphosphate is a form of an adenosine nucleotide with two phosphate groups attached to different carbons in the ribose ring. This is distinct from adenosine diphosphate, where the two phosphate groups are attached in a chain to the 5' carbon atom in the ring.
Adenosine 3',5'-bisphosphate is produced as a product of sulfotransferase enzymes from the donation of a sulfate group from the coenzyme 3'-phosphoadenosine-5'-phosphosulfate. This product is then hydrolysed by 3'(2'),5'-bisphosphate nucleotidase to give adenosine monophosphate, which can then be recycled into adenosine triphosphate.
See also
Adenine
Sulfur metabolism
Acetyl-CoA
References
Nucleotides
Sulfur metabolism | Adenosine 3',5'-bisphosphate | [
"Chemistry"
] | 199 | [
"Sulfur metabolism",
"Metabolism"
] |
14,527,002 | https://en.wikipedia.org/wiki/Mukaiyama%20Taxol%20total%20synthesis | The Mukaiyama taxol total synthesis published by the group of Teruaki Mukaiyama of the Tokyo University of Science between 1997 and 1999 was the 6th successful taxol total synthesis. The total synthesis of Taxol is considered a hallmark in organic synthesis.
This version is a linear synthesis with ring formation taking place in the order C, B, A, D. Unlike the other published methods, the tail synthesis follows an original design. Teruaki Mukaiyama is an expert on aldol reactions, and not surprisingly his Taxol version contains no fewer than 5 of these reactions. Other key reactions encountered in this synthesis are a pinacol coupling and a Reformatskii reaction. In terms of raw materials the C20 framework is built up from L-serine (C3), isobutyric acid (C4), glycolic acid (C2), methyl bromide (C1), methyl iodide (C1), 2,3-dibromopropene (C3), acetic acid (C2) and homoallyl bromide (C4).
Synthesis C ring
The lower rim of the cyclooctane B ring containing the first 5 carbon atoms was synthesized in a semisynthesis starting from naturally occurring L-serine (scheme 1). This route started with conversion of the amino group of the serine methyl ester (1) to the diol ester 2 via diazotization (sodium nitrite/sulfuric acid). After protection of the primary alcohol group to a (t-butyldimethyl) TBS silyl ether (TBSCl / imidazole) and that of the secondary alcohol group with a (Bn) benzyl ether (benzyl imidate, triflic acid), the aldehyde 3 was reacted with the methyl ester of isobutyric acid (4) in an Aldol addition to alcohol 5 with 65% stereoselectivity. This group was protected as a PMB (p-methoxybenzyl) ether (again through an imidate) in 6 which enabled organic reduction of the ester to the aldehyde in 7 with DIBAL.
Completing the cyclooctane ring required 3 more carbon atoms that were supplied by a C2 fragment in an aldol addition and a Grignard C1 fragment (scheme 2). A Mukaiyama aldol addition (magnesium bromide / toluene) took place between aldehyde 7 and ketene silyl acetal 8 with 71% stereoselectivity to alcohol 9 which was protected as the TBS ether 10 (TBSOTf, 2,6-lutidine). The ester group was reduced with DIBAL to an alcohol and then back oxidized to aldehyde 11 by Swern oxidation. Alkylation by methyl magnesium bromide to alcohol 12 and another Swern oxidation gave ketone 13. This group was converted to the silyl enol ether 14 (LHMDS, TMSCl) enabling it to react with NBS to alkyl bromide 15. The C20 methyl group was introduced as methyl iodide in a nucleophilic substitution with a strong base (LHMDS in HMPA) to bromide 16. Then in preparation to ring-closure the TBS ether was deprotected (HCl/THF) to an alcohol which was converted to the aldehyde 17 in a Swern oxidation. The ring-closing reaction was a Reformatskii reaction with Samarium(II) iodide and acetic acid to acetate 18. The stereochemistry of this particular step was of no consequence because the acetate group is dehydrated to the alkene 19 with DBU in benzene.
Synthesis B ring
The C5 fragment 24 required for the synthesis of the C ring (scheme 3) was prepared from 2,3-dibromopropene (20) by reaction with ethyl acetate (21), n-butyllithium and a copper salt, followed by organic reduction of acetate 22 to alcohol 23 (lithium aluminium hydride) and its TES silylation. Michael addition of 24 with the cyclooctane 19 to 25 with t-BuLi was catalyzed by copper cyanide. After removal of the TES group (HCl, THF), the alcohol 26 was oxidized to aldehyde 27 (TPAP, NMO), which enabled the intramolecular aldol reaction to bicycle 28.
Synthesis A ring
Ring A synthesis (scheme 4) started with reduction of the C9 ketone group in 28 to diol 29 with alane in toluene followed by diol protection in 30 as a dimethyl carbonate. This allowed selective oxidation of the C1 alcohol with DDQ after deprotection to ketone 31. This compound was alkylated to 32 at the C1 ketone group with the Grignard homoallyl magnesium bromide (C4 fragment completing the carbon framework) and deprotected at C11 (TBAF) to diol 33. By reaction with cyclohexylmethylsilyldichloride both alcohol groups participated in a cyclic silyl ether (34) which was again cleaved by reaction with methyl lithium exposing the C11 alcohol in 35. The A ring closure required two ketone groups for a pinacol coupling which were realized by oxidation of the C11 alcohol (TPAP, NMO) to ketone 36 and Wacker oxidation of the allyl group to diketone 37. After formation of the pinacol product 38 the benzyl groups (sodium, ammonia) and the trialkylsilyl groups (TBAF) were removed to form pentaol 39.
The pentaol 39 was protected twice: the two bottom hydroxyl groups as a carbonate ester (bis(trichloromethyl)carbonate, pyridine) and the C10 hydroxyl group as the acetate, forming 40. The acetonide group was removed (HCl, THF), the C7 hydroxyl group protected as a TES silyl ether and the C11 OH group oxidized (TPAP, NMO) to ketone 41. The ring A diol group was next removed in a combined elimination reaction and Barton deoxygenation with 1,1'-thiocarbonyldiimidazole forming alkene 42. Finally the C15 hydroxyl group was introduced by oxidation at the allyl position in two steps, with PCC and sodium acetate (to the enone) and with K-selectride to alcohol 43, which was protected as a TES ether in 44.
Synthesis D ring
The synthesis of the D ring (scheme 6) started from 44 with allylic bromination with copper(I) bromide and benzoyl tert-butyl peroxide to bromide 45. By adding even more bromide, another bromide 46 formed (both compounds are in chemical equilibrium) with the bromine atom in an axial position. Osmium tetroxide added two hydroxyl groups to the exocyclic double bond in diol 47 and oxetane ring-closure to 48 took place with DBU in a nucleophilic substitution. Then, acylation of the C4 hydroxyl group (acetic anhydride, DMAP, pyridine) resulted in acetate 49. In the final steps phenyllithium opened the ester group to form hydroxy carbonate 50, both TES groups were removed (HF, pyr) to triol 51 (baccatin III) and the C7 hydroxyl group was back-protected to 52.
Tail synthesis
The amide tail synthesis (scheme 7) was based on an asymmetric aldol reaction. The starting compound is the commercially available benzyloxyacetic acid 53, which was converted to the thioester 55 (ethanethiol) through the acid chloride 54 (thionyl chloride, pyridine). This formed the silyl enol ether 56 (n-butyllithium, trimethylsilyl chloride, diisopropylamine), which reacted with chiral amine catalyst 58, tin triflate and n-Bu2Sn(OAc)2 in a Mukaiyama aldol addition with benzaldehyde to alcohol 59 with 99% anti selectivity and 96% ee. The next step, converting the alcohol group to an amine in 60, was a Mitsunobu reaction (hydrogen azide, diethyl azodicarboxylate, triphenylphosphine, with azide reduction to the amine by Ph3P). The amine group was benzoylated with benzoyl chloride (61) and hydrolysis removed the thioester group in 62.
Tail addition
In the final synthetic steps (scheme 8) the amide tail 62 was added to ABCD ring 52 in an esterification catalysed by o,o'-di(2-pyridyl) thiocarbonate (DPTC) and DMAP forming ester 63. The Bn protecting group was removed by hydrogenation using palladium hydroxide on carbon (64) and finally the TES group was removed by HF and pyridine to yield Taxol 65.
See also
Danishefsky Taxol total synthesis
Holton Taxol total synthesis
Kuwajima Taxol total synthesis
Nicolaou Taxol total synthesis
Paclitaxel total synthesis
Wender Taxol total synthesis
References
Bibliography
Citations
External links
Mukaiyama Taxol Synthesis @ SynArchive.com
Total synthesis
Taxanes | Mukaiyama Taxol total synthesis | [
"Chemistry"
] | 2,030 | [
"Total synthesis",
"Chemical synthesis"
] |
14,527,315 | https://en.wikipedia.org/wiki/P2RY4 | P2Y purinoceptor 4 is a protein that in humans is encoded by the P2RY4 gene.
The product of this gene, P2Y4, belongs to the family of G-protein coupled receptors. This family has several receptor subtypes with different, in some cases overlapping, pharmacological selectivity for various adenosine and uridine nucleotides. This receptor is responsive to uridine nucleotides, partially responsive to ATP, and not responsive to ADP.
See also
P2Y receptor
References
Further reading
External links
G protein-coupled receptors | P2RY4 | [
"Chemistry"
] | 128 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,527,928 | https://en.wikipedia.org/wiki/Dihydrobiopterin | Dihydrobiopterin (BH2) is a pteridine compound produced in the synthesis of L-DOPA, dopamine, serotonin, norepinephrine and epinephrine. It is restored to the required cofactor tetrahydrobiopterin by dihydrobiopterin reductase.
See also
Pteridine
Tetrahydrobiopterin
Biomolecules | Dihydrobiopterin | [
"Chemistry",
"Biology"
] | 92 | [
"Natural products",
"Biotechnology stubs",
"Organic compounds",
"Biochemistry stubs",
"Structural biology",
"Biomolecules",
"Biochemistry",
"Molecular biology"
] |
14,528,017 | https://en.wikipedia.org/wiki/List%20of%20incomplete%20proofs | This page lists notable examples of incomplete or incorrect published mathematical proofs. Most of these were accepted as complete or correct for several years but later discovered to contain gaps or errors. There are both examples where a complete proof was later found, or where the alleged result turned out to be false.
Results later proved rigorously
Euclid's Elements. Euclid's proofs are essentially correct, but strictly speaking sometimes contain gaps because he tacitly uses some unstated assumptions, such as the existence of intersection points. In 1899 David Hilbert gave a complete set of (second order) axioms for Euclidean geometry, called Hilbert's axioms, and between 1926 and 1959 Tarski gave some complete sets of first order axioms, called Tarski's axioms.
Isoperimetric inequality. For three dimensions it states that the shape enclosing the maximum volume for its surface area is the sphere. It was formulated by Archimedes but not proved rigorously until the 19th century, by Hermann Schwarz.
Infinitesimals. In the 18th century there was widespread use of infinitesimals in calculus, though these were not really well defined. Calculus was put on firm foundations in the 19th century, and Robinson put infinitesimals on a rigorous basis with the introduction of nonstandard analysis in the 20th century.
Fundamental theorem of algebra (see History). Many incomplete or incorrect attempts were made at proving this theorem in the 18th century, including by d'Alembert (1746), Euler (1749), de Foncenex (1759), Lagrange (1772), Laplace (1795), Wood (1798), and Gauss (1799). The first rigorous proof was published by Argand in 1806.
Dirichlet's theorem on arithmetic progressions. In 1808 Legendre published an attempt at a proof of Dirichlet's theorem, but as Dupré pointed out in 1859 one of the lemmas used by Legendre is false. Dirichlet gave a complete proof in 1837.
The proofs of the Kronecker–Weber theorem by Kronecker (1853) and Weber (1886) both had gaps. The first complete proof was given by Hilbert in 1896.
In 1879, Alfred Kempe published a purported proof of the four color theorem, whose validity as a proof was accepted for eleven years before it was refuted by Percy Heawood. Peter Guthrie Tait gave another incorrect proof in 1880 which was shown to be incorrect by Julius Petersen in 1891. Kempe's proof did, however, suffice to show the weaker five color theorem. The four-color theorem was eventually proved by Kenneth Appel and Wolfgang Haken in 1976.
Schröder–Bernstein theorem. In 1896 Schröder published a proof sketch which, however, was shown to be faulty by Alwin Reinhold Korselt in 1911 (confirmed by Schröder).
Jordan curve theorem. There has been some controversy about whether Jordan's original proof of this in 1887 contains gaps. Oswald Veblen in 1905 claimed that Jordan's proof is incomplete, but in 2007 Hales said that the gaps are minor and that Jordan's proof is essentially complete.
In 1905 Lebesgue tried to prove the (correct) result that a function implicitly defined by a Baire function is Baire, but his proof incorrectly assumed that the projection of a Borel set is Borel. Suslin pointed out the error and was inspired by it to define analytic sets as continuous images of Borel sets.
Dehn's lemma. Dehn published an attempted proof in 1910, but Kneser found a gap in 1929. It was finally proved in 1956 by Christos Papakyriakopoulos.
Hilbert's sixteenth problem about the finiteness of the number of limit cycles of a plane polynomial vector field. Henri Dulac published a partial solution to this problem in 1923, but in about 1980 Écalle and Ilyashenko independently found a serious gap, and fixed it in about 1991.
In 1929 Lazar Lyusternik and Lev Schnirelmann published a proof of the theorem of the three geodesics, which was later found to be flawed. The proof was completed by Werner Ballmann about 50 years later.
Littlewood–Richardson rule. Robinson published an incomplete proof in 1938, though the gaps were not noticed for many years. The first complete proofs were given by Marcel-Paul Schützenberger in 1977 and Thomas in 1974.
Class numbers of imaginary quadratic fields. In 1952 Heegner published a solution to this problem. His paper was not accepted as a complete proof as it contained a gap, and the first complete proofs were given in about 1967 by Baker and Stark. In 1969 Stark showed how to fill the gap in Heegner's paper.
In 1954 Igor Shafarevich published a proof that every finite solvable group is a Galois group over the rationals. However Schmidt pointed out a gap in the argument at the prime 2, which Shafarevich fixed in 1989.
Nielsen realization problem. Kravetz claimed to solve this in 1959 by first showing that Teichmüller space is negatively curved, but in 1974 Masur showed that it is not negatively curved. The Nielsen realization problem was finally solved in 1980 by Kerckhoff.
Yamabe problem. Yamabe claimed a solution in 1960, but Trudinger discovered a gap in 1968, and a complete proof was not given until 1984.
Mordell conjecture over function fields. Manin published a proof in 1963, but found and corrected a gap in the proof.
In 1973 Britton published a 282-page attempted solution of Burnside's problem. In his proof he assumed the existence of a set of parameters satisfying some inequalities, but Adian pointed out that these inequalities were inconsistent. Novikov and Adian had previously found a correct solution around 1968.
Classification of finite simple groups. In 1983, Gorenstein announced that the proof of the classification had been completed, but he had been misinformed about the status of the proof of classification of quasithin groups, which had a serious gap in it. A complete proof for this case was published by Aschbacher and Smith in 2004.
In 1986, Spencer Bloch published the paper "Algebraic Cycles and Higher K-theory" which introduced a higher Chow group, a precursor to motivic cohomology. The paper used an incorrect moving lemma; the lemma was later replaced by 30 pages of complex arguments that "took many years to be accepted as correct."
Kepler conjecture. Hsiang published an incomplete proof of this in 1993. In 1998 Hales published a proof depending on long computer calculations.
Incorrect results
In 1759 Euler claimed that there were no closed knight's tours on a chessboard with 3 rows, but in 1917 Ernest Bergholt found tours on 3 by 10 and 3 by 12 boards.
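Both halves of this story are small enough to check by computer. The sketch below is an illustration, not Bergholt's original construction: a backtracking search with Warnsdorff-style move ordering (fewest onward moves first) confirms that no closed knight's tour exists on a 3×4 board, while one does exist on the 3×10 board.

```python
# Backtracking search for closed knight's tours on small boards.
# Warnsdorff ordering keeps the 3x10 search fast; the 3x4 search is
# exhaustive either way, since the board has only 12 squares.

MOVES = [(1, 2), (2, 1), (-1, 2), (-2, 1), (1, -2), (2, -1), (-1, -2), (-2, -1)]

def closed_knight_tour(rows, cols):
    """Return a closed knight's tour as a list of squares, or None if none exists."""
    def neighbors(sq):
        r, c = sq
        return [(r + dr, c + dc) for dr, dc in MOVES
                if 0 <= r + dr < rows and 0 <= c + dc < cols]

    start = (0, 0)
    path, visited = [start], {start}

    def extend():
        if len(path) == rows * cols:
            return start in neighbors(path[-1])  # must close back to the start
        options = [sq for sq in neighbors(path[-1]) if sq not in visited]
        # Warnsdorff heuristic: try the most constrained squares first.
        options.sort(key=lambda sq: sum(n not in visited for n in neighbors(sq)))
        for sq in options:
            path.append(sq)
            visited.add(sq)
            if extend():
                return True
            path.pop()
            visited.remove(sq)
        return False

    return path if extend() else None

print(closed_knight_tour(3, 4) is None)        # True: Euler's claim holds here
print(closed_knight_tour(3, 10) is not None)   # True: Bergholt's board has a tour
```

Because the search backtracks fully, the 3×4 result is an exhaustive proof of nonexistence for that size, while the 3×10 run exhibits an explicit counterexample to Euler's general claim.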
Euler's conjecture on Graeco-Latin squares. In the 1780s Euler conjectured that no such squares exist for any oddly even number n ≡ 2 (mod 4). In 1959, R. C. Bose and S. S. Shrikhande constructed counterexamples of order 22. Then E. T. Parker found a counterexample of order 10 using a one-hour computer search. Finally Parker, Bose, and Shrikhande showed this conjecture to be false for all n ≥ 10.
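The order-2 case of Euler's conjecture is small enough to verify exhaustively, and for odd orders a classical cyclic construction always yields a Graeco-Latin square. The sketch below illustrates both; it is not Parker's order-10 counterexample, which is far more intricate.

```python
from itertools import permutations

def latin_squares(n):
    """Enumerate all Latin squares of order n (feasible only for tiny n)."""
    def build(acc):
        if len(acc) == n:
            yield tuple(acc)
            return
        for p in permutations(range(n)):
            # each new row must avoid repeating a symbol in any column
            if all(p[c] != row[c] for row in acc for c in range(n)):
                yield from build(acc + [p])
    yield from build([])

def orthogonal(a, b, n):
    """Orthogonal (Graeco-Latin) iff superimposing gives all n*n ordered pairs."""
    return len({(a[i][j], b[i][j]) for i in range(n) for j in range(n)}) == n * n

# Order 2: no pair drawn from the two Latin squares is orthogonal.
squares = list(latin_squares(2))
print(any(orthogonal(a, b, 2) for a in squares for b in squares))  # False

# Odd order: the cyclic pair (i+j, i+2j) mod n is always Graeco-Latin.
n = 5
A = [[(i + j) % n for j in range(n)] for i in range(n)]
B = [[(i + 2 * j) % n for j in range(n)] for i in range(n)]
print(orthogonal(A, B, n))  # True
```

The cyclic pair works for any odd n because the map (i, j) → (i+j, i+2j) mod n is invertible; order 6, the case Tarry settled by hand, is already too large for this kind of naive enumeration.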
In 1798 A. M. Legendre claimed that 6 is not the sum of 2 rational cubes, which as Lamé pointed out in 1865 is false, as 6 = (37/21)³ + (17/21)³.
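Lamé's counterexample is easy to verify in exact rational arithmetic:

```python
from fractions import Fraction

# Legendre claimed 6 is not a sum of two rational cubes;
# Lamé's 1865 counterexample checks out exactly.
x, y = Fraction(37, 21), Fraction(17, 21)
print(x**3 + y**3)  # 6
```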
In 1803, Gian Francesco Malfatti claimed to prove that a certain arrangement of three circles would cover the maximum possible area inside a right triangle. However, to do so he made certain unwarranted assumptions about the configuration of the circles. It was shown in 1930 that circles in a different configuration could cover a greater area, and in 1967 that Malfatti's configuration was never optimal. See Malfatti circles.
In 1806 André-Marie Ampère claimed to prove that a continuous function is differentiable at most points (though it is not entirely clear what he was claiming as he did not give a precise definition of a function). However, in 1872 Weierstrass gave an example of a continuous function that was not differentiable anywhere: The Weierstrass function.
Intersection theory. In 1848 Steiner claimed that the number of conics tangent to 5 given conics is 7776 = 6⁵, but later realized this was wrong. The correct number 3264 was found by Berner in 1865 and by Ernest de Jonquières around 1859 and by Chasles in 1864 using his theory of characteristics. However these results, like many others in classical intersection theory, do not seem to have been given complete proofs until the work of Fulton and MacPherson in about 1978.
Dirichlet's principle. This was used by Riemann in 1851, but Weierstrass found a counterexample to one version of this principle in 1870, and Hilbert stated and proved a correct version in 1900.
In 1878 Cayley incorrectly claimed that there are three different groups of order 6. This mistake is strange because in an earlier 1854 paper he correctly stated that there are just two such groups.
Frege's foundations of mathematics, developed from his 1879 book Begriffsschrift through the Grundgesetze der Arithmetik, turned out to be inconsistent because of Russell's paradox, found in 1901.
In 1885, Evgraf Fedorov classified the convex polyhedra with congruent rhombic faces, but missed a case. Stanko Bilinski in 1960 rediscovered the Bilinski dodecahedron (forgotten after its previous 1752 publication) and proved that, with the addition of this shape, the classification was complete.
Wronskians. In 1887 Mansion claimed in his textbook that if a Wronskian of some functions vanishes everywhere then the functions are linearly dependent. In 1889 Peano pointed out the counterexample x² and x|x|. The result is correct if the functions are analytic.
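Peano's counterexample can be checked directly: the Wronskian fg′ − f′g of f(x) = x² and g(x) = x|x| (whose derivative is 2|x| everywhere, including at 0) vanishes identically, yet the two functions are linearly independent. A small sketch:

```python
# f(x) = x^2 and g(x) = x|x| have identically zero Wronskian, yet no
# constants a, b (not both zero) satisfy a*f + b*g = 0 for all x.

def wronskian(x):
    f, fp = x * x, 2 * x            # f and f'
    g, gp = x * abs(x), 2 * abs(x)  # g and g' (valid at x = 0 too)
    return f * gp - fp * g

# Vanishes at every integer sample point (exact integer arithmetic):
print(all(wronskian(x) == 0 for x in range(-5, 6)))  # True

# Independence: a*f(1) + b*g(1) = a + b and a*f(-1) + b*g(-1) = a - b,
# so a*f + b*g = 0 everywhere forces a = b = 0.
```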
Vahlen published a purported example of an algebraic curve in 3-dimensional projective space that could not be defined as the zeros of 3 polynomials, but in 1941 Perron found 3 equations defining Vahlen's curve. In 1961 Kneser showed that any algebraic curve in projective 3-space can be given as the zeros of 3 polynomials.
In 1898 Miller published a paper incorrectly claiming to prove that the Mathieu group M24 does not exist, though in 1900 he pointed out that his proof was wrong.
Little claimed in 1900 that the writhe of a reduced knot diagram is an invariant. However, in 1974 Perko discovered a counterexample called the Perko pair, a pair of knots listed as distinct in tables for many years that are in fact the same.
Hilbert's twenty-first problem. In 1908 Plemelj claimed to have shown the existence of Fuchsian differential equations with any given monodromy group, but in 1989 Bolibruch discovered a counterexample.
In 1925 Ackermann published a proof that a weak system can prove the consistency of a version of analysis, but von Neumann found an explicit mistake in it a few years later. Gödel's incompleteness theorems showed that it is not possible to prove the consistency of analysis using weaker systems.
Groups of order 64. In 1930 Miller published a paper claiming that there are 294 groups of order 64. Hall and Senior showed in 1964 that the correct number is 267.
Church's original published attempt in 1932 to define a formal system was inconsistent, as was his correction in 1933. The consistent part of his system later became the lambda calculus.
Kurt Gödel proved in 1933 that the truth of a certain class of sentences of first-order arithmetic, known in the literature as [∃*∀²∃*, all, (0)], was decidable. That is, there was a method for deciding correctly whether any statement of that form was true. In the final sentence of that paper, he asserted that the same proof would work for the decidability of the larger class [∃*∀²∃*, all, (0)]=, which also includes formulas that contain an equality predicate. However, in the mid-1960s, Stål Aanderaa showed that Gödel's proof would not go through for the larger class, and in 1982 Warren Goldfarb showed that validity of formulas from the larger class was in fact undecidable.
Grunwald–Wang theorem. Wilhelm Grunwald published an incorrect proof in 1933 of an incorrect theorem, and George Whaples later published another incorrect proof. Shianghao Wang found a counterexample in 1948 and published a corrected version of the theorem in 1950.
In 1934 Severi claimed that the space of rational equivalence classes of cycles on an algebraic surface is finite-dimensional, but Mumford showed that this is false for surfaces of positive geometric genus.
Quine published his original description of the system Mathematical Logic in 1940, but in 1942 Rosser showed it was inconsistent. Wang found a correction in 1950; the consistency of this revised system is still unclear.
One of many examples from algebraic geometry in the first half of the 20th century: Severi claimed that a degree-n surface in 3-dimensional projective space has at most (n+2 choose 3) − 4 nodes, but B. Segre pointed out that this was wrong; for example, for degree 6 the maximum number of nodes is 65, achieved by the Barth sextic, which is more than the maximum of 52 claimed by Severi.
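The bound attributed to Severi is the binomial expression (n+2 choose 3) − 4, which reproduces his figure of 52 for degree 6; a one-line check:

```python
from math import comb

# Severi's claimed node bound for a degree-n surface, evaluated at n = 6.
print(comb(6 + 2, 3) - 4)  # 52; the Barth sextic realizes 65 nodes
```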
Rokhlin invariant. Rokhlin incorrectly claimed in 1951 that the third stable stem of the homotopy groups of spheres is of order 12. In 1952 he discovered his error: it is in fact cyclic of order 24. The difference is crucial as it results in the existence of the Rokhlin invariant, a fundamental tool in the theory of 3- and 4-dimensional manifolds.
In 1961, Jan-Erik Roos published an incorrect theorem about the vanishing of the first derived functor of the inverse limit functor under certain general conditions. However, in 2002, Amnon Neeman constructed a counterexample. Roos showed in 2006 that the theorem holds if one adds the assumption that the category has a set of generators.
The Schur multiplier of the Mathieu group M22 is particularly notorious, as it was miscalculated more than once: it was first claimed to have order 3, then a 1968 correction claimed order 6; its order is in fact (currently believed to be) 12. This caused an error in the title of Janko's paper on J4, A new finite simple group of order 86,775,570,046,077,562,880 which possesses M24 and the full covering group of M22 as subgroup: it does not have the full covering group as a subgroup, as the full covering group is larger than was realized at the time.
The original statement of the classification of N-groups by Thompson in 1968 accidentally omitted the Tits group, though he soon fixed this.
In 1967 Reinhardt proposed Reinhardt cardinals, which Kunen showed to be inconsistent with ZFC in 1971, though they are not known to be inconsistent with ZF.
Per Martin-Löf's original version of intuitionistic type theory proposed in 1971 was shown to be inconsistent by Jean-Yves Girard in 1972, and was replaced by a corrected version.
In 1975, Leitzel, Madan, and Queen incorrectly claimed that there are only 7 function fields over finite fields with genus > 0 and class number 1, but in 2013 Stirpe found another; there are in fact exactly 8.
Busemann–Petty problem. Zhang published two papers in the Annals of Mathematics in 1994 and 1999, in the first of which he proved that the Busemann–Petty problem in R⁴ has a negative solution, and in the second of which he proved that it has a positive solution.
Algebraic stacks. A book on algebraic stacks mistakenly claimed that morphisms of algebraic stacks induce morphisms of lisse-étale topoi. The results depending on this were later repaired.
Status unclear
Uniform convergence. In his Cours d'Analyse of 1821, Cauchy "proved" that if a sum of continuous functions converges pointwise, then its limit is also continuous. However, Abel observed three years later that this is not the case. For the conclusion to hold, "pointwise convergence" must be replaced with "uniform convergence". It is not entirely clear that Cauchy's original result was wrong, because his definition of pointwise convergence was a little vague and may have been stronger than the one currently in use, and there are ways to interpret his result so that it is correct. There are many counterexamples using the standard definition of pointwise convergence. For example, a Fourier series of sine and cosine functions, all continuous, may converge pointwise to a discontinuous function such as a step function.
Carmichael's totient function conjecture was stated as a theorem by Robert Daniel Carmichael in 1907, but in 1922 he pointed out that his proof was incomplete. As of 2016 the problem is still open.
Italian school of algebraic geometry. Most gaps in proofs are caused either by a subtle technical oversight, or before the 20th century by a lack of precise definitions. A major exception to this is the Italian school of algebraic geometry in the first half of the 20th century, where lower standards of rigor gradually became acceptable. The result was that there are many papers in this area where the proofs are incomplete, or the theorems are not stated precisely. This list contains a few representative examples, where the result was not just incompletely proved but also hopelessly wrong.
In 1933 George David Birkhoff and Waldemar Joseph Trjitzinsky published a very general theorem on the asymptotics of sequences satisfying linear recurrences. The theorem was popularized by Jet Wimp and Doron Zeilberger in 1985. However, while the result is probably true, as of now (2021) Birkhoff and Trjitzinsky's proof is not generally accepted by experts, and the theorem has been rigorously proved only in special cases.
Jacobian conjecture. Keller asked this as a question in 1939, and in the next few years there were several published incomplete proofs, including 3 by B. Segre, but Vitushkin found gaps in many of them. The Jacobian conjecture is (as of 2016) an open problem, and more incomplete proofs are regularly announced. Later surveys discuss the errors in some of these incomplete proofs.
A strengthening of Hilbert's sixteenth problem asking whether there exists a uniform finite upper bound for the number of limit cycles of planar polynomial vector fields of given degree n. In the 1950s, Evgenii Landis and Ivan Petrovsky published a purported solution, but it was shown wrong in the early 1960s.
In 1954 Zarankiewicz claimed to have solved Turán's brick factory problem about the crossing number of complete bipartite graphs, but Kainen and Ringel later noticed a gap in his proof.
Complex structures on the 6-sphere. In 1969 Alfred Adler published a paper in the American Journal of Mathematics claiming that the 6-sphere has no complex structure. His argument was incomplete, and this is (as of 2016) still a major open problem.
Closed geodesics. In 1978 Wilhelm Klingenberg published a proof that smooth compact manifolds without boundary have infinitely many closed geodesics. His proof was controversial, and there is currently (as of 2016) no consensus on whether his proof is complete.
In 1991, Kapranov and Voevodsky published a paper claiming to prove a version of the homotopy hypothesis. Later, Simpson showed the result of the paper is not true but conjectured that a variant of the result might be true, the variant now known as the Simpson conjecture.
Telescope conjecture. Ravenel announced a refutation of this in 1992, but later withdrew it, and the conjecture is still open.
Matroid bundles. In 2003 Daniel Biss published a paper in the Annals of Mathematics claiming to show that matroid bundles are equivalent to real vector bundles, but in 2009 published a correction pointing out a serious gap in the proof. His correction was based on a 2007 paper by Mnëv.
In 2012, the Japanese mathematician Shinichi Mochizuki released online a series of papers in which he claims to prove the abc conjecture. Despite their later publication in a peer-reviewed journal, his proof has not been accepted as correct by the mainstream mathematical community.
See also
List of long mathematical proofs
List of disproved mathematical ideas
Superseded theories in science
Notes
References
Further reading
— Lists over a hundred pages of (mostly trivial) published errors made by mathematicians.
External links
David Mumford email about the errors of the Italian algebraic geometry school under Severi
The first 9 pages of mention some examples of incorrect results in homotopy theory.
MathOverflow questions
Ilya Nikokoshev, Most interesting mathematics mistake?
Kevin Buzzard what mistakes did the Italian algebraic geometers actually make?
Will Jagy, Widely accepted mathematical results that were later shown wrong?
John Stillwell, What are some correct results discovered with incorrect (or no) proofs?
Moritz. Theorems demoted back to conjectures
Mei Zhang, Proofs shown to be wrong after formalization with proof assistant
StackExchange questions
Steven-Owen, In the history of mathematics, has there ever been a mistake?
Mathematics-related lists
Theorems
Mathematical fallacies | List of incomplete proofs | [
"Mathematics"
] | 4,445 | [
"Mathematical fallacies"
] |
14,528,756 | https://en.wikipedia.org/wiki/Bris%20sextant | The Bris sextant, or Bris Mini-Sextant, is not a sextant proper, but is a small angle-measuring device that can be used for navigation. The Bris is, however, a true reflecting instrument which derives its high accuracy from the same principle of double reflection which is fundamental to the octant, the true sextant, and other reflecting instruments. It differs from other sextants primarily in being a fixed-angle sextant, capable of measuring a few specific angles.
History
Sven Yrvind (Lundin) developed his Bris sextant as part of his quest for low-cost, low-technology equipment for ocean crossings. The Bris is a low-technology, high-precision, fixed-interval instrument. It is made of two narrow, flat pieces of glass (microscope slides) permanently and rigidly mounted in a V-shape to a third flat piece of #12 welding glass, making viewing the sun eye-safe. When the sun or moon is viewed through the V, it is split into eight images. The instrument is small and rugged enough that it can be kept in a 35mm film canister (about 2 cm radius, 3 cm tall) on a lanyard around one's neck.
The Bris sextant is calibrated at a known geographic position with a good clock and a nautical almanac. As the day passes, one works the sight reductions backwards to develop exact angles for each of the images' tops and bottoms. The Sun and Moon have approximately the same angular size from the surface of the Earth, and can use the same calibrations.
In use, one waits until an image's edge touches the horizon, and then records the time and reduces the sight using the recorded angle for that edge of the image.
Etymology
Bris is Swedish for breeze. The name Bris is used by Yrvind for a number of his sail boats.
References
Sources
A three-page article (not available online) on the Bris sextant appeared in Yachting Monthly magazine, June 1997.
A two-page article (not available online) on the Bris sextant appeared in Die Yacht magazine, 22/1997: Mini-Sextant: Mit einem genial einfachen Gerät verblüfft Weltumsegler Sven Lundin jetzt die gesamte Fachwelt ("Mini-sextant: with an ingeniously simple device, circumnavigator Sven Lundin now astounds the entire expert world").
Bris Mini Sextant Instructions, page 2, Sven Yrvind, 1998
External links
Navigational equipment
Celestial navigation
Astronomical instruments | Bris sextant | [
"Astronomy"
] | 522 | [
"Celestial navigation",
"Astronomical instruments"
] |
14,529,239 | https://en.wikipedia.org/wiki/Zoophilia | Zoophilia is a paraphilia in which a person experiences a sexual fixation on non-human animals. Bestiality instead refers to cross-species sexual activity between humans and non-human animals. Due to the lack of research on the subject, it is difficult to conclude how prevalent bestiality is. Zoophilia, however, was estimated in one study to be prevalent in 2% of the population in 2021.
History
The historical perspective on zoophilia and bestiality varies greatly, from the prehistoric era, where depictions of bestiality appear in European rock art, to the Middle Ages, where bestiality was met with execution. In many parts of the world, bestiality is illegal under animal abuse laws or laws dealing with sodomy or crimes against nature.
Terminology
General
Three key terms commonly used in regards to the subject—zoophilia, bestiality, and zoosexuality—are often used somewhat interchangeably. Some researchers distinguish between zoophilia (as a persistent sexual interest in animals) and bestiality (as sexual acts with animals), because bestiality is often not driven by a sexual preference for animals. Some studies have found a preference for animals is rare among people who engage in sexual contact with animals. Furthermore, some zoophiles report they have never had sexual contact with an animal. People with zoophilia are known as "zoophiles", though also sometimes as "zoosexuals", or even very simply "zoos". Zooerasty, sodomy, and zooerastia are other terms closely related to the subject but are less synonymous with the former terms, and are seldom used. "Bestiosexuality" was discussed briefly by Allen (1979), but never became widely established.
Ernest Bornemann coined the separate term zoosadism for those who derive pleasure – sexual or otherwise – from inflicting pain on animals. Zoosadism specifically is one member of the Macdonald triad of precursors to sociopathic behavior.
Zoophilia
The term zoophilia was introduced into the field of research on sexuality in Psychopathia Sexualis (1886) by Krafft-Ebing, who described a number of cases of "violation of animals (bestiality)", as well as "zoophilia erotica", which he defined as a sexual attraction to animal skin or fur. The term zoophilia derives from the combination of two nouns in Greek: ζῷον (zṓion, meaning "animal") and φιλία (philia, meaning "(fraternal) love"). In general contemporary usage, the term zoophilia may refer to sexual activity between human and non-human animals, the desire to engage in such, or to the specific paraphilia (i.e., the atypical arousal) which indicates a definite preference for animals over humans as sexual partners. Although Krafft-Ebing also coined the term zooerasty for the paraphilia of exclusive sexual attraction to animals, that term has since fallen out of use.
Zoosexuality
The term zoosexual was proposed by Hani Miletski in 2002 as a value-neutral term. Usage of zoosexual as a noun (in reference to a person) is synonymous with zoophile, while the adjectival form of the word – as, for instance, in the phrase "zoosexual act" – may indicate sexual activity between a human and an animal. The derivative noun "zoosexuality" is sometimes used by self-identified zoophiles in both support groups and on internet-based discussion forums to designate sexual orientation manifesting as sexual attraction to animals.
Bestiality
Some zoophiles and researchers draw a distinction between zoophilia and bestiality, using the former to describe the desire to form sexual relationships with animals, and the latter to describe the sex acts alone. Confusing the matter yet further, writing in 1962, William H. Masters used the term bestialist specifically in his discussion of zoosadism.
Stephanie LaFarge, an assistant professor of psychiatry at the New Jersey Medical School, and Director of Counseling at the ASPCA, writes that two groups can be distinguished: bestialists, who rape or abuse animals, and zoophiles, who form an emotional and sexual attachment to animals. Colin J. Williams and Martin Weinberg studied self-defined zoophiles via the internet and reported them as understanding the term zoophilia to involve concern for the animal's welfare, pleasure, and consent, as distinct from the self-labelled zoophiles' concept of "bestialists", whom the zoophiles in their study defined as focused on their own gratification. They also quoted a British newspaper saying that zoophilia is a term used by "apologists" for bestiality.
Sexual arousal from watching animals mate is known as faunoiphilia.
Extent of occurrence
The Kinsey reports of 1948 and 1953 estimated the percentage of people in the general population of the United States who had at least one sexual interaction with animals as 8% for males and 5.1% for females (1.5% for pre-adolescent and 3.6% for post-adolescent females), and claimed it was 40–50% for the rural population and even higher among individuals with lower educational status. Some later writers dispute the figures, noting that the study lacked a random sample in that it included a disproportionate number of prisoners, causing sampling bias. Martin Duberman has written that it is difficult to get a random sample in sexual research, but pointed out that when Paul Gebhard, Kinsey's research successor, removed prison samples from the figures, he found the figures were not significantly changed.
By 1974, the farm population in the US had declined by 80 percent compared with 1940, reducing the opportunity to live with animals; Hunt's 1974 study suggests that these demographic changes led to a significant change in reported occurrences of bestiality. The percentage of males who reported sexual interactions with animals in 1974 was 4.9% (1948: 8.3%), and in females in 1974 was 1.9% (1953: 3.6%). Miletski believes this is not due to a reduction in interest but merely a reduction in opportunity.
Nancy Friday's 1973 book on female sexuality, My Secret Garden, comprised around 190 fantasies from different women; of these, 23 involve zoophilic activity.
In one study, psychiatric patients were found to have a significantly higher prevalence rate (55 percent) of reported bestiality, covering both actual sexual contacts (45 percent) and sexual fantasy (30 percent), than the control groups of medical in-patients (10 percent) and psychiatric staff (15 percent). Another study reported that 5.3 percent of the men surveyed had fantasized about sexual activity with an animal during heterosexual intercourse. In a 2014 study, 3% of women and 2.2% of men reported fantasies about having sex with an animal. A 1982 study suggested that 7.5 percent of 186 university students had interacted sexually with an animal. A 2021 review estimated zoophilic behavior occurs in 2% of the general population.
Perspectives on zoophilia
Research perspectives
Zoophilia has been discussed by several sciences: psychology (the study of the human mind), sexology (a relatively new discipline primarily studying human sexuality), ethology (the study of animal behavior), and anthrozoology (the study of human–animal interactions and bonds).
In the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), zoophilia is placed in the classification "other specified paraphilic disorder" ("paraphilias not otherwise specified" in the DSM-III and IV). The World Health Organization takes the same position, listing a sexual preference for animals in its ICD-10 as "other disorder of sexual preference". In the DSM-5, it rises to the level of a diagnosable disorder only when accompanied by distress or interference with normal functioning.
Zoophilia may be covered to some degree by other fields such as ethics, philosophy, law, animal rights and animal welfare. It may also be touched upon by sociology which looks both at zoosadism in examining patterns and issues related to sexual abuse and at non-sexual zoophilia in examining the role of animals as emotional support and companionship in human lives, and may fall within the scope of psychiatry if it becomes necessary to consider its significance in a clinical context. The Journal of Forensic and Legal Medicine (Vol. 18, February 2011) states that sexual contact with animals is almost never a clinically significant problem by itself; it also states that there are several kinds of zoophiles:
Human-animal role-players
Romantic zoophiles
Zoophilic fantasizers
Tactile zoophiles
Fetishistic zoophiles
Sadistic bestials
Opportunistic zoophiles
Regular zoophiles
Exclusive zoophiles
Romantic zoophiles, zoophilic fantasizers, and regular zoophiles are the most common, while sadistic bestials and opportunistic zoophiles are the least common.
Zoophilia may reflect childhood experimentation, sexual abuse or lack of other avenues of sexual expression. Exclusive desire for animals rather than humans is considered a rare paraphilia, and they often have other paraphilias with which they present. Zoophiles will not usually seek help for their condition, and so do not come to the attention of psychiatrists for zoophilia itself.
The first detailed studies of zoophilia date prior to 1910. Peer-reviewed research into zoophilia in its own right started around 1960. However, a number of the most oft-quoted studies, such as Miletski, were not published in peer-reviewed journals. There have been several significant modern books, from psychologists William H. Masters (1962) to Andrea Beetz (2002); their research arrived at the following conclusions:
Most zoophiles have (or have also had) long term human relationships as well or at the same time as bestial ones, and bestial partners are usually dogs and/or horses.
Zoophiles' emotions and care for animals can be real, relational, authentic and (within animals' abilities) reciprocal, and not just a substitute or means of expression. Beetz believes zoophilia is not an inclination which is chosen.
Society in general is considerably misinformed about zoophilia, its stereotypes, and its meaning. The distinction between zoophilia and zoosadism is a critical one to these researchers, and is highlighted by each of these studies. Masters (1962), Miletski (1999) and Weinberg (2003) each comment significantly on the social harm caused by misunderstandings regarding zoophilia: "This destroy[s] the lives of many citizens".
More recently, research has engaged further directions, including the speculation that at least some animals seem to enjoy a zoophilic relationship, assuming sadism is not present, and can form an affectionate bond.
Beetz described the phenomenon of zoophilia/bestiality as being somewhere between crime, paraphilia, and love, although she says that most research has been based on criminological reports, so the cases have frequently involved violence and psychiatric illness. She says only a few recent studies have taken data from volunteers in the community. As with all volunteer surveys and sexual ones in particular, these studies have a potential for self-selection bias.
Medical research suggests that some zoophiles only become aroused by a specific species (such as horses), some zoophiles become aroused by multiple species (which may or may not include humans), and some zoophiles are not attracted to humans at all.
Historical and cultural perspectives
Instances of zoophilia and bestiality have been found in the Bible, but the earliest depictions of bestiality are in rock art: in a cave painting from at least 8000 BC in the Northern Italian Val Camonica, a man is shown about to penetrate an animal. Raymond Christinger interprets the cave painting as a show of power of a tribal chief; it is unknown whether this practice was then more acceptable, whether the scene depicted was usual or unusual, or whether it was symbolic or imaginary. According to the Cambridge Illustrated History of Prehistoric Art, the penetrating man seems to be waving cheerfully with his hand at the same time. Potters of the same time period seem to have spent time depicting the practice, but this may be because they found the idea amusing. The anthropologist writing as Dr "Jacobus X" said that the cave paintings occurred "before any known taboos against sex with animals existed". William H. Masters claimed that "since pre-historic man is prehistoric it goes without saying that we know little of his sexual behavior"; depictions in cave paintings may only show the artist's subjective preoccupations or thoughts.
Pindar, Herodotus, and Plutarch claimed the Egyptians engaged in ritual congress with goats. Such claims about other cultures do not necessarily reflect anything about which the author had evidence, but may be a form of propaganda or xenophobia, similar to blood libel.
Several cultures built temples (Khajuraho, India) or other structures (Sagaholm barrow, Sweden) with zoophilic carvings on the exterior. At Khajuraho, however, these depictions are not on the interior, perhaps indicating that such things belong to the profane world rather than the spiritual world, and thus are to be left outside.
In the Church-oriented culture of the Middle Ages, zoophilic activity was met with execution, typically burning, and death to the animals involved either the same way or by hanging, as "both a violation of Biblical edicts and a degradation of man as a spiritual being rather than one that is purely animal and carnal". Some witches were accused of having congress with the devil in the form of an animal. As with all accusations and confessions extracted under torture in the witch trials in Early Modern Europe, their validity cannot be ascertained.
Religious perspectives
Passages in Leviticus 18 (Lev 18:23: "And you shall not lie with any beast and defile yourself with it, neither shall any woman give herself to a beast to lie with it: it is a perversion." RSV) and 20:15–16 ("If a man lies with a beast, he shall be put to death; and you shall kill the beast. If a woman approaches any beast and lies with it, you shall kill the woman and the beast; they shall be put to death, their blood is upon them." RSV) are cited by Jewish, Christian, and Muslim theologians as categorical denunciation of bestiality. However, the teachings of the New Testament have been interpreted by some as not expressly forbidding bestiality.
In Part II of his Summa Theologica, medieval philosopher Thomas Aquinas ranked various "unnatural vices" (sex acts resulting in "venereal pleasure" rather than procreation) by degrees of sinfulness, concluding that "the most grievous is the sin of bestiality". Some Christian theologians extend Matthew's view that even having thoughts of adultery is sinful to imply that thoughts of committing bestial acts are likewise sinful.
There are a few references in Hindu temples to figures engaging in symbolic sexual activity with animals such as explicit depictions of people having sex with animals included amongst the thousands of sculptures of "Life events" on the exterior of the temple complex at Khajuraho. The depictions are largely symbolic depictions of the sexualization of some animals and are not meant to be taken literally. According to the Hindu tradition of erotic painting and sculpture, having sex with an animal is believed to be actually a human having sex with a god incarnated in the form of an animal. However, in some Hindu scriptures, such as the Bhagavata Purana and the Devi Bhagavata Purana, having sex with animals, especially the cow, leads one to hell, where one is tormented by having one's body rubbed on trees with razor-sharp thorns. Similarly, the Manusmriti in verse 11.173 also condemns the act of bestiality and prescribes punishments for it: A man who has had sexual intercourse with nonhuman females, or with a menstruating woman,—and he who has discharged his semen in a place other than the female organ, or in water,—should perform the ‘Sāntapana Kṛcchra’.
Legal status
In many jurisdictions, all acts of bestiality are prohibited; others outlaw only the mistreatment of animals, without specific mention of sexual activity. In the United Kingdom, Section 63 of the Criminal Justice and Immigration Act 2008 (also known as the Extreme Pornography Act) outlaws images of a person performing or appearing to perform an act of intercourse or oral sex with another animal (whether dead or alive). Despite the UK Ministry of Justice's explanatory note on extreme images saying "It is not a question of the intentions of those who produced the image. Nor is it a question of the sexual arousal of the defendant", "it could be argued that a person might possess such an image for the purposes of satire, political commentary or simple grossness", according to The Independent.
Many laws banning sex with non-human animals have been made recently, such as in the United States (New Hampshire and Ohio), Germany, Sweden, Iceland, Denmark, Thailand, Costa Rica, Bolivia, and Guatemala. The number of jurisdictions around the world banning it has grown in the 2000s and 2010s.
West Germany legalized bestiality in 1969 but banned it again in 2013. The 2013 law was unsuccessfully challenged before the Federal Constitutional Court in 2015.
Romania banned zoophilia in May 2022.
Laws on bestiality are sometimes triggered by specific incidents. While some laws are very specific, others employ vague terms such as "sodomy" or "bestiality", which lack legal precision and leave it unclear exactly which acts are covered. In the past, some bestiality laws may have been made in the belief that sex with another animal could result in monstrous offspring, as well as offending the community. Modern anti-cruelty laws focus more specifically on animal welfare while anti-bestiality laws are aimed only at offenses to community "standards".
In Sweden, a 2005 report by the Swedish Animal Welfare Agency for the government expressed concern over the increase in reports of horse-ripping incidents. The agency believed animal cruelty legislation was not sufficient to protect animals from abuse and needed updating, but concluded that on balance it was not appropriate to call for a ban. In New Zealand, the 1989 Crimes Bill considered abolishing bestiality as a criminal offense and instead treating it as a mental health issue, but this was not adopted, and people can still be prosecuted for it. Under Section 143 of the Crimes Act 1961, individuals can serve a sentence of seven years' duration for animal sexual abuse, and the offence is considered 'complete' in the event of 'penetration'.
As of 2023, bestiality is illegal in 49 U.S. states. Most state bestiality laws were enacted between 1999 and 2023. Bestiality remains legal in West Virginia, while 19 states have statutes that date to the 19th century or even the colonial period. The recent statutes are distinct from older sodomy statutes in that they define the proscribed acts with precision.
Pornography
In the United States, zoophilic pornography would be considered obscene if it did not meet the standards of the Miller Test and therefore is not openly sold, mailed, distributed or imported across state boundaries or within states which prohibit it. Under U.S. law, 'distribution' includes transmission across the Internet. The state of Oregon explicitly prohibits possession of media that depicts bestiality when such possession is for erotic purposes.
Similar restrictions apply in Germany (see above). In New Zealand, the possession, making or distribution of material promoting bestiality is illegal.
While bestiality is illegal across Australia, the first state to also ban zoophilic pornography was New South Wales.
The potential use of media for pornographic movies was seen from the start of the era of silent film. Polissons and Galipettes (re-released 2002 as "The Good Old Naughty Days") is a collection of early French silent films for brothel use, including some zoophilic pornography, dating from around 1905 – 1930.
Material featuring sex with non-human animals is widely available on the internet. An early film to attain great infamy was "Animal Farm", smuggled into Great Britain around 1980 without details as to makers or provenance. The film was later traced to a crude juxtaposition of smuggled cuts from many of Bodil Joensen's 1970s Danish movies.
In 1972, Linda Lovelace, the star of the film "Deep Throat", appeared in the film "Dogorama" (also released under the titles "Dog 1," "Dog Fucker" and "Dog-a-Rama") in which she engages in sexual acts with a dog.
In Romania, although zoophilia was officially banned in May 2022, there are no laws which prohibit zoophilic pornography. However, creating sites that present zoophilic pornography is not allowed per Article 7.3 of Law 196/2003, but no punishment is defined for doing so.
In Hungary, where production faces no legal limitations, zoophilic materials have become a substantial industry that produces a number of films and magazines, particularly for Dutch companies such as Topscore and Book & Film International, and the genre has stars such as "Hector", a Great Dane dog starring in several films.
In Japan, zoophilic pornography is used to bypass censorship laws, often featuring models performing fellatio on non-human animals, because oral penetration of a non-human penis is not in the scope of Japanese pixelization censorship. While primarily underground, there are a number of zoophilic pornography actresses who specialize in bestiality movies.
In the United Kingdom, Section 63 of the Criminal Justice and Immigration Act 2008 criminalises possession of realistic pornographic images depicting sex with non-human animals (see extreme pornography), including fake images and simulated acts, as well as images depicting sex with dead animals. The law provides for sentences of up to two years in prison; a sentence of 12 months was handed down in one case in 2011.
Zoophiles
Non-sexual zoophilia
The love of animals is not necessarily sexual in nature. In psychology and sociology the word "zoophilia" is sometimes used without sexual implications. Being fond of animals in general, or as pets, is accepted in Western society, and is usually respected or tolerated. However, the word zoophilia is used to mean a sexual preference towards animals, which makes it a paraphilia. Some zoophiles may not act on their sexual attraction to animals. People who identify as zoophiles may feel their love for animals is romantic rather than purely sexual, and say this makes them different from those committing entirely sexually motivated acts of bestiality.
Zoophile community
An online survey which recruited participants over the Internet concluded that prior to the arrival of widespread computer networking, most zoophiles would not have known other zoophiles, and for the most part, zoophiles engaged in bestiality secretly, or told only trusted friends, family or partners. The Internet and its predecessors enabled people to search for information on topics which were not otherwise easily accessible and to communicate with relative safety and anonymity. Because of the diary-like intimacy of blogs and the anonymity of the Internet, zoophiles had the ideal opportunity to "openly" express their sexuality. As with many other alternative lifestyles, broader networks began forming in the 1980s when participating in networked social groups became more common at home and elsewhere. Such developments in general were described by Markoff in 1990; the linking of computers meant that people thousands of miles apart could feel an intimacy akin to being in a small village together. The popular newsgroup alt.sex.bestiality (said to be in the top 1% of newsgroups by interest, i.e. number 50 out of around 5,000, and reputedly started in humor), along with personal bulletin boards and talkers, chief among them Sleepy's multiple worlds, Lintilla, and Planes of Existence, were among the first group media of this kind in the late 1980s and early 1990s. These groups rapidly drew together zoophiles, some of whom also created personal and social websites and Internet forums. By around 1992–1994, the wide social net had evolved. This was initially centered around the above-mentioned newsgroup, alt.sex.bestiality, which during the six years following 1990 had matured into a discussion and support group. The newsgroup included information about health issues, laws governing zoophilia, bibliography relating to the subject, and community events.
Researchers observe that the Internet can socially integrate an extremely large number of people. In Kinsey's day, contacts between animal lovers were more localized and limited to male compatriots in a particular rural community. Further, while the farm boys Kinsey researched might have been part of a rural culture in which sex with animals was a part, the sex itself did not define the community. The zoophile community is not known to be particularly large compared to other subcultures which make use of the Internet, and it has been surmised that its aims and beliefs would likely change little as it grew. Those particularly active on the Internet may not be aware of a wider subculture, as there is not much of a wider subculture; it has been suggested that the virtual zoophile group would itself lead the development of the subculture.
Websites aim to provide support and social assistance to zoophiles (including resources to help and rescue abused or mistreated animals), but these are not usually well publicized. Such work is often undertaken as needed by individuals and friends, within social networks, and by word of mouth.
Zoophiles tend to experience their first zoosexual feelings during adolescence, and tend to be secretive about it, hence limiting the ability for non-Internet-based communities to form.
See also
References and footnotes
External links
Encyclopedia of Human Sexuality entry for "Bestiality" at Sexology Department of Humboldt University, Berlin.
Zoophilia References Database Bestiality and zoosadism criminal executions.
Animal Abuse Crime Database search form for the U.S. and UK.
Human–animal interaction
Paraphilias
Cruelty to animals
Sexual misconduct
Sex crimes | Zoophilia | ["Biology"] | 5,490 | ["Human–animal interaction", "Animals", "Humans and other species"] |
14,529,261 | https://en.wikipedia.org/wiki/Rademacher%20complexity | In computational learning theory (machine learning and theory of computation), Rademacher complexity, named after Hans Rademacher, measures richness of a class of sets with respect to a probability distribution. The concept can also be extended to real valued functions.
Definitions
Rademacher complexity of a set
Given a set A \subseteq \mathbb{R}^m, the Rademacher complexity of A is defined as follows:

\operatorname{Rad}(A) := \frac{1}{m} \mathbb{E}_\sigma \left[ \sup_{a \in A} \sum_{i=1}^m \sigma_i a_i \right]

where \sigma_1, \sigma_2, \dots, \sigma_m are independent random variables drawn from the Rademacher distribution, i.e. \Pr(\sigma_i = +1) = \Pr(\sigma_i = -1) = 1/2 for i = 1, 2, \dots, m, and a = (a_1, \dots, a_m). Some authors take the absolute value of the sum before taking the supremum, but if A is symmetric this makes no difference.
Rademacher complexity of a function class
Let S = (z_1, z_2, \dots, z_m) be a sample of points and consider a function class \mathcal{F} of real-valued functions over a domain Z. Then, the empirical Rademacher complexity of \mathcal{F} given S is defined as:

\operatorname{Rad}_S(\mathcal{F}) = \frac{1}{m} \mathbb{E}_\sigma \left[ \sup_{f \in \mathcal{F}} \sum_{i=1}^m \sigma_i f(z_i) \right]

This can also be written using the previous definition: \operatorname{Rad}_S(\mathcal{F}) = \operatorname{Rad}(\mathcal{F} \circ S), where \mathcal{F} \circ S denotes function composition, i.e.:

\mathcal{F} \circ S := \{ (f(z_1), \dots, f(z_m)) : f \in \mathcal{F} \}
Let P be a probability distribution over Z. The Rademacher complexity of the function class \mathcal{F} with respect to P for sample size m is:

\operatorname{Rad}_{P,m}(\mathcal{F}) := \mathbb{E}_{S \sim P^m} \left[ \operatorname{Rad}_S(\mathcal{F}) \right]

where the above expectation is taken over an independent and identically distributed (i.i.d.) sample S = (z_1, z_2, \dots, z_m) generated according to P.
Intuition
The Rademacher complexity is typically applied on a function class of models that are used for classification, with the goal of measuring their ability to classify points drawn from a probability space under arbitrary labellings. When the function class is rich enough, it contains functions that can appropriately adapt for each arrangement of labels, simulated by the random draw of \sigma = (\sigma_1, \dots, \sigma_m) under the expectation, so that this quantity in the sum is maximised.
Examples
1. A contains a single vector, e.g., A = \{(a, b)\} \subset \mathbb{R}^2. Then:

\operatorname{Rad}(A) = \frac{1}{2} \mathbb{E}[\sigma_1 a + \sigma_2 b] = 0

The same is true for every singleton hypothesis class.

2. A contains two vectors, e.g., A = \{(1, 1), (1, 2)\} \subset \mathbb{R}^2. Then:

\operatorname{Rad}(A) = \frac{1}{2} \mathbb{E}\left[ \max(\sigma_1 + \sigma_2,\; \sigma_1 + 2\sigma_2) \right] = \frac{1}{2} \cdot \frac{3 + 0 + 1 - 2}{4} = \frac{1}{4}
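Because each \sigma_i takes only the values ±1, the Rademacher complexity of a small finite set can be computed exactly by enumerating all 2^m sign vectors. A minimal Python sketch (the function name is our own) reproducing the two examples above:

```python
import itertools

def rademacher_complexity(A):
    """Exact Rad(A) for a finite set A of m-dimensional vectors,
    averaging sup_{a in A} sum_i sigma_i * a_i over all 2^m sign vectors."""
    m = len(A[0])
    total = 0.0
    for sigma in itertools.product((-1, 1), repeat=m):
        total += max(sum(s * a for s, a in zip(sigma, vec)) for vec in A)
    return total / (2 ** m * m)

print(rademacher_complexity([(3.0, -2.0)]))             # singleton: 0.0
print(rademacher_complexity([(1.0, 1.0), (1.0, 2.0)]))  # two vectors: 0.25
```

The enumeration is exponential in m, so this is only a pedagogical check; in practice the expectation is estimated by Monte Carlo sampling of sign vectors.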
Using the Rademacher complexity
The Rademacher complexity can be used to derive data-dependent upper-bounds on the learnability of function classes. Intuitively, a function-class with smaller Rademacher complexity is easier to learn.
Bounding the representativeness
In machine learning, it is desired to have a training set that represents the true distribution of some sample data S. This can be quantified using the notion of representativeness. Denote by D the probability distribution from which the samples are drawn. Denote by H the set of hypotheses (potential classifiers) and denote by \mathcal{F} the corresponding set of error functions, i.e., for every hypothesis h \in H, there is a function L_h \in \mathcal{F} that maps each training sample (features, label) to the error of the classifier h (note in this case hypothesis and classifier are used interchangeably). For example, in the case that h represents a binary classifier, the error function is a 0–1 loss function, i.e. the error function L_h returns 0 if h correctly classifies a sample and 1 else. We omit the index and write L instead of L_h when the underlying hypothesis is irrelevant. Define:

\mathcal{L}_D(L_h) := \mathbb{E}_{z \sim D}[L_h(z)] – the expected error of the error function L_h on the real distribution D;

\mathcal{L}_S(L_h) := \frac{1}{m} \sum_{i=1}^m L_h(z_i) – the estimated error of the error function L_h on the sample S = (z_1, \dots, z_m).

The representativeness of the sample S, with respect to D and \mathcal{F}, is defined as:

\operatorname{Rep}_D(\mathcal{F}, S) := \sup_{L_h \in \mathcal{F}} \left( \mathcal{L}_D(L_h) - \mathcal{L}_S(L_h) \right)
Smaller representativeness is better, since it provides a way to avoid overfitting: it means that the true error of a classifier is not much higher than its estimated error, and so selecting a classifier that has low estimated error will ensure that the true error is also low. Note however that the concept of representativeness is relative and hence can not be compared across distinct samples.
The expected representativeness of a sample can be bounded above by the Rademacher complexity of the function class:

\mathbb{E}_{S \sim D^m} \left[ \operatorname{Rep}_D(\mathcal{F}, S) \right] \le 2 \operatorname{Rad}_{D,m}(\mathcal{F})
Bounding the generalization error
When the Rademacher complexity is small, it is possible to learn the hypothesis class H using empirical risk minimization.
For example (with a binary error function), for every \delta > 0, with probability at least 1 - \delta, for every hypothesis h \in H:

\mathcal{L}_D(L_h) - \mathcal{L}_S(L_h) \le 2 \operatorname{Rad}_S(\mathcal{F}) + 4\sqrt{\frac{2\ln(4/\delta)}{m}}
Bounding the Rademacher complexity
Since smaller Rademacher complexity is better, it is useful to have upper bounds on the Rademacher complexity of various function sets. The following rules can be used to upper-bound the Rademacher complexity of a set .
1. If all vectors in A are translated by a constant vector a_0 \in \mathbb{R}^m, then Rad(A) does not change.
2. If all vectors in A are multiplied by a scalar c \in \mathbb{R}, then Rad(A) is multiplied by |c|.
3. \operatorname{Rad}(A + B) = \operatorname{Rad}(A) + \operatorname{Rad}(B), where A + B := \{a + b : a \in A, b \in B\} is the Minkowski sum.
4. (Kakade & Tewari Lemma) If all vectors in A are operated on by a Lipschitz function, then Rad(A) is (at most) multiplied by the Lipschitz constant of the function. In particular, if all vectors in A are operated on by a contraction mapping, then Rad(A) strictly decreases.
5. The Rademacher complexity of the convex hull of equals Rad(A).
6. (Massart Lemma) The Rademacher complexity of a finite set grows logarithmically with the set size. Formally, let A be a set of N vectors in \mathbb{R}^m, and let \bar{a} be the mean of the vectors in A. Then:

\operatorname{Rad}(A) \le \max_{a \in A} \|a - \bar{a}\|_2 \cdot \frac{\sqrt{2 \ln N}}{m}

In particular, if A is a set of binary vectors, the norm is at most \sqrt{m}, so:

\operatorname{Rad}(A) \le \sqrt{\frac{2 \ln N}{m}}
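Massart's bound can be checked numerically against the exact Rademacher complexity of a small random set of binary vectors; the dimension and set size below are arbitrary illustrative choices:

```python
import itertools
import math
import random

random.seed(0)
m, N = 10, 8  # dimension and set size (arbitrary illustrative values)
A = [tuple(float(random.getrandbits(1)) for _ in range(m)) for _ in range(N)]

# Exact Rad(A): average the supremum over all 2^m Rademacher sign vectors.
rad = sum(
    max(sum(s * a for s, a in zip(sigma, vec)) for vec in A)
    for sigma in itertools.product((-1, 1), repeat=m)
) / (2 ** m * m)

# Massart bound for binary vectors: ||a||_2 <= sqrt(m) gives sqrt(2 ln N / m).
bound = math.sqrt(2 * math.log(N) / m)
assert rad <= bound
print(round(rad, 4), "<=", round(bound, 4))
```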
Bounds related to the VC dimension
Let H be a set family whose VC dimension is d. It is known that the growth function of H is bounded as:

for all m > d + 1: \operatorname{Growth}(m, H) \le (em/d)^d

This means that, for every set h with at most m elements, |H \cap h| \le (em/d)^d. The set-family H \cap h can be considered as a set of binary vectors over \mathbb{R}^m. Substituting this in Massart's lemma gives:

\operatorname{Rad}(H \cap h) \le \sqrt{\frac{2 d \ln(em/d)}{m}}
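To make the bound concrete, one can plug in illustrative values, here d = 10 and m = 1000 (our own choice of numbers):

```python
import math

d, m = 10, 1000  # VC dimension and sample size (illustrative values)

# Sauer–Shelah cap on the growth function, and the resulting
# Massart-based bound on the Rademacher complexity.
growth_cap = (math.e * m / d) ** d
rad_bound = math.sqrt(2 * d * math.log(math.e * m / d) / m)
print(f"growth <= {growth_cap:.2e}, Rad <= {rad_bound:.4f}")
```

Even though the growth function may be astronomically large, the Rademacher complexity bound shrinks like \sqrt{d \ln m / m} as the sample grows.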
With more advanced techniques (Dudley's entropy bound and Haussler's upper bound) one can show, for example, that there exists a constant C, such that any class of \{0,1\}-indicator functions with Vapnik–Chervonenkis dimension d has Rademacher complexity upper-bounded by C\sqrt{d/m}.
Bounds related to linear classes
The following bounds are related to linear operations on S = \{x_1, \dots, x_m\} – a constant set of m vectors in \mathbb{R}^n.
1. Define A = \{(w \cdot x_1, \dots, w \cdot x_m) : \|w\|_2 \le 1\}, the set of dot-products of the vectors in S with vectors in the unit ball. Then:

\operatorname{Rad}(A) \le \frac{\max_i \|x_i\|_2}{\sqrt{m}}
2. Define A = \{(w \cdot x_1, \dots, w \cdot x_m) : \|w\|_1 \le 1\}, the set of dot-products of the vectors in S with vectors in the unit ball of the 1-norm. Then:

\operatorname{Rad}(A) \le \max_i \|x_i\|_\infty \cdot \sqrt{\frac{2 \ln(2n)}{m}}
Bounds related to covering numbers
The following bound relates the Rademacher complexity of a set to its external covering number – the number of balls of a given radius whose union contains . The bound is attributed to Dudley.
Suppose A is a set of vectors whose length (norm) is at most c. Then, for every integer M > 0:
In particular, if lies in a d-dimensional subspace of , then:
Substituting this in the previous bound gives the following bound on the Rademacher complexity:
Gaussian complexity
Gaussian complexity is a similar complexity with similar physical meanings, and can be obtained from the Rademacher complexity using the random variables g_i instead of \sigma_i, where g_i are Gaussian i.i.d. random variables with zero-mean and variance 1, i.e. g_i \sim \mathcal{N}(0, 1). Gaussian and Rademacher complexities are known to be equivalent up to logarithmic factors.
Equivalence of Rademacher and Gaussian complexity
Given a set A \subseteq \mathbb{R}^m, it holds that:

\frac{G(A)}{2\sqrt{\ln m}} \le \operatorname{Rad}(A) \le \sqrt{\frac{\pi}{2}}\, G(A)

where G(A) is the Gaussian complexity of A. As an example, consider the Rademacher and Gaussian complexities of the L1 ball. The Rademacher complexity is given by exactly 1, whereas the Gaussian complexity is on the order of \sqrt{\log d} (which can be shown by applying known properties of suprema of a set of sub-Gaussian random variables).
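The L1-ball example can be checked by simulation, using the unnormalized convention (no 1/m factor) of the example above; the dimension and number of Monte Carlo repetitions are arbitrary choices:

```python
import math
import random

random.seed(0)
d, trials = 1000, 500  # ambient dimension and Monte Carlo repetitions (arbitrary)

# For the L1 unit ball, sup_{||a||_1 <= 1} sum_i x_i a_i = max_i |x_i|.
rad = 1.0  # Rademacher case: max_i |sigma_i| = 1 for every sign vector
gauss = sum(
    max(abs(random.gauss(0, 1)) for _ in range(d)) for _ in range(trials)
) / trials

print(rad, round(gauss, 2), round(math.sqrt(2 * math.log(2 * d)), 2))
```

The simulated Gaussian complexity grows with d roughly like \sqrt{2 \ln(2d)}, while the Rademacher complexity stays fixed at 1, illustrating the logarithmic gap.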
References
Peter L. Bartlett, Shahar Mendelson (2002) Rademacher and Gaussian Complexities: Risk Bounds and Structural Results. Journal of Machine Learning Research 3 463–482
Giorgio Gnecco, Marcello Sanguineti (2008) Approximation Error Bounds via Rademacher's Complexity. Applied Mathematical Sciences, Vol. 2, 2008, no. 4, 153–176
Machine learning
Measures of complexity | Rademacher complexity | ["Engineering"] | 1,632 | ["Artificial intelligence engineering", "Machine learning"] |
14,529,262 | https://en.wikipedia.org/wiki/Relaxin/insulin-like%20family%20peptide%20receptor%202 | Relaxin/insulin-like family peptide receptor 2, also known as RXFP2, is a human G-protein coupled receptor.
The receptors for glycoprotein hormones such as follicle-stimulating hormone (FSH; see MIM 136530) and thyroid-stimulating hormone (TSH; see MIM 188540) are G protein-coupled, 7-transmembrane receptors (GPCRs) with large N-terminal extracellular domains. Leucine-rich repeat (LRR)-containing GPCRs (LGRs) form a subgroup of the GPCR superfamily. [supplied by OMIM].
See also
Relaxin receptor
References
Further reading
External links
G protein-coupled receptors | Relaxin/insulin-like family peptide receptor 2 | ["Chemistry"] | 151 | ["G protein-coupled receptors", "Signal transduction"] |
14,529,474 | https://en.wikipedia.org/wiki/GPR50 | G protein-coupled receptor 50 is a protein which in humans is encoded by the GPR50 gene.
Function
GPR50 is a member of the G protein-coupled receptor family of integral membrane proteins and is most closely related to the melatonin receptors. GPR50 is able to heterodimerize with both the MT1 and MT2 melatonin receptor subtypes. While GPR50 has no effect on MT2 function, it prevents MT1 from both binding melatonin and coupling to G proteins. GPR50 is the mammalian ortholog of the melatonin receptor Mel1c described in non-mammalian vertebrates.
Clinical significance
Certain polymorphisms of the GPR50 gene in females are associated with increased risk of developing bipolar affective disorder, major depressive disorder, and schizophrenia. Other GPR50 gene polymorphism are associated with higher fasting circulating triglyceride levels and lower circulating High-density lipoprotein levels.
References
Further reading
G protein-coupled receptors | GPR50 | ["Chemistry"] | 211 | ["G protein-coupled receptors", "Signal transduction"] |
14,529,799 | https://en.wikipedia.org/wiki/Derris%20elliptica | Derris elliptica is a species of leguminous plant from Southeast Asia and the southwest Pacific islands, including New Guinea. The roots of D. elliptica contain rotenone, a strong insecticide and fish poison.
Also known as derris powder and tuba root (in Indonesia), it was formerly used as an organic insecticide to control pests on crops such as peas. However, studies revealing the extreme toxicity of the rotenone to which the powder is often refined have shown it to be unsafe, in spite of its popularity with organic growers.
Derris root, when crushed, releases rotenone. Some native residents of Fiji and New Guinea practice a form of fishing in which they crush the roots and throw them into the water. The stunned or killed fish float to the surface where they can be easily reached.
Despite its toxicity, Derris is used as a food plant by Lepidopteran larvae including Batrachedra amydraula.
Subspecies
The following subspecies are listed:
Derris elliptica chittagongensis
Derris elliptica elliptica
Derris elliptica malacensis
Derris elliptica tonkinensis
See also
"Derris" insecticides based on rotenone
References
External links
Millettieae
Flora of tropical Asia
Plant toxin insecticides | Derris elliptica | ["Chemistry"] | 261 | ["Plant toxin insecticides", "Chemical ecology"] |
14,530,447 | https://en.wikipedia.org/wiki/Infrared%20sensing%20in%20snakes | The ability to sense infrared thermal radiation evolved independently in three different groups of snakes, consisting of the families of Boidae (boas), Pythonidae (pythons), and the subfamily Crotalinae (pit vipers). What is commonly called a pit organ allows these animals to essentially "see" radiant heat at wavelengths between 5 and 30 μm. The more advanced infrared sense of pit vipers allows these animals to strike prey accurately even in the absence of light, and detect warm objects from several meters away. It was previously thought that the organs evolved primarily as prey detectors, but recent evidence suggests that it may also be used in thermoregulation and predator detection, making it a more general-purpose sensory organ than was supposed.
Phylogeny and evolution
The facial pit underwent parallel evolution in pitvipers and some boas and pythons. It evolved once in pitvipers and multiple times in boas and pythons. The electrophysiology of the structure is similar between the two lineages, but they differ in gross structural anatomy. Most superficially, pitvipers possess one large pit organ on either side of the head, between the eye and the nostril (loreal pits), while boas and pythons have three or more smaller pits lining the upper and sometimes the lower lip, in or between the scales (labial pits). Those of the pitvipers are the more advanced, having a suspended sensory membrane as opposed to a simple pit structure.
Anatomy
In pit vipers, the heat pit consists of a deep pocket in the rostrum with a membrane stretched across it. Behind the membrane, an air-filled chamber provides air contact on either side of the membrane. The pit membrane is highly vascular and heavily innervated with numerous heat-sensitive receptors formed from terminal masses of the trigeminal nerve (terminal nerve masses, or TNMs). The receptors are therefore not discrete cells, but a part of the trigeminal nerve itself. The labial pit found in boas and pythons lacks the suspended membrane and consists more simply of a pit lined with a membrane that is similarly innervated and vascular, though the morphology of the vasculature differs between these snakes and crotalines. The purpose of the vasculature, in addition to providing oxygen to the receptor terminals, is to rapidly cool the receptors to their thermo-neutral state after being heated by thermal radiation from a stimulus. Were it not for this vasculature, the receptor would remain in a warm state after being exposed to a warm stimulus, and would present the animal with afterimages even after the stimulus was removed.
Neuroanatomy
In all cases, the facial pit is innervated by the trigeminal nerve. In crotalines, information from the pit organ is relayed to the nucleus reticularus caloris in the medulla via the lateral descending trigeminal tract. From there, it is relayed to the contralateral optic tectum. In boas and pythons, information from the labial pit is sent directly to the contralateral optic tectum via the lateral descending trigeminal tract, bypassing the nucleus reticularus caloris.
It is the optic tectum of the brain which eventually processes these infrared cues. This portion of the brain receives other sensory information as well, most notably optic stimulation, but also motor, proprioceptive and auditory. Some neurons in the tectum respond to visual or infrared stimulation alone; others respond more strongly to combined visual and infrared stimulation, and still others respond only to a combination of visual and infrared. Some neurons appear to be tuned to detect movement in one direction. It has been found that the snake's visual and infrared maps of the world are overlaid in the optic tectum. This combined information is relayed via the tectum to the forebrain.
The nerve fibers in the pit organ are constantly firing at a very low rate. Objects that are within a neutral temperature range do not change the rate of firing; the neutral range is determined by the average thermal radiation of all objects in the receptive field of the organ. The thermal radiation above a given threshold causes an increase in the temperature of the nerve fiber, resulting in stimulation of the nerve and subsequent firing, with increased temperature resulting in increased firing rate. The sensitivity of the nerve fibers is estimated to be <0.001 °C.
The pit organ will adapt to a repeated stimulus; if an adapted stimulus is removed, there will be a fluctuation in the opposite direction. For example, if a warm object is placed in front of the snake, the organ will increase in firing rate at first, but after a while will adapt to the warm object and the firing rate of the nerves in the pit organ will return to normal. If that warm object is then removed, the pit organ will now register the space that it used to occupy as being colder, and as such the firing rate will be depressed until it adapts to the removal of the object. The latency period of adaptation is approximately 50 to 150 ms.
The facial pit actually visualizes thermal radiation using the same optical principles as a pinhole camera, wherein the location of a source of thermal radiation is determined by the location of the radiation on the membrane of the heat pit. However, studies that have visualized the thermal images seen by the facial pit using computer analysis have suggested that the resolution is extremely poor. The size of the opening of the pit results in poor resolution of small, warm objects, and coupled with the pit's small size and subsequent poor heat conduction, the image produced is of extremely low resolution and contrast. It is known that some focusing and sharpening of the image occurs in the lateral descending trigeminal tract, and it is possible that the visual and infrared integration that occurs in the tectum is also used to help sharpen the image.
Molecular mechanism
In spite of its detection of infrared light, the infrared detection mechanism is not similar to photoreceptors - while photoreceptors detect light via photochemical reactions, the protein in the pits of snakes is a type of transient receptor potential channel, TRPA1 which is a temperature sensitive ion channel. It senses infrared signals through a mechanism involving warming of the pit organ, rather than chemical reaction to light. In structure and function it resembles a biological version of warmth-sensing instrument called a bolometer. This is consistent with the very thin pit membrane, which would allow incoming infrared radiation to quickly and precisely warm a given ion channel and trigger a nerve impulse, as well as the vascularization of the pit membrane in order to rapidly cool the ion channel back to its original temperature state. While the molecular precursors of this mechanism are found in other snakes, the protein is both expressed to a much lower degree and much less sensitive to heat.
Behavioral and ecological implications
Infrared sensing snakes use pit organs extensively to detect and target warm-blooded prey such as rodents and birds. Blind or blindfolded rattlesnakes can strike prey accurately in the complete absence of visible light, though it does not appear that they assess prey animals based on their body temperature. In addition, snakes may deliberately choose ambush sites that facilitate infrared detection of prey. It was previously assumed that the organ evolved specifically for prey capture. However, recent evidence suggests that the pit organ is also used for thermoregulation. In an experiment that tested snakes' abilities to locate a cool thermal refuge in an uncomfortably hot maze, all pit vipers were able to locate the refuge quickly and easily, while true vipers were unable to do so. This finding suggests that the pit vipers were using their pit organs to aid in thermoregulatory decisions. It is also possible that the organ even evolved as a defensive adaptation rather than a predatory one, or that multiple pressures have contributed to the organ's development. The use of the heat pit to direct thermoregulation or other behaviors in pythons and boas has not yet been determined.
See also
Crotalinae
Infrared sensing in vampire bats
Neuroethology
Thermoception
References
External links
Physorg article on Infrared vision in snakes
Infrared vision in snakes summary article (archived 7/15/2013)
Electromagnetic radiation
Ethology
Heat transfer
Senses
Snake anatomy
Snakes | Infrared sensing in snakes | ["Physics", "Chemistry", "Biology"] | 1,700 | ["Transport phenomena", "Physical phenomena", "Heat transfer", "Behavior", "Electromagnetic radiation", "Behavioural sciences", "Radiation", "Thermodynamics", "Ethology"] |
14,530,635 | https://en.wikipedia.org/wiki/Tail%20value%20at%20risk | In financial mathematics, tail value at risk (TVaR), also known as tail conditional expectation (TCE) or conditional tail expectation (CTE), is a risk measure associated with the more general value at risk. It quantifies the expected value of the loss given that an event outside a given probability level has occurred.
Background
There are a number of related, but subtly different, formulations for TVaR in the literature. A common case in the literature is to define TVaR and average value at risk as the same measure. Under some formulations, it is only equivalent to expected shortfall when the underlying distribution function is continuous at \operatorname{VaR}_{\alpha}(X), the value at risk of level \alpha. Under some other settings, TVaR is the conditional expectation of loss above a given value, whereas the expected shortfall is the product of this value with the probability of it occurring. The former definition may not be a coherent risk measure in general, however it is coherent if the underlying distribution is continuous. The latter definition is a coherent risk measure. TVaR accounts for the severity of the failure, not only the chance of failure. The TVaR is a measure of the expectation only in the tail of the distribution.
Mathematical definition
The canonical tail value at risk is the left-tail (large negative values) in some disciplines and the right-tail (large positive values) in other, such as actuarial science. This is usually due to the differing conventions of treating losses as large negative or positive values. Using the negative value convention, Artzner and others define the tail value at risk as:
Given a random variable X which is the payoff of a portfolio at some future time and given a parameter 0 < \alpha < 1, the tail value at risk is defined by

\operatorname{TVaR}_{\alpha}(X) = \mathbb{E}\left[-X \mid X \le x^{\alpha}\right] = \mathbb{E}\left[-X \mid X \le -\operatorname{VaR}_{\alpha}(X)\right]

where x^{\alpha} is the upper \alpha-quantile given by x^{\alpha} = \inf\{x \in \mathbb{R} : \Pr(X \le x) > \alpha\}. Typically the payoff random variable X is in some Lp-space where p \ge 1 to guarantee the existence of the expectation. The typical values for \alpha are 5% and 1%.
Formulas for continuous probability distributions
Closed-form formulas exist for calculating TVaR when the payoff of a portfolio X or a corresponding loss L = -X follows a specific continuous distribution. If X follows some probability distribution with the probability density function (p.d.f.) f and the cumulative distribution function (c.d.f.) F, the left-tail TVaR can be represented as

\operatorname{TVaR}_{\alpha}(X) = \mathbb{E}[-X \mid X \le F^{-1}(\alpha)] = -\frac{1}{\alpha} \int_{-\infty}^{F^{-1}(\alpha)} x f(x)\, dx

For engineering or actuarial applications it is more common to consider the distribution of losses L = -X; in this case the right-tail TVaR is considered (typically for \alpha equal to 95% or 99%), with f and F now denoting the p.d.f. and c.d.f. of L:

\operatorname{TVaR}^{\text{right}}_{\alpha}(L) = \mathbb{E}[L \mid L \ge F^{-1}(\alpha)] = \frac{1}{1-\alpha} \int_{F^{-1}(\alpha)}^{+\infty} y f(y)\, dy
Since some formulas below were derived for the left-tail case and some for the right-tail case, the following reconciliations can be useful:
and
Normal distribution
If the payoff of a portfolio follows normal (Gaussian) distribution with the p.d.f. then the left-tail TVaR is equal to where is the standard normal p.d.f., is the standard normal c.d.f., so is the standard normal quantile.
If the loss of a portfolio follows normal distribution, the right-tail TVaR is equal to
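The normal closed form is simple enough to compute directly. A minimal sketch using the right-tail (loss) convention, where TVaR equals mu + sigma * phi(Phi^-1(alpha)) / (1 - alpha); the function name is an illustrative assumption, and only the Python standard library is used:

```python
from math import exp, pi, sqrt
from statistics import NormalDist

def normal_tvar_right(mu, sigma, alpha=0.95):
    """Right-tail TVaR for a loss L ~ N(mu, sigma^2):
    TVaR_alpha = mu + sigma * phi(Phi^{-1}(alpha)) / (1 - alpha),
    where phi and Phi are the standard normal p.d.f. and c.d.f."""
    z = NormalDist().inv_cdf(alpha)             # standard normal quantile
    phi_z = exp(-z * z / 2) / sqrt(2 * pi)      # standard normal p.d.f. at z
    return mu + sigma * phi_z / (1 - alpha)

print(round(normal_tvar_right(0.0, 1.0, 0.95), 4))   # 2.0627
```

Note that the result is location-scale equivariant: TVaR for N(mu, sigma^2) is mu plus sigma times the standard normal TVaR.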
Generalized Student's t-distribution
If the payoff of a portfolio follows generalized Student's t-distribution with the p.d.f. then the left-tail TVaR is equal to where is the standard t-distribution p.d.f., is the standard t-distribution c.d.f., so is the standard t-distribution quantile.
If the loss of a portfolio follows generalized Student's t-distribution, the right-tail TVaR is equal to
Laplace distribution
If the payoff of a portfolio follows Laplace distribution with the p.d.f. and the c.d.f. then the left-tail TVaR is equal to for .
If the loss of a portfolio follows Laplace distribution, the right-tail TVaR is equal to
Logistic distribution
If the payoff of a portfolio follows logistic distribution with the p.d.f. and the c.d.f. then the left-tail TVaR is equal to
If the loss of a portfolio follows logistic distribution, the right-tail TVaR is equal to
Exponential distribution
If the loss of a portfolio follows exponential distribution with the p.d.f. and the c.d.f. then the right-tail TVaR is equal to
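For exponentially distributed losses, the tail expectation follows directly from memorylessness: the expected loss given exceedance of any threshold t is t + 1/lambda, so TVaR is simply VaR plus the mean. A sketch under the right-tail convention (function name illustrative):

```python
from math import log

def exponential_tvar_right(lam, alpha=0.95):
    """Right-tail TVaR for Exp(lam) losses.
    VaR_alpha = -ln(1 - alpha) / lam, and by memorylessness
    E[X | X > VaR] = VaR + 1/lam, so TVaR = VaR + 1/lam."""
    var = -log(1.0 - alpha) / lam
    return var + 1.0 / lam

print(exponential_tvar_right(2.0, 0.95))   # VaR ~ 1.4979, so TVaR ~ 1.9979
```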
Pareto distribution
If the loss of a portfolio follows Pareto distribution with the p.d.f. and the c.d.f. then the right-tail TVaR is equal to
Generalized Pareto distribution (GPD)
If the loss of a portfolio follows GPD with the p.d.f. and the c.d.f. then the right-tail TVaR is equal to and the VaR is equal to
Weibull distribution
If the loss of a portfolio follows Weibull distribution with the p.d.f. and the c.d.f. then the right-tail TVaR is equal to where is the upper incomplete gamma function.
Generalized extreme value distribution (GEV)
If the payoff of a portfolio follows GEV with the p.d.f. and the c.d.f. then the left-tail TVaR is equal to and the VaR is equal to where is the upper incomplete gamma function, is the logarithmic integral function.
If the loss of a portfolio follows GEV, then the right-tail TVaR is equal to where is the lower incomplete gamma function, is the Euler-Mascheroni constant.
Generalized hyperbolic secant (GHS) distribution
If the payoff of a portfolio follows GHS distribution with the p.d.f. and the c.d.f. then the left-tail TVaR is equal to where is the dilogarithm and is the imaginary unit.
Johnson's SU-distribution
If the payoff of a portfolio follows Johnson's SU-distribution with the c.d.f. then the left-tail TVaR is equal to where is the c.d.f. of the standard normal distribution.
Burr type XII distribution
If the payoff of a portfolio follows the Burr type XII distribution with the p.d.f. and the c.d.f. the left-tail TVaR is equal to where is the hypergeometric function. Alternatively,
Dagum distribution
If the payoff of a portfolio follows the Dagum distribution with the p.d.f. and the c.d.f. the left-tail TVaR is equal to where is the hypergeometric function.
Lognormal distribution
If the payoff of a portfolio follows lognormal distribution, i.e. the random variable follows normal distribution with the p.d.f. then the left-tail TVaR is equal to where is the standard normal c.d.f., so is the standard normal quantile.
Log-logistic distribution
If the payoff of a portfolio follows log-logistic distribution, i.e. the random variable follows logistic distribution with the p.d.f. then the left-tail TVaR is equal to where is the regularized incomplete beta function, .
As the incomplete beta function is defined only for positive arguments, for a more generic case the left-tail TVaR can be expressed with the hypergeometric function:
If the loss of a portfolio follows log-logistic distribution with p.d.f. and c.d.f. then the right-tail TVaR is equal to where is the incomplete beta function.
Log-Laplace distribution
If the payoff of a portfolio follows log-Laplace distribution, i.e. the random variable follows Laplace distribution the p.d.f. then the left-tail TVaR is equal to
Log-generalized hyperbolic secant (log-GHS) distribution
If the payoff of a portfolio follows log-GHS distribution, i.e. the random variable follows GHS distribution with the p.d.f. then the left-tail TVaR is equal to where is the hypergeometric function.
References
Actuarial science
Financial risk modeling
Monte Carlo methods in finance | Tail value at risk | [
"Mathematics"
] | 1,657 | [
"Applied mathematics",
"Actuarial science"
] |
14,531,635 | https://en.wikipedia.org/wiki/Tris%28ethylenediamine%29cobalt%28III%29%20chloride | Tris(ethylenediamine)cobalt(III) chloride is an inorganic compound with the formula [Co(en)3]Cl3 (where "en" is the abbreviation for ethylenediamine). It is the chloride salt of the coordination complex [Co(en)3]3+. This trication was important in the history of coordination chemistry because of its stability and its stereochemistry. Many different salts have been described. The complex was first described by Alfred Werner who isolated this salt as yellow-gold needle-like crystals.
Synthesis and structure
The compound is prepared from an aqueous solution of ethylenediamine and virtually any cobalt(II) salt, such as cobalt(II) chloride. The solution is purged with air to oxidize the cobalt(II)-ethylenediamine complexes to cobalt(III). The reaction proceeds in 95% yield, and the trication can be isolated with a variety of anions. A detailed product analysis of a large-scale synthesis revealed that one minor by-product was [Co(en)2Cl(H2NCH2CH2NH3)]Cl3, which contains a rare monodentate ethylenediamine ligand (protonated).
The cation [Co(en)3]3+ is octahedral with Co-N distances in the range 1.947–1.981 Å. The N-Co-N angles are 85° within the chelate rings and 90° between nitrogen atoms on adjacent rings.
Stereochemistry
The point group of this complex is D3. The complex can be resolved into enantiomers that are described as Δ and Λ. Usually the resolution entails use of tartrate salts. The optical resolution is a standard component of inorganic synthesis courses. Because of its nonplanarity, the MN2C2 rings can adopt either of two conformations, which are described by the symbols λ and δ. The registry between these ring conformations and the absolute configuration of the metal centers is described by the nomenclature lel (when the en backbone lies parallel with the C3 symmetry axis) or ob (when the en backbone is obverse to this same C3 axis). Thus, the following diastereomeric conformations can be identified: Δ-(lel)3, Δ-(lel)2(ob), Δ-(lel)(ob)2, and Δ-(ob)3. The mirror images of these species of course exist also.
Hydrates
Cationic coordination complexes of ammonia and alkyl amines typically crystallize with water in the lattice, and the stoichiometry can depend on the conditions of crystallization and, in the case of chiral complexes, on the optical purity of the cation. Racemic [Co(en)3]Cl3 is most often obtained as the di- or trihydrate. For the optically pure salt, (+)-[Co(en)3]Cl3·1.5H2O, (+)-[Co(en)3]Cl3·0.5NaCl·3H2O, and (+)-[Co(en)3]Cl3·H2O are also known.
References
Cobalt complexes
Cobalt(III) compounds
Chlorides
Metal halides
Ethylenediamine complexes | Tris(ethylenediamine)cobalt(III) chloride | [
"Chemistry"
] | 701 | [
"Chlorides",
"Inorganic compounds",
"Metal halides",
"Salts"
] |
14,532,358 | https://en.wikipedia.org/wiki/Fan%20coil%20unit | A fan coil unit (FCU), also known as a Vertical Fan Coil Unit (VFCU), is a device consisting of a heat exchanger (coil) and a fan. FCUs are commonly used in HVAC systems of residential, commercial, and industrial buildings that use ducted split air conditioning or central plant cooling. FCUs are typically connected to ductwork and a thermostat to regulate the temperature of one or more spaces and to assist the main air handling unit for each space if used with chillers. The thermostat controls the fan speed and/or the flow of water or refrigerant to the heat exchanger using a control valve.
Due to their simplicity, flexibility, and easy maintenance, fan coil units can be more economical to install than ducted 100% fresh air systems (VAV) or central heating systems with air handling units or chilled beams. FCUs come in various configurations, including horizontal (ceiling-mounted) and vertical (floor-mounted), and can be used in a wide range of applications, from small residential units to large commercial and industrial buildings.
Noise output from FCUs, like any other form of air conditioning, depends on the design of the unit and the building materials surrounding it. Some FCUs offer noise levels as low as NR25 or NC25.
The output from an FCU can be established by looking at the temperature of the air entering the unit and the temperature of the air leaving the unit, coupled with the volume of air being moved through the unit. This is a simplistic statement, and there is further reading on sensible heat ratios and the specific heat capacity of air, both of which have an effect on thermal performance.
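As a rough illustration of the statement above, the sensible (dry) output can be estimated from the airflow and the air-on/air-off temperatures. The constants and function name below are illustrative assumptions, and latent (dehumidification) effects are ignored, so this understates total cooling duty whenever the coil condenses moisture:

```python
def sensible_output_kw(airflow_m3_s, t_in_c, t_out_c, rho=1.2, cp=1.005):
    """Simplified sensible heat output of an FCU in kW:
    Q = rho * V * cp * dT, with air density rho in kg/m^3 and
    specific heat capacity cp in kJ/(kg.K)."""
    return rho * airflow_m3_s * cp * abs(t_in_c - t_out_c)

# 0.25 m^3/s of air cooled from 24 C to 14 C -> about 3.0 kW sensible
print(sensible_output_kw(0.25, 24.0, 14.0))
```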
Design and operation
The term fan coil unit covers a range of products and will mean different things to users, specifiers, and installers in different countries and regions, particularly in relation to product size and output capability.
Fan coil units fall principally into two main types: blow-through and draw-through. As the names suggest, in the first type the fans are fitted behind the heat exchanger, and in the other type the fans are fitted in front of the coil such that they draw air through it. Draw-through units are considered thermally superior, as they ordinarily make better use of the heat exchanger. However, they are more expensive, as they require a chassis to hold the fans, whereas a blow-through unit typically consists of a set of fans bolted straight to a coil.
A fan coil unit may be concealed or exposed within the room or area that it serves.
An exposed fan coil unit may be wall-mounted, freestanding or ceiling mounted, and will typically include an appropriate enclosure to protect and conceal the fan coil unit itself, with return air grille and supply air diffuser set into that enclosure to distribute the air.
A concealed fan coil unit will typically be installed within an accessible ceiling void or services zone. The return air grille and supply air diffuser, typically set flush into the ceiling, will be ducted to and from the fan coil unit and thus allows a great degree of flexibility for locating the grilles to suit the ceiling layout and/or the partition layout within a space. It is quite common for the return air not to be ducted and to use the ceiling void as a return air plenum.
The coil receives hot or cold water from a central plant, and removes heat from or adds heat to the air through heat transfer. Traditionally, fan coil units can contain their own internal thermostat, or can be wired to operate with a remote thermostat. However, as is common in most modern buildings with a Building Energy Management System (BEMS), control of the fan coil unit will be by a local digital controller or outstation (along with an associated room temperature sensor and control valve actuators) linked to the BEMS via a communication network, and therefore adjustable and controllable from a central point, such as a supervisor's head-end computer.
Fan coil units circulate hot or cold water through a coil in order to condition a space. The unit gets its hot or cold water from a central plant, or mechanical room containing equipment for removing heat from the central building's closed-loop. The equipment used can consist of machines used to remove heat such as a chiller or a cooling tower and equipment for adding heat to the building's water such as a boiler or a commercial water heater.
Hydronic fan coil units can be generally divided into two types: Two-pipe fan coil units or four-pipe fan coil units. Two-pipe fan coil units have one supply and one return pipe. The supply pipe supplies either cold or hot water to the unit depending on the time of year. Four-pipe fan coil units have two supply pipes and two return pipes. This allows either hot or cold water to enter the unit at any given time. Since it is often necessary to heat and cool different areas of a building at the same time, due to differences in internal heat loss or heat gains, the four-pipe fan coil unit is most commonly used.
Fan coil units may be connected to piping networks using various topology designs, such as "direct return", "reverse return", or "series decoupled". See ASHRAE Handbook "2008 Systems & Equipment", Chapter 12.
Depending upon the selected chilled water temperatures and the relative humidity of the space, it is likely that the cooling coil will dehumidify the entering air stream and, as a by-product of this process, at times produce condensate which will need to be carried to drain. The fan coil unit will contain a purpose-designed drip tray with a drain connection for this purpose. The simplest means to drain the condensate from multiple fan coil units is a network of pipework laid to falls to a suitable point. Alternatively, a condensate pump may be employed where space for such gravity pipework is limited.
The fan motors within a fan coil unit are responsible for regulating the desired heating and cooling output of the unit. Different manufacturers employ various methods for controlling the motor speed. Some utilize an AC transformer, adjusting the taps to modulate the power supplied to the fan motor. This adjustment is typically performed during the commissioning stage of building construction and remains fixed for the lifespan of the unit.
Alternatively, certain manufacturers employ custom-wound Permanent Split Capacitor (PSC) motors with speed taps in the windings. These taps are set to the desired speed levels for the specific design of the fan coil unit. To enable local control, a simple speed selector switch (Off-High-Medium-Low) is provided for the occupants of the room. This switch is often integrated into the room thermostat and can be manually set or automatically controlled by a digital room thermostat.
For automatic fan speed and temperature control, Building Energy Management Systems are employed. The fan motors commonly used in these units are typically AC Shaded Pole or Permanent Split Capacitor motors. Recent advancements include the use of brushless DC designs with electronic commutation. Compared to units equipped with asynchronous 3-speed motors, fan coil units utilizing brushless motors can reduce power consumption by up to 70%.
Fan coil units linked to ducted split air conditioning units use refrigerant in the cooling coil instead of chilled coolant and linked to a large condenser unit instead of a chiller. They might also be linked to liquid-cooled condenser units which use an intermediate coolant to cool the condenser using cooling towers.
DC/EC motor powered units
These motors are sometimes called DC motors, sometimes EC motors and occasionally DC/EC motors. DC stands for direct current and EC stands for electronically commutated.
DC motors allow the speed of the fans within a fan coil unit to be controlled by means of a 0-10 Volt input control signal to the motor(s); the transformers and speed switches associated with AC fan coils are not required. Up to a signal voltage of 2.5 Volts (which may vary between fan/motor manufacturers) the fan will remain stopped, but as the signal voltage is increased the fan will seamlessly increase in speed until the maximum is reached at a signal voltage of 10 Volts. Fan coils will generally operate between approximately 4 Volts and 7.5 Volts, because below 4 Volts the air volumes are ineffective and above 7.5 Volts the fan coil is likely to be too noisy for most commercial applications.
The 0-10 Volt signal voltage can be set via a simple potentiometer and left fixed, or it can be delivered to the fan motors by the terminal controller on each of the fan coil units. The former is very simple and cheap, but the latter opens up the opportunity to continuously alter the fan speed depending on various external conditions/influences. These conditions/criteria could be the 'real time' demand for either heating or cooling, occupancy levels, window switches, time clocks or any number of other inputs from either the unit itself, the Building Management System or both.
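The voltage bands described above can be pictured as a simple demand-to-signal mapping. The thresholds and function name in this sketch are illustrative assumptions, since the stall threshold and usable band vary by manufacturer:

```python
def fan_signal_volts(demand_frac, v_min=4.0, v_max=7.5):
    """Map a 0..1 speed demand onto the usable part of the 0-10 V band.
    Zero demand returns 0 V (fan off, below the ~2.5 V stall threshold);
    any positive demand is scaled into the assumed 4-7.5 V working range.
    Out-of-range demand is clamped."""
    demand = min(max(demand_frac, 0.0), 1.0)
    if demand == 0.0:
        return 0.0
    return v_min + demand * (v_max - v_min)

print(fan_signal_volts(0.5))   # 5.75 V for 50% demand
```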
The reason that these DC Fan Coil Units are, despite their apparent relative complexity, becoming more popular is their improved energy efficiency levels compared to their AC motor-driven counterparts of only a few years ago. A straight swap, AC to DC, will reduce electrical consumption by 50% but applying Demand and Occupancy dependent fan speed control can take the savings to as much as 80%. In areas of the world where there are legally enforceable energy efficiency requirements for fan coils (such as the UK), DC Fan Coil Units are rapidly becoming the only choice.
Areas of use
In high-rise buildings, fan coils may be vertically stacked, located one above the other from floor to floor and all interconnected by the same piping loop.
Fan coil units are an excellent delivery mechanism for hydronic chiller boiler systems in large residential and light commercial applications. In these applications the fan coil units are mounted in bathroom ceilings and can be used to provide unlimited comfort zones - with the ability to turn off unused areas of the structure to save energy.
Installation
In high-rise residential construction, typically each fan coil unit requires a rectangular through-penetration in the concrete slab on top of which it sits. Usually, there are either 2 or 4 pipes made of ABS, steel or copper that go through the floor. The pipes are usually insulated with refrigeration insulation, such as acrylonitrile butadiene/polyvinyl chloride (AB/PVC) flexible foam (Rubatex or Armaflex brands) on all pipes, or at least on the chilled water lines to prevent condensate from forming.
Unit ventilator
A unit ventilator is a fan coil unit that is used mainly in classrooms, hotels, apartments and condominium applications. A unit ventilator can be a wall mounted or ceiling hung cabinet, and is designed to use a fan to blow outside air across a coil, thus conditioning and ventilating the space which it is serving.
European market
The European fan coil market consists of roughly one quarter 2-pipe units and three quarters 4-pipe units, and the best-selling product types are "with casing" (35%), "without casing" (28%), "cassette" (18%) and "ducted" (16%).
The market by region was split in 2010 as follows:
See also
Thermal insulation
HVAC
Construction
Intumescent
Firestop
References
Mechanical engineering
Heating, ventilation, and air conditioning
Ventilation fans | Fan coil unit | [
"Physics",
"Engineering"
] | 2,368 | [
"Applied and interdisciplinary physics",
"Mechanical engineering"
] |
14,534,013 | https://en.wikipedia.org/wiki/Sir%20William%20Dunn%20Institute%20of%20Biochemistry | The Sir William Dunn Institute of Biochemistry at Cambridge University was a research institute endowed from the estate of Sir William Dunn, which was the origin of the Cambridge Department of Biochemistry. Created for Frederick Gowland Hopkins on the recommendation of Walter Morley Fletcher, it opened in 1924 and spurred the growth of Hopkins's school of biochemistry. Hopkins's school dominated the discipline of biochemistry from the 1920s through the interwar years and was the source of many leaders of the next generation of biochemists, and the Dunn bequest inaugurated a period of rapid expansion for biochemistry.
Origin of the Institute
In 1918, a trustee of the estate of Sir William Dunn approached a Cambridge biologist, William Bate Hardy, about the possibility of putting some of Dunn's estate toward biomedical science research. Hardy referred the trustee (Charles D. Seligman) to Walter Morley Fletcher, the secretary of the Medical Research Council. The Dunn estate, like much of the philanthropy world, was beginning to look more to "preventive" philanthropy (as opposed to direct aid to the needy) by sponsoring research institutions that could address social ills. Between 1919 and 1925, Fletcher convinced the Dunn trustees to put nearly half a million pounds toward biomedical research.
Fletcher was a long-time friend and institutional ally of Frederick Gowland Hopkins, a pioneering biochemist who was trying to establish "general biochemistry" as a field distinct from either medical physiology or organic chemistry, more a part of biology than medicine. Fletcher lobbied for the Dunn estate to fund Hopkins's proposal, among the over 500 funding proposals submitted. By late 1919, Fletcher was negotiating for a considerable endowment that would allow Hopkins to create an institute solely devoted to biochemistry. The approval of this endowment, ultimately about 210,000 pounds, reversed the declining fortunes of Hopkins's research group, which had been suffering from lack of available academic positions, research space, and able students since World War I. With funding in the works, Hopkins group expanded from 10 researchers in 1920 to 59 in 1925; in 1922 they began using endowment funds and in 1924 the Dunn Institute of Biochemistry opened. Hopkins became the first Sir William Dunn Professor of Biochemistry and head of the new University of Cambridge Department of Biochemistry, and he appointed researchers in a range of specialized fields covering the whole of what he considered the proper, broad domain of biochemistry.
Hopkins's school of biochemistry
Hopkins's school, housed in the Dunn Institute, was both productive and influential. Between World War I and World War II, 40% of the papers in the Biochemical Journal were authored by Hopkins and other Cambridge biochemists. Hopkins's program of "general biochemistry" was unique in having a stable institutional base (unlike in Germany, where there were only a scattered handful of biochemistry professorships) but not being dependent on a medical school (unlike the biochemistry and physiological chemistry departments in the United States).
The Dunn Institute under Hopkins had another unusual feature for the time: Hopkins did not discriminate against hiring Jewish scientists, unlike the large majority of American, British and German universities and medical schools. This may have helped Hopkins assemble such a strong group of researchers, since talented Jewish biochemists had few other options.
See also
Dunn Human Nutrition Unit, another beneficiary of Dunn's will
Notes
References
Kohler, Robert E. "Walter Fletcher, F. G. Hopkins, and the Dunn Institute of Biochemistry: A Case Study in the Patronage of Science". Isis, Vol. 69, No. 3 (1978), pp. 330–355.
Kohler, Robert E. From medical chemistry to biochemistry: The making of a biomedical discipline. Cambridge, England: Cambridge University Press, 1982.
Kornberg, Arthur. For the Love of Enzymes: The Odyssey of a Biochemist. Cambridge, Massachusetts: Harvard University Press, 1989.
Biochemistry research institutes
Biological research institutes in the United Kingdom
Biochemistry, Sir William Dunn Institute of
1924 establishments in England | Sir William Dunn Institute of Biochemistry | [
"Chemistry"
] | 791 | [
"Biochemistry research institutes",
"Biochemistry organizations"
] |
9,376,189 | https://en.wikipedia.org/wiki/RITE%20Method | RITE Method, for Rapid Iterative Testing and Evaluation, typically referred to as "RITE" testing, is an iterative usability method. It was defined by Michael Medlock, Dennis Wixon, Bill Fulton, Mark Terrano and Ramon Romero. It has been publicly championed by Dennis Wixon while working in the games space for Microsoft.
It has many similarities to "traditional" or "discount" usability testing. The tester and team must define a target population for testing, schedule participants to come into the lab, decide on how the users' behaviors will be measured, construct a test script and have participants engage in a verbal protocol (e.g. think aloud). However it differs from these methods in that it advocates that changes to the user interface are made as soon as a problem is identified and a solution is clear. Sometimes this can occur after observing as few as one participant. Once the data for a participant has been collected the usability engineer and team decide if they will be making any changes to the prototype prior to the next participant. The changed interface is then tested with the remaining users.
The philosophy behind the RITE method is described as: "1) once you find a problem, solve it as soon as you can, and 2) make the decision makers part of the research team." In this way it is a bridge between a strict research method and a design method, and in many ways it represents a participatory design method. Since its official definition and naming, its use has rapidly expanded to many other software industries.
See also
Acceptance testing
Human–computer interaction
Human factors
Interaction design
Software testing
Usability
Usability testing
User-centered design
References
Usability | RITE Method | [
"Engineering"
] | 342 | [
"Software engineering",
"Software engineering stubs"
] |
9,376,539 | https://en.wikipedia.org/wiki/HP%20Kittyhawk | The Hewlett-Packard HP3013/3014, nicknamed Kittyhawk, was a hard disk drive introduced by Hewlett-Packard on June 9, 1992. At the time of its introduction, it was the smallest hard disk drive in the world, being only 1.3-inches in size. The drive was created by a collaboration between Hewlett-Packard, AT&T, and Citizen Watch.
History
It was the first commercially produced hard drive in a 1.3-inch form factor. The original implementation (model 3013) had a capacity of 20 MB. A 40 MB model called Kittyhawk II (model 3014) was eventually introduced, with a retail price of $499. Both models have IDE interfaces; it appears that some variations of the drive were produced with a PC Card interface as well. The drive weighed about 1 ounce (28 g). It was manufactured by Citizen Corporation, at the time a leader in small device manufacturing. The drive featured a number of unique technologies, including a built-in accelerometer that protected it from falls. Kittyhawk was claimed to be able to survive a 3-foot drop onto concrete while operating, without loss of data.
Despite its remarkable characteristics, Kittyhawk turned out to be a commercial failure. It was not in demand from the notebook industry due to its inferior cost per megabyte and capacity. A few OEM suppliers adopted the drive, including the early pen-based computer maker EO, which ran the GO operating system. The handheld market failed to take off in the early 1990s as expected. Many potential markets, such as the video game console market, were missed due to the hard drive's high production costs.
Kittyhawk was discontinued by HP in September 1994. Approximately 160,000 units were sold, compared to the projected two-year sales of 700,000 units. In 1996, largely due to Kittyhawk's failure, Hewlett-Packard closed its Disk Memory Division and exited the disk drive business.
The story of HP Kittyhawk is described in a Harvard Business School business case "Hewlett-Packard: The Flight of the Kittyhawk", and is a case study in the book The Innovator's Dilemma by Clayton M. Christensen.
See also
Microdrive - A 1-inch hard disk drive produced by IBM and Hitachi, released in 1999.
List of disk drive form factors
References
External links
HP Kittyhawk 1.3" Microdrive
Harvard Business Online: Hewlett-Packard: The Flight of the Kittyhawk
Hard disk drives | HP Kittyhawk | [
"Technology"
] | 523 | [
"Computing stubs",
"Computer hardware stubs"
] |
9,376,665 | https://en.wikipedia.org/wiki/Petrographic%20microscope | A petrographic microscope is a type of optical microscope used to identify rocks and minerals in thin sections. The microscope is used in optical mineralogy and petrography, a branch of petrology which focuses on detailed descriptions of rocks. The method includes aspects of polarized light microscopy (PLM).
Description
Depending on the grade of observation required, petrographic microscopes are derived from conventional brightfield microscopes of similar basic capabilities by:
Adding a Nicol prism polarizer filter to the light path beneath the sample slide
Replacing the normal stage with a circular rotating stage (typically graduated with vernier scales for reading orientations to better than 1 degree of arc)
Adding a second rotatable and removable Nicol prism filter, called the analyzer, to the light path between objective and eyepiece
Adding a phase telescope, also known as a Bertrand lens, which allows the viewer to see conoscopic interference patterns
Adding a slot for insertion of wave plates
Petrographic microscopes are constructed with optical parts that do not add unwanted polarizing effects due to strained glass, or polarization by reflection in prisms and mirrors. These special parts add to the cost and complexity of the microscope. However, a "simple polarizing" microscope is easily made by adding inexpensive polarizing filters to a standard biological microscope, often with one in a filter holder beneath the condenser, and a second inserted beneath the head or eyepiece. These can be sufficient for many non-quantitative purposes.
The two Nicol prisms (occasionally referred to as nicols) of the petrographic microscope have their polarizing planes oriented perpendicular to one another. When only an isotropic material such as air, water, or glass exists between the filters, all light is blocked, but most crystalline materials and minerals change the polarizing light directions, allowing some of the altered light to pass through the analyzer to the viewer. Using one polarizer makes it possible to view the slide in plane polarized light; using two allows for analysis under cross polarized light. A particular light pattern on the upper lens surface of the objectives is created as a conoscopic interference pattern (or interference figure) characteristic of uniaxial and biaxial minerals, and produced with convergent polarized light. To observe the interference figure, true petrographic microscopes usually include an accessory called a Bertrand lens, which focuses and enlarges the figure. It is also possible to remove an eyepiece lens to make a direct observation of the objective lens surface.
In addition to modifications of the microscope's optical system, petrographic microscopes allow for the insertion of specially-cut oriented filters of biaxial minerals (the quartz wedge, quarter-wave mica plate and half-wave mica plate), into the optical train between the polarizers to identify positive and negative birefringence, and in extreme cases, the mineral order when needed.
History
As early as 1808, the French physicist Étienne Louis Malus discovered the refraction and polarization of light. William Nicol invented a prism for polarization in 1829, which was an indispensable part of the polarizing microscope for over 100 years. Later the Nicol prisms were replaced by cheaper polarizing filters.
The first complete polarizing microscope was built by Giovanni Battista Amici in 1830.
Rudolf Fuess built the first polarization microscope specifically for petrographic purposes in 1875. This was described by Harry Rosenbusch in the yearbook for mineralogy.
References
Microscopes
Optical mineralogy | Petrographic microscope | [
"Chemistry",
"Technology",
"Engineering"
] | 715 | [
"Microscopes",
"Measuring instruments",
"Microscopy"
] |
9,377,328 | https://en.wikipedia.org/wiki/CD19 | B-lymphocyte antigen CD19, also known as CD19 molecule (Cluster of Differentiation 19), B-Lymphocyte Surface Antigen B4, T-Cell Surface Antigen Leu-12 and CVID3 is a transmembrane protein that in humans is encoded by the gene CD19. In humans, CD19 is expressed in all B lineage cells. Contrary to some early doubts, human plasma cells do express CD19. CD19 plays two major roles in human B cells: on the one hand, it acts as an adaptor protein to recruit cytoplasmic signaling proteins to the membrane; on the other, it works within the CD19/CD21 complex to decrease the threshold for B cell receptor signaling pathways. Due to its presence on all B cells, it is a biomarker for B lymphocyte development and lymphoma diagnosis, and can be utilized as a target for leukemia immunotherapies.
Structure
In humans, CD19 is encoded by the 7.41 kilobase CD19 gene located on the short arm of chromosome 16. It contains at least fifteen exons, four of which encode the extracellular domain and nine the cytoplasmic domain; the encoded protein is 556 amino acids long. Experiments show that there are multiple mRNA transcripts; however, only two have been isolated in vivo.
CD19 is a 95 kDa Type I transmembrane glycoprotein in the immunoglobulin superfamily (IgSF) with two extracellular C2-set Ig-like domains and a relatively large, 240 amino acid, cytoplasmic tail that is highly conserved among mammalian species. The extracellular C2-type Ig-like domains are divided by a potential disulfide linked non-Ig-like domain and N-linked carbohydrate addition sites. The cytoplasmic tail contains at least nine tyrosine residues near the C-terminus. Within these residues, Y391, Y482, and Y513 have been shown to be essential to the biological functions of CD19. Phenylalanine substitution for tyrosine at Y482 and Y513 leads to the inhibition of phosphorylation at the other tyrosines.
Expression
CD19 is widely expressed during all phases of B cell development until terminal differentiation into plasma cells. During B cell lymphopoiesis, CD19 surface expression starts during immunoglobulin (Ig) gene rearrangement, which coincides with B lineage commitment from the hematopoietic stem cell. Throughout development, the surface density of CD19 is highly regulated. CD19 expression in mature B cells is threefold higher than that in immature B cells. CD19 is expressed on all normal, mitogen-stimulated, and malignant B cells, excluding plasma cells. CD19 expression is even maintained in B lineage cells that undergo neoplastic transformation. Because of its ubiquity on B cells, it can function as a B cell marker and a target for immunotherapies targeting neoplastic lymphocytes.
Function
Role in development & survival
Decisions to live, proliferate, differentiate, or die are continuously being made during B cell development. These decisions are tightly regulated through B cell receptor (BCR) interactions and signaling. The presence of a functional BCR is necessary during antigen-dependent differentiation and for continued survival in the peripheral immune system. Essential to the functionality of a BCR is the presence of CD19. Experiments using CD19 knockout mice found that CD19 is essential for B cell differentiative events including the formation of B-1, germinal center, and marginal zone (MZ) B cells. Analysis of mixed bone marrow chimeras suggest that prior to an initial antigen encounter, CD19 promotes the survival of naive recirculating B cells and increases the in vivo life span of B cells in the peripheral B cell compartment. Ultimately, CD19 expression is integral to the propagation of BCR-induced survival signals and the maintenance of homeostasis through tonic signaling.
BCR-independent
Paired box transcription factor 5 (PAX5) plays a major role in B cell differentiation from pro B cell to mature B cell, the point at which the expression of non-B-lineage genes is permanently blocked. Part of B cell differentiation is controlling c-MYC protein stability and steady-state levels through CD19, which acts as a PAX5 target and downstream effector of the PI3K-AKT-GSK3β axis. CD19 signaling, independent of BCR functions, increases c-MYC protein stability. Using a loss of function approach, researchers found reduced MYC levels in B cells of CD19 knockdown mice. CD19 signaling involves the recruitment and activation of phosphoinositide 3-kinase (PI3K) and later downstream, the activation of protein kinase B (Akt). The Akt-GSK3β axis is necessary for MYC activation by CD19 in BCR-negative cells, with higher levels of Akt activation corresponding to higher levels of MYC. CD19 is a crucial BCR-independent regulator of MYC-driven neoplastic growth in B cells since the CD19-MYC axis promotes cell expansion in vitro and in vivo.
CD19/CD21 complex
On the cell surface, CD19 is the dominant signaling component of a multimolecular complex including CD21 (CR2, a complement receptor), TAPA-1 (a tetraspanin membrane protein), and CD225. The CD19/CD21 complex arises from C3d binding to CD21; however, CD19 does not require CD21 for signal transduction. CD81, attached to CD19, is a part of the tetraspanin web, acts as a chaperone protein, and provides docking sites for molecules in various different signal transduction pathways.
BCR-dependent
While colligated with the BCR, the CD19/CD21 complex bound to the antigen-complement complex can decrease the threshold for B cell activation. CD21, complement receptor 2, can bind fragments of C3 that have covalently attached to glycoconjugates during complement activation. Recognition of an antigen by the complement system enables the CD19/CD21 complex and associated intracellular signaling molecules to crosslink to the BCR. This results in phosphorylation of the cytoplasmic tail of CD19 by BCR-associated tyrosine kinases, followed by the binding of additional Src-family kinases, augmentation of signaling through the BCR, and recruitment of PI3K. The localization of PI3K initiates another signaling pathway leading to Akt activation. Varying expression of CD19 on the cell surface modulates tyrosine phosphorylation and Akt kinase signaling and, by extension, MHC class II mediated signaling.
Activated spleen tyrosine kinase (Syk) leads to phosphorylation of the scaffold protein, BLNK, which provides multiple sites for tyrosine phosphorylation and recruits SH2-containing enzymes and adaptor proteins that can form various multiprotein signaling complexes. In this way, CD19 can modulate the threshold for B cell activation. This is important during primary immune response, prior to affinity maturation, amplifying the response of low affinity BCRs to low concentrations of antigen.
Interactions
CD19 has been shown to interact with:
CD81
CD82
Complement receptor 2
VAV2
In disease
Autoimmunity & immunodeficiency
Mutations in CD19 are associated with severe immunodeficiency syndromes characterized by diminished antibody production. Additionally, mutations in CD21 and CD81 can also underlie primary immunodeficiency due to their role in the CD19/CD21 complex formation. These mutations can lead to hypogammaglobulinaemia as a result of poor response to antigen and defective immunological memory. Researchers found changes in the composition of the B lymphocyte population and reduced amounts of switched memory B cells with high terminal differentiation potential in patients with Down syndrome. CD19 has also been implicated in autoimmune diseases, including rheumatoid arthritis and multiple sclerosis, and may be a useful treatment target.
Mouse model research shows that CD19 deficiency can lead to hyporesponsiveness to transmembrane signals and weak T cell dependent humoral response, that in turn leads to an overall impaired humoral immune response. Additionally CD19 plays a role in modulating MHC Class II expression and signaling, which can be affected by mutations. CD19 deficient B cells exhibit selective growth disadvantage; therefore, it is rare for CD19 to be absent in neoplastic B cells, as it is essential for development.
Cancer
Since CD19 is a marker of B cells, the protein has been used to diagnose cancers that arise from this type of cell - notably B cell lymphomas, acute lymphoblastic leukemia (ALL), and chronic lymphocytic leukemia (CLL). The majority of B cell malignancies express normal to high levels of CD19. The most current experimental anti-CD19 immunotoxins in development work by exploiting the widespread presence of CD19 on B cells, with expression highly conserved in most neoplastic B cells, to direct treatment specifically towards B-cell cancers. However, it is now emerging that the protein plays an active role in driving the growth of these cancers, most intriguingly by stabilizing the concentrations of the MYC oncoprotein. This suggests that CD19 and its downstream signaling may be a more attractive therapeutic target than initially suspected.
CD19-targeted therapies based on T cells that express CD19-specific chimeric antigen receptors (CARs) have been utilized for their antitumor abilities in patients with CD19+ lymphoma and leukemia, first against Non-Hodgkin's Lymphoma (NHL), then against CLL in 2011, and then against ALL in 2013. CAR-19 T cells are genetically modified T cells that express a targeting moiety on their surface that confers T cell receptor (TCR) specificity towards CD19+ cells. Engagement of CD19 activates the TCR signaling cascade, leading to proliferation, cytokine production, and ultimately lysis of the target cells, which in this case are CD19+ B cells. CAR-19 T cells are more effective than anti-CD19 immunotoxins because they can proliferate and remain in the body for a longer period of time. This comes with a caveat: CD19− immune escape, facilitated by splice variants, point mutations, and lineage switching, can emerge as a major form of therapeutic resistance in patients with ALL.
References
Further reading
External links
Mouse CD Antigen Chart
Human CD Antigen Chart
Clusters of differentiation
Biomarkers | CD19 | [
"Biology"
] | 2,280 | [
"Biomarkers"
] |
9,377,615 | https://en.wikipedia.org/wiki/Chromosome%20combing | Chromosome combing (also known as molecular combing or DNA combing) is a technique used to produce an array of uniformly stretched DNA that is then highly suitable for nucleic acid hybridization studies such as fluorescent in situ hybridisation (FISH) which benefit from the uniformity of stretching, the easy access to the hybridisation target sequences, and the resolution offered by the large distance between two probes, which is due to the stretching of the DNA by a factor of 1.5 times the crystallographic length of DNA.
DNA in solution (i.e. with a randomly coiled structure) is stretched by retracting the meniscus of the solution at a constant rate (typically 300 μm/s). The ends of DNA strands, which are thought to be frayed (i.e. open and exposing polar groups), bind to ionisable groups coating a silanized glass plate at a pH below the pKa of the ionisable groups (ensuring they are charged enough to interact with the ends of DNA). The rest of the DNA, which is mostly dsDNA, cannot form these interactions (aside from a few "touch down" segments along the length of the DNA strand) so is available for hybridisation to probes. As the meniscus retracts, surface retention creates a force that acts on the DNA to retain it in the liquid phase; however, this force is weaker than the strength of the DNA's attachment, so the DNA is stretched as it enters the air phase. Because the force acts only in the locality of the air/liquid interface, it is invariant to the length or conformation of the DNA in solution, so DNA of any length is stretched to the same extent as the meniscus retracts. As this stretching is constant along the length of a DNA molecule, distance along the strand can be related to base content; 1 μm is approximately equivalent to 2 kb.
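Because the stretching is uniform, the 1 μm ≈ 2 kb calibration mentioned above gives a direct conversion between distances measured on the slide and genomic distances. A minimal sketch (the calibration constant is the approximation quoted above; function names are illustrative):

```python
# Convert distances measured on combed DNA (in micrometres) to genomic
# distances (in kilobases), using the ~2 kb per micrometre calibration
# that uniform combing provides.

KB_PER_UM = 2.0  # approximate calibration for combed (1.5x-stretched) DNA

def um_to_kb(distance_um: float) -> float:
    """Genomic distance in kb for a distance measured in micrometres."""
    return distance_um * KB_PER_UM

def kb_to_um(length_kb: float) -> float:
    """Expected physical length on the slide for a fragment of given size."""
    return length_kb / KB_PER_UM

# A 100 um gap between two hybridised probes corresponds to ~200 kb,
# and a 400 kb molecule should span ~200 um on the slide.
print(um_to_kb(100.0))  # 200.0
print(kb_to_um(400.0))  # 200.0
```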
DNA regions of interest are observed by hybridising them with probes labelled with haptens such as biotin; these can then be bound by one or more layers of fluorochrome-associated ligands (such as immunofluorescence antibodies). Multicolour tagging is also possible. This has several potential uses, typically as a high-resolution physical mapping technique (e.g. for positional cloning), an example of which was the correct mapping of 200 kb of the CAPN3 gene region, or the mapping of non-overlapping sequences (since the distance between two probes can be accurately measured). It is therefore useful for finding exons, microdeletions, amplifications, or rearrangements. Before the combing improvement, the resolution of FISH was too low to be of use for such applications. With this technique, the resolution of FISH is theoretically limited only by the resolution of the epifluorescence microscope; in practice, resolutions of around 2 μm are obtained, for DNA molecules usually 200–600 kb long (though combing-FISH has been used with some success on molecules in excess of 1 Mb long), and there may be room for improvement through optimisation. Since DNA analyses using this technique are single-molecule, genomes from different cells can be compared to find anomalies, with implications for diagnosis of cancer and other genetic alterations.
Chromosome combing is also used to study DNA replication, a highly regulated process that is reliant on a specific program of temporal and spatial distribution of activation of origins of replication. Each origin occupies a distinct genetic locus and must fire only once per cell cycle. Chromosome combing allows a genome-wide view of the firing of origins and propagation of replication forks. As no assumptions are made about the sequence of the origins, this technique is particularly useful for mapping origins in eukaryotes, which are not thought to have precisely defined initiation sequences.
Strategies involving combing recently replicated DNA typically involve incorporating modified nucleotides (such as BrdU, bromodeoxyuridine) into the nascent DNA, then fluorescently detecting it. As replication forks spread bidirectionally from origins of replication at (approximately) equal speeds, then origin position can be inferred. Replacing the modified nucleotide pool with a different type of modified nucleotide after a certain amount of time allows development of a time-resolved picture of the firing of sites, and the kinetics of replication forks. Pause sites can be identified, merged replication forks resolved, and the frequency of origin firings in different time periods to be studied.
Firing frequencies have been shown in in vitro studies of Xenopus laevis egg extract to increase as S phase progresses. In another study on Epstein-Barr virus episomes, hybridised probes were used to visualise the regional distribution of firing events; a particular zone showed preference for firing, whilst a few pause sites were also inferred.
Chromosome combing is performed by the company Genomic Vision, based in Paris.
References
Biochemistry
Genetics techniques | Chromosome combing | [
"Chemistry",
"Engineering",
"Biology"
] | 1,022 | [
"Genetics techniques",
"Biochemistry",
"Genetic engineering",
"nan"
] |
9,377,661 | https://en.wikipedia.org/wiki/Support%20function | In mathematics, the support function hA of a non-empty closed convex set A in R^n
describes the (signed) distances of supporting hyperplanes of A from the origin. The support function is a convex function on R^n.
Any non-empty closed convex set A is uniquely determined by hA. Furthermore, the support function, as a function of the set A, is compatible with many natural geometric operations, like scaling, translation, rotation and Minkowski addition.
Due to these properties, the support function is one of the most central basic concepts in convex geometry.
Definition
The support function hA of a non-empty closed convex set A in R^n is given by

hA(x) = sup{ a · x : a ∈ A },   x ∈ R^n.

Its interpretation is most intuitive when x is a unit vector:
by definition, A is contained in the closed half space

H−(x) = { y ∈ R^n : y · x ≤ hA(x) }

and there is at least one point of A in the boundary

H(x) = { y ∈ R^n : y · x = hA(x) }

of this half space. The hyperplane H(x) is therefore called a supporting hyperplane
with exterior (or outer) unit normal vector x.
The word exterior is important here, as
the orientation of x plays a role: the set H(x) is in general different from H(−x).
Now hA(x) is the (signed) distance of H(x) from the origin.
Examples
The support function of a singleton A = {a} is hA(x) = a · x.
The support function of the Euclidean unit ball B is hB(x) = ‖x‖2, where ‖·‖2 is the 2-norm.
If A is a line segment through the origin with endpoints −a and a, then hA(x) = |a · x|.
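These closed-form examples can be checked numerically: for a finite point set P, the support function of its convex hull is simply the maximum of a · x over a ∈ P, and the unit ball can be approximated by densely sampled unit vectors. A sketch (NumPy; helper names are illustrative):

```python
import numpy as np

def support(points, x):
    """Support function of conv(points): max over a of <a, x>."""
    return max(np.dot(a, x) for a in points)

x = np.array([3.0, -4.0])
a = np.array([1.0, 2.0])

# Singleton {a}: h(x) = <a, x>
assert support([a], x) == np.dot(a, x)

# Segment [-a, a]: h(x) = |<a, x>|
assert support([-a, a], x) == abs(np.dot(a, x))

# Unit ball: h(x) = ||x||_2, approximated by a dense set of unit vectors
thetas = np.linspace(0.0, 2.0 * np.pi, 10_000, endpoint=False)
circle = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)
approx = support(circle, x)
assert abs(approx - np.linalg.norm(x)) < 1e-4  # ||x|| = 5
```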
Properties
As a function of x
The support function of a compact nonempty convex set is real valued and continuous, but if the
set is closed and unbounded, its support function is extended real valued (it takes the value
+∞ for some x). As any nonempty closed convex set is the intersection of
its supporting half spaces, the function hA determines A uniquely.
This can be used to describe certain geometric properties of convex sets analytically.
For instance, a set A is point symmetric with respect to the origin if and only if hA
is an even function.
In general, the support function is not differentiable.
However, directional derivatives exist and yield support functions of support sets. If A is compact and convex,
and hA'(u;x) denotes the directional derivative of
hA at u ≠ 0 in direction x,
we have

hA'(u; x) = hB(x)  with  B = A ∩ H(u),

that is, the directional derivatives yield the support function of the support set A ∩ H(u).
Here H(u) is the supporting hyperplane of A with exterior normal vector u, defined
above. If A ∩ H(u) is a singleton {y}, say, it follows that the support function is differentiable at
u and its gradient coincides with y. Conversely, if hA is differentiable at u, then A ∩ H(u) is a singleton. Hence hA is differentiable at all points u ≠ 0
if and only if A is strictly convex (the boundary of A does not contain any line segments).
More generally, when A ⊂ R^n is convex and closed then for any u ∈ R^n,

∂hA(u) = { a ∈ A : a · u = hA(u) } = A ∩ H(u),

where ∂hA(u) denotes the set of subgradients of hA at u.
It follows directly from its definition that the support function is positive homogeneous:

hA(λx) = λ hA(x),   λ ≥ 0,

and subadditive:

hA(x + y) ≤ hA(x) + hA(y).
It follows that hA is a convex function.
It is crucial in convex geometry that these properties characterize support functions:
Any positive homogeneous, convex, real valued function on R^n is the
support function of a nonempty compact convex set. Several proofs are known,
one is using the fact that the Legendre transform of a positive homogeneous, convex, real valued function
is the (convex) indicator function of a compact convex set.
Many authors restrict the support function to the Euclidean unit sphere
and consider it as a function on S^(n−1).
The homogeneity property shows that this restriction determines the
support function on R^n, as defined above.
As a function of A
The support functions of a dilated or translated set are closely related to the original set A:

hλA(x) = λ hA(x),   λ ≥ 0,

and

hA+b(x) = hA(x) + b · x.

The latter generalises to

hA+B(x) = hA(x) + hB(x),

where A + B denotes the Minkowski sum:

A + B = { a + b ∈ R^n : a ∈ A, b ∈ B }.
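The Minkowski additivity hA+B = hA + hB is easy to verify numerically for polytopes, since the support function of the convex hull of a finite point set is a maximum of inner products. A sketch (NumPy; helper names are illustrative):

```python
import numpy as np

def support(points, x):
    """Support function of conv(points): max over a of <a, x>."""
    return max(np.dot(a, x) for a in points)

def minkowski_sum(P, Q):
    """All pairwise sums {p + q : p in P, q in Q}."""
    return [p + q for p in P for q in Q]

A = [np.array(v, dtype=float) for v in [(0, 0), (1, 0), (0, 1)]]
B = [np.array(v, dtype=float) for v in [(2, 2), (-1, 3)]]

# h_{A+B}(x) = h_A(x) + h_B(x) for every direction x, since the maximum
# of <p + q, x> splits into independent maxima over p and over q.
rng = np.random.default_rng(0)
for _ in range(100):
    x = rng.normal(size=2)
    lhs = support(minkowski_sum(A, B), x)
    rhs = support(A, x) + support(B, x)
    assert abs(lhs - rhs) < 1e-9
```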
The Hausdorff distance dH(A, B)
of two nonempty compact convex sets A and B can be expressed in terms of support functions:

dH(A, B) = sup{ |hA(u) − hB(u)| : u ∈ S^(n−1) } = ‖hA − hB‖∞,

where, on the right hand side, the uniform norm on the unit sphere is used.
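This identity turns the Hausdorff distance between convex bodies into a sup-norm computation over directions, which can be approximated by sampling the unit sphere. A sketch for the unit ball versus the square [−1, 1]^2 in the plane, whose exact Hausdorff distance is √2 − 1 (NumPy; illustrative only):

```python
import numpy as np

# d_H between the unit ball B and the square S = [-1, 1]^2, computed as
# the sup-norm of h_S - h_B over sampled unit directions. Exact value:
# max(|cos t| + |sin t|) - 1 = sqrt(2) - 1, attained at the corners.
thetas = np.linspace(0.0, 2.0 * np.pi, 100_000, endpoint=False)
u1, u2 = np.cos(thetas), np.sin(thetas)

h_ball = np.ones_like(thetas)        # h_B(u) = ||u|| = 1 on unit vectors
h_square = np.abs(u1) + np.abs(u2)   # support function of [-1, 1]^2

d_H = np.max(np.abs(h_square - h_ball))
assert abs(d_H - (np.sqrt(2) - 1)) < 1e-6
```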
The properties of the support function as a function of the set A are sometimes summarized in saying
that A ↦ hA maps the family of non-empty
compact convex sets to the cone of all real-valued continuous functions on the sphere whose positive
homogeneous extension is convex. Abusing terminology slightly,
A ↦ hA is sometimes called linear, as it respects Minkowski addition, although it is not
defined on a linear space, but rather on an (abstract) convex cone of nonempty compact convex sets.
The mapping A ↦ hA is an isometry between this cone, endowed with the Hausdorff metric, and
a subcone of the family of continuous functions on S^(n−1) with the uniform norm.
Variants
In contrast to the above, support functions are sometimes defined on the boundary of A rather than on
S^(n−1), under the assumption that there exists a unique exterior unit normal at each boundary point.
Convexity is not needed for the definition.
For an oriented regular surface, M, with a unit normal vector, N, defined everywhere on its surface, the support function
is then defined by

h(x) = x · N(x),   x ∈ M.

In other words, for any x ∈ M, this support function gives the
signed distance from the origin of the unique hyperplane that touches M in x.
See also
Barrier cone
Supporting functional
References
Convex geometry
Types of functions | Support function | [
"Mathematics"
] | 1,081 | [
"Mathematical objects",
"Functions and mappings",
"Types of functions",
"Mathematical relations"
] |
9,377,662 | https://en.wikipedia.org/wiki/Codex%20Mexicanus | The Codex Mexicanus is an early colonial Mexican pictorial manuscript.
The Codex can be divided into several sections:
The saints, the European calendar and zodiac.
The Aztec calendar.
Accounts in the Aztec pictographic writing system.
A family tree of the rulers of Mexico.
The history of the Mexica from their departure from Aztlan.
Colonial history.
Two Christian scenes: the Temptation of Christ and the Adoration.
A tonalamatl. This last section is incomplete.
It is currently held in the Bibliothèque nationale de France in Paris.
See also
Aztec codices
Codex Vaticanus B
References
External links
High Definition scans of the codex at the French National Library
Codices
Mesoamerican pictorial manuscripts
Mexicanus
16th century in the Aztec civilization
16th century in Mexico
16th century in New Spain
Pictograms
Astrological texts
Bibliothèque nationale de France collections | Codex Mexicanus | [
"Mathematics"
] | 177 | [
"Symbols",
"Pictograms"
] |
9,378,673 | https://en.wikipedia.org/wiki/De%20novo%20protein%20structure%20prediction | In computational biology, de novo protein structure prediction refers to an algorithmic process by which protein tertiary structure is predicted from its amino acid primary sequence. The problem itself has occupied leading scientists for decades while still remaining unsolved. According to Science, the problem remains one of the top 125 outstanding issues in modern science. At present, some of the most successful methods have a reasonable probability of predicting the folds of small, single-domain proteins within 1.5 angstroms over the entire structure.
De novo methods, a term first coined by William DeGrado, tend to require vast computational resources, and have thus only been carried out for relatively small proteins. De novo protein structure modeling is distinguished from Template-based modeling (TBM) by the fact that no solved homologue to the protein of interest is used, making efforts to predict protein structure from amino acid sequence exceedingly difficult. Prediction of protein structure de novo for larger proteins will require better algorithms and larger computational resources such as those afforded by either powerful supercomputers (such as Blue Gene or MDGRAPE-3) or distributed computing projects (such as Folding@home, Rosetta@home, the Human Proteome Folding Project, or Nutritious Rice for the World). Although computational barriers are vast, the potential benefits of structural genomics (by predicted or experimental methods) to fields such as medicine and drug design make de novo structure prediction an active research field.
Background
Currently, the gap between known protein sequences and confirmed protein structures is immense. At the beginning of 2008, only about 1% of the sequences listed in the UniProtKB database corresponded to structures in the Protein Data Bank (PDB), leaving a gap between sequence and structure of approximately five million. Experimental techniques for determining tertiary structure have faced serious bottlenecks in their ability to determine structures for particular proteins. For example, whereas X-ray crystallography has been successful in crystallizing approximately 80,000 cytosolic proteins, it has been far less successful in crystallizing membrane proteins – approximately 280. In light of experimental limitations, devising efficient computer programs to close the gap between known sequence and structure is believed to be the only feasible option.
De novo protein structure prediction methods attempt to predict tertiary structures from sequences based on general principles that govern protein folding energetics and/or statistical tendencies of conformational features that native structures acquire, without the use of explicit templates. Research into de novo structure prediction has been primarily focused into three areas: alternate lower-resolution representations of proteins, accurate energy functions, and efficient sampling methods.
A general paradigm for de novo prediction involves sampling conformation space, guided by scoring functions and other sequence-dependent biases, such that a large set of candidate ("decoy") structures is generated. Native-like conformations are then selected from these decoys using scoring functions as well as conformer clustering. High-resolution refinement is sometimes used as a final step to fine-tune native-like structures. There are two major classes of scoring functions. Physics-based functions are based on mathematical models describing aspects of the known physics of molecular interaction. Knowledge-based functions are formed with statistical models capturing aspects of the properties of native protein conformations.
Amino acid sequence determines tertiary protein structure
Several lines of evidence have been presented in favor of the notion that primary protein sequence contains all the information required for overall three-dimensional protein structure, making the idea of a de novo protein prediction possible. First, proteins with different functions usually have different amino acid sequences. Second, several different human diseases, such as Duchenne muscular dystrophy, can be linked to loss of protein function resulting from a change in just a single amino acid in the primary sequence. Third, proteins with similar functions across many different species often have similar amino acid sequences. Ubiquitin, for example, is a protein involved in regulating the degradation of other proteins; its amino acid sequence is nearly identical in species as far separated as Drosophila melanogaster and Homo sapiens. Fourth, by thought experiment, one can deduce that protein folding must not be a completely random process and that information necessary for folding must be encoded within the primary structure. For example, if we assume that each of 100 amino acid residues within a small polypeptide could take up 10 different conformations on average, giving 10^100 different conformations for the polypeptide. If one possible conformation was tested every 10^-13 second, then it would take about 10^77 years to sample all possible conformations. However, proteins are properly folded within the body on short timescales all the time, meaning that the process cannot be random and, thus, can potentially be modeled.
One of the strongest lines of evidence for the supposition that all the relevant information needed to encode protein tertiary structure is found in the primary sequence was demonstrated in the 1950s by Christian Anfinsen. In a classic experiment, he showed that ribonuclease A could be entirely denatured by being submerged in a solution of urea (to disrupt stabilizing hydrophobic bonds) in the presence of a reducing agent (to cleave stabilizing disulfide bonds). Upon removal of the protein from this environment, the denatured and functionless ribonuclease protein spontaneously refolded and regained function, demonstrating that protein tertiary structure is encoded in the primary amino acid sequence. Had the protein re-formed randomly, over one hundred different combinations of its four disulfide bonds could have formed. However, in the majority of cases proteins will require the presence of molecular chaperones within the cell for proper folding. The overall shape of a protein may be encoded in its amino acid sequence, but its folding may depend on chaperones for assistance.
Successful de novo modeling requirements
De novo conformation predictors usually function by producing candidate conformations (decoys) and then choosing amongst them based on their thermodynamic stability and energy state. Most successful predictors will have the following three factors in common:
1) An accurate energy function that corresponds the most thermodynamically stable state to the native structure of a protein
2) An efficient search method capable of quickly identifying low-energy states through conformational search
3) The ability to select native-like models from a collection of decoy structures
De novo programs will search three-dimensional space and, in the process, produce candidate protein conformations. As a protein approaches its correctly folded, native state, entropy and free energy will decrease. Using this information, de novo predictors can discriminate amongst decoys. Specifically, de novo programs will select possible conformations with lower free energies – which are more likely to be correct than those structures with higher free energies. As stated by David Baker regarding how his de novo Rosetta predictor works, “during folding, each local segment of the chain flickers between a different subset of local conformations…folding to the native structure occurs when the conformations adopted by the local segments and their relative orientations allow…low energy features of native protein structures. In the Rosetta algorithm…the program then searches for the combination of these local conformations that has the lowest overall energy.”
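The generate-decoys-then-select-by-energy paradigm can be illustrated with a deliberately toy example: random "conformations" are scored with a made-up surrogate energy and the lowest-energy candidate is kept. This sketches the selection logic only, not Rosetta's actual energy function or move set:

```python
import random

# Toy decoy selection: generate candidate "conformations" (here just
# vectors of backbone torsion angles), score each with a stand-in energy
# function, and keep the lowest-energy decoy. The energy function below
# is an invented surrogate, not a real force field.
random.seed(42)
N_RESIDUES, N_DECOYS = 20, 500

def toy_energy(torsions):
    # Pretend the "native" state prefers torsions near -60 degrees
    # (roughly alpha-helical phi); penalise deviation quadratically.
    return sum((t + 60.0) ** 2 for t in torsions)

decoys = [[random.uniform(-180.0, 180.0) for _ in range(N_RESIDUES)]
          for _ in range(N_DECOYS)]
best = min(decoys, key=toy_energy)

# The selected decoy scores no worse than any other candidate.
assert all(toy_energy(best) <= toy_energy(d) for d in decoys)
```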
However, some de novo methods work by first enumerating through the entire conformational space using a simplified representation of a protein structure, and then select the ones that are most likely to be native-like. An example of this approach is one based on representing protein folds using tetrahedral lattices and building all atoms models on top of all possible conformations obtained using the tetrahedral representation. This approach was used successfully at CASP3 to predict a protein fold whose topology had not been observed before by Michael Levitt's team.
By developing the QUARK program, Xu and Zhang showed that ab initio structures of some proteins can be successfully constructed through a knowledge-based force field.
Prediction strategies
If a protein of known tertiary structure shares at least 30% of its sequence with a potential homolog of undetermined structure, comparative methods that overlay the putative unknown structure with the known can be utilized to predict the likely structure of the unknown. However, below this threshold three other classes of strategy are used to determine possible structure from an initial model: ab initio protein prediction, fold recognition, and threading.
Ab Initio Methods: In ab initio methods, an initial effort to elucidate secondary structures (alpha helix, beta sheet, beta turn, etc.) from primary structure is made by utilization of physicochemical parameters and neural net algorithms. From that point, algorithms predict tertiary folding. One drawback to this strategy is that it is not yet capable of incorporating the locations and orientation of amino acid side chains.
Fold Recognition: In fold recognition strategies, a prediction of secondary structure is first made and then compared to either a library of known protein folds, such as CATH or SCOP, or what is known as a "periodic table" of possible secondary structure forms. A confidence score is then assigned to likely matches.
Threading: In threading strategies, the fold recognition technique is expanded further. In this process, empirically based energy functions for the interaction of residue pairs are used to place the unknown protein onto a putative backbone as a best fit, accommodating gaps where appropriate. The best interactions are then accentuated in order to discriminate amongst potential decoys and to predict the most likely conformation.
The goal of both fold recognition and threading strategies is to ascertain whether a fold in an unknown protein is similar to a domain in a known one deposited in a database, such as the Protein Data Bank (PDB). This is in contrast to de novo (ab initio) methods, where structure is determined using a physics-based approach in lieu of comparing folds in the protein to structures in a database.
Limitations of de novo prediction methods
A major limitation of de novo protein prediction methods is the extraordinary amount of computer time required to successfully solve for the native conformation of a protein. Distributed methods, such as Rosetta@home, have attempted to ameliorate this by recruiting individuals who volunteer idle home computer time to process data. Even these methods face challenges, however. For example, a distributed method was utilized by a team of researchers at the University of Washington and the Howard Hughes Medical Institute to predict the tertiary structure of the protein T0283 from its amino acid sequence. In a blind test comparing the accuracy of this distributed technique with the experimentally confirmed structure deposited within the Protein Data Bank (PDB), the predictor produced excellent agreement with the deposited structure. However, the time and number of computers required for this feat were enormous – almost two years and approximately 70,000 home computers, respectively.
One method proposed to overcome such limitations involves the use of Markov models (see Markov chain Monte Carlo). One possibility is that such models could be constructed in order to assist with free energy computation and protein structure prediction, perhaps by refining computational simulations. Another way of circumventing the computational power limitations is using coarse-grained modeling. Coarse-grained protein models allow for de novo structure prediction of small proteins, or large protein fragments, in a short computational time.
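The Monte Carlo idea can be sketched with a minimal Metropolis sampler on a toy coarse-grained energy function. The quadratic energy, the fictitious "native" angle of 1.0 rad, and all parameters below are illustrative assumptions, not any real force field:

```python
import math
import random

def energy(angles):
    # Toy "coarse-grained" energy: each angle prefers a fictitious
    # native value of 1.0 rad (a stand-in for a real force field).
    return sum((a - 1.0) ** 2 for a in angles)

def metropolis(n_angles=10, steps=20000, temperature=0.05, seed=42):
    rng = random.Random(seed)
    angles = [rng.uniform(-math.pi, math.pi) for _ in range(n_angles)]
    e = energy(angles)
    for _ in range(steps):
        i = rng.randrange(n_angles)
        old = angles[i]
        angles[i] += rng.gauss(0.0, 0.3)  # propose a local perturbation
        e_new = energy(angles)
        # Metropolis rule: always accept downhill moves; accept uphill
        # moves with probability exp(-dE / T), so the chain explores the
        # landscape instead of getting stuck greedily.
        if e_new <= e or rng.random() < math.exp(-(e_new - e) / temperature):
            e = e_new
        else:
            angles[i] = old  # reject: restore the previous conformation
    return angles, e

angles, final_energy = metropolis()
```

Real coarse-grained predictors use far richer energy terms and move sets, but the accept/reject loop above is the core of the Markov chain Monte Carlo approach mentioned here.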
Structure prediction of de novo proteins
Another limitation of protein structure prediction software concerns a specific class of proteins, namely de novo proteins. Structure prediction software such as AlphaFold relies on co-evolutionary data derived from multiple sequence alignments (MSAs) of homologous protein sequences to predict protein structures. By definition, however, de novo proteins lack homologous sequences, as they are evolutionarily new. Thus, structure prediction software which relies on such homology can be expected to perform poorly in predicting structures of de novo proteins. To improve the accuracy of structure prediction for de novo proteins, new software tools have been developed. Notably, ESMFold is a newly developed large language model (LLM) for the prediction of protein structures based solely on their amino acid sequences. It can predict the 3D structure of a protein at atomic-level resolution from a single amino acid sequence as input.
Critical assessment of protein structure prediction
“Progress for all variants of computational protein structure prediction methods is assessed in the biannual, community wide Critical Assessment of Protein Structure Prediction (CASP) experiments. In the CASP experiments, research groups are invited to apply their prediction methods to amino acid sequences for which the native structure is not known but to be determined and to be published soon. Even though the number of amino acid sequences provided by the CASP experiments is small, these competitions provide a good measure to benchmark methods and progress in the field in an arguably unbiased manner.”
Notes
Samudrala, R., Xia, Y., Huang, E.S., Levitt, M. (1999). Ab initio prediction of protein structure using a combined hierarchical approach. Proteins, Suppl 3: 194–198.
Skolnick, J., Zhang, Y., Kolinski, A. (2006). Ab initio modeling. In: Sundstrom, M., Norin, M., Edwards, A. (eds.), Structural Genomics and High Throughput Structural Biology, pp. 137–162.
Lee, J., Wu, S., Zhang, Y. (2009). Ab initio protein structure prediction. In: Rigden, D.J. (ed.), From Protein Structure to Function with Bioinformatics, Chapter 1. Springer, London, pp. 1–26.
See also
Protein structure prediction
Protein structure prediction software
Protein design
References
External links
CASP
Folding@Home
HPF project
Foldit
UniProtKB
Protein Data Bank (PDB)
Expert Protein Analysis System - links to protein prediction tools
Bioinformatics
Protein structure
Protein methods
"Chemistry",
"Engineering",
"Biology"
] | 2,820 | [
"Biochemistry methods",
"Biological engineering",
"Protein methods",
"Protein biochemistry",
"Bioinformatics",
"Structural biology",
"Protein structure"
] |
The Space Systems Processing Facility (SSPF), originally the Space Station Processing Facility, is a three-story industrial building at Kennedy Space Center for the manufacture and processing of flight hardware, modules, structural components and solar arrays of the International Space Station, and future space stations and commercial spacecraft. It was built in 1992 at the space complex's industrial area, just east of the Operations and Checkout Building.
The SSPF includes two processing bays, an airlock, operational control rooms, laboratories, logistics areas for equipment and machines, office space, a ballroom and conference halls, and a cafeteria.
The processing areas, airlock, and laboratories are designed to support non-hazardous Space Station and Space Shuttle payloads in 100,000 class clean work areas. The building has a total floor area of .
History and construction
During the re-designing phase of Space Station Freedom in early 1991, Congress approved new plans for NASA to lead the project and begin manufacturing its components for the future International Space Station. Kennedy Space Center was selected as the ideal launch processing complex for the ISS, as well as hosting all the internationally manufactured modules and station elements.
However, the Operations and Checkout Building (which was originally to be the prime factory for station launch processing) was insufficient in size to accommodate all the components. On March 26, 1991, engineers at Kennedy Space Center, along with contractor Metric Constructions Inc. of Tampa, Florida, broke ground on a new $56 million Space Station Processing Facility, situated adjacent to the O&C. The design called for a 457,000-square-foot multifunction building housing an enormous processing bay, laboratories, control rooms, staging areas, communications and control facilities, and office space for some 1,400 NASA and contractor employees.
KSC Deputy Director Gene Thomas described the construction: "The skyline around here is really going to change. This will be the biggest facility that we have built since the Apollo days". The SSPF used reinforced concrete and some 4,300 tons of steel. The building was structurally completed and topped out by mid 1992.
After three years of construction, interior fitting and equipment set-up, the SSPF formally opened on June 23, 1994.
Into the 21st century, more commercial partners began using the SSPF for projects unrelated to the ISS. In addition, after the announcement of discontinuing ISS operations beyond 2030 (leading to its planned de-orbit in 2031), the SSPF increasingly became a space for general space systems rather than specifically tailoring to the ISS. Due to these reasons, in December 2023, the facility was renamed from the Space Station Processing Facility to the Space Systems Processing Facility, keeping the same acronym.
Operations and manufacturing processes
At the SSPF, space station modules, trusses and solar arrays are prepped and made ready for launch. The low and high bays are fully air conditioned and the ambient temperature is held constant at all times. Workers and engineers wear full non-contaminant clothing while working. Modules receive cleaning and polishing, and some areas are temporarily disassembled for the installation of cables, electrical systems and plumbing. In another area, shipments of spare materials are available for installation. International Standard Payload Rack frames are assembled and welded together, allowing the installation of instruments, machines and science experiment boxes. Once racks are fully assembled, they are hoisted by a special manually operated robotic crane and carefully maneuvered into place inside the space station modules. Each rack weighs from 700 to 1,100 kg and connects inside the module on special mounts with screws and latches.
Cargo bags for MPLM modules are filled with their cargo such as food packages, science experiments and other miscellaneous items on-site in the SSPF, and loaded into the module by the same robotic crane and strapped in securely.
Many of the builders accompanied their modules from around the world during their manufacturing, and worked at KSC for months to years during final assembly. Many ISS modules were renamed after successfully launching.
Station Integration Testing
Regarding the launch of modules of the International Space Station (ISS), there had been philosophical differences for years between designers and payload processors over whether to "ship and shoot" or perform integration testing prior to launch. The former meant building a station module and launching it without ever physically testing it with other modules. Integration testing was not originally in the ISS plan, but in 1995 Johnson Space Center designers began to consider it and to embed KSC personnel at module factories. Multi-Element Integration Testing (MEIT) of ISS modules at KSC was officially adopted in 1997.
Three MEIT and one Integration Systems Test (IST) tests were conducted for the ISS, taking about three years from planning to completion and closure:
MEIT1: US Lab, Z1 truss, P6 truss, and a Node 1 emulator
Planning began in 1997, Testing began January 1999
MEIT2: S0 truss/Mobile Transporter/Mobile Base System, S1 truss, P1 truss, P3 truss, P4 truss, and a US Lab emulator.
MEIT3: Japanese Experiment Module, Node 2, and the US Lab emulator
Completed in 2007
Node2 IST: Node 2 and US Lab and Node 1 emulators, as part of the ISS Flight Emulator
After the launch of the Destiny laboratory, an emulator was built for MEIT testing, since the lab controlled many other modules. Among the items checked were mechanical connections, the ability to flow power and fluids between modules, and the flight software.
Numerous issues were found and rectified during these ground tests, many of which could not have been fixed in orbit.
Building specifications
The SSPF's High Bays provide maximum flexibility for manufacturing, assembly, testing and processing payloads and elements destined for space. The bays are enormous cleanrooms equipped with overhead cranes, commodities-servicing equipment and a secure backup-power supply. The facility also has 15 offline labs.
Intermediate Bay (I-bay)
Dimensions: in length, by in width
Ceiling height
100,000-class clean work area
High Bay
Dimensions in length, by in Width
Ceiling height
Can be separated into eight different processing areas
Cranes
I-bay: Two capacity
High Bay: Two capacity
Commodities and Servicing Equipment
Ammonia servicing machines
Compressed air supply (125 psi)
Potable water pipes
Electrical Services
480 V 3-phase power at 60 Hz
Uninterruptible power supply (450 kVA)
Laboratory facilities
9 independently operated control rooms
15 labs, 2 chemical labs, and 2 darkrooms
1 Certified offline lab for planetary protection processing (Class-100 clean work area)
3 ISS Environmental Simulator (ISSES) Chambers, can be used to expose ISS life science ground controls to ISS environmental conditions (i.e., temperature, humidity).
Experiment Monitoring Area (EMA), used to monitor ISS life science experiments
Office floor area: of office/work space
Space Station and space hardware components currently in the SSPF
:
Multi-Purpose Logistics Modules Raffaello and Donatello.
Two Lightweight Multi-Purpose Equipment Support Structure Carriers (LMCs).
Lunar Gateway habitat module, built by Lockheed Martin - used as a training rig.
Dream Chaser 'mini space shuttle' - designed and manufactured by Sierra Nevada Corporation.
Machinery for experiments in the Lunar Gateway
Bigelow Aerospace inflatable habitat mockup
When the lights in the building are on, most of these components can be seen on the live webcam from the facility.
Current and future activities
After the completion of the International Space Station in 2011, the SSPF factory was dormant for several months until early 2012, when the building was slightly refurbished to accommodate space companies (such as Orbital ATK, SpaceX and eventually Sierra Nevada Corporation) that manufacture, process and load Cygnus and Dragon spacecraft and on-board payloads as part of the Commercial Resupply Services program. NASA's upcoming Artemis mission hardware, such as Moon and Mars space station modules and Space Launch System core stage engine sections, as well as the Dream Chaser mini-space shuttle, have begun manufacturing and processing operations in the high bay.
The building itself is open to the public and tours are offered free of charge by the employees. Exclusive tours of many areas of the SSPF are part of the Kennedy visitor complex's enhanced bus tour package.
In 2016, the laboratories of the SSPF were utilized by many small science companies and student unions with scientific equipment to study the feasibility of growing vegetables in space, such as the Veggie plant growth system, and the Advanced Plant Habitat; to launch as scientific payloads to the International Space Station.
Events
When the high bay area is less busy, a variety of events and conferences are held in various places within the SSPF building. Occasional STEM exhibitions take place where visitors (from children and teenagers to university students) can visit the SSPF and its ballroom to learn about the building's history, manufacturing activities, biological and chemical sciences, and the future vision of space operations at Kennedy Space Center, including the Lunar Gateway mockup module. The ballroom also doubles as a lecture hall for presentations. The high bay was also used for the National Space Council's second revived meeting on February 21, 2018.
Tenants including Northrop Grumman, Lockheed Martin and Airbus have also moved facilities into the SSPF.
Gallery
References
Kennedy Space Center
Buildings and structures in Merritt Island, Florida
Manufacturing plants
Manufacturing buildings and structures
Manufacturing plants in the United States
Manufacturing
International Space Station
Components of the International Space Station
"Engineering"
] | 1,930 | [
"Manufacturing",
"Mechanical engineering"
] |
Thomas Messinger Drown (March 19, 1842 – November 17, 1904) was the fourth University President of Lehigh University in Bethlehem, Pennsylvania, United States. He was also an analytical chemist and metallurgist.
Background
He was born in Philadelphia, Pennsylvania in 1842. He graduated from Central High School in Philadelphia in 1859, and then went on to study medicine at the University of Pennsylvania and graduated in 1862. He went abroad to Germany to study chemistry in Freiberg, Saxony, and mining at the University of Heidelberg. From 1869 to 1870 he was an instructor of metallurgy at Harvard University. In 1870, he started a consulting business in Philadelphia. In 1872, he hired a former student, John Townsend Baker, as an assistant. From 1874 to 1881, he was professor of Analytical Chemistry at Lafayette College. Baker followed him to Lafayette and later would found the J. T. Baker Chemical Co., which merged with Mallinckrodt and was absorbed and spun off of Tyco International as a component company of Covidien. In 1875, he was elected as a member of the American Philosophical Society.
His professional career was interrupted in 1881, when, after the death of his father, he devoted himself to family matters. He restarted his professional work in 1885 by accepting a professorship at the Massachusetts Institute of Technology.
Massachusetts activity
He helped start MIT's chemical engineering curriculum in the late 1880s. In 1887, he was appointed by the newly formed Massachusetts Board of Health to a landmark study of sanitary quality of the state's inland waters. As consulting chemist to the Massachusetts State Board of Health, he was in charge of the famous Lawrence Experiment Station laboratory conducting the water sampling, testing, and analysis. There he put to work the environmental chemist and first female graduate of MIT, Ellen Swallow Richards. This research created the famous "normal chlorine" map of Massachusetts that was the first of its kind and was the template for others. As a result, Massachusetts established the first water-quality standards in America, and the first modern sewage treatment plant was created.
As a professor, Drown published a number of papers on metallurgy, mostly in Transactions of the American Institute of Mining Engineers. He was a founding member of the Institute, and served as its secretary, and editor of its Transactions from 1871 till 1884. He was elected its president in 1897.
Lehigh presidency
In 1895 he left MIT to become the fourth president of Lehigh University. Lehigh's endowment was predominantly in the stock of its founder Asa Packer's major company, the Lehigh Valley Railroad. The Panic of 1893 crashed the market, brought the country into a depression that lasted for years, and nearly brought the university to financial insolvency. Many prominent railroads such as the Northern Pacific Railway, the Union Pacific Railroad and the Atchison, Topeka & Santa Fe Railroad went into bankruptcy, and over 15,000 companies and 500 banks failed. In order to gain new sources of funding, President Drown broke the university's ties with the Episcopal Church in 1897, qualifying the university for aid from the Commonwealth of Pennsylvania. During his term, which started during a major financial crisis, he was able to save Lehigh from bankruptcy, grow enrollment, which had dipped seriously, strengthen academics, and even have one major building erected.
A broad intellectual with interests in various fields, he nonetheless thought the key to Lehigh's success would be the school of technology. There he sought to broaden and deepen the offerings, increase the quality and quantity of laboratory space, equipment and apparatus, as funding permitted. Additionally, and in consultation with the faculty and the board of trustees, he created many new tiers of teaching, including the associate and assistant professorships. His idea was that this would create resources for top professors to be invited to Lehigh, and so help enlarge the curricula. During his tenure, the university's first emeritus professorship was granted (Harding of Physics), and first doctorate awarded (Joseph W. Richards). Many new degrees in the technical school were now being offered, such as Metallurgy (1891), Electrometallurgy, and Chemical Engineering (1902). The curriculum leading to a degree in arts and engineering was established, as was the department of zoology and biology. New courses (majors, that is, or degree offerings, as it is now known) were also adopted in geology, and physics.
Dr. Drown eventually gained in popularity on campus, with his forward ideas, success, idiosyncratic pince-nez glasses and mustache. Faculty members eventually came to refer to Dr. Drown as "chief". Unfortunately, T. M. Drown would not live long enough to see all his ideas to fruition, as he died in office, following abdominal surgery, November 16, 1904, effectively ending his term.
Williams Hall (1903), a Beaux Arts inspired Brick structure, was erected to house the growing departments of Biology and Geology, among other functions.
In 1908, Lehigh University opened up Drown Hall which now houses Lehigh's English Department.
References
External links
Signed photograph, from the Proceedings connected with the testimonial presented to Thomas Messinger Drown, M. D. by the Secretary of the American Institute of Mining Engineers, by members of the Institute, at Montreal, September 18, 1879.
1842 births
1904 deaths
Scientists from Philadelphia
Perelman School of Medicine at the University of Pennsylvania alumni
Heidelberg University alumni
American chemists
American metallurgists
Analytical chemists
Harvard University faculty
Presidents of Lehigh University
Lafayette College faculty
Central High School (Philadelphia) alumni
"Chemistry"
] | 1,124 | [
"Analytical chemists"
] |
Bitfrost is the security design specification for the OLPC XO, a low cost laptop intended for children in developing countries and developed by the One Laptop Per Child (OLPC) project. Bitfrost's main architect is Ivan Krstić. The first public specification was made available in February 2007.
Bitfrost architecture
Passwords
No passwords are required to access or use the computer.
System of rights
Every program, when first installed, requests certain bundles of rights, for instance "accessing the camera", or "accessing the internet". The system keeps track of these rights, and the program is later executed in an environment which makes only the requested resources available. The implementation is not specified by Bitfrost, but dynamic creation of security contexts is required. The first implementation was based on vserver, the second and current implementation is based on user IDs and group IDs (/etc/passwd is edited when an activity is started), and a future implementation might involve SELinux or some other technology.
By default, the system denies certain combinations of rights; for instance, a program would not be granted both the right to access the camera and to access the internet. Anybody can write and distribute programs that request allowable right combinations. Programs that require normally unapproved right combinations need a cryptographic signature by some authority. The laptop's user can use the built-in security panel to grant additional rights to any application.
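A minimal sketch of this default-deny policy follows. The bundle names, the denied pairs, and the function below are hypothetical illustrations, not the actual Bitfrost protection names or implementation:

```python
# Sketch of Bitfrost-style default-deny right combinations. The bundle
# names and the denied pairs below are hypothetical illustrations.
DENIED_COMBINATIONS = {
    frozenset({"camera", "network"}),       # e.g. prevent covert streaming
    frozenset({"microphone", "network"}),
}

def allowed_by_default(requested_rights, signed=False):
    """Can a program requesting these rights be installed without a
    cryptographic signature from an authority?"""
    if signed:
        return True  # a trusted signature may override the default policy
    requested = set(requested_rights)
    return not any(denied <= requested for denied in DENIED_COMBINATIONS)

assert allowed_by_default({"camera"})                      # single right: fine
assert not allowed_by_default({"camera", "network"})       # denied combination
assert allowed_by_default({"camera", "network"}, signed=True)
```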
Modifying the system
The users can modify the laptop's operating system, a special version of Fedora Linux running the new Sugar graphical user interface and operating on top of Open Firmware. The original system remains available in the background and can be restored.
By acquiring a developer key from a central location, a user may even modify the background copy of the system and many aspects of the BIOS. Such a developer key is only given out after a waiting period (so that theft of the machine can be reported in time) and is only valid for one particular machine.
Theft-prevention leases
The laptops request a new "lease" from a central network server once a day. These leases come with an expiry time (typically a month), and the laptop stops functioning if all its leases have expired. Leases can also be given out from local school servers or via a portable USB device. Laptops that have been registered as stolen cannot acquire a new lease.
The deploying country decides whether this lease system is used and sets the lease expiry time.
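The daily lease protocol can be sketched as follows; the one-month expiry and the stolen-machine rule come from the text above, but the field names and data model are assumptions for illustration:

```python
# Sketch of the Bitfrost lease protocol described above; field names and
# the data model are assumptions, not the real activation-lease format.
LEASE_SECONDS = 30 * 24 * 3600  # typical expiry of about a month

def issue_lease(now, reported_stolen=False):
    # Laptops registered as stolen cannot acquire a new lease.
    return None if reported_stolen else {"expires": now + LEASE_SECONDS}

def laptop_functional(leases, now):
    """The laptop keeps working while at least one lease is unexpired."""
    return any(l is not None and l["expires"] > now for l in leases)

now = 0
leases = [issue_lease(now)]
assert laptop_functional(leases, now + 10 * 24 * 3600)      # within lease
assert not laptop_functional(leases, now + 40 * 24 * 3600)  # all expired
assert issue_lease(now, reported_stolen=True) is None
```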
Microphone and camera
The laptop's built-in camera and microphone are hard-wired to LEDs, so that the user always knows when they are operating. This cannot be switched off by software.
Privacy concerns
Len Sassaman, a computer security researcher at the Catholic University of Leuven in Belgium and his colleague Meredith Patterson at the University of Iowa in Iowa City claim that the Bitfrost system has inadvertently become a possible tool for unscrupulous governments or government agencies to definitively trace the source of digital information and communications that originated on the laptops. This is a potentially serious issue as many of the countries which have the laptops have governments with questionable human rights records.
Notes
The specification itself mentions that the name "Bitfrost" is a play on the Norse mythology concept of Bifröst, the bridge between the world of mortals and the realm of Gods. According to the Prose Edda, the bridge was built to be strong, yet it will eventually be broken; the bridge is an early recognition of the idea that there's no such thing as a perfect security system.
See also
CapDesk
References
External links
Ivan Krstić's homepage
OLPC Wiki: Bitfrost
Bitfrost specification, version Draft-19 - release 1, 7 February 2007
High Security for $100 Laptop, Wired News, 7 February 2007
Making antivirus software obsolete - Technology Review magazine recognized Ivan Krstić, Bitfrost's main architect, as one of the world's top innovators under the age of 35 (Krstić was 21 at the time of publication) for his work on the system.
One Laptop per Child
Cryptographic software
"Mathematics"
] | 848 | [
"Cryptographic software",
"Mathematical software"
] |
Dutch barn is the name given to markedly different types of barns in the United States and Canada, and in the United Kingdom. In the United States, Dutch barns (a. k. a. New World Dutch barns) represent the oldest and rarest types of barns. There are relatively few—probably fewer than 600—of these barns still intact. Common features of these barns include a core structure composed of a steep gabled roof, supported by purlin plates and anchor beam posts, the floor and stone piers below. Little of the weight is supported by the curtain wall, which could be removed without affecting the stability of the structure. Large beams of pine or oak bridge the center aisle for animals to provide room for threshing. Entry was through paired doors on the gable ends with a pent roof over them, and smaller animal doors at the corners of the same elevations. The Dutch Barn has a square profile, unlike the more rectangular English or German barns. In the United Kingdom a structure called a Dutch barn is a relatively recent agricultural development meant specifically for hay and straw storage; most examples were built from the 19th century. British Dutch barns represent a type of pole barn in common use today. Design styles range from fixed roof to adjustable roof; some Dutch barns have honeycombed brick walls, which provide ventilation and are decorative as well. Still other British Dutch barns may be found with no walls at all, much like American pole barns.
In the United States
The New World Dutch barn is the rarest of the American barn forms. The remaining American Dutch-style barns represent relics from the 18th and 19th centuries. Dutch barns were the first great barns built in the United States, mostly by Dutch settlers in New Netherlands.
New Netherlanders settled along the Hackensack, Passaic, Raritan and Millstone rivers and their tributaries in New Jersey. In New York, they concentrated in the Hudson Valley and along the Mohawk River and Schoharie Creek.
Many Dutch barns also were built in other portions of the American Northeast.
History
Relatively few—probably fewer than 600—Dutch barns survive intact in the 21st century. Those that remain date from the 18th and early 19th century. Dutch barns rarely remain in a good, unaltered condition.
The Dutch Barn Preservation Society has cataloged hundreds of standing Dutch Barns throughout the Hudson, Mohawk, and Schoharie Valleys as well as in New Jersey. Schoharie County Historian Harold Zoch regularly speaks on Dutch barns.
Examples
New World Dutch Barns in the National Register of Historic Places include the Wortendyke Barn, Windfall Dutch Barn, and an example at the Caspar Getman Farmstead.
Design
The exterior features a broad gable roof, which, in early Dutch barns extended very low to the ground. The barns feature center doors for wagons on the narrow end. A pent roof, or a pentice, over the doors offered some protection from inclement weather. The siding was usually horizontal and had few details. Dutch barns often lacked windows and had no openings other than the doors and holes for purple martins to enter. The design of the Dutch barn allows it to have a massive presence, giving it an appearance larger by comparison to other barns.
Inside the barns are supported by heavy structural systems. The mortised and tenoned and pegged beams are arranged in "H-shaped" units. The design alludes to cathedral interiors with columned aisles along a central interior space, used in Dutch barns for threshing. It is this design that links Dutch barns to the Old World barns of Europe. Another distinctive feature of the Dutch barn is that the ends of the cross beams protrude through the columns. These protrusions are often rounded to form tongues. This feature is not found in any other style of barn design.
Distribution
The Dutch barn was widely distributed in areas of New Jersey and New York. Dutch barns have been identified in southwestern Michigan, Illinois, and Kentucky in the United States Midwest. The Illinois and Kentucky examples may have been misidentified when recorded, and might have been Midwest three portal barns instead. However, New Jersey Dutch are documented as having settled in Henry and Mercer counties in Kentucky so there may be reason to believe that the barns in Kentucky may actually be Dutch Barns. Further research is warranted.
In Canada
North of Toronto, Ontario, Dutch barns were found in the Dutch settled areas.
In the United Kingdom
What are called Dutch barns in the United Kingdom are sometimes called hay barracks in the U.S.; they are a specific type of barn developed for the storage of hay, with a roof but no walls. These are a relatively recent development in the history of British farm architecture, most examples dating from the 19th century. Nowadays they are more commonly used to store straw. They are also called pole barns and hay barns.
History
Early barn types in the U.K., such as aisled barns, were primarily used for the processing and temporary storage of grain. Processing comprised hand-threshing (later in history replaced by machine threshing): the grain would then be removed to a granary for permanent storage. Following the agricultural revolution of the 16th to mid-19th century, with its emphasis on the improvement of farming techniques, there was a marked increase in the amount of hay that was produced (partly due to the use of water-meadows and partly due to crop rotation). The hay barn was developed in response to this: formerly the small amounts of precious hay produced had been stored in the haylofts over the cow house or stables, or in haystacks. However, haystacks are prone to spoiling in the rain, especially after the stack has been 'opened' for consumption. As the weather in the U.K. is often wet, several different types of hay barns evolved, but all shared certain characteristics: they were roofed and well-ventilated. Hay barns came into use at the end of the 18th century. Dutch barns are still very common in the U.K., and are nowadays most commonly used to store straw rather than hay.
Design
Various types of hay barn included those with 'honeycombed' brick walls, forming a decorative as well as practical form of ventilation, and the Dutch barn, which has a roof but open sides. The roof kept off the rain but the lack of walls allowed good ventilation around the hay and prevented spoiling.
The term 'Dutch barn' has been used in the U.K. both to describe such structures with fixed roofs and those with adjustable roofs. The latter type are also, confusingly, sometimes called French barns. Due to their ease of construction these structures are often considered temporary and appear and disappear in the landscape; the interval is often determined by the life of the pole upright or the corrugated iron roof. They are often constructed with a rounded or arched corrugated iron roof and with metal uprights, although frequently, telegraph poles are used for the uprights.
References
Further reading
John Fitchen, The New World Dutch Barn; A Study of Its Characteristics, Its Structural System, and Its Probable Erectional Procedures (Syracuse University Press, 1968)
John Fitchen, Greg Huber editor, The New World Dutch Barn: The Evolution, Forms, and Structure of a Disappearing Icon (Syracuse University Press, 2001)
Dutch Barn Preservation Society Newsletter
External links
Dutch Barn Preservation Society in the United States
Dutch barn recorded by British archaeological project
Hudson Valley Vernacular Architecture
Finding aid for John Fitchen papers, 1927-1989, Getty Research Institute, Los Angeles. Accession No. 910018. Most of the research materials relate to three of Fitchen's books: Construction of Gothic Cathedrals, The New World Dutch Barn, and Building Construction Before Mechanization.
Barns
Agriculture in the United Kingdom
Agricultural buildings in the United States
Architecture in the United States
Architecture in the United Kingdom
Timber framing
Timber framed buildings

Radical of a module

In mathematics, in the theory of modules, the radical of a module is a component in the theory of structure and classification. It is a generalization of the Jacobson radical for rings. In many ways, it is the dual notion to that of the socle soc(M) of M.
Definition
Let R be a ring and M a left R-module. A submodule N of M is called maximal or cosimple if the quotient M/N is a simple module. The radical of the module M is the intersection of all maximal submodules of M,

  rad(M) = ⋂ { N ⊆ M : N is a maximal submodule of M }.

Equivalently,

  rad(M) = Σ { S ⊆ M : S is a superfluous submodule of M }.

These definitions have direct dual analogues for soc(M).
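As a small worked example (not in the original article, but a standard computation): for the ℤ-module ℤ/12ℤ, the maximal submodules are 2ℤ/12ℤ and 3ℤ/12ℤ, so

```latex
\operatorname{rad}\left(\mathbb{Z}/12\mathbb{Z}\right)
  = 2\mathbb{Z}/12\mathbb{Z} \,\cap\, 3\mathbb{Z}/12\mathbb{Z}
  = 6\mathbb{Z}/12\mathbb{Z},
```

which agrees with the Jacobson radical of the ring ℤ/12ℤ.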
Properties
In addition to the fact that rad(M) is the sum of superfluous submodules, in a Noetherian module rad(M) itself is a superfluous submodule.
In fact, if M is finitely generated over a ring, then rad(M) itself is a superfluous submodule. This is because any proper submodule of M is contained in a maximal submodule of M when M is finitely generated.
A ring for which rad(M) = {0} for every right R-module M is called a right V-ring.
For any module M, rad(M/rad(M)) is zero.
M is a finitely generated module if and only if the cosocle M/rad(M) is finitely generated and rad(M) is a superfluous submodule of M.
See also
Socle (mathematics)
Jacobson radical
References
Module theory

Cayley's mousetrap

Mousetrap is the name of a game introduced by the English mathematician Arthur Cayley. In the game, cards numbered 1 through n ("say thirteen" in Cayley's original article) are shuffled to place them in some random permutation and are arranged in a circle with their faces up. Then, starting with the first card, the player begins counting and moving to the next card as the count is incremented. If at any point the player's current count matches the number on the card currently being pointed to, that card is removed from the circle and the player starts all over at 1 on the next card. If the player ever removes all of the cards from the permutation in this manner, then the player wins. If the player reaches the count n + 1 and cards still remain, then the game is lost.
In order for at least one card to be removed, the initial permutation of the cards must not be a derangement. However, this is not a sufficient condition for winning, because it does not take into account subsequent removals. The number of ways the cards can be arranged such that the entire game is won, for n = 1, 2, ..., are
1, 1, 2, 6, 15, 84, 330, 1812, 9978, 65503, ... .
For example with four cards, the probability of winning is 0.25, but this reduces as the number of cards increases, and with thirteen cards it is about 0.0046.
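The winning counts can be checked by brute force; a minimal sketch (the function name is illustrative, not Cayley's):

```python
from itertools import permutations

def wins(perm):
    """Play one game of Cayley's mousetrap; True if every card gets removed."""
    cards, idx, count, n = list(perm), 0, 1, len(perm)
    while cards:
        if count > n:            # counted past n with cards remaining: loss
            return False
        if cards[idx] == count:  # hit: remove the card, restart the count at 1
            del cards[idx]
            idx = idx % len(cards) if cards else 0
            count = 1
        else:                    # miss: move to the next card in the circle
            idx = (idx + 1) % len(cards)
            count += 1
    return True

wins_n4 = sum(wins(p) for p in permutations(range(1, 5)))  # 6 of 24 arrangements win
```

Running this over all 24 permutations of four cards reproduces the winning probability 6/24 = 0.25 quoted above.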
References
University of Göttingen, Göttinger Digitalisierungszentrum (GDZ) scan
External links
Mathematical games

Confederation of European Environmental Engineering Societies

The Confederation of European Environmental Engineering Societies (CEEES) was created as a co-operative international organization for information exchange regarding environmental engineering between the various European societies in this field.
The CEEES maintains an online public discussion forum for the interchange of information.
The member societies of the CEEES
As of 2012, these were the twelve member societies of the CEEES:
Italy: Associazione Italia Tecnici Prove Ambientali (AITPA)
France: Association pour le Développement des Sciences et Techniques de l'Environnement (ASTE)
Belgium: Belgian Society of Mechanical and Environmental Engineering (BSMEE)
Germany: Gesellschaft für Umweltsimulation (GUS)
Finland: Finnish Society of Environmental Engineering (KOTEL)
Czech Republic: National Association of Czech Environmental Engineers (NACEI)
Austria: Österreichische Gesellschaft für Umweltsimulation (ÖGUS)
Netherlands: PLatform Omgevings Technologie (PLOT)
United Kingdom: Society of Environmental Engineers (SEE)
Sweden: Swedish Environmental Engineering Society (SEES)
Portugal: Sociedade Portuguesa de Simulacao Ambiental e Aveliaca de Riscos (SOPSAR)
Switzerland: Swiss Society for Environmental Engineering (SSEE)
Each member society successively holds the presidency and the secretariat for a period of two years.
Technical Advisory Boards
The CEEES has three major Technical Advisory Boards:
Mechanical Environments: The aim of this board is to advance methodologies and technologies for quantifying, describing and simulating mechanical environmental conditions experienced by mechanical equipment during its useful life.
Climatic and Atmospheric Pollution Effects: The aim of this board is the study of the climatic and atmospheric pollution effects on materials and mechanical equipment.
Reliability and Environmental Stress Screening: The aim of this board is the study of how the environment affects the reliability of equipment.
Publications
These are some of the publications of the CEEES:
A Bibliography on Transportation Environment, ISSN 1104-6341, published by the Swedish Packaging Research Institute (Packforsk) in 1994.
Synthesis of an ESS-Survey at the European Level, ISSN 1104-6341, published by the Swiss Society for Environmental Engineering (SSEE) in 1998.
List of Technical Documents Dedicated or Related to ESS, published by the Swiss Society for Environmental Engineering (SSEE) in 1998.
Climatic and Air Pollution Effects on Material and Equipment, ISBN 978-3-9806167-2-0, published by Gesellschaft für Umweltsimulation (GUS) in 1999.
Natural and Artificial Ageing of Polymers, 1st European Weathering Symposium, Prague, published by Gesellschaft für Umweltsimulation (GUS) in 2004.
Natural and Artificial Ageing of Polymers, 2nd European Weathering Symposium, Gothenburg, published by Gesellschaft für Umweltsimulation (GUS) in 2005.
Ultrafine Particles – Key in the Issue of Particulate Matter?, 18th European Federation of Clean Air (EFCA) International Symposium, published by the Karlsruhe Research Center (Forschungszentrum Karlsruhe FZK) in 2007.
Natural and Artificial Ageing of Polymers, 3rd European Weathering Symposium, Kraków. ISBN No. 978-3-9810472-3-3, published by GUS in 2005.
Reliability - For A Mature Product From The Beginning Of Useful Life. The Different Type Of Tests And Their Impact On Product Reliability. ISSN 1104-6341, published online by CEEES in 2009.
See also
European Environment Agency
Environment Agency
Ministry of Housing, Spatial Planning and the Environment (Netherlands)
Environmental technology
Environmental science
Coordination of Information on the Environment
External links
Official website
ASTE website
BSMEE website
CEEES website.
GUS website
KOTEL website
ÖGUS website
PLOT website
SEE website
SEES website
SOPSAR website
SSEE website
References
International environmental organizations
Environmental engineering
Pan-European trade and professional organizations

Dissipator (building design)

A dissipator is a device mounted between sections of a building to reduce strains during an earthquake by damping the shaking of the building. During an earthquake, the sections of the building are subjected to movements relative to each other (for instance, the relative movement between two different floors). When the structures oscillate, the dissipator devices, some of which are similar to pistons, slow the vibration by dissipating energy through viscous or frictional action, thus increasing the equivalent viscous damping coefficient and thereby reducing the strains on the structure itself.
References
Earthquake and seismic risk mitigation
Earthquake engineering

Pauli equation

In quantum mechanics, the Pauli equation or Schrödinger–Pauli equation is the formulation of the Schrödinger equation for spin-1/2 particles, which takes into account the interaction of the particle's spin with an external electromagnetic field. It is the non-relativistic limit of the Dirac equation and can be used where particles are moving at speeds much less than the speed of light, so that relativistic effects can be neglected. It was formulated by Wolfgang Pauli in 1927. In its linearized form it is known as the Lévy-Leblond equation.
Equation
For a particle of mass m and electric charge q, in an electromagnetic field described by the magnetic vector potential A and the electric scalar potential ϕ, the Pauli equation reads:

  [ (1/2m) (σ · (p̂ − qA))² + qϕ ] |ψ⟩ = iℏ ∂/∂t |ψ⟩

Here σ = (σx, σy, σz) are the Pauli operators collected into a vector for convenience, and p̂ = −iℏ∇ is the momentum operator in position representation. The state of the system, |ψ⟩ (written in Dirac notation), can be considered as a two-component spinor wavefunction, or a column vector (after choice of basis):

  |ψ⟩ = (ψ₊, ψ₋)ᵀ .

The Hamiltonian operator is a 2 × 2 matrix because of the Pauli operators.
Substitution into the Schrödinger equation gives the Pauli equation. This Hamiltonian is similar to the classical Hamiltonian for a charged particle interacting with an electromagnetic field. See Lorentz force for details of this classical case. The kinetic energy term for a free particle in the absence of an electromagnetic field is just where is the kinetic momentum, while in the presence of an electromagnetic field it involves the minimal coupling , where now is the kinetic momentum and is the canonical momentum.
The Pauli operators can be removed from the kinetic energy term using the Pauli vector identity:

  (σ · a)(σ · b) = a · b + i σ · (a × b)

Note that unlike a vector, the differential operator p̂ − qA has non-zero cross product with itself. This can be seen by considering the cross product applied to a scalar function ψ:

  (p̂ − qA) × (p̂ − qA) ψ = −q [ p̂ × (Aψ) + A × (p̂ψ) ] = iqℏ (∇ × A) ψ = iqℏ B ψ

where B = ∇ × A is the magnetic field.
For the full Pauli equation, one then obtains

  [ (1/2m) ( (p̂ − qA)² − qℏ σ · B ) + qϕ ] |ψ⟩ = iℏ ∂/∂t |ψ⟩

for which only a few analytic results are known, e.g., in the context of Landau quantization with homogeneous magnetic fields or for an idealized, Coulomb-like, inhomogeneous magnetic field.
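The Pauli vector identity can be sanity-checked numerically for ordinary (commuting) vectors; a minimal pure-Python sketch with the Pauli matrices written out explicitly (helper names are illustrative):

```python
# Check (σ·a)(σ·b) = (a·b) I + i σ·(a×b) with explicit 2x2 complex matrices.
I2 = [[1, 0], [0, 1]]
SX = [[0, 1], [1, 0]]
SY = [[0, -1j], [1j, 0]]
SZ = [[1, 0], [0, -1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def lincomb(coeffs, mats):   # sum of coeff * matrix
    return [[sum(c * M[i][j] for c, M in zip(coeffs, mats)) for j in range(2)]
            for i in range(2)]

def sigma_dot(v):            # σ · v as a 2x2 matrix
    return lincomb(v, [SX, SY, SZ])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

a, b = [1.0, 2.0, -0.5], [0.3, -1.0, 2.0]
lhs = matmul(sigma_dot(a), sigma_dot(b))                      # (σ·a)(σ·b)
rhs = lincomb([dot(a, b), 1j], [I2, sigma_dot(cross(a, b))])  # a·b I + i σ·(a×b)
```

For operator-valued arguments such as p̂ − qA the components no longer commute, which is exactly why the cross-product term above survives and produces the σ·B coupling.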
Weak magnetic fields
For the case where the magnetic field is constant and homogeneous, one may expand using the symmetric gauge A = ½ B × r, where r is the position operator and A is now an operator. We obtain

  (σ · (p̂ − qA))² = p̂² − q B · L − qℏ σ · B + O(B²)

where L = r × p̂ is the particle angular momentum operator and we neglected terms in the magnetic field squared. Therefore, we obtain

  iℏ ∂/∂t |ψ⟩ = [ (1/2m) p̂² + qϕ − (q/2m) (L + 2S) · B ] |ψ⟩

where S = ℏσ/2 is the spin of the particle. The factor 2 in front of the spin is known as the Dirac g-factor. The term in B is of the form −μ · B, which is the usual interaction between a magnetic moment and a magnetic field, like in the Zeeman effect.
For an electron of charge −e in an isotropic constant magnetic field, one can further reduce the equation using the total angular momentum J = L + S and the Wigner–Eckart theorem. Thus we find that the interaction term takes the form μ_B g_J m_j |B|, where μ_B is the Bohr magneton and m_j is the magnetic quantum number related to J. The term g_J is known as the Landé g-factor, and is given here by

  g_J = 1 + [ j(j+1) + 3/4 − ℓ(ℓ+1) ] / ( 2 j(j+1) )

where ℓ is the orbital quantum number related to L² and j is the total angular momentum quantum number related to J².
From Dirac equation
The Pauli equation can be inferred from the non-relativistic limit of the Dirac equation, which is the relativistic quantum equation of motion for spin-1/2 particles.
Derivation
The Dirac equation can be written as:
where ψ and χ are two-component spinors, forming a bispinor.
Using the following ansatz:
with two new spinors , the equation becomes
In the non-relativistic limit, and the kinetic and electrostatic energies are small with respect to the rest energy , leading to the Lévy-Leblond equation. Thus
Inserted in the upper component of Dirac equation, we find Pauli equation (general form):
From a Foldy–Wouthuysen transformation
The rigorous derivation of the Pauli equation follows from Dirac equation in an external field and performing a Foldy–Wouthuysen transformation considering terms up to order . Similarly, higher order corrections to the Pauli equation can be determined giving rise to spin-orbit and Darwin interaction terms, when expanding up to order instead.
Pauli coupling
Pauli's equation is derived by requiring minimal coupling, which provides a g-factor g=2. Most elementary particles have anomalous g-factors, different from 2. In the domain of relativistic quantum field theory, one defines a non-minimal coupling, sometimes called Pauli coupling, in order to add an anomalous factor
where p_μ is the four-momentum operator, A_μ is the electromagnetic four-potential, a is proportional to the anomalous magnetic dipole moment, F^{μν} is the electromagnetic tensor, and σ_{μν} = (i/2)[γ_μ, γ_ν] are the Lorentzian spin matrices, the commutator of the gamma matrices γ_μ. In the context of non-relativistic quantum mechanics, instead of working with the Schrödinger equation, Pauli coupling is equivalent to using the Pauli equation (or postulating Zeeman energy) for an arbitrary g-factor.
See also
Semiclassical physics
Atomic, molecular, and optical physics
Group contraction
Gordon decomposition
Footnotes
References
Books
Eponymous equations of physics
Quantum mechanics

United Nations Spatial Data Infrastructure

The United Nations Spatial Data Infrastructure (UNSDI) is an institutional and technical mechanism for establishing system coherence for the exchange and applications of geospatial data and information for UN activities and supporting SDI (Spatial Data Infrastructure) development activities in Member Countries.
Background of UNSDI
UNSDI is an initiative of the United Nations Geographic Information Working Group (UNGIWG), a voluntary network of UN professionals working in the fields of cartography and geographic information science.
UNSDI aims to contribute to the mission of the United Nations, from peacekeeping to humanitarian relief, from climate change to disaster reduction, response and recovery, from environmental protection to poverty reduction, food security, water management and economic development, and to contribute to the realization of the UN Millennium Development Goals. By facilitating efficient global and local access, exchange and utilization of geospatial information, UNSDI will make the United Nations system more effective and support its “Delivering as One” policies.
Spatial data infrastructures provide the institutional and technical foundation of policies, standards and procedures that enable organizations and information systems to interact in a way that facilitates spatial data discovery, evaluation and applications.
Given that UN agencies vary in their ability to utilise and manage geospatial information it is foreseen that UNSDI will reduce development and operational costs by working together to achieve economies of scale through generic standards, guidelines and implementation tools. Thus, the development of UNSDI is considered essential for increasing system coherence in the use and exchange of geospatial data and information for UN activities.
In the short term, UNSDI is an investment into the capacities of the United Nations System to manage its existing geo-spatial assets more effectively. Additionally UNSDI may serve as a model and vehicle for capacity building in some Member States that request assistance from the United Nations in managing and applying geospatial data to support their national development agenda.
Development at global level
At present the Center of Excellence for UNSDI has been established. The first phase of UNSDI developments consist of a Gazetteer, a Geospatial Data Warehouse, and a Visualization Facility. Two donor countries are involved in the present developments: Australia and Germany. Australia is funding the Gazetteer project, Germany has supplied office and staffing facilities for the UNSDI Center of Excellence in Bonn. The proposal for funding of the Geospatial Data Warehouse and associated activities is to be submitted to the Netherlands in Q4 of 2012.
Development at regional level
The following Regional Organizations joined the process:
Regional Centre for Mapping of Resources for Development (RCMRD) in Nairobi, Kenya
International Centre for Integrated Mountain Development (ICIMOD) in Kathmandu, Nepal
Regional Centre for Training in Aerospace Surveys (RECTAS) in Ile-Ife, Nigeria.
Development at national level
Underlying the UNSDI is the need to link UNSDI with national public and private geospatial and SDI capacities, both in developed and developing countries. To this end National Coordination Offices (NCOs) for UNSDI are to be established. Below the established NCOs are listed.
Although some of the NCOs use the acronym UN and the UN emblem in their official title, they are not affiliated with the United Nations. Use of the emblem is restricted, based on General Assembly resolution 92(I), 1946, and should not be used by non-UN entities.
With the following countries discussions on UNSDI participation are ongoing: Australia, Austria, Brazil, Cape Verde, Chile, Jamaica, India, Japan, Mexico, Morocco, Mongolia, Nigeria, Spain and South Africa.
National Coordination Offices
Netherlands http://www.unsdi.nl
Czech Republic https://web.archive.org/web/20080929085051/http://www.unsdi.cz/
Hungary https://web.archive.org/web/20080929234143/http://www.unsdi.hu/
References
UNSDI gazetteer http://www.csiro.au/gazetteer
UN Flag and Emblem http://www.un.org/Depts/dhl/maplib/flag.htm
Documents
Key documentation on the UNSDI initiative can be found at and downloaded from http://www.ungiwg.org/documents
National UNSDI portals
The Netherlands
(Jan Cees Venema, UNSDI-NCO)
Geographic information systems
Geographic societies
Organizations established by the United Nations

Carboxyfluorescein succinimidyl ester

Carboxyfluorescein succinimidyl ester (CFSE) is a fluorescent cell staining dye. CFSE is cell permeable and covalently couples, via its succinimidyl group, to intracellular molecules, notably, to intracellular lysine residues and other amine sources. Due to this covalent coupling reaction, fluorescent CFSE can be retained within cells for extremely long periods. Also, due to this stable linkage, once incorporated within cells, the dye is not transferred to adjacent cells.
CFSE is commonly confused with carboxyfluorescein diacetate succinimidyl ester (CFDA-SE), although they are not strictly the same molecule; CFDA-SE, due to its acetate groups, is highly cell permeable, while CFSE is much less so. As CFDA-SE, which is non-fluorescent, enters the cytoplasm of cells, intracellular esterases remove the acetate groups and convert the molecule to the fluorescent ester.
CFSE was originally developed as a fluorescent dye that could be used to stably label lymphocytes and track their migration within animals for many months. Subsequent studies revealed that the dye can be used to monitor lymphocyte proliferation, both in vitro and in vivo, due to the progressive halving of CFSE fluorescence within daughter cells following each cell division. The only limitation is that CFSE at high concentrations can be toxic for cells. However, when CFSE labelling is performed optimally, approximately 7-8 cell divisions can be identified before the CFSE fluorescence is too low to be distinguished above the autofluorescence background. Thus CFSE represents an extremely valuable fluorescent dye for immunological studies, allowing lymphocyte proliferation, migration and positioning to be simultaneously monitored. By the use of fluorescent antibodies against different lymphocyte cell surface markers it is also possible to follow the proliferation behaviour of different lymphocyte subsets. In addition, unlike other methods, CFSE-labeled viable cells can be recovered for further analysis.
Since the initial description of CFSE it has been used in thousands of immunological studies, an example of an early proliferation study in animals being described by Kurts et al. However, perhaps the most important CFSE investigations have been those demonstrating that many of the effector functions of lymphocytes, such as cytokine production by T lymphocytes, and antibody class switching by B cells, are division dependent. Sophisticated mathematical models have also been developed to analyse CFSE data and probe various aspects of immune responses. Furthermore, the use of CFSE has extended beyond the immune system, with the dye being used to monitor the proliferation of many other cell types such as smooth muscle cells, fibroblasts, hematopoietic stem cells and even bacteria. Another novel application of CFSE is its use for the in vitro and in vivo determination of cytotoxic lymphocytes.
Detailed protocols are now available that can be used to label lymphocytes (and other cell types) with a high degree of reliability and precision. One of the most important parameters, however, is to ensure that the cell population being studied has not been too heavily labelled with CFSE, as such cells, although remaining viable, proliferate sub-optimally.
References
Flow cytometry
Fluorone dyes
Succinimides

Europe PubMed Central

Europe PubMed Central (Europe PMC) is an open-access repository that contains millions of biomedical research works. It was known as UK PubMed Central until 1 November 2012.
Service
Europe PMC provides free access to more than 9.3 million full-text biomedical and life sciences research articles and over 43.3 million citations. Europe PMC contains some citation information and includes text mining based marked up text that links to external molecular and medical datasets.
The Europe PMC funders group requires that articles describing the results of biomedical and life sciences research they have supported be made freely available in Europe PMC within 6 months of publication to maximise the impact of the work that they fund.
The Grant Lookup facility allows users to search for information in a wide variety of different ways on over 101,900 grants awarded by the Europe PMC funders.
Most content is mirrored from PubMed Central, which manages the deposit of entire books and journals.
Additionally, Europe PMC offers a manuscript submission system, Europe PMC plus, which allows scientists to self-deposit their peer-reviewed research articles for inclusion in the Europe PMC collection.
Organisation
The Europe PMC project was originally launched in 2007 as the first 'mirror' site to PMC, which aims to provide international preservation of the open and free-access biomedical and life sciences literature. It forms part of a network of PMC International (PMCI) repositories that includes PubMed Central Canada. Europe PMC is not an exact "mirror" of the PMC database but has developed some different features. On 15 February 2013 CiteXplore was subsumed under Europe PubMed Central.
The resource is managed and developed by the European Molecular Biology Laboratory-European Bioinformatics Institute (EMBL-EBI), on behalf of an alliance of 27 biomedical and life sciences research funders, led by the Wellcome Trust.
Europe PMC is supported by 27 organisations: Academy of Medical Sciences, Action on Hearing Loss, Alzheimer's Society, Arthritis Research UK, Austrian Science Fund (FWF), the Biotechnology and Biological Sciences Research Council, Blood Cancer UK, Breast Cancer Now, the British Heart Foundation, Cancer Research UK, the Chief Scientist Office of the Scottish Executive Health Department, Diabetes UK, the Department of Health, the Dunhill Medical Trust, the European Research Council, Marie Curie, the Medical Research Council, the Motor Neurone Disease Association, the Multiple Sclerosis Society, the Myrovlytis Trust, the National Centre for the Replacement, Refinement and Reduction of Animals in Research (NC3Rs), Parkinson's UK, Prostate Cancer UK, Telethon Italy, the Wellcome Trust, the World Health Organization and Worldwide Cancer Research (formerly Association for International Cancer Research).
See also
List of academic databases and search engines
MEDLINE
PubMed Central
Hyper Articles en Ligne
Isidore (platform)
References
External links
Fact-sheet
Internet properties established in 2007
Bibliographic databases and indexes
Biological databases
Databases in Europe
Full-text scholarly online databases
Information technology organisations based in the United Kingdom
Medical databases
Medical research organizations
Medical search engines
Open-access archives
Science and technology in Cambridgeshire
South Cambridgeshire District

Basal area

Basal area is the cross-sectional area of trees at breast height (1.3 m or 4.5 ft above ground). It is a common way to describe stand density. In forest management, basal area usually refers to merchantable timber and is given on a per hectare or per acre basis. If one cut down all the merchantable trees on an acre at 4.5 ft off the ground and measured the square inches on the top of each stump (πr²), added them all together and divided by square feet (144 square inches per square foot), that would be the basal area on that acre. In forest ecology, basal area is used as a relatively easily-measured surrogate of total forest biomass and structural complexity, and change in basal area over time is an important indicator of forest recovery during succession.
Estimation from diameter at breast height
The basal area (BA) of a tree can be estimated from its diameter at breast height (DBH), the diameter of the trunk as measured 1.3 m (4.5 ft) above the ground. DBH is converted to BA based on the formula for the area of a circle:

  BA = π (DBH/2)² = (π/4) DBH²

If DBH was measured in cm, BA will be in cm². To convert to m², divide by 10,000:

  BA (m²) = π DBH² / 40,000

If DBH is in inches, divide by 144 to convert to ft²:

  BA (ft²) = π DBH² / 576

The formula for BA may also be simplified as:

  BA (ft²) ≈ 0.005454 × DBH², with DBH in inches, in the English system
  BA (m²) ≈ 0.00007854 × DBH², with DBH in cm, in the metric system
The basal area of a forest can be found by adding the basal areas (as calculated above) of all of the trees in an area and dividing by the area of land in which the trees were measured. Basal area estimates are generally made for a plot and then scaled to m²/ha or ft²/acre to compare forest productivity and growth rate among multiple sites.
Estimation using a wedge prism
A wedge prism can be used to quickly estimate basal area per hectare. To find basal area using this method, simply multiply your BAF (Basal Area Factor) by the number of "in" trees in your variable radius plot. The BAF will vary based on the prism used, common BAFs include 5/8/10, and all "in" trees are those trees, when viewed through your prism from plot centre, that appear to be in-line with the standing tree on the outside of the prism.
Worked example
Suppose you carried out a survey using a variable radius plot with angle count sampling (wedge prism) and you selected a Basal Area Factor (BAF) of 4. If your first tree had a diameter at breast height (DBH) of 14cm, then the standard way of calculating how much of 1ha was covered by tree area (scaling up from that tree to the hectare) would be:
(BAF / ((DBH + 0.5)² × π/4)) × 10,000

BAF, in this case 4, is the BAF selected for the sampling technique.
DBH, in this case 14 cm (this uses an assumed diameter; the actual measurement is taken perpendicular to the tangent line).
The + 0.5 allows under- and over-measurement to be accounted for.
The π/4 converts the rest to the area, and the × 10,000 converts cm² to m².
In this case this means there are roughly 242 trees of this size per hectare, according to this sampled tree being taken as representative of all the unmeasured trees.
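The scaling step above can be sketched in a few lines; a minimal illustration (the function name is ours, not a standard one), assuming BAF in m²/ha and DBH in cm:

```python
import math

def trees_per_hectare(baf, dbh_cm):
    """Stems per hectare represented by one 'in' tree of the given DBH (cm),
    under wedge-prism (variable radius plot) sampling with BAF in m^2/ha."""
    tree_ba_cm2 = (dbh_cm + 0.5) ** 2 * math.pi / 4  # basal area of the tree, cm^2
    return baf / tree_ba_cm2 * 10_000                # 10,000 cm^2 per m^2

stems = trees_per_hectare(4, 14)  # BAF 4, DBH 14 cm -> about 242 stems/ha
```

Each counted tree contributes BAF (here 4 m²/ha) of basal area regardless of its size; dividing by the tree's own basal area converts that into an equivalent stem count.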
Fixed area plot
It would also be possible to survey the trees in a Fixed Area Plot (FAP), also called a Fixed Radius Plot. In the case that this plot was 100 m², the per-tree formula would be

  (DBH + 0.5)² × π/4
References
R. Hédl, M. Svátek, M. Dancak, Rodzay A.W., M. Salleh A.B., Kamariah A.S. A new technique for inventory of permanent plots in tropical forests: a case study from lowland dipterocarp forest in Kuala Belalong, Brunei Darussalam, In Blumea 54, 2009, p 124–130. Published 30. 10. 2009.
Forest modelling
Measurement
Forest ecology

Body hacking

Body hacking is the application of the hacker ethic (often in combination with a high risk tolerance) in pursuit of enhancement or change to the body's functions through technological means, such as do-it-yourself cybernetic devices or by introducing biochemicals.
Grinders are a self-identified community of body hackers. Many grinders identify with the biopunk movement, open-source transhumanism, and techno-progressivism. The Grinder movement is strongly associated with the body modification movement and practices actual implantation of cybernetic devices in organic bodies as a method of working towards transhumanism. This includes designing and installing do-it-yourself body enhancements, such as magnetic implants. Biohacking emerged in a growing trend of non-institutional science and technology development.
"Biohacking" can also refer to managing one's own biology using a combination of medical, nutritional, and electronic techniques. This may include the use of nootropics, nontoxic substances, and/or cybernetic devices for recording biometric data (as in the quantified self movement).
Ideology
Grinders largely identify with transhumanist and biopunk ideologies. Transhumanism is the belief that it is both possible and desirable to so fundamentally alter the human condition through the use of technologies as to inaugurate a superior post-human being. Kara Platoni categorizes such technological modifications as "hard" biohacking, noting the desire to expand the boundaries of human perception and even create "new senses".
Biopunk is a techno-progressive cultural and intellectual movement that advocates open access to genetic information and espouses the liberating potential of truly democratic technological development. Like other punk movements, biopunk encourages the DIY ethic. "Grinders" adhere to an anarchist strain of biopunk that emphasizes non-hierarchical science and DIY.
Cyborgs and cyborg theory strongly influence techno-progressivism and transhumanism and are thus influential to both the DIY-bio movement and grinder movement in general. Some biohackers, such as grinders and the British professor of cybernetics Kevin Warwick, actively design and implement technologies that are integrated directly into the organic body. Examples of this include DIY magnetic fingertip implants or Warwick's "Project Cyborg". Cyborg theory was kickstarted in 1985 with the publication of Donna Haraway's influential "Cyborg Manifesto" but can be traced back all the way to Manfred Clynes and Nathan Klines' article "Cyborgs and Space". This body of theory criticizes the rigidity of ontological boundaries and attempts to denaturalize artificial dichotomies.
Notable people
Kevin Warwick is a British scientist and professor of cybernetics who has been instrumental in advancing and popularizing cyborg technology and biohacking through his self-experiments.
Steve Mann is a professor of electrical and computer engineering who has dedicated his career to inventing, implementing, and researching cyborg technologies, in particular, wearable computing technologies.
Amal Graafstra is known for implanting an RFID chip in 2005 and developing human-friendly chips, including the first-ever implantable NFC chip. In 2013, he founded the biotech startup company Dangerous Things. He is also the author of RFID Toys and a speaker on biohacking topics, including a TEDx talk. He has also built a smartgun that is activated by his implants, and has created an implantable cryptographic processor called VivoKey for personal identity and cryptography applications.
Lepht Anonym is a biohacker and transhumanist known for self-surgeries and material implementation of transhumanist ideologies.
Winslow Strong is a mathematician and physicist.
Tim Cannon is a software developer, entrepreneur, and co-founder of biotech startup company Grindhouse Wetware.
Jeffrey Tibbetts is the organiser of the Grindfest events at his lab in California. He is the founder of Symbiont Labs, a custom implant fabrication facility and implantation clinic. His work has been featured in a number of sources, such as Gizmodo.
Alex Smith is a biohacker known for his work developing new implants, such as the Firefly implants. He has spoken at various conferences, including DEFCON, and been featured in a number of news articles.
Rich Lee is known for implanting headphones in his tragi in 2013, as well as for his work on a vibrating pelvic implant called the Lovetron9000. His biohacking activities were used as a justification to remove his parental custody rights in 2016.
Brian Hanley is an American microbiologist who became known for being one of the first biohackers to engineer their own DNA using gene therapy for human enhancement and life extension.
Meow-Ludo Disco Gamma Meow-Meow implanted a microchip used for the Opal card in Sydney, Australia, though he was subsequently fined $220 for failing to comply with existing transit laws. He also ran against Barnaby Joyce in the Division of New England.
Jo Zayner attempted a full fecal microbiota transplant on herself in February 2016. She is also the founder of the ODIN, a company that delivers DIY-biology and genetic modification kits to consumers.
Biohacker Hannes Sjöblad has been experimenting with NFC chip implants since 2015. In his talk at Echappée Voléé 2016 in Paris, Sjöblad said that he has also implanted himself with a chip between his forefinger and thumb and uses it to unlock doors, make payments, unlock his phone, and essentially replace anything that is in his pockets. He has also hosted several "implant parties", where interested parties can get chips implanted.
Artem Vasilev is a Russian biohacker. In 2018, together with partners, he opened his own biohacking laboratory, spending more than $2 million. Vasilev has a decade of experience optimizing health and performance for executives and professional athletes, including Olympic medalists.
Groups and organizations
Grindhouse Wetware, biotechnology startup company based in Pittsburgh, Pennsylvania
KSEC Solutions, worldwide distributor and consultancy based in the United Kingdom
BioViva, gene therapy research and development company based in Bainbridge Island, Washington
See also
Neurohacking
Do-it-yourself biology
References
External links
Grinder Resource Library
Biopunk directory
Transhumanist resources
Videos
Richard Thieme, "Hacking, biohacking and the future of humanity"
"Biohackers: a journey into cyborg America"
Kevin Warwick, "The last remaining hurdles to cyborg technology"
Kevin Warwick, "Implants and technology—the future of healthcare?"
Kevin Warwick, "Cyborg interfaces"
RBC Trends: "How to make yourself more powerful: what is biohacking?"
Biology and culture
Biopunk
Hacker culture
Subcultures
Transhumanism | Body hacking | [
"Technology",
"Engineering",
"Biology"
] | 1,441 | [
"Genetic engineering",
"Transhumanism",
"Ethics of science and technology"
] |
9,384,746 | https://en.wikipedia.org/wiki/Irving%E2%80%93Williams%20series | The Irving–Williams series refers to the relative stabilities of complexes formed by transition metals. In 1953, Harry Irving and Robert Williams observed that the stability of complexes formed by divalent first-row transition metal ions generally increases across the period to a maximum stability at copper: Mn(II) < Fe(II) < Co(II) < Ni(II) < Cu(II) > Zn(II).
Specifically, the Irving–Williams series refers to the exchange of aqua (H2O) ligands for any other ligand (L) within a metal complex. In other words, the Irving–Williams ordering is almost entirely independent of the nature of the incoming ligand, L.
The main application of the series is to empirically suggest an order of stability within first row transition metal complexes (where the transition metal is in oxidation state II).
Another application of the Irving–Williams series is to use it as a correlation "ruler" in comparing the first stability constant for replacement of water in the aqueous ion by a ligand.
Explanation
Three explanations are frequently used to explain the series:
The ionic radius is expected to decrease regularly from Mn(II) to Zn(II). This is the normal periodic trend and would account for the general increase in stability.
The crystal field stabilization energy (CFSE) increases from zero for Mn(II) to a maximum at Ni(II). This makes the complexes increasingly stable. CFSE for Zn(II) is zero.
Although the CFSE of Cu(II) is less than that of Ni(II), octahedral Cu(II) complexes are subject to the Jahn–Teller effect, which affords octahedral Cu(II) complexes additional stability.
However, none of the above explanations can satisfactorily explain the success of the Irving–Williams series in predicting the relative stabilities of transition metal complexes. A recent study of metal-thiolate complexes indicates that an interplay between covalent and electrostatic contributions in metal–ligand binding energies might result in the Irving–Williams series.
Some actual CFSE values for octahedral complexes of first-row transition metals (∆oct) are 0.4Δ (4 Dq) for iron, 0.8Δ (8 Dq) for cobalt and 1.2Δ (12 Dq) for nickel. When the stability constants are quantitatively adjusted for these values they follow the trend that is predicted, in the absence of crystal field effects, between manganese and zinc. This was an important factor contributing to the acceptance of crystal field theory, the first theory to successfully account for the thermodynamic, spectroscopic and magnetic properties of complexes of the transition metal ions and precursor to ligand field theory.
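The ordering described above can be checked with a minimal sketch. The log K1 values below are rough, illustrative numbers of the kind reported for ethylenediamine complexes of these ions; they are assumptions used only to show the trend, not authoritative data.

```python
# Minimal sketch of the Irving-Williams ordering. The log K1 values
# below are assumed, illustrative numbers (of the order reported for
# ethylenediamine complexes); only their relative order matters here.
log_k1 = {
    "Mn": 2.7, "Fe": 4.3, "Co": 5.9,
    "Ni": 7.6, "Cu": 10.5, "Zn": 5.9,
}

# Stability should rise monotonically from Mn(II) to Cu(II)...
series = ["Mn", "Fe", "Co", "Ni", "Cu"]
rising = all(log_k1[a] < log_k1[b] for a, b in zip(series, series[1:]))

# ...and fall again at Zn(II), giving the maximum at copper.
peak_at_cu = log_k1["Cu"] > log_k1["Zn"]

print(rising and peak_at_cu)  # → True
```

With any set of first stability constants measured for a single ligand, the same two checks express the series: a monotonic rise from manganese to copper, then a drop at zinc.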
Natural proteins' affinities for metal binding also follow the Irving–Williams series. However, in a recent study published in the journal Nature, researchers reported a protein-design approach that overcomes the Irving–Williams restriction, allowing proteins to preferentially bind other metals over copper ions, contrary to the Irving–Williams series.
References
External links
Irving-Williams Series - Transition Metal Chemistry
Transition metals
Equilibrium chemistry | Irving–Williams series | [
"Chemistry"
] | 644 | [
"Equilibrium chemistry"
] |
9,384,886 | https://en.wikipedia.org/wiki/Burn-in%20oven | Burn-in ovens in electronics device fabrication are designed for dynamic and static burn-in of integrated circuits and other electronic devices, including laser diodes. Typical sizes are from under ten to over , with air or nitrogen configurations. Operating temperatures can go over , and ovens can use both single and multiple temperature settings.
Burn-in ovens are used in numerous applications, such as high-dissipation forward bias, high-temperature reverse bias, and dynamic and static burn-in of microprocessors and other semiconductor devices.
Burn-in ovens are considered a type of batch oven. Other types of batch ovens are bench/laboratory, reach-in, walk in/truck in, and clean process.
One company builds systems designed for burn-in of low power laser diodes up to 1A and high power laser diodes up to 300A.
References
Industrial ovens
Semiconductor fabrication equipment
Environmental testing | Burn-in oven | [
"Engineering"
] | 187 | [
"Reliability engineering",
"Semiconductor fabrication equipment",
"Environmental testing",
"Industrial ovens",
"Industrial machinery"
] |
9,384,929 | https://en.wikipedia.org/wiki/UnifiedPOS | UnifiedPOS or UPOS is a worldwide vendor- and retailer-driven open standards initiative under the National Retail Federation's Association for Retail Technology Standards (NRF-ARTS) to provide vendor-neutral software application interfaces (APIs) for numerous (as of 2011, thirty-six) point of sale (POS) peripherals (POS printer, cash drawer, magnetic stripe reader, bar code scanner, line displays, etc.).
The goal is to allow retailers freedom of choice in the selection of POS peripheral devices by the creation, utilization, and promotion of standardized connectivity. UnifiedPOS is an abstraction standard that contains appendices which provide specific platform implementation information for Microsoft .NET and Java.
Developed by a team of joint retailer and industry technical experts following published policies and procedures, UnifiedPOS provides a consistent and exact framework for programming point of sale devices that is platform-independent and vendor-neutral.
Recent efforts (2010-2011) by the UnifiedPOS committee include provisions for local and remote POS peripheral support through a supplemental Web Services for point of service (WS-POS 1.1) standard. In addition, an increasing focus on using XML language commands to control the POS devices can be seen in the XML-POS Appendix in UnifiedPOS Version 1.13 standard.
Management
The UnifiedPOS standard is managed by the Association for Retail Technology Standards (ARTS) through two committees. The ARTS board is composed of international retailers and vendors from all industry segments. The principal responsibilities of this committee are to ensure that the standards it manages continue to expand in accordance with retailer requirements. The committee specifies changes to the standards and approves new devices. Membership in NRF-ARTS is required to participate in the administrative committee, but is not required to download and use the UnifiedPOS standards.
The technical committee, composed of both vendors and retailers, modifies the UnifiedPOS specification based on guidance of the Administrative Committee. The technical committee provides support by resolving implementation issues or issues arising from the standard.
OPOS and JavaPOS implementation groups modify or enhance their actual program implementations to conform to the UnifiedPOS specification. The UnifiedPOS technical committee periodically audits the specific implementations to ensure the various groups conform to the UnifiedPOS specification.
In 2003, a .NET implementation was proposed to the technical committee and subsequently accepted into the standard. This implementation is known as POS for .NET. Microsoft has been criticized for not keeping up with the current revisions (1.13) to the UPOS standard; POS for .NET version 1.12 is not compatible with the current .NET Framework 4.0. However, on December 13, 2013, Microsoft released POS for .NET 1.14 CTP via Microsoft Connect. POS for .NET 1.14 is intended to conform to UPOS 1.14, with public release expected in Spring 2014.
In 2011, an effort was underway to create an updated UnifiedPOS 2.0 standard that would add many enhanced features and functions to support newer programming paradigms and remote XML POS peripheral installation scenarios.
Since 2017, the Object Management Group has been responsible for the UnifiedPOS standard.
Practical application of the UPOS standard
Although UPOS claims support for 36 device types, in practice most retailers will not be able to take advantage of many of these devices. For example, there are few, if any, UPOS service objects that support biometric or RFID types. Instead, most device manufacturers for these devices have created their own proprietary device drivers.
See also
National Retail Federation
Point of sale
Point-of-sale malware
References
External links
Association for Retail Technology Standards
Microsoft POS for .NET
POS for .NET documentation
Retail point of sale systems
Standards | UnifiedPOS | [
"Technology"
] | 775 | [
"Retail point of sale systems",
"Information systems"
] |
9,385,035 | https://en.wikipedia.org/wiki/Retired%20number | Retiring the number of an athlete is an honor a team bestows upon a player, usually after the player has left the team, retires from the sport, or dies, by taking the number formerly worn on their uniform out of circulation. Once a number is retired, no future player from the team may wear it, unless the original player permits it; however, in many cases the number cannot be used at all. Such an honor may also be bestowed on players who had highly memorable careers, died prematurely under tragic circumstances, or have had their promising careers ended by serious injury. Some sports that retire team numbers include baseball, cricket, ice hockey, basketball, American football, and association football. Retired jerseys are often referred to as "hanging from the rafters" as they are put to hang in the team's home venue.
The first number officially retired by a team in a professional sport was that of ice hockey player Ace Bailey, whose number 6 was retired by the Toronto Maple Leafs in 1934. Some teams have also retired number 12 in honor of their fans, or the "twelfth man". Similarly, the Sacramento Kings and Orlando Magic retired number 6 in honor of their fans, the "sixth man". Sometimes, a team may decide to retire a number in honor of tragedies involving the team's city or state. For example, the number 58 was retired in 2018 by the Vegas Golden Knights hockey team in honor of the 58 victims killed in the 2017 Las Vegas shooting.
North American sports leagues
If a jersey is retired while an active player is still wearing it, the player is usually permitted to wear the number for the rest of his playing career. If managers and coaches in the sport wear uniform numbers and the player later becomes a coach for the same team, he is also permitted to wear it as a coach.
However, in some cases, the player may still elect to change their number. For instance, in 1987 the Boston Bruins of the National Hockey League decided to retire jersey number 7 in honor of Phil Esposito, who had become a star while playing for the team. At the time #7 belonged to Ray Bourque, who was the Bruins' captain and had become a star in his own right. On the night of the ceremony honoring Esposito, Bourque took to the ice wearing his normal #7 jersey, which he had worn since breaking into the league in 1979. He skated over to the Hall of Famer, took off his #7 jersey, and handed it to Esposito in what was referred to as Bourque's "surrendering" of #7 to Esposito. Underneath was a jersey numbered 77, which would become as associated with Bourque as #7 had been with Esposito in Boston. Bourque's new jersey number would eventually join Esposito's in the rafters of TD Garden, as the Bruins retired his #77 following his 2001 retirement.
In rare cases, a number may be retired because of the player's endeavors in other fields. For example, former college football star Gerald Ford's number 48 was retired by the University of Michigan football squad under his future career as the 38th President of the United States.
Teams also take numbers out of circulation without formally retiring them, though it is generally understood that those numbers will never be issued again. For example, the Pittsburgh Steelers have only officially retired three numbers: Ernie Stautner's #70, Joe Greene's #75 and Franco Harris' #32. However, they have not reissued the numbers of several of their greatest players since they retired, and it is understood that no Steeler will ever wear them again. For example, Greene's #75 had not been reissued since Greene retired in 1981. Similarly, except for a pair of quarterbacks in the mid-1980s, the Green Bay Packers have not re-issued Paul Hornung's number 5 since he departed from the team following the 1966 season. The Dallas Cowboys do not officially retire numbers, but it is generally understood that Roger Staubach's #12, Bob Lilly's #74, Troy Aikman's #8, and Emmitt Smith's #22 will never be issued again (though the Cowboys have occasionally used Lilly's 74 in the preseason). Additionally, after Peyton Manning was released by the Indianapolis Colts, owner Jim Irsay stated that no Colt would ever wear Manning's #18 again, though it was not officially retired until 2016. After he departed from the team in 2004, the Lakers removed Shaquille O'Neal's #34 from circulation, only officially retiring it in 2013.
Some teams either formally or informally take a jersey out of circulation when a player dies or has their career ended by serious injury or disease. For instance, between 1934 and 2016, the Toronto Maple Leafs only retired a player's number if he experienced a career-ending incident while playing for the team. As a result, they had only retired two jerseys in their history during that time; Ace Bailey's #6 was retired after he suffered a career-ending head injury and Bill Barilko's #5 was retired after his disappearance and presumed death on a fishing trip (his death was confirmed years later with the discovery of the wreckage of the plane on which he was flying). The New York Yankees retired Lou Gehrig's #4 after he was forced to retire due to amyotrophic lateral sclerosis. The New York Jets did not reissue the #90 of Dennis Byrd following a career-ending neck injury, and it was understood long before its formal retirement in 2012 that no Jet would ever wear it again. Similarly, after Wayne Chrebet was forced to retire after suffering multiple concussions, the Jets took his #80 out of circulation but have not yet retired it; Byrd and Curtis Martin were the most recent Jets to have their numbers retired as both were done on the same day. After Magic Johnson retired because of his HIV disease, the Lakers retired his jersey #32.
In 2008, Princeton University retired the number 42 for all Princeton Tigers sports teams in honor of Bill Bradley and Heisman Trophy winner Dick Kazmaier. UCLA retired the same number in 2014 for all Bruins sports teams in honor of Jackie Robinson, who had played in four sports at the school before his Hall of Fame baseball career. Although Robinson never wore #42 at UCLA, the school chose it because of its indelible identification with Robinson.
In 2011, Michigan Wolverines football unretired all of the numbers that it had retired to create legends jerseys worn by its best players. The unretired jerseys were Bennie Oosterbaan's No. 47, Gerald Ford's No. 48, Ron Kramer's No. 87, The Wistert Brothers' (Whitey Wistert, Al Wistert, Alvin Wistert) No. 11 and Tom Harmon's No. 98. In 2015, the Legends program was discontinued, and the numbers re-retired.
On December 18, 2017, Kobe Bryant became the only player to have had two numbers (8 and 24) retired by the same franchise, the Los Angeles Lakers. Following Bryant's death, the Dallas Mavericks announced that number 24 would no longer be issued by the team (despite Bryant spending his entire career with the Lakers). While the number has not been issued since then, it is not honored in the rafters as an official retired number.
League-wide retirements
Three players in the major North American sports leagues have had their numbers retired by all teams in their respective leagues: Jackie Robinson, the first Black player in the modern era of Major League Baseball; Wayne Gretzky, regarded by many as the greatest hockey player in NHL history; and Bill Russell, the most successful player in NBA history in terms of total championship wins.
Robinson had his number 42 retired league-wide in 1997. However, players who were wearing the number at the time were permitted to retain it for the duration of their careers; Mariano Rivera was the last remaining player to wear the number, and he retired at the end of the 2013 season. The only other exception to this retirement is on Jackie Robinson Day, April 15, the anniversary of Robinson's MLB debut, when all uniformed personnel (players, managers, coaches, umpires) wear 42.
Wayne Gretzky, who retired as the National Hockey League's all-time leader in goals, points, and assists, had his number 99 retired league-wide at the 2000 NHL All-Star Game. On August 11, 2022, the NBA announced that it would retire Bill Russell's number 6 jersey league-wide, allowing players already wearing the number to continue to do so.
Association football
Association football has a far shorter history of players wearing squad numbers; from the introduction of numbers of shirts in the 1930s until the 1990s, the players on a team almost always wore numbers 1 to 11, irrespective of which players were selected. This meant that players often wore many different numbers during their time with a club and even during the same season, and were not as readily associated with a specific number as players in North American sports. Nonetheless, some star players were associated with a particular number and this, along with squad numbers becoming more common since the 1990s, has led some clubs to retire numbers. AS Roma, AC Milan, Ajax, Birmingham City, Inter Milan, Napoli, Manchester City, Lens, Lyon, Nantes and Swansea City have all retired shirt numbers; Milan retiring Franco Baresi's #6 shirt and Paolo Maldini's #3 shirt (with the caveat that one of Maldini's sons can wear the shirt if they play professionally for the club). Swansea retired the shirt number of Besian Idrizaj after his death from a suspected heart attack. Manchester City, Lens and Lyon all retired the shirt number of Marc-Vivien Foé after his death on the field in the 2003 Confederations Cup.
FIFA have rejected all attempts by national teams to retire numbers. These include the Cameroon national team attempting to retire Foé's number, Argentina and the #10 of Diego Maradona, and The Netherlands and the #14 of Johan Cruyff.
Australian rules football
In Australian rules football, some clubs may exercise the right to retire a particular guernsey number, either to honour a past player or simply to cease use of the number. Examples include the Hawthorn Football Club, who retired their No. 1 guernsey prior to the beginning of the 2011 AFL season as a tribute to the fans; Max Bailey, the last person to wear the No. 1 guernsey, had his career cut short by multiple injuries to his right knee and thanked the fans during his comeback attempts. Another example is the Collingwood Football Club, who retired their No. 42 guernsey in honour of Darren Millane, a Collingwood premiership player who was killed in a car crash in 1991.
Motorsport
In NASCAR's Whelen Modified Tour, number 61 is retired for Richie Evans after his death in 1985. NASCAR unofficially retired the number 3 in honour of Dale Earnhardt Sr. after his death on the track at the 2001 Daytona 500. Following his death, Earnhardt's old team changed to the number 29, and the replacement driver (Kevin Harvick) drove the 29 car through the 2013 season. Dale Earnhardt Jr. made two special appearances in a number 3 car in the Busch Series in 2002 and again in the renamed Nationwide Series on 2 July 2010 at Daytona, but otherwise the number 3 was absent from all three national touring series until 2009, when Austin Dillon drove a number 3 in the Camping World Truck Series. Dillon is the grandson of Earnhardt's longtime friend and car owner Richard Childress, and he drives for Richard Childress Racing. After winning the Truck Series title in 2011, he drove the #3 car in the Nationwide Series in 2012 and 2013, and returned the number to the Cup Series in 2014 when he began competing full-time in that series for RCR. Ty Dillon, Austin's brother (another grandson of Childress), ran the number 3 in the Camping World Truck Series and began driving the number 3 in the Nationwide Series, now known as the Xfinity Series, in 2014.
From 2004 to 2006, drivers in the International Race of Champions used their numbers from their primary racing series. However, the #3 was retired as a result of Earnhardt's death and any driver who drove the #3 in their primary racing series would drive #03 instead. As such, Hélio Castroneves, who drives #3 in the IndyCar Series, drove the #03.
Following the 1989 24 Hours of Daytona, months after his fatal plane crash, IMSA retired Al Holbert's #14.
CART retired the use of #99 after the fatal accident of Greg Moore in 1999; however, since the IndyCar Series unification in 2008, that recognition has been abandoned. For a brief time during the early-to-mid 1990s, CART unofficially retired #14 (in honor of A. J. Foyt), allowing it to be carried only by an entry of A. J. Foyt Enterprises. After the open wheel split in 1996, the rule in CART competition was lifted.
Grand Prix motorcycle racing retired the use of #74 after the fatal accident of Daijiro Kato in 2003, #48 after the fatal accident of Shoya Tomizawa in 2010, #58 after the accident of Marco Simoncelli at the Sepang Circuit in 2011, and #39 after the death of Luis Salom at the Circuit de Catalunya in 2016. In January 2019, #69 was retired in honour of Nicky Hayden, who died in a cycling accident in May 2017. In 2021, number 50 was retired in honour of Jason Dupasquier, who was killed in an accident at the Mugello Circuit.
The Formula One World Championship, which has allowed drivers to choose their own number since the 2014 season, retired the use of #17 after the 2015 death of Jules Bianchi from critical injuries sustained in a crash at the 2014 Japanese Grand Prix. The #1 is reserved for the defending World Champion; no other driver may use it, even if the champion chooses not to. In 2017, no driver was allowed to use #1, as Nico Rosberg retired after winning the 2016 season. In addition, a retired driver's old number (except for #1) cannot be used by another driver for two years in case of a comeback; until the end of the 2024 season, #5 (Sebastian Vettel), #6 (Nicholas Latifi) and #47 (Mick Schumacher) could therefore be used only by those drivers if they made a comeback. After that time, a returning driver must use a different number, even if their old number has not been taken by anyone else.
The FIA Formula 2 Championship, formerly known as the GP2 Series, retired #19 after the death of Anthoine Hubert in a crash during the 2019 Spa-Francorchamps FIA Formula 2 round.
The FIA World Rally Championship, which has allowed drivers to choose their own number since the 2019 season, retired the use of #42 after the death of Craig Breen during a test for the 2023 Croatia Rally.
Cricket
Australian Cricket retired Phillip Hughes' One-Day International shirt number, 64, in remembrance of him, after his death during a match in 2014.
In 2017, BCCI unofficially retired Sachin Tendulkar's One-Day International shirt number 10.
The Cricket Association of Nepal retired Paras Khadka's shirt number, 77, following the retirement of the country's most successful captain in August 2021.
Rugby league
In other sports such as Rugby League and Rugby Union, despite the long history of the games, the retirement of jersey numbers was long impossible because each number represents a particular position on the field. However, as more leagues have moved over to squad numbers, the retirement of numbers is now possible. The first recorded example in Rugby League was in May 2015, when Keighley Cougars withdrew number 6 following the death of Danny Jones during a match.
Following the death of former player Roger Millward, Hull Kingston Rovers withdrew the number 6 shirt Millward used to wear. Terry Campese who had been allocated that number for 2016 was allocated squad number 32 instead. In December 2024, Hull Kingston Rovers announced that the number 6 shirt had been 'unretired' for use by Steve Prescott MBE Man of Steel Mikey Lewis from the 2025 season onwards.
In 2014, the Newcastle Knights retired the number 16 jersey for every game from Round 4, following a career-ending neck injury to Alex McKinnon that left him a quadriplegic.
Other sports
In Finnish ice hockey, if a player's number is retired, family members can use the retired number if they play for the same organization. Timo Nummelin had his number 3 retired by TPS, and later his son, Petteri Nummelin, wore number 3 for the team.
Following the death of Wouter Weylandt in the 2011 Giro d'Italia cycle race, organizers decided that they would not reassign Weylandt's bib number of 108 in future editions of the race.
In December 2020, following the death of professional wrestler Jon Huber, who wrestled under the ring name "Mr. Brodie Lee" in the American promotion All Elite Wrestling (AEW), the promotion retired the red strap version of the AEW TNT Championship belt that had been used up to that point in honor of Huber, who was the championship's second title holder; the belt was given to Huber's eldest son. A black strap version of the championship is now used.
In ceremonies before Germany's opening game of EuroBasket 2022 against France on September 2 in Cologne, the German Basketball Federation retired the #14 that Hall of Famer Dirk Nowitzki had worn for the men's national team. Since then, a replica of Nowitzki's jersey has hung from the arena rafters at all Germany men's home games.
See also
List of Canadian Football League retired numbers
List of Major League Baseball retired numbers
List of NBA retired numbers
List of NFL retired numbers
List of National Hockey League retired numbers
List of retired numbers in association football
List of NCAA men's basketball retired numbers
List of NCAA football retired numbers
Footnotes
References
External links
Terminology used in multiple sports
Team sports
Sports culture
Retirement
Numbering in sports
American football culture
Association football culture
Australian rules football culture
Baseball culture
Basketball culture
Cricket culture
Ice hockey culture
Rugby football culture | Retired number | [
"Mathematics"
] | 3,810 | [
"Numbering in sports",
"Mathematical objects",
"Numbers"
] |
9,385,138 | https://en.wikipedia.org/wiki/Association%20for%20Retail%20Technology%20Standards | The Association for Retail Technology Standards (ARTS) is an international standards organization dedicated to reducing the costs of technology through standards. Since 1993, ARTS has been delivering application standards exclusively to the retail industry. ARTS has four standards:
The Standard Relational Data Model, UnifiedPOS, ARTS XML and the Standard RFPs. It is a division of the National Retail Federation. These standards enable the rapid implementation of technology within the retail industry by developing standards to ease integration of software applications and hardware devices. ARTS offers testing services to verify that applications accurately incorporate these standards.
Hundreds of leading retailers and vendors worldwide contribute to shaping the ARTS Data Model. The ARTS Data Model is known as the information standard in the retail industry and provides a comprehensive design document containing all data elements and definitions required to support retail applications.
UnifiedPOS is a platform-neutral specification for connecting POS peripherals such as printers, scanners, and scales to the POS terminal, allowing retailers freedom of choice in the selection of hardware integration.
ARTS XML (formerly IXRetail) builds on the ARTS Data Model to develop standard XML schemas and message sets to ease application-to-application integration within a retail enterprise. There are currently 11 schemas available.
Standard RFPs (Requests for Proposal) were developed to help retailers choose the right applications for their specific business requirements. There are currently seven standardized template RFPs available for download.
Membership is open to all members of the international technology community, retailers from all industry segments, application developers and hardware companies. Membership requires a small fee, which is waived to those already members of the National Retail Federation, and an agreement to adhere to policies and standards regarding the licensing of any ARTS property.
Notes
External links
Association for Retail Technology Standards
National Retail Federation
UnifiedPOS
Retail point of sale systems
Retailing organizations | Association for Retail Technology Standards | ["Technology"] | 363 | ["Retail point of sale systems", "Information systems"] |
9,385,162 | https://en.wikipedia.org/wiki/BSI%20Group | The British Standards Institution (BSI) is the national standards body of the United Kingdom. BSI produces technical standards on a wide range of products and services and also supplies standards certification services for business and personnel.
History
BSI was founded as the Engineering Standards Committee in London in 1901. It subsequently extended its standardization work and became the British Engineering Standards Association in 1918, adopting the name British Standards Institution in 1931 after receiving a Royal Charter in 1929. In 1998 a revision of the Charter enabled the organization to diversify and acquire other businesses, and the trading name was changed to BSI Group.
The Group now operates in 195 countries. The core business remains standards and standards-related services, although the majority of the Group's revenue comes from management systems assessment and certification work.
In 2021, BSI appointed its first female chief executive officer, Susan Taylor Martin.
Activities
BSI produces British Standards, and, as the UK's National Standards Body, is also responsible for the UK publication, in English, of international and European standards. BSI is obliged to adopt and publish all European Standards as identical British Standards (prefixed BS EN) and to withdraw pre-existing British Standards that are in conflict. However, it has the option to adopt and publish international standards (prefixed BS ISO or BS IEC).
In response to commercial demands, BSI also produces commissioned standards products such as Publicly Available Specifications (PASs), Private Standards and Business Information Publications. These products are commissioned by individual organizations and trade associations to meet their needs for standardized specifications, guidelines, codes of practice etc. Because they are not subject to the same consultation and consensus requirements as formal standards, the lead time is shorter.
BSI also publishes standards-related books, CD-ROMs, subscription and web-based products as well as providing training on standards-related issues.
Management systems assessment and certification
With 80,000 clients, BSI is one of the world's largest certification bodies. It audits and provides certification to companies worldwide who implement management systems standards. BSI also runs training courses that cover the implementation and auditing requirements of national and international management systems standards.
It is independently accredited and assesses a wide range of standards and other specifications including:
Testing Services and Healthcare
Within Testing Services, BSI's best known product in the UK is the Kitemark, a registered certification mark first used in 1903. The Kitemark – which is recognized by 82% of UK adults – signifies products or services which have been assessed and tested as meeting the requirements of the related specification or standard within a Kitemark scheme.
BSI also conducts testing of products for a range of certifications, including for CE marking. CE marking must be applied to a wide range of products intended for sale in the European Economic Area. Frequently manufacturers or importers need a third-party certification of their product from an accredited or 'Notified' body. BSI holds Notified Body status for 15 EU Directives, including construction products, marine equipment, pressurised equipment and personal protective equipment.
BSI also conducts testing for manufacturers developing new products and has facilities to test across a wide range of sectors, including construction, fire safety, electrical and electronic and engineering products.
Within Healthcare, BSI provides regulatory and quality management reviews and product certification for medical device manufacturers in Europe, the United States, Australia, Japan, Taiwan, Canada and China. It is the market leader in the US, the world's biggest healthcare market.
Acquisitions
Starting in 1998, BSI Group has adopted a policy of international growth through acquisition as follows:
1998: CEEM, USA and International Standards Certification Pte Ltd, Singapore
2002: KPMG's certification business in North America
2003: BSI Pacific Ltd, Hong Kong
2004: KPMG's certification business in the Netherlands
2006: Nis Zert, Germany; Entropy International Ltd, Canada & UK; Benchmark Certification Pty Ltd, Australia; ASI-QS, UK
2009: Supply Chain Security Division of First Advantage Corp. USA; Certification International S.r.l, Italy; EUROCAT, Germany
2010: GLCS, the leading certifier of gas-related consumer equipment in the UK and one of the top three in Europe; the certification business of BS Services Italia S.r.l. (BSS); Systems Management Indonesia (SMI).
2013: 9 May 2013 – NCS International and its daughter company NCSI Americas, Inc.
2015: 24 January – EORM, a US consultancy specialising in environmental, health, safety (EHS) and sustainability services
2015: 30 January – the management systems certification business of PwC in South Africa
2015: 3 June – Hill County Environmental Inc, a US environmental and engineering services consultancy
2016: 4 April – Espion Ltd and Espion UK, experts at managing and securing corporate information
2016: 15 August – Atrium Environmental Health and Safety Services LLC, experts in occupational safety, industrial safety and environmental compliance
2016: 22 September – Creative Environment Solutions (CES) Corp., an Environmental and Safety consulting firm
2016: 4 October – Info-Assure Ltd, a leading provider of cyber security and information assurance
2016: 15 December – Quantum Management Group Inc, a US environmental, health and safety (EHS) consultancy
2017: 5 December – Neville Clarke, the Business Process Improvement Expert
2018: 8 November – AirCert GmbH, a specialist aerospace certification company located in Munich, Germany
2019: 3 April – AppSec Consulting, a US cybersecurity and information resilience company
2021: 1 February – Q-Audit, a JAS-ANZ accredited healthcare auditing body based in Sydney, Australia and Auckland, New Zealand.
BSI Identify
In 2021, BSI Group, supported by the Construction Products Association, led the development of a system known as BSI Identify, established in response to Dame Judith Hackitt's recommendations. BSI Identify uses Digital Object Identifier (DOI) technology "to deliver a unique, constant, and interoperable identifier", known as a BSI UPIN, "which can be assigned to products to help UK manufacturers to directly manage information about their products in the supply chain". The aim of the BSI Identify programme is that "wherever you are with [a] product, you can take a snapshot of the QR code with your mobile device and it will immediately take you to the product technical data sheet. You can see exactly what product it is, you can answer any questions about it, you can see installation advice etc."
Arms
See also
Notes and references
External links
BSI Group United Kingdom
BSI Group
Certification marks
Companies based in the London Borough of Hounslow
Electrical safety standards organizations
Trade associations based in the United Kingdom
International Electrotechnical Commission
United Kingdom
Ig Nobel laureates
Organizations established in 1901
Standards organisations in the United Kingdom
1901 establishments in the United Kingdom | BSI Group | ["Mathematics", "Engineering"] | 1,399 | ["Electrical engineering organizations", "Symbols", "International Electrotechnical Commission", "Certification marks"] |
9,385,188 | https://en.wikipedia.org/wiki/Manchester%20Institute%20of%20Biotechnology | The Manchester Institute of Biotechnology, formerly the Manchester Interdisciplinary Biocentre (MIB) is a research institute of the University of Manchester, England.
Role
The centre was designed to enable academic communities to explore specific areas of interdisciplinary quantitative bioscience, largely through the efforts of multidisciplinary research teams. The original research portfolio was centred around three broadly defined, interdisciplinary and complementary themes: Biological Mechanism and Catalysis, Molecular Bioengineering, and Systems biology.
MIB research is now centred on mission priorities that include: fundamental bioscience and technology development; delivery of bio-based chemicals and materials for clean growth; new biotechnologies for the production and delivery of advanced therapeutics; and engineering biological solutions for environmental protection.
Since its inception the institute has become internationally known for its research strengths in industrial biotechnology, with state-of-the-art facilities for biomolecule engineering. Strengths include: enzyme engineering and industrial biocatalysts; structural and computational biology; microbial and microbiome engineering; and biotechnology for materials and health.
The institute now houses 40 academic research groups with approximately 400 staff and postgraduate researchers.
History
Planning for the institute began late in 1998 and culminated with the official opening on 25 October 2006 of the John Garside Building. The building won "Building of the Year" from Manchester Chamber's Building and Development Committee in 2006 along with Beetham Tower, Manchester.
The building has featured in several television commercials, notably Injury Lawyers 4u.
The institute was renamed the Manchester Institute of Biotechnology on 1 June 2012, retaining the acronym MIB.
In November 2019 the MIB was awarded the Queen’s Anniversary Prize for Higher and Further Education. The award is a recognition of the MIB as a 'beacon of excellence' for work in Industrial Biotechnology.
References
External links
Manchester Institute of Biotechnology (MIB) – official website
MIB description page at OpenWetWare
Biological research institutes in the United Kingdom
Biotechnology in the United Kingdom
Buildings and structures in Manchester
Departments of the University of Manchester
Research institutes established in 2006
Research institutes in Manchester
2006 establishments in England | Manchester Institute of Biotechnology | ["Biology"] | 424 | ["Biotechnology in the United Kingdom", "Biotechnology by country"] |
9,385,454 | https://en.wikipedia.org/wiki/Spatial%20data%20infrastructure | A spatial data infrastructure (SDI), also called geospatial data infrastructure, is a data infrastructure implementing a framework of geographic data, metadata, users and tools that are interactively connected in order to use spatial data in an efficient and flexible way. Another definition is "the technology, policies, standards, human resources, and related activities necessary to acquire, process, distribute, use, maintain, and preserve spatial data". Most commonly, institutions with large repositories of geographic data (especially government agencies) create SDIs to facilitate the sharing of their data with a broader audience.
A further definition is given in Kuhn (2005): "An SDI is a coordinated series of agreements on technology standards, institutional arrangements, and policies that enable the discovery and use of geospatial information by users and for purposes other than those it was created for."
General
Some of the main principles are that data and metadata should not be managed centrally, but by the data originator and/or owner, and that tools and services connect via computer networks to the various sources. A GIS is often the platform for deploying an individual node within an SDI. To achieve these objectives, good coordination between all the actors is necessary and the definition of standards is very important.
The original example of an SDI is the United States National Spatial Data Infrastructure (NSDI), first mandated in the OMB Circular A-16 in 1996. In Europe since 2007, INSPIRE is a European Commission initiative to build a European SDI beyond national boundaries; the United Nations Spatial Data Infrastructure (UNSDI) plans to do the same for over 30 UN Funds, Programs, Specialized Agencies and member countries.
Software components
An SDI should enable the discovery and delivery of spatial data from a data repository, via a spatial service provider, to a user. As mentioned earlier, it is often desirable that the data provider be able to update spatial data stored in a repository. Hence, the basic software components of an SDI are:
Software client - to display, query, and analyse spatial data (this could be a browser or a desktop GIS)
Catalogue service - for the discovery, browsing, and querying of metadata or spatial services, spatial datasets and other resources
Spatial data service - allowing the delivery of the data via the Internet
Processing services - such as datum and projection transformations, or the transformation of cadastral survey observations and owner requests into Cadastral documentation
(Spatial) data repository - to store data, e.g., a spatial database
GIS software (client or desktop) - to create and update spatial data
Besides these software components, a range of (international) technical standards is necessary to allow interaction between the different components. Among them are geospatial standards defined by the Open Geospatial Consortium (e.g., OGC WMS, WFS, GML) and ISO (e.g., ISO 19115) for the delivery of maps, vector and raster data, as well as data format and internet transfer standards from the W3C.
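As a concrete illustration of how a software client talks to one of these services: the usual first step is an OGC WMS 1.3.0 `GetCapabilities` request, which asks a map server to describe the layers and operations it offers. The sketch below only builds the request URL; the endpoint shown is a placeholder, not a real server.

```python
from urllib.parse import urlencode

# Placeholder endpoint -- substitute a real WMS server's base URL.
endpoint = "https://example.org/geoserver/wms"

# Standard OGC WMS 1.3.0 GetCapabilities parameters.
params = {"SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetCapabilities"}

url = f"{endpoint}?{urlencode(params)}"
print(url)
# -> https://example.org/geoserver/wms?SERVICE=WMS&VERSION=1.3.0&REQUEST=GetCapabilities
```

A client would then fetch this URL and parse the returned capabilities XML to discover the available layers.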
National spatial data infrastructures
A list by country or administrative zone. It is not complete, but rather a sample of National Spatial Data Infrastructure (NSDI) official websites.
See also
GeoSUR
GEOSS
GMES
INSPIRE
UNSDI
GIS file formats
GIS software
International Cartographic Association (ICA)
ArcGIS
Geographic information system (GIS)
References
External links
The INSPIRE Directive: a brief description (JRC Audiovisuals)
GSDI 11 World Conference: The Geo-Spatial event of 2009, Rotterdam The Netherlands
Global Spatial Data Infrastructure (GSDI) Association
Links to SDI initiatives from the GSDI Association website
The Netherlands Coordination Office of UNSDI (UNSDI-NCO)
The GeoNetwork portal of UNSDI-NCO (with over 17,800 metadata sets)
Laboratory of Geo-Information Science and Remote Sensing
SNIG - Portuguese National System for Geographic Information
Journals
International Journal of Spatial Data Infrastructure Research
Books
The SDI Cookbook from the Global Spatial Data Infrastructure Organisation (GSDI)
Research and Theory in Advancing Spatial Data Infrastructure Concepts
GIS Worlds: Creating Spatial Data Infrastructures
Building European Spatial Data Infrastructures
Software
geOrchestra is a free, modular and interoperable Spatial Data Infrastructure software that includes other software like GeoNetwork, GeoServer, GeoWebCache,...,
GeoNetwork is a free and open source (FOSS) cataloging application for spatially referenced resources,
GeoNode is a web-based application and platform for developing geospatial information systems (GIS) and for deploying spatial data infrastructures (SDI),
OpenSDI includes Open Source components like GeoServer and GeoNetwork,
easySDI is a complete web-based platform for deploying any geoportal.
Geoportal Server is an open source solution for building SDI models where a central SDI node is populated with content from distributed nodes, as well as SDI models where each node participates equally in a federated mode.
Geographic data and information regulation
Spatial analysis
IT infrastructure | Spatial data infrastructure | ["Physics", "Technology"] | 1,043 | ["IT infrastructure", "Spatial analysis", "Information technology", "Space", "Spacetime"] |
9,385,791 | https://en.wikipedia.org/wiki/OPOS | OPOS, full name OLE for Retail POS, a platform specific implementation of UnifiedPOS, is a point of sale device standard for Microsoft Windows operating systems that was initiated by Microsoft, NCR, Epson, and Fujitsu-ICL and is managed by the Association for Retail Technology Standards. The OPOS API was first published in January 1996. The standard uses component object model and, because of that, all languages that support COM controls (i.e. Visual C++, Visual Basic, and C#) can be used to write applications.
The OPOS standard specifies two levels for an OPOS control: the control object, which presents an abstract hardware interface to a family of devices such as receipt printers, and the service object, which handles the interface between the control object and the actual physical device, such as a specific model of receipt printer. This division of functionality allows the application developer to write to an abstract hardware interface while letting the application work with a variety of different hardware. The only requirement is that a hardware vendor supply an OPOS-compatible service object with their particular hardware offering.
Typically a manufacturer of point-of-sale terminals will provide, along with a terminal operating system, an OPOS control object package and a software utility used to configure OPOS settings. Such a utility specifies the settings for an OPOS control object and indicates the service object to be used with a particular OPOS profile. When the point-of-sale application starts up, it loads the OPOS control object, and the control object in turn loads the service object specified by the current OPOS profile. The Windows Registry is typically used as the persistent store for device settings. The hardware device manufacturer will normally provide a utility for device-specific settings used by the service object.
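The control/service split described above is essentially a late-binding indirection. The following sketch imitates its shape in plain Python rather than COM; the vendor class and profile name are hypothetical, and this is not the actual OPOS API.

```python
# Hypothetical vendor service object: the hardware-specific layer.
class EpsonReceiptPrinterService:
    def print_line(self, text: str) -> str:
        return f"[epson] {text}"

# Stand-in for the Windows Registry profile store used by real OPOS.
SERVICE_REGISTRY = {"ReceiptPrinter.Default": EpsonReceiptPrinterService}

# Control object: the abstract hardware interface the application codes to.
class ReceiptPrinterControl:
    def __init__(self, profile: str):
        # Late binding: the configured profile decides which vendor
        # service object gets loaded behind the abstract interface.
        self._service = SERVICE_REGISTRY[profile]()

    def print_line(self, text: str) -> str:
        return self._service.print_line(text)

ctl = ReceiptPrinterControl("ReceiptPrinter.Default")
print(ctl.print_line("TOTAL  9.99"))  # -> [epson] TOTAL  9.99
```

Swapping printers then means editing the profile entry, not the application.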
Operating systems
OPOS can be deployed on the following operating systems:
Microsoft Windows 95
Microsoft Windows 98
Microsoft Windows ME
Microsoft Windows NT
Microsoft Windows 2000
Microsoft Windows XP
Microsoft Windows Vista
Microsoft Windows CE
Microsoft Windows 7
Microsoft Windows 8
Microsoft Windows 10
See also
Windows Embedded
References
Microsoft application programming interfaces
Retail point of sale systems
Standards | OPOS | ["Technology"] | 421 | ["Retail point of sale systems", "Information systems"] |
9,385,796 | https://en.wikipedia.org/wiki/33P/Daniel | Comet Daniel is a periodic comet in the Solar System discovered by Zaccheus Daniel (Halsted Observatory, Princeton University, New Jersey, United States) on December 7, 1909, when it was estimated at magnitude 9.
Following its discovery, the returns for 1916, 1923, and 1930 were predicted but on each occasion it was not recovered.
The 1937 return was recovered by Shin-ichi Shimizu (Simada, Japan) on January 31, following a calculation of the comet's orbit by Hidewo Hirose (Tokyo, Japan), who started from Alexander D. Dubiago's calculations for the 1923 return and took into account perturbations from Jupiter.
All returns apart from 1957 and 1971 have been recovered.
Repeated close encounters with Jupiter have steadily increased this comet's orbital period since it was first discovered; the period will likely increase again, to 8.29 years, when the comet next encounters Jupiter on December 2, 2018.
The comet nucleus is estimated to be 2.6 kilometers in diameter.
At some point between January 11 and 30, 2009, the comet underwent an outburst of around 3 magnitudes, brightening from 18th to 15th magnitude.
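The size of that outburst can be translated into a brightness ratio with Pogson's relation (a standard astronomical conversion, not specific to this comet): a difference of Δm magnitudes corresponds to a flux ratio of 10^(0.4·Δm).

```python
def brightness_ratio(delta_m: float) -> float:
    # Pogson's relation: 5 magnitudes = a factor of exactly 100 in flux.
    return 10 ** (0.4 * delta_m)

# The ~3-magnitude outburst (18th -> 15th magnitude) reported above:
print(round(brightness_ratio(3.0), 1))  # -> 15.8, i.e. roughly 16x brighter
```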
References
External links
Orbital simulation from JPL (Java) / Horizons Ephemeris
33P at Kronk's Cometography
33P at Seiichi Yoshida's Comet Catalog
Periodic comets
0033
033P
Astronomical objects discovered in 1909 | 33P/Daniel | ["Astronomy"] | 280 | ["Astronomy stubs", "Comet stubs"] |
9,386,486 | https://en.wikipedia.org/wiki/JavaPOS | JavaPOS (short for Java for Point of Sale Devices), is a standard for interfacing point of sale (POS) software, written in Java, with the specialized hardware peripherals typically used to create a point-of-sale system. The advantages are reduced POS terminal costs, platform independence, and reduced administrative costs. JavaPOS was based on a Windows POS device driver standard known as OPOS. JavaPOS and OPOS have since been folded into a common UnifiedPOS standard.
Types of hardware
JavaPOS can be used to access various types of POS hardware. A few of the hardware types that can be controlled using JavaPOS are
POS printers (for receipts, check printing, and document franking)
Magnetic stripe readers (MSRs)
Magnetic ink character recognition readers (MICRs)
Barcode scanners/readers
Cash drawers
Coin dispensers
Pole displays
PINpads
Electronic scales
Parts
In addition to referring to the standard, the term JavaPOS is used to refer to the application programming interface (API).
The JavaPOS standard includes definitions for "Control Objects" and "Service Objects". The POS software communicates with the Control Objects. The Control Objects load and communicate with appropriate Service Objects. The Service Objects are sometimes referred to as the "JavaPOS drivers."
Control objects
The POS software interacts with the control object to control the hardware device. A common JavaPOS library is published by the standards organization with an implementation of the Control Objects of the JavaPOS standard.
Service objects
Each hardware vendor is responsible for providing Service Objects, or "JavaPOS drivers", for the hardware they sell. Depending on the vendor, drivers may be available that communicate over USB, RS-232, RS-485, or even an Ethernet connection. Hardware vendors typically create JavaPOS drivers that work with Windows; many also create drivers for at least one flavor of Linux, though fewer than for Windows. Since there is far less market share to capture among Apple computers used as POS systems, only a few JavaPOS drivers would be expected to work with Mac OS X (and those more likely by happy circumstance than by careful design).
Historical background
The committee that initiated JavaPOS development consisted of Sun Microsystems, IBM, and NCR. The first meeting occurred in April, 1997 and the first release, JavaPOS 1.2, occurred on 28 March 1998. The final release as a separate standard was version 1.6 in July 2001. Beginning with release 1.7, a single standards document was released by a UnifiedPOS committee. That standards document is then used to create the common JavaPOS libraries for the release.
See also
Point of sale
UnifiedPOS
EFTPOS
Point of sale display
Point of Sale Malware
References
External links
JavaPOS
Retail point of sale systems
Computer standards | JavaPOS | ["Technology"] | 601 | ["Retail point of sale systems", "Computer standards", "Information systems"] |
9,386,543 | https://en.wikipedia.org/wiki/Landau%E2%80%93Pomeranchuk%E2%80%93Migdal%20effect | In high-energy physics, the Landau–Pomeranchuk–Migdal effect, also known as the Landau–Pomeranchuk effect and the Pomeranchuk effect, or simply LPM effect, is a reduction of the bremsstrahlung and pair production cross sections at high energies or high matter densities. It is named in honor of Lev Landau, Isaak Pomeranchuk and Arkady Migdal.
Overview
A high-energy particle undergoing multiple soft scatterings in a medium experiences interference effects between adjacent scattering sites. By the uncertainty principle, as the longitudinal momentum transfer becomes small, the particle's wavelength increases; if the wavelength grows longer than the mean free path in the medium (the average distance between scattering sites), the scatterings can no longer be treated as independent events. This is the LPM effect. The Bethe–Heitler spectrum for multiple-scattering-induced radiation assumes that the scatterings are independent; the quantum interference between successive scatterings caused by the LPM effect suppresses the radiation spectrum relative to the Bethe–Heitler prediction.
The suppression occurs in different parts of the emission spectrum: in quantum electrodynamics (QED) small photon energies are suppressed, while in quantum chromodynamics (QCD) large gluon energies are suppressed. In QED the rescattering of the high-energy electron dominates the process; in QCD the emitted gluons themselves carry color charge and also interact with the medium. Since the gluons are soft, their rescattering provides the dominant modification to the spectrum.
Lev Landau and Isaak Pomeranchuk showed that the formulas for bremsstrahlung and pair creation in matter which had been formulated by Hans Bethe and Walter Heitler (the Bethe–Heitler formula) were inapplicable at high energy or high matter density. The effect of multiple Coulomb scattering by neighboring atoms reduces the cross sections for pair production and bremsstrahlung. Arkady Migdal developed a formula applicable at high energies or high matter densities which accounted for these effects.
In 1994 a team of physicists at SLAC National Accelerator Laboratory experimentally confirmed the Landau–Pomeranchuk–Migdal effect.
References
Bibliography
Scattering theory
Lev Landau | Landau–Pomeranchuk–Migdal effect | ["Physics", "Chemistry"] | 466 | ["Particle physics stubs", "Scattering", "Scattering theory", "Particle physics"] |
9,386,904 | https://en.wikipedia.org/wiki/Bore%20%28engine%29 | In a piston engine, the bore (or cylinder bore) is the diameter of each cylinder.
Engine displacement is calculated based on bore, stroke length and the number of cylinders:
displacement = (π/4) × bore² × stroke × number of cylinders
The stroke ratio, determined by dividing the bore by the stroke, traditionally indicated whether an engine was designed for power at high engine speeds (rpm) or torque at lower engine speeds. The term "bore" can also be applied to the bore of a locomotive cylinder or steam engine pistons.
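Engine displacement follows from bore, stroke and cylinder count as (π/4)·bore²·stroke·n. A quick numeric check — the 86 mm × 86 mm "square" four-cylinder below is an illustrative choice, not a figure from the article:

```python
import math

def displacement_cc(bore_mm: float, stroke_mm: float, cylinders: int) -> float:
    # displacement = pi/4 * bore^2 * stroke * number of cylinders
    per_cylinder_mm3 = math.pi / 4 * bore_mm ** 2 * stroke_mm
    return per_cylinder_mm3 * cylinders / 1000.0  # mm^3 -> cc

def stroke_ratio(bore_mm: float, stroke_mm: float) -> float:
    # Bore divided by stroke, as defined above.
    return bore_mm / stroke_mm

print(round(displacement_cc(86, 86, 4)))  # -> 1998 (a typical "2.0 litre")
print(stroke_ratio(86, 86))               # -> 1.0 ("square" engine)
```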
In steam locomotives
The term bore also applies to the cylinder of a steam locomotive or steam engine.
Bore pitch
Bore pitch is the distance from the centerline of a cylinder bore to the centerline of the adjacent cylinder bore in an internal combustion engine. It is also referred to as the "mean cylinder width", "bore spacing", "bore center distance" and "cylinder spacing".
The bore pitch is always larger than the inside diameter of the cylinder (the bore and piston diameter) since it includes the thickness of both cylinder walls and any water passage separating them. This is one of the first dimensions required when developing a new engine, since it limits maximum cylinder size (and therefore, indirectly, maximum displacement), and determines the length of the engine (L4, 6, 8) or of that bank of cylinders (V6, V8 etc.).
In addition, the positions of the main bearings must be between individual cylinders (L4 with 5 main bearings, or L6 with 7 main bearings - only one rod journal between main bearings), or between adjacent pairs of cylinders (L4 with 3 main bearings, L6 or V6 with 4 main bearings, or V8 with 5 main bearings - two rod journals between main bearings).
In some older engines (such as the Chevrolet Gen-2 "Stovebolt" inline-six, the GMC straight-6 engine, the Buick Straight-eight, and the Chrysler "Slant 6") the bore pitch is additionally extended to allow more material between the main bearing webs in the block. For example, in an L6 the first pair (#1 & 2), center pair (#3 & 4), and rear pair (#5 & 6) of cylinders that share a pair of main bearings have a smaller pitch than between #2 & 3 and #4 & 5 that "bridge" a main bearing.
Since the start-up expense of casting an engine block is very high, there is a strong incentive to retain this dimension for as long as possible to amortize the tooling cost over a large number of engines. If and when the engine is further refined, modified or enlarged, the bore pitch may be the only dimension retained from its predecessor. The bore diameter is frequently increased to the limit of minimal wall thickness, the water passage is eliminated between each pair of adjacent cylinders, the deck height is increased to accommodate a longer stroke, etc., but in general, if the bore pitch is the same, the engines are related.
As an example of development, the Chrysler 277" polyspheric V8, first introduced in 1956, was gradually increased in size by bore and stroke to 326" by 1959, then received a drastic make-over in 1964 to conventional "wedge" combustion chambers, then modified again for stud-mounted rocker arms, and finally underwent an even greater re-design to become the modern 5.7 liter hemi. All of these engines retain the original 4.460" bore pitch distance set down in 1956.
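Because bore pitch is a centre-to-centre spacing, it gives a rough estimate of the length of the bore field in one bank of cylinders. Using the 4.460" Chrysler pitch quoted above (ignoring end-wall material, which real blocks add):

```python
def approx_bank_length_in(bore_pitch_in: float, cylinders_per_bank: int) -> float:
    # Centre-to-centre spacing times the number of bores approximates
    # the length of the bore field in one bank; block end walls add more.
    return bore_pitch_in * cylinders_per_bank

# One four-cylinder bank of the Chrysler V8 family (4.460" pitch):
print(approx_bank_length_in(4.460, 4))  # -> 17.84
```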
Hybrid heads
"Hybrid" is the term commonly used to identify an engine modified for high performance by adapting a cylinder head from another (sometimes completely different) brand, size, model or type engine. Note: using a later head of the same engine "family" isn't a true hybrid, but mere modernization.
In some cases, two heads from the donor (source) engine are joined end-to-end to match the number of cylinders on the subject engine (such as using three cylinders each of two V8 heads on a Chevrolet inline-six).
Identical or extremely similar bore pitch is what makes this possible, or (almost) impossible.
See also
Bore pitch
Compression ratio
Engine displacement
References
Engine technology | Bore (engine) | ["Technology"] | 849 | ["Engine technology", "Engines"] |
9,386,948 | https://en.wikipedia.org/wiki/Directed%20infinity | A directed infinity is a type of infinity in the complex plane that has a defined complex argument θ but an infinite absolute value r. For example, the limit of 1/x where x is a positive real number approaching zero is a directed infinity with argument 0; however, 1/0 is not a directed infinity, but a complex infinity. Some rules for manipulation of directed infinities (with all variables finite) are:
Here, sgn(z) = z/|z| is the complex signum function.
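A minimal numeric sketch of these objects: represent a directed infinity by its argument θ alone, with sgn(z) = z/|z|. One commonly stated manipulation rule (used, e.g., by computer algebra systems) is that multiplying by a finite nonzero w rotates the direction by arg(w); the helper names below are ours, not standard notation.

```python
import cmath

def sgn(z: complex) -> complex:
    # Complex signum: z / |z|, a point on the unit circle.
    return z / abs(z)

def mul_directed_inf(w: complex, theta: float) -> float:
    # Argument of w * (directed infinity along theta): rotate theta by arg(w).
    return cmath.phase(cmath.rect(1.0, theta) * sgn(w))

print(mul_directed_inf(1j, 0.0))  # -> 1.5707963267948966 (i.e. pi/2)
```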
See also
Point at infinity
References
Infinity | Directed infinity | ["Mathematics"] | 109 | ["Mathematical analysis", "Mathematical objects", "Mathematical analysis stubs", "Infinity"] |